November 2013 • Published by Decisive • A CommsDay publication
G.Fast: Just how fast will it really be?
US Ignite: In search of the next generation internet
West to East: The subsea fibre race to link Perth to Southeast Asia
Customer experience: Telcos search for that competitive edge
Satellite Surprise: ICO Global failure gives M2M startup a boost
The art of selling 700MHz: Telcos want it, but at what price?
VECTORING The tech, the potential, and commercial reality
ABOUT COMMSDAY MAGAZINE Mail: PO Box A191 Sydney South NSW 1235 AUSTRALIA. Fax: +612 9261 5434 Internet: www.commsday.com COMPLIMENTARY FOR ALL COMMSDAY SUBSCRIBERS AND CUSTOMERS.
Published several times annually. CONTRIBUTIONS ARE WELCOME
5 VECTORING: The tech, the potential, and the commercial reality
GROUP EDITOR: Petroc Wilton FOUNDER: Grahame Lynch COVER DESIGN: Peter Darby WRITERS: Tony Chan, Geoff Long, David Edwards, William Van Hefner, Grahame Lynch, Dave Burstein ADVERTISING INQUIRIES: Sally Lloyd at firstname.lastname@example.org EVENT SPONSORSHIP: Veronica Kennedy-Good at email@example.com ALL CONTENTS OF THIS PUBLICATION ARE COPYRIGHT. ALL RIGHTS RESERVED CommsDay is published by Decisive Publishing, 4/276 Pitt St, Sydney, Australia 2000 ACN 13 065 084 960
Features
13 G.Fast: Just how fast will it really be?
21 West to East: The subsea fibre race to link Perth to Southeast Asia
30 Customer experience: Telcos search for that competitive edge
33 US Ignite: In search of the next generation internet
36 The art of selling 700MHz: Telcos want it, but at what price?
FIRST Customers leading the way for AT&T’s M2M business AT&T has embarked on a major push into the machine-to-machine space – in response to customer demand. According to AT&T director of M2M business development Sean Horan, a big part of the development process behind the operator’s M2M strategy is now based on customer requests. “Traditionally, we were looking at M2M, and looking to just be the data pipe to enable the applications,” Horan said. “[But] what we are seeing is that more and more of our enterprise customers are wanting to leverage AT&T’s expertise in managing the devices, building the applications. It is being driven by our enterprise customers coming to us, and saying ‘what can you do if we want to provide an environment for that innovation?’” This move beyond just network transport and into the application layer is reflected in AT&T’s investment into its Foundry network of research and development centres. The operator’s two new Foundry sites, located in Atlanta, Georgia and Dallas-Fort Worth, Texas, both focus on applications that have to do with connecting things, instead of people. While the Atlanta site is focused more on consumers, or what AT&T calls ‘lifestyle’ applications, the Dallas site is exclusively dedicated to M2M and connected devices. The Texas site “will bring world-class expertise, platforms, infrastructure and tools
to the field of machine-to-machine and connected device technology,” the operator said. “Commonly referred to as ‘the internet of things,’ M2M/CD involves sensors and devices that are connected to remote assets, machines and other computers to share information and help people and businesses make smarter, faster decisions.”

“More of our enterprise customers are wanting to leverage AT&T’s expertise in managing devices, building the apps”

NEW REVENUE: Enterprises, according to Horan, are now realising that M2M is something that can not only drive efficiency into their business, but also result in new revenue streams by monetising information that they already have in their systems. “We see customers, mostly enterprise customers, coming to us, and saying ‘we have the data’ – or maybe ‘I don’t have the data, [but] how do I get it?’ That is the first entrance into a M2M application,” he said. “They come to AT&T asking for our expertise on how to do
that. That is the first stage. The second piece is ‘now that I have the data, what can I do with it… how do I aggregate and display that data in a useful manner?’” This realisation that their data is of potential value to their processes also means that enterprises are now starting to look for ways to offer that value to their customers. “[Enterprises] are taking that data and adding new and innovative applications on top of that to look at driving and enabling new opportunities for revenue,” Horan explained. “The interest starts as a cost saving measure to become more efficient in their business processes leveraging our ability in M2M.” “Then, once they have the data, it turns into what can they do to innovate and provide differentiation to their customers, and differentiate their company in the market to gain more revenue.” According to Horan, AT&T now has certified close to 1,500 connected device types on its network and counts some 14.7 million connected devices – not including consumer handsets – on its
network, making it “by far the biggest M2M carrier.” While not able to disclose specific revenue numbers, Horan said that AT&T’s sales funnel for M2M applications and solutions has grown 3-fold year-on-year.
Capping energy costs on mobile networks When it comes to energy consumption, less is usually more, especially when mobile operators are facing surging demand from customers to expand their networks. That is why Nokia Solutions and Networks has set a goal for itself to “flatten” the energy profile of mobile networks while continuing to enhance capacity and coverage. While modern mobile networks naturally yield energy savings due to more efficient platforms, demand continues to put pressure on mobile operators’ operational expenditures, according to NSN head of mobile broadband energy solutions Mark Donaldson. “Our strategy is about flattening out the energy consumption profile of our customers,” he said. “I think most people accept that new technologies allow them to lower power consumption, and we have seen that in our products, which have provided significant energy consumption reduction over the last few years. But countering that is that we are seeing huge demand on mobile networks, so we are trying to balance that, so we have some level of control over the growing consumption… to flatten the profile.” One obvious direction for NSN is to make its equipment
more energy efficient, which it has done with its base stations – but there are many other opportunities to further cut power costs. “There are a number of things that we have done and… are doing. Our own BTS equipment has seen major reductions in power consumption in the last 5 to 6 years – up to 70%. We believe there is still some way to go there. The RF part with the macro base stations in particular, we believe there are efficiency gains to be had there, so hopefully we can drive that consumption down even further,” Donaldson said. “The kind of technologies that are available, for example the introduction of small cells, the ability to have more spectrum efficiency, they all contribute. Adding certain software features to the technology again allows you to cut consumption. Couple this with optimising the operational dynamics of a site, and you can start to help the customer flatten that profile out as the network grows.” One example is NSN’s site energy management solution, which now gathers energy consumption information in real time from cell sites. “We’ve got a new site energy management system, which essentially provides insight into what is happening at a site with energy consumption. That in itself is not new, but what we are doing is linking that into our network management system,” said Donaldson. “This allows you to do smart things because not only do we have visibility of the site from an energy consumption perspective, we also have visibility into things like traffic profiles, so we can start to do correlation of traffic and power consumption.” LOW HANGING FRUIT: More importantly, he pointed out, operators don’t have to do a complete revamp of their network to take advantage of energy savings. “I think there are opportunities in most networks to drive efficiency. There is low hanging fruit where we can do things very easily. For example, if an operator has two generators on an off-grid site, we can simply change that operational dynamic to one generator, or a hybrid solution by adding some solar. We can get a return on investment in less than a year for that, so it doesn’t require particularly high levels of capex and [yields] a major benefit to opex,” he said. “You don’t have to modernise the network. We have a lot of software features for example on our 2G network, things like discontinuous transmission, switching off capacity when the traffic demand isn’t there… there are a number of software features from 2G to LTE. We can implement those software features that help conserve power for certain periods of time.” Some simple procedures, like swapping out older power rectifiers at sites, could also yield significant savings. “Simply by modernising some older technology within a network can have a major impact on consumption,” said Donaldson.
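Donaldson’s point about correlating traffic with power consumption can be illustrated with a toy calculation. The hourly figures, power model and threshold below are invented for illustration – this is not NSN’s system, just a minimal sketch of the kind of correlation a site energy manager feeding a network management system might run:

```python
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(24)

# Hypothetical hourly series for one cell site: traffic (GB) follows a
# day/night cycle; power (kW) has a fixed floor plus a traffic-driven part.
traffic = 1 + 6 * np.sin((hours - 6) * np.pi / 12).clip(min=0) + rng.normal(0, 0.2, 24)
power = 2.0 + 0.15 * traffic + rng.normal(0, 0.05, 24)

# Correlate the two series to see how load-proportional the site is.
corr = np.corrcoef(traffic, power)[0, 1]

# The power floor at low traffic is the savings target for software
# features that switch off idle capacity (e.g. discontinuous transmission).
idle_power = power[traffic < traffic.mean() * 0.5].mean()
print(round(corr, 2), round(idle_power, 2))
```

A site whose power floor stays high at near-zero traffic is exactly the “low hanging fruit” Donaldson describes.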
Dawn of a new copper age? Operators around the world – as well as some governments – are on the brink of giving their old copper networks a new lease of life via a noise reduction technology known as vectoring. But what’s the real extent of the commercial commitments in play, and when and where will the tech actually see service? Petroc Wilton reports.
Thirty years of telecoms history, says Dr John Cioffi – known as the ‘father of DSL’ – has consistently shown that one size does not fit all, at least in fixed broadband access networks. Operators and governments globally may still envisage ubiquitous fibre-to-the-premises deployments, first mooted in the 1980s, as their eventual Nirvana. But right now, capital and time constraints are obliging most of them to wring as much value as possible out of their existing copper networks – particularly in the last mile, where the civil costs of upgrading to fibre spike up sharply. That’s what has been driving telcos around the world – including AT&T, BT, Deutsche Telekom, Swisscom, Belgacom and Telekom Austria – to make large commercial commitments to a technology known as vectoring in the last couple of years. An upgrade for existing DSL systems, vectoring offers hefty performance improvements on copper in the right conditions; it was co-invented by Cioffi back near the beginning of the millennium but, with vendors now having shipped millions of lines’ worth of vectoring-ready hardware, it’s right on the brink of hitting the big time. WHAT IS IT? Vectoring is essentially a transmission method that
virtually cancels crosstalk noise on copper wire; the corresponding International Telecommunication Union standard was ratified in 2010. As Alcatel-Lucent fixed access marketing director Dr. Stefaan Vanhastel explains, crosstalk gets worse at the higher range of frequencies used to drive higher speeds on copper, and has been a key factor holding back VDSL deployments from actually achieving their design speed – and thus competing with HFC and even first-generation FTTH networks. “VDSL2 is fantastic compared to ADSL; you get 100Mbps downstream if you test it in the lab, at 400m,” says Vanhastel. “[But] unfortunately, once you actually deploy VDSL2 and take it out of lab conditions, you don’t get that 100Mbps, because of the crosstalk between the telephone lines in the same binders – you maybe [get] 30-60Mbps, a significant drop. It depends, basically, on how lucky you are; your line might be in the centre of the binder, you’re surrounded by other lines, and you get interference from all of them. Your neighbour might be on the outside of the binder, and he’s only getting interference from a few lines. [And] the traffic patterns on individual lines change all the time.”
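Vanhastel’s binder-crosstalk picture can be sketched numerically. The matrix below models a tiny four-line binder with made-up coupling values – purely illustrative, not the real-time ITU G.993.5 algorithm, which re-estimates crosstalk across thousands of tones many times per second – and shows how a diagonalizing (“anti-noise”) precoder at the DSLAM removes the interference:

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-tone channel matrix for a small binder of 4 lines: strong direct
# paths on the diagonal, weak crosstalk coupling off it. Illustrative
# numbers only -- real binders have hundreds of lines.
n_lines = 4
H = 0.1 * rng.standard_normal((n_lines, n_lines))
np.fill_diagonal(H, 1.0)

symbols = rng.choice([-1.0, 1.0], size=n_lines)  # data sent on each line

# Without vectoring: each receiver sees its symbol plus binder crosstalk.
rx_plain = H @ symbols

# With vectoring: a zero-forcing precoder transmits x = H^-1 * diag(H) * s,
# so the binder's own coupling cancels the crosstalk at the receivers.
precoder = np.linalg.inv(H) @ np.diag(np.diag(H))
rx_vectored = H @ (precoder @ symbols)

crosstalk_before = np.max(np.abs(rx_plain - symbols))
crosstalk_after = np.max(np.abs(rx_vectored - symbols))
print(crosstalk_before, crosstalk_after)  # residual is near zero after precoding
```

The zero-forcing precoder is only the simplest instance of the idea; it already shows why the crosstalk matrix must be known accurately for every line pair, which is what makes vectoring computationally heavy at scale.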
Vectoring, though, all but removes that crosstalk – pushing VDSL speeds, at least on short loops, up close to their theoretical levels. “Noise cancelling headphones are the typical analogy; you measure the crosstalk, and then you generate an antiphase signal. When you add the noise and the anti-noise you end up with… near zero noise on every line,” says Vanhastel. “But it’s way more complicated than the headphones [analogy would suggest] because you have to measure the crosstalk from each line into all of the other lines, you have to do that for four thousand frequencies, and you have to do that for upstream and downstream… multiple times per second in real time. And then, for every signal that you transmit on each of the lines, you actually have to generate [an additional] anti-noise signal – because that signal that you sent on Line 1 is going to create [its own] noise signal on Line 2, 3 and Line 400.” “So it’s very complex and, while I hesitate to say that [the result] is zero [noise], it’s as good as zero – you cannot afford to let one bit of noise slip through, because it will wreak havoc on the other lines.” WHERE CAN IT BE USED? It’s important to note that, from
an end-user perspective, the absolute quantum of improvement from vectoring will depend on the degree to which crosstalk was impeding performance in the first place. In particular, at longer loop lengths (and to a much lesser extent, smaller-diameter cables), where attenuation rather than noise starts to become the limiting factor, the vectoring gain will be smaller – even though noise on the lines will still be eliminated. Vanhastel says that while customers who’ve tested vectoring at 1,200m have still seen 10-20Mbps improvements, it’s at the loop lengths of 300-500m that they’ve been hitting the important 100Mbps bar, with the most impressive gains out to the 800m mark. Other key vectoring vendors report similar experiences. “What we’re seeing for vectoring is that it’s being used anywhere between 500 and 100 metres,” says Huawei Australia CTO Peter Rossi. “BT, [for example], has deployed in such a way that it’s within that 500 metres.” “Between 400 and 550m is a nice sweet spot,” adds ADTRAN carrier networks product marketing manager Kurt Raaflaub. “Customers are going after the 100Mbps on the downstream… to compete with DOCSIS 3, or just keeping in line with – if you’re in North America – the FCC mandate for the year 2020, or the digital agenda in Europe where 100Mbps has been quoted.” That means vectoring can work well in fibre to the cabinet, fibre to the node, or fibre to the basement deployments, dramatically increasing line speeds across the short copper loops within apartment blocks or from cabinets to homes. “It’s very conducive for large apartment buildings, because you can put it down to the basement, utilising existing infrastructure, and minimise as much as possible the cost,” says Huawei’s Rossi. “But it’s [also] being used in fibre-to-the-cabinet type deployments, because that’s where… crosstalk really causes a
lot of problems.” “Something that’s a misnomer is that many customers can’t be reached with [vectoring] effectively, because a loop length of 500m is quite short,” adds Raaflaub. “But there have been quite a few studies where, depending on the nature of the country, between 65-80% of homes can be tackled with that 100Mbps – they’re inside that 500 metres. And in our studies of countries like Australia, they’re actually leaning towards the upper bound.” WHO’S DEPLOYING IT? Vectoring is at least a few months away from household use at any scale, but is expected to ramp up sharply in the near future. Broadbandtrends forecast last year that 27% of all VDSL2 ports would be vectored by 2017. In terms of
ASSIA’s Dr John Cioffi
spend, the inflection point may already have been passed. Cioffi – a pioneer in broadband over copper since the 1970s and now CEO of ASSIA, which
holds some of the basic patents on vectored VDSL and is helping investors AT&T and Deutsche Telekom with their deployments – notes that AT&T has made a US$6 billion commitment to vectored copper, with Deutsche Telekom committing EUR6 billion as well. “Those are real commitments – that part of it’s there, the money, and now it’s a matter of following the process to actually make it reality!” he says. “[But] you will not see large volume until we get into 2014 and beyond; a lot of the volumes being quoted right now are actually ‘vector-ready’ DSLAMs. They’ve sold this many ports… but the equipment that’s going into the field at the fibre-fed nodes is ‘vector-ready’ in that they can stick in a new card and it coordinates between the other cards and does vectoring in the future, but there is no actual vectored VDSL in service quite yet.” Alcatel-Lucent says it’s shipped 1.3 million vectoring lines as of Q2 this year. It has sixteen vectoring customers, with the most advanced having deployed over half a million vectoring lines; only Belgacom and Telekom Austria have thus far gone public. “In our case, you can still choose to buy regular VDSL line cards or vectoring line cards… so if [operators] didn’t plan to do vectoring, they wouldn’t buy the line cards!” says Vanhastel. “Operators are a bit reluctant to go public at this moment in time; one of the reasons is that many of them want to build out their footprint and achieve a certain coverage before they start advertising higher bitrates, in order not to disappoint… a number of other operators are still waiting for the final regulatory approval to activate vectoring services.” “One way to estimate how close operators are to activating vectoring services is to look at the number of vectoring processors that are being shipped… we’ve shipped enough system-level vectoring processors to cover about
430,000 lines. So if you compare that to the 1.3 million vectoring lines shipped, it means that at the moment, about 1/3rd of those are pretty close to being activated, and in some cities, operators have already activated lines in vectoring mode – but without advertising the service.” Huawei, for its part, has shipped over a million vectoring lines, with public customers including BT and Swisscom. “The biggest challenge in vectoring is the CPE... the availability of CPE for vectoring today is minimal. The capability will be there, it will just be the CPE itself. It’s easier to upgrade the exchange... than it is to upgrade every individual single user end-point. But that will occur,” says Rossi. “Now that we’ve come out with vectoring and G.fast I’d expect it to accelerate, and I’d expect to see [some] operators solely purchasing VDSL and vectoring – because it’s backwards compatible to ADSL fallback, it makes no sense not to go forward.” HOW MUCH DOES IT COST? One of the key draws of FTTX VDSL and vectoring deployments is the lower capital cost compared to fibre, particularly since they use existing last-mile copper rather than installing a new last-mile fibre plant at tremendous cost. “In terms of capex plus installation costs, we typically use ADSL as a reference point, so ADSL from the central office has a cost of 1. Of course this differs from country to country, but as a rule of thumb, we use a factor of 15 for FTTH, so it’s 15 times more expensive per subscriber. FTTN with vectoring is about 4-5, so it’s three times cheaper than FTTH,” says Alcatel-Lucent’s Vanhastel. “Some large telcos have thrown around 3x and 3.5x as the [capex] delta between 100% FTTH and a mix of FTTH/FTTX,” agrees ADTRAN’s Raaflaub. Cioffi is even more aggressive. “The costs really are a few hundred dollars, or euros,
per customer to get them to this [50-100Mbps] speed with FTTN … it’s about 10-20% of what FTTH costs,” he says. The opex discussion is slightly more complicated. Huawei’s Rossi, for example, suggests that opex on an FTTX build with vectoring would come in around the same as, or slightly higher than, an FTTH build – but says that Huawei is still doing tests in this area. A frequent criticism of broadband plans using legacy copper plant is that maintenance costs on older copper can be high; there’s also the ongoing power cost to contend with. But Cioffi offers another perspective. “I have seen two other large customers in the world who had been doing FTTH, and are also
Huawei’s Peter Rossi
ASSIA customers for the DSL part of their network. We have been privy to the maintenance costs in both cases, and these are large deployments – at least a million of fibre and DSL each, and in most cases much more than that,” he says. “If the system is being managed by ASSIA, we make a reduction in operator costs, it’s one of our selling points; we reduce calls, trouble tickets, reduce
dispatches, churn and so forth. If you look at those measures for the fibre network and you look at them for the DSL network, after we’re managing it it’s actually lower on the DSL side,” he says. “It depends on the situation – but it’s roughly 30% lower on the operating costs.” “Part of that is that operators tend to offer more service on fibre, so there are more things that go on… but there’s no truth that the passive fibre is somehow less maintenance. It is passive, but it has additional problems because it is passive – particularly, if there’s a problem but they don’t know where, it’s a very expensive endeavour to find out where the problem is. You get all these consumers that are hung off the same fibre, and you don’t know which one has the problem at their home that is causing the issue. That’s part of it. And if you run fibre to the desktop, fibre is sensitive to movement, unlike copper – so if people move the box around, the optics can degrade. It depends on the fibre that you use how much it degrades… in Europe, for one of the major deployments they used the wrong fibre, and they have a very high call rate, double digit per month on the fibre network. And it’s because of people moving the boxes.” “Active [electronics] in the loop plant, which is what VDSL introduces at the node, do have a powering issue, and that creates its own set of issues,” concedes Cioffi. “But typically, if it’s a well-managed system, the copper networks are coming in at a lower number of calls, a lower number of dispatches, and a lower churn rate.” ADTRAN, which ships FTTH as well as FTTN gear, tells a similar story. “You’re removing a lot of the repeaters, a lot of the bridge taps – and those are the parts of the copper plant that have the biggest issues – when you get down to a shorter loop,” says Raaflaub. “We’ve been deploying FTTN systems with carriers
How to compete in the era of “smart.”
Data-driven maintenance of equipment will mean improved operational productivity for an Australian mining company.
Social networks shift value in the workplace from knowledge that people possess to knowledge that they can communicate.
For 5 years IBMers have helped cities and companies build a smarter planet. Forward-thinking leaders have reorganised their business around analytics, mobile technology, social business and cloud. Now, Australian business is on the verge of an even greater tectonic shift. The digital economy is being re-imagined via new utilities—including advanced analytics and high-speed broadband. How, what and who we sell to will never be the same. Decisions being made by today’s business leaders will significantly impact how they compete in tomorrow’s era of “smart.” Using analytics, not instinct. Executives have long relied on intuition to formulate strategy and assess risk. But now, when every customer is inter-connected, the cost of a bad call can be devastating. It’s why one Australian mining company has used equipment sensors and predictive analytics to manage critical machinery maintenance. The aim? More secure supply and reliable delivery.
The social network goes to work. The rise of social and mobile technology is shifting the competitive edge from having workers who amass knowledge to having workers who impart it. Cemex, a $15 billion cement maker, created its first global brand by building a social network. Workers collaborating in 50 countries helped the brand launch in a third of the anticipated time. From you as a “segment” to you as you. In the age of mass media, marketers served broad population “segments.” But the age of Big Data and analytics is revealing customers as individuals. And smarter enterprises deliver useful services to one individual at a time.
Effective marketing no longer aims publicity at broad demographic groups — it opens conversations with individuals.
Finding success on a smarter planet. An organisation invested in Big Data and analytics, social, mobile, and the cloud is a smarter enterprise. On a smarter planet, the next challenge is culture: changing entrenched work practices to make the most of these advances. To learn more, visit us at ibm.com/au/eraofsmart
LET’S BUILD A SMARTER PLANET.®
The customer stories are based on customer-provided information and illustrate how one organisation uses IBM products. Many factors have contributed to the results and benefits described. IBM does not guarantee comparable results elsewhere. TRADEMARKS: IBM, the IBM logo, smarter planet, Let’s build a smarter planet and the planet icon are trademarks of IBM Corp registered in many jurisdictions worldwide. Other product, company or service names may be trademarks or service marks of others. A current list of IBM trademarks is available on the Web at “Copyright and trademark information” at www.ibm.com/legal/copytrade.shtml © Copyright IBM Australia Limited 2013 ABN 79 000 024 733 © Copyright IBM Corporation 2013 All Rights Reserved. IBMCCA1518/SMART/MT/FPC
for almost a decade – with nearly 100,000 of them in place,” adds CTO Dr. Kevin Schneider. “Once you get down to the shorter loops... our experience is that there are fewer troubles there than there are with the very long cables from the central office.” TECHNOLOGY MIX: None of the vendors are positioning vectored VDSL as a total replacement for FTTH in all cases. Rather, they all see higher-speed copper tech working alongside fibre in an integrated network, with different access technologies deployed as necessary to meet specific needs. “Both economic and technical conditions are different from locality to locality,” says ADTRAN’s Schneider. “A different tool may be needed to bring the broadband delivery, and hit the price and availability dates, that each region needs. The collection of different technologies is definitely in the future for us as an industry.” “A lot of operators deploying FTTH are looking at DSL again because of the time-to-market advantage, but also because it allows them to connect more people within the same budget,” notes Alcatel-Lucent’s Vanhastel. “We have more fibre customers than VDSL customers, but the combination of the technologies is, in my opinion, a good way to deliver more broadband to more people quickly and cheaply.” For Huawei’s Rossi, one of the key advantages of vectoring is that it enables copper last-mile access to keep pace with FTTH deployments, or close to it – avoiding a ‘digital divide’ in a mixed rollout. “Fibre will give you 100Mbps [in early deployments]... and I have the ability with vectoring and VDSL2 to give you 100Mbps, so I have a unified environment between copper and fibre,” he says. “We have a true environment where we can look at a pure, heterogeneous network where we can provide the same functions for all users – [one user] on the copper world can have the same experience as another user on fibre.” “And that’s the important part – we can sweat it without having to put out the huge amount of cost over a short period of time to rejuvenate or change that copper out.” For Cioffi, this kind of technology mix is in line with “the history of nearly everything that happens in telecom.” “This thing about running a fibre to everyone’s home has been there since the mid-1980s, and it hasn’t happened anywhere – even though several nations have committed to this, the first of which was the US with what was called the National Information Infrastructure Act of 1988, and they all went through the same process of learning that it costs too much. So they backed
off. Verizon stopped their fibre program two years ago, no more new homes passed because it cost too much. The situation in France has changed, they’re going to VDSL there because FTTH costs too much. The Germans didn’t even bother to try FTTH because they saw for everybody else that it cost too much,” he says. “Where fibre makes a lot of sense is in the network, even out to the node, where you’re sharing the cost of fibre – digging up streets and other things. It does make sense on very high-speed links where you’re going to 100Gbps, IP networks at 200Gbps, IP networks in the core – that’s got to be fibre, and even if it costs a lot of money, it’s shared over so many users that it’s got to be productive. But when it’s shared over just one user, it becomes very, very difficult to manage that,” continues Cioffi. “With copper, you don’t have that concentration problem; if you have 50 or 100Mbps, you’re not sharing it with anybody until you get way back into the core of the network, where you hit the fibre segments and you start to see aggregation occurring. But those are going to be much higher speed fibres, they’re not going to be running at 1-2Gbps like PON is.” “It’s fools’ gold to believe that anyone is going to sell you a [single] piece of equipment, whether fibre based or copper based, that suddenly solves all the problems.”
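Vanhastel’s rule-of-thumb cost factors quoted earlier make his “more people within the same budget” point easy to check with back-of-the-envelope arithmetic. The budget figure below is hypothetical; the factors come from the article (ADSL central-office cost normalised to 1, FTTH about 15x, FTTN with vectoring about a third of FTTH):

```python
# Rough per-subscriber deployment cost comparison using the article's
# rule-of-thumb factors. The absolute budget is an invented illustration.
adsl_unit_cost = 1.0
ftth_factor = 15.0                          # 15x ADSL per subscriber
fttn_vectoring_factor = ftth_factor / 3     # "three times cheaper than FTTH"

budget = 1_500_000  # hypothetical build budget, in ADSL-cost units
homes_ftth = budget / ftth_factor
homes_fttn = budget / fttn_vectoring_factor

print(int(homes_ftth), int(homes_fttn))  # vectoring connects 3x the homes
```

The same arithmetic also shows why vendors frame the mix as time-to-market: the cheaper technology reaches a given coverage target with a third of the capital, or the same capital three times over.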
WE’RE BETTER WHEN WE WORK TOGETHER For everyone at Telstra Wholesale, improving the way we work with our customers is a priority. We’re focused on the quality of our working relationships and the service we provide. By putting our customers at the centre of everything we do, we’re determined to create positive, rewarding customer experiences.
Just how fast is G.fast? The ITU’s next big contribution to the networking world could be the standardisation of G.fast, a technology that promises gigabit access over today’s copper networks. But even before G.fast gets out of the starting gate, it is becoming clear that its speed boost comes with a price – distance, which will likely limit its application in the real world. Tony Chan reports.
There is undeniable excitement in the industry following the announcement by the International Telecommunication Union that it has started the standardisation process for an access technology that can deliver fibre-like speeds, but over copper networks. G.fast, as the technology is called, will deliver up to 1 gigabit per second of bandwidth over copper lines, matching the performance parameters currently envisaged for fibre access networks. Given that the majority of the world’s broadband access is still delivered over copper in the form of digital subscriber lines, a technology that offers a migration path for existing telephone lines to reach fibre-like speeds not only extends the life of copper, but could save billions of dollars in fibre deployment costs. “G.fast is the follow-up to ADSL and VDSL. So where ADSL was about a few megabits to the end customer and VDSL was about a few tens of megabits, G.fast will be about a few hundreds of megabits to the end customers, so that should really enable much faster services,” says ITU Study Group rapporteur Frank van der Putten, who is one of the leads in the G.fast standardisation process.
According to van Der Putten, the Group is scheduled to meet this December to finalise the G.fast standard, which will be ratified by March 2014 at the latest. Initial G.fast chipsets are expected by 2015, with equipment and interoperability trials to follow and commercial equipment to arrive by 2016. In addition to faster speeds, G.fast also promises some other key improvements to copper-based broadband networks, including support for user installation. So G.fast will not only leverage today’s telephone lines, but also make copper broadband much easier for end-users to take up and for operators to deploy. While it is a good three years away, the promise of delivering 1Gbps over copper lines has already captured the imagination of operators. But like all new technology upgrades, the reality of G.fast is far from a simple equipment replacement. The physics governing communications networks mean that G.fast suffers from severe limitations when asked to deliver its full potential over today’s telephone networks. So while it can theoretically provide fibre-like speeds over copper lines, its actual benefit for operators is still very much under investigation.
NOT QUITE A GIGABIT: For starters, it actually doesn’t offer Gbps speeds in the conventional sense, according to Alcatel-Lucent director of fixed access marketing Dr Stefaan Vanhastel. Whatever speed G.fast claims actually refers to the aggregate bandwidth of the connection, and not, as with previous copper-based technologies, the headline download speed. “All bit rates mentioned for G.fast are aggregate bit rates,” says Vanhastel. “There might be some confusion there, but for VDSL2, when I mention a bit rate, it means the downstream bit rate – which is the headline bandwidth, like 100Mbps. For G.fast, 1Gbps, for example, [refers to] an aggregate – upstream and downstream combined.” Obviously, 1Gbps of aggregate bandwidth is still pretty impressive, especially when G.fast incorporates the ability for operators to design their own service plans. “The good news is that with G.fast, you actually have more flexibility and you will be able to split that bandwidth any way you want,” says Vanhastel. “So if you have 1Gbps, for example, you could do 500Mbps symmetrical, or you could do 900Mbps down and 100Mbps up. You would have a lot of flexibility in deciding how you would like to allocate the bandwidth.”

That said, getting a 1Gbps connection, even on aggregate, can still be problematic.

THE FASTER YOU GO… The reality is that G.fast is not simply a new version of DSL, like VDSL or VDSL2. While it works in much the same way, it represents a step change in what it can – and perhaps more importantly, what it can’t – do. Because G.fast uses a much bigger chunk of the analogue frequency spectrum than previous DSL-based technologies, it comes up against the laws of physics, which govern the way spectrum frequencies behave over distance. “With ADSL, it typically uses the frequency spectrum between 0 and 2.2MHz, so a relatively narrow part of the frequency spectrum. With each part of the frequency, you can transmit a certain amount of bits, so the way you get more bandwidth – with VDSL2, we used more frequencies,” says Vanhastel. “That is exactly what VDSL2 did… we went to 0 to 17MHz. Actually, VDSL2 defines a number of frequency profiles, so you can go to 8MHz, to 12MHz, 17MHz, or 30MHz, but the most widely deployed one today is the 17MHz profile.” The extra frequencies allow VDSL2 to push more bits down the line, letting it reach as high as 100Mbps depending on the length of the line. “ADSL could go literally thousands of metres – 2,000 metres, even up to 6,000 metres, you can still use ADSL. If you use VDSL2 and you want to achieve higher bit rates, which is the whole reason you would do VDSL2, then you are really looking at shorter distances,” says Vanhastel. “With VDSL2, you are typically looking at 40Mbps at 400 metres, and 20Mbps at 1km. If you go longer distances, the higher frequencies disappear because there is so much attenuation that the signal strength is 0, and you
end up with basically the ADSL frequencies. If you go far beyond 2 kilometres with VDSL2, you actually end up with ADSL frequencies and ADSL speeds – so VDSL doesn’t help you on the longer routes.” The same evolutionary process is being outlined for G.fast, which hikes the frequency range to between 0 and 106MHz in a first phase, and possibly to between 0 and 212MHz in a second phase. The problem is that the higher the frequency, the less distance
it travels before it fades out, resulting in G.fast effectively losing all its performance advantages over VDSL2 after a couple of hundred metres. “To give you some idea, the G.fast standard targets 500Mbps
at 100 metres, 200Mbps at 200 metres, and 150Mbps at 250 metres – these are three target bit rates in the standard. With VDSL2, we were looking at 400 metres, 800 metres, even 1 km,” says Vanhastel. “With G.fast, you are really
looking at those very short distances. I would say 200 metres or less. I would say you are looking typically at 100 metres or less if you want to get into those hundreds of Mbps speeds. If you want to get to the Gbps speeds, you are definitely looking at less than 100 metres, or a few tens of metres.” REAL WORLD SPEEDS: In this way, G.fast’s 1Gbps promise is much more theory than practice. And while the use of the “Gbps” marker in the ITU headline certainly created interest, actually delivering such speeds – even as aggregate bandwidth – is still a challenge outside the lab. In a trial with Telekom Austria, Alcatel-Lucent achieved 1.1Gbps over 70 metres and 800Mbps over 100 metres using a single copper pair on a “good quality cable.” In the lab, Alcatel-Lucent has achieved as high as 1.3Gbps, says Vanhastel. However, when the trials moved to real-world conditions – the kind of older, unshielded cables found in many existing buildings, as well as multi-user scenarios – the performance of G.fast degraded rapidly. When a single pair from an older cable was used, throughput dropped by nearly half, to 500Mbps over 100 metres. When a second G.fast line was added, crosstalk dropped G.fast speeds from 500Mbps to just 60Mbps. Like VDSL, G.fast is susceptible to crosstalk, but even more so because of the higher frequencies used. “With G.fast, if you do fibre-to-the-front-door, then you only have a single line, so you don’t have any crosstalk, then you are fine,” says Vanhastel. “However, if you do G.fast in, for example, a fibre-to-the-telephone-pole or fibre-to-the-building sort of deployment, then you will still have multiple customers connected to that same device – not many, you might be looking at 8 subscribers or 16 subscribers, something like that. But they will interfere with each
other, and you will get crosstalk.” While crosstalk on VDSL lines typically resulted in a drop from 100Mbps to between 30Mbps and 60Mbps, the G.fast trial resulted in a 90% drop in bandwidth. “With G.fast… when you go to these high frequencies, which G.fast uses, the crosstalk behaviour becomes a lot worse, really a lot worse – the impact of the crosstalk is really strong,” says Vanhastel. As with VDSL2, G.fast crosstalk can be rectified through the use of vectoring – noise-cancellation technology that eliminates crosstalk. By using a prototype version of G.fast vectoring, Alcatel-Lucent and Telekom Austria were able to eliminate the crosstalk and bring the two adjacent G.fast lines back to 500Mbps over 100 metres. The problem is that vectoring isn’t yet part of the ITU’s draft G.fast standard, which means that G.fast with vectoring is even further away than G.fast itself. “Obviously the concept is the same – vectoring is just noise cancellation and we have been able to demonstrate the technology already – but you can’t just copy the vectoring technology from VDSL to G.fast because of the higher frequency range,” Vanhastel explains. “You need new algorithms, and you need to fine-tune the algorithms to the higher frequencies, where the crosstalk behaviour is actually completely different to the relatively low frequencies of VDSL2. There is still some work to be done.” COST COMPARISON: All the requirements for G.fast to achieve its maximum performance – the shorter distances, the need for good-quality copper, the need for vectoring – make the technology a temperamental beast. At the very least, the distance-versus-performance limitation puts pressure on the cost of deployment. Due to the necessity of placing the equipment so close to
the end-user, an effective G.fast deployment scenario – one where it offers fibre-like throughput – would cost nearly as much as pulling fibre all the way into the home, Vanhastel explains. Using the cost of deploying ADSL from the central office as a
measurement unit of cost, Vanhastel says that a fibre-to-the-home deployment would have a cost of 15 units, while a fibre-to-the-x deployment with VDSL2 vectoring would have a cost of between 4 and 5 units. “There’s a difference in bit rate – you can do 1Gbps with FTTH, and you can do 100Mbps with FTTx and VDSL2 vectoring – but there is a very substantial difference in cost and in time to market,” he says. “[VDSL2] is 3 times cheaper than FTTH.” But because G.fast requires deployment models somewhere in between FTTH and FTTx – which Vanhastel dubs fibre-to-the-telephone-pole, fibre-to-the-front-door, and so on – the cost escalates. “You bring fibre closer and closer to the consumer, so the difference is not that significant any more. So if FTTH is 15, then fibre-to-the-telephone-pole might be 12 or 13 units, so it may be only 20% cheaper than FTTH.” As such, G.fast should not be mistaken for an upgrade to VDSL2, since a direct replacement of VDSL2 equipment with G.fast will not yield any meaningful performance improvement at the deployment distances VDSL2 is used at today. Instead, G.fast gives operators “additional FTTx options… basically, we are adding additional options between the cabinet and FTTH. You can do fibre-to-the-telephone-pole, for example, and we have customers who are looking at G.fast to deploy small nodes in manholes, in sewers even, so you can do fibre-to-the-manhole, fibre-to-the-wall, fibre-to-the-front-door. We need to be careful – we talked about 1Gbps, but you are really looking at very short distances.” STAYING OUT OF HOMES: But that’s OK, Vanhastel adds, because it solves an actual problem that operators have today: entering the home. “Every operator has problem cases, so it’s a perfect fit for that,” he says. In fact, the main driver for G.fast may not be extending the life of last-mile copper, but simply extending the “fibre experience” into areas where it is difficult or too costly to roll out the real thing. “One of the main drivers for this is actually operators who want to avoid going into the home. They are deploying FTTH, but entering the home is a really very time-consuming and expensive part of the rollout… each time is a truck roll. It gets very expensive, very quickly,” says Vanhastel. “You can see how attractive it would be if you had a technology that worked over the telephone line… something that could deliver very high speeds, fibre-like speeds, but where you don’t actually have to bring the fibre into the home.” Avoiding the inside of a home avoids many complications, and not just technical or commercial ones. “We have a customer in the Middle East where male engineers are not allowed to enter the home if the husband is not at home, so you have to make a new appointment.” “We have a customer in Europe, and 30% of their subscribers who sign up for FTTH change their minds when the engineer arrives and they suddenly realise that he is going to be drilling holes in the wall and installing new fibre,” Vanhastel says. “Another example is from the Asia Pacific, where FTTH is being deployed: building owners
have realised that there are government targets, they know that the operators have to cover their building, so they have started to charge money to enter the building.” G.fast allows operators to avoid these situations, which, according to Vanhastel, can be so time-consuming and expensive that in some cases they can jeopardise the broader business case. “These operators are looking for a solution that allows them to stay out of the home. So the idea is to do fibre-to-the-front-door, fibre-to-the-curb, or fibre-to-the-front-yard, or fibre-to-the-telephone-pole,” he said. “You terminate the fibre there, and over the last few tens of metres you could use VDSL2 today – because it is quite urgent, so operators are doing it with VDSL2 – but G.fast is the perfect evolution path for that sort of deployment model.” RIPE FOR ASIA: One region where G.fast might find a natural fit is Asia, where population density levels mean lots of housing packed together, says Informa Telecoms and Media senior analyst Tony Brown. “Firstly, even in the region’s leading FTTH markets such as South Korea, Japan and Hong Kong, there is still potential for G.fast to play a role in helping operators achieve ubiquitous high-speed broadband coverage. For example, in South Korea and Japan a substantial portion of fibre subscribers are still receiving fibre-to-the-building services with the last mile served by VDSL – probably around 40% – rather than full FTTH services,” says Brown. “This means that operators are faced with the choice of either going in and connecting those subscriber residences – most of which are in multi-dwelling units – with FTTH or
waiting a while and, as long as the copper is in good enough condition, using G.fast.” In fact, any fibre rollout that might involve older MDU buildings could benefit from G.fast, including new fibre projects such as Australia’s National Broadband Network, he added. Other markets that might find G.fast an intriguing future prospect include Malaysia, Indonesia and Thailand, where existing broadband services are still limited and operators continue to look at options for faster services. “Telekom Malaysia’s High-Speed Broadband network also uses VDSL for much of its last-mile connectivity and currently offers speeds of just 5Mbps, 10Mbps and 20Mbps – making G.fast a good option. Deploying G.fast would allow Telekom Malaysia to substantially increase those speeds – probably to well in excess of 100Mbps – at a fraction of the cost of connecting subscriber homes to full FTTH,” Brown notes. “Meanwhile, operators such as True Online in Thailand and Telkom Indonesia also have good reason to look long and hard at deploying G.fast as they try to expand ultra-fast broadband services in their countries. Both operators are already installing FTTH in new-build apartment buildings – as are operators in other countries such as China – but brownfield deployments remain extremely difficult because of the time and cost involved.” On the other hand, Brown points out that advanced FTTH
deployments, such as the networks in Singapore, New Zealand and Taiwan, will rely much less on G.fast. “The NBNs being deployed in Singapore and New Zealand are now almost certainly far too advanced in their deployments to consider G.fast, except in a small minority of homes where FTTH is problematic to deploy,” he said. “Similarly, Taiwan’s Chunghwa Telecom is also well down the track – regulatory problems notwithstanding – of deploying near-nationwide FTTH, so would probably not consider G.fast deployment.” BEYOND TOPLINE SPEED: All up, there’s a lot going for G.fast beyond its impressive theoretical speed – which in itself is a bit misleading, since the technology actually delivers at best a gigabit of aggregate bandwidth. But under the right circumstances, it could excel – and make life just a little easier for operators. The fact that it supports self-installation by users means operators can achieve “first time right” deployment, says the ITU’s van Der Putten. “That means, when new users come along, the operator won’t have to dispatch technicians either to the equipment or inside the premises to assist the end users with the installation.” Then there is the ability for operators to implement their own service plans, including both symmetrical and non-symmetrical services. At the end of the day, it might not be G.fast’s speed that ends up being its biggest contribution to the world of networking, but its features. Capabilities like self-installation and flexible service planning would certainly come in very handy if they manage to trickle down to something like VDSL2.
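For readers who want to play with the numbers, the distance/speed brackets quoted in this article can be collected into a small lookup. A minimal sketch in Python – the figures are the ones cited above (the draft G.fast targets and typical VDSL2 figures), while the `quoted_rate` helper is purely illustrative and not drawn from any standard:

```python
# Aggregate bit-rate targets (Mbps) at given loop lengths (metres),
# as quoted in this article: draft G.fast targets and typical VDSL2.
GFAST_TARGETS = {100: 500, 200: 200, 250: 150}
VDSL2_TYPICAL = {400: 40, 1000: 20}

def quoted_rate(targets, distance_m):
    """Return the quoted rate for the nearest bracket at or beyond
    distance_m, or 0 if the loop is longer than any quoted bracket."""
    eligible = [rate for d, rate in targets.items() if distance_m <= d]
    return max(eligible) if eligible else 0

# A 150m loop falls inside the 200m bracket: 200Mbps aggregate.
print(quoted_rate(GFAST_TARGETS, 150))   # 200
# Beyond 250m, G.fast has no quoted advantage at all.
print(quoted_rate(GFAST_TARGETS, 300))   # 0
```

On these figures alone, G.fast only outruns the VDSL2 numbers at loops of roughly 250 metres or less – which is exactly the limitation Vanhastel describes.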
The curious tale of Sirion’s spectrum Australian start-up Sirion Global is planning a low-cost M2M satellite service after obtaining the rights to 30MHz of prime S-band spectrum – spectrum that was previously aimed at keeping the ICO MEO service in the air. Geoff Long reports.
When ICO Global Communications was founded in the UK in 1995, low-earth orbit and medium-earth orbit satellite systems were all the rage. It was a time when the likes of Iridium and Globalstar were still spruiking their global phone systems – just before they went spectacularly bust. ICO followed them into bankruptcy, but its system was arguably more spectacular in its failure than the other LEO and MEO systems – it was still being played out in the courts in a case involving Boeing last year. And now the story looks to be taking its most interesting twist yet: the spectrum that was granted to ICO Global all those years ago is now the key asset in a new Australian plan to create a low-cost M2M satellite network. The company behind the plan is Gold Coast-based startup Sirion Global, which recently announced that it had obtained the rights to 30MHz of S-band spectrum for a new satellite service focussing on low-cost data services in the machine-to-machine and short messaging sectors. The company has also brought on board high-profile satellite industry figure Peter Jackson as an advisor and early-stage investor. Jackson is the former CEO of Hong Kong-headquartered satellite operator AsiaSat. Sirion Global was founded by satellite industry veteran Keith Goetsch, who is getting quite used to long, drawn-out battles over satellite spectrum. Goetsch was also one of the founders of KaComm Communications, an Australian satellite venture with financial backing from satellite manufacturer Loral.
The company’s plans for Ka-band broadband satellite services to remote and regional Australia were left stranded when government-owned NBN Co decided to launch its own Ka-band satellites as part of the national broadband network. However, the company also attracted a fair amount of media attention earlier this year
when it was revealed that NBN Co would have to negotiate with KaComm in order to get its own satellite slots officially approved. While KaComm is still being reworked in the background – it is currently believed to be working with Indonesian partners on a new satellite venture – Goetsch is more interested in talking up the prospects of Sirion Global. The company has actually been around for a number of years, and Goetsch originally made the filing for S-band spectrum back in 2004. With the failure of the ICO project, its spectrum was cancelled by the ITU in early 2012, with Goetsch’s plans next in line – although that oversimplifies the series of events that allowed Sirion to grab what is considered a major chunk of spectrum. As well as its court case with Boeing, ICO was also involved in a long-running legal dispute with British regulator Ofcom. Ofcom had wanted to cancel ICO’s satellite filings with the ITU, but the legal challenges ran from 2009 to 2012. During this time Goetsch and Sirion were next in line to use the S-band spectrum but were unable to officially access it due to the court proceedings. The company was also able to satisfy specific
“bringing into use” requirements by previously purchasing capacity on an early ICO satellite. Boeing also eventually won its legal case against ICO – it had been accused of breaching a contract to build a satellite communications network. The company’s history also involves a bankruptcy case in 1999 before it re-emerged as the “New ICO”, while its very first satellite is notable for being lost due to problems with the Sea Launch launch service. Goetsch is not concerned about the ICO legacy, however – it’s enough that its spectrum will be listed in the ITU Master International Frequency Register under Sirion. That spectrum covers a 1980-2000MHz uplink and a 2170-2180MHz downlink. Sirion will use the spectrum to develop a two-way global non-geostationary orbit satellite system, utilising low-cost terminals that will be designed to work on both satellite and cellular networks. Goetsch has been working with a company in the US on the design of the terminals and will also now begin work on satellite system design following the spectrum confirmation. “We’ve been working under the radar for some time now, but I think our plans and spectrum rights will surprise a few people out there,” he says, adding that the Australian regulator, the Australian Communications and Media Authority, had been supportive of the company’s filing. Sirion plans to focus on the market for the remote tracking, monitoring and controlling of fixed and mobile assets located in areas underserved by traditional satellite and terrestrial networks. Sectors targeted include the transportation, communications, energy/utilities, livestock/food supply and production, environment, natural resources, military, infrastructure, and emergency service sectors. It’s also a market that is
currently served by a number of existing players, including pioneers Iridium, Globalstar and Orbcomm in the US. Goetsch's initial work on the project stemmed from a consultancy he did for the Australian government to look at tracking livestock. He originally said the Australian livestock sector would be a significant market for the company, but the system's uses are now envisioned to be much wider. The next stage of the company's plans will be to secure financing and to look for global partners, although Goetsch says that he intends for the company to remain Australian-based.
Former AsiaSat CEO Jackson says he’s impressed with Sirion’s efforts in obtaining the spectrum, which was critical to the project’s development and success. “This clears the way for implementation of our business plan, including most significantly the final design, construction and development of the Sirion system leveraging this extremely large and valuable spectrum asset,” says Jackson. Jackson also notes that he will help structure the next capital raise to fund both infrastructure development and other critical activities for the company. The system is expected to comprise 10 operational satellites operating in two orbital planes, with two spare satellites. The ground segment will include earth station gateways and data processing and satellite control centres. According to Goetsch, the final system is expected to be able to monitor over 5 billion assets. He says the key will be to design the system to stimulate mass-market customer uptake and lower prices. “Sirion is delighted to have successfully brought into use such a large amount of global priority MSS spectrum,” says Goetsch. “This unique and non-replicable spectrum is far more substantial than the global spectrum allotments used by other NGSO M2M operators in any frequency band. With our spectrum now secured, we plan to begin final design and construction of the satellite and ground segments.” Despite the recent success, the story wouldn’t be complete without one final twist. In the time between Goetsch filing his initial application for spectrum and now, the Australian regulator has been proposing changes to a number of frequency bands. In particular, it is reviewing the 2.5GHz band and has discussed the possibility of re-allocating the 1980-2010MHz and 2170-2200MHz Mobile Satellite Service spectrum for access by television outside broadcast and electronic news gathering services. As Sirion wrote in a submission to the ACMA, the proposal could require TOB licensees and an MSS operator like Sirion to self-coordinate usage. “Further technical or spectrum constraints on Sirion to accommodate TOB services within the limited 1980-2000MHz and 2170-2180MHz spectrum bands could be enough to make the Sirion service unviable in Australia,” the company suggested. It concluded that it would be a shame if the country in which the global M2M service was originally conceived and developed could not participate fully in its benefits – but given the history to date, it’s also possible there are more twists in the tale to come.
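As a quick sanity check on the figures quoted in the article, the two bands listed for Sirion do indeed add up to the headline allocation. An illustrative calculation only, using just the numbers reported above:

```python
# Sirion's S-band allocation as reported: uplink 1980-2000MHz,
# downlink 2170-2180MHz.
BANDS_MHZ = {"uplink": (1980, 2000), "downlink": (2170, 2180)}

# Sum the width of each band: 20MHz up + 10MHz down.
total = sum(hi - lo for lo, hi in BANDS_MHZ.values())
print(total)  # 30 -> matches the "30MHz of prime S-band spectrum" headline
```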
Big pipe dreaming The case for a new submarine cable linking Western Australia to Southeast Asia is stronger than it’s ever been. New cable proposals linking Perth to Singapore have come and gone before, so can one of the current crop of ASC International, SubPartners or Trident make it past the planning stage? Geoff Long reports.
About every six months or so, someone will suggest a new submarine cable linking Perth to Southeast Asia. At least, that’s what it seems like to the locals. Now there are at least three firm submarine cable proposals on the table, and the current thinking is that just one of them is likely to get up. Which one, however, is still open to debate. As has been seen across the other side of the country and over the ditch in New Zealand recently, these things are never easy to get off the ground. In the case of New Zealand’s failed Pacific Fibre, even pre-commitment from multiple carriers was no guarantee of success. Of course, Western Australia has been fighting for more international capacity and a move towards a more diversified and knowledge-based economy for more than a decade. A report by the Western Australian ICT Industry Development Forum back in 2006 noted the emergence of four proposed submarine cable system plans that came and went. It lamented the lack of connectivity options and the high costs of links – in fact, the situation back then seems little different to today. “Recent attempts to remedy poor Western Australian international connectivity have failed, and current proposals may suffer a similar fate without government
support,” the report from 2006 stated. “The rising importance of the China and India export markets and the proximity of the Singapore Hub emphasises the importance of Western Australia as a ‘Big Pipe’ exit point for Australia. Barriers to Western Australia’s connectivity are essentially a lack
of affordable international connections due to a lack of competition,” the industry forum wrote. Fast forward to today, and there is still broad agreement on the need for more capacity from the west of Australia into Asia. The west coast of Australia is still only served by the SEA-ME-WE-3 cable, which is now running almost at capacity, and another link directly to the commercial hub of Singapore appeals to carriers and corporates alike. The big question is: which of the three current proposals is likely to get the financial backing and carrier commitment that will give it the all-important first-mover advantage? THE PLAYERS: NextGen Group’s ASC International arm has had its proposal for a Perth to Singapore cable around the longest, but it has arguably made the least progress in the past year. The company has new owners following the partial buyout of Leighton’s telecom assets by the Ontario Teachers’ Pension Plan. While many had tipped that the ASC proposal might be shelved, the company maintains that it will press ahead with the project. Then there is SubPartners, which is planning a similar route to ASC from Perth to Singapore via Indonesia. It is being driven
by Bevan Slattery and Ted Pretty, both experienced telecom veterans with a string of prior successes. Slattery was a co-founder of Pipe Networks and knows what it takes to get submarine cable projects up against the odds. The project also has pre-commitment from Telstra. And finally there is the newest player, Trident, a more ambitious project that plans to link not just Perth to Singapore, but also to add terrestrial links to the mix via the resource-rich Northwest region of Western Australia. Trident is taking the most unconventional route of the three players, but it has also announced initial Chinese funding for the project and has already formed some significant commercial partnerships, including with Fujitsu. ASCI: ASC International is part of NextGen Group, which is now 30% owned by Leighton Holdings and 70% by the Ontario Teachers’ Pension Plan of Canada. Despite the ownership change earlier this year, both ASC International CEO Errol Shaw and NextGen Group MD Peter McGrath say the planned 6.4Tbps, 4,800km subsea link connecting Perth to Singapore via the Sunda Strait in Indonesia is still on course. Leighton Group first announced the Australia-Singapore cable project back in 2011; however, it has been held back by difficulties in gaining the necessary approvals in Indonesia. McGrath, now head of NextGen Group – which as well as ASC includes NextGen Networks and the Metronode data centre business – says the key attraction for many international carriers and multinationals is that it will provide end-to-end connectivity from Singapore through to Sydney. He adds that sister company NextGen and ASC have negotiated an “arm’s-length” 25-year agreement that will provide the domestic bandwidth between Perth and Sydney. According to McGrath, ASCI’s launch could lead to a significant drop in price for connections between Australia and Singapore, as well as dramatically increasing the capacity available to multinational customers. “We think this is going to redefine the market. Singapore for us will be on-net, so it’s a radical shakeup in terms of international capacity and the way we connect into Asia. Singapore will just be like any other Australian capital in terms of pricing,” McGrath predicts. “And it’s not an accident we’re targeting Singapore – it has dozens of cables to the rest of Asia.” Meanwhile, ASC boss Shaw says that aside from the remaining Indonesian permits – which
“Singapore will just be like any other Australian capital in terms of pricing” had been delayed due to a revised ministerial determination – 95% of the key permits and licences have been negotiated, with permits and a carrier licence for Singapore already obtained. The Australian landing point has also been finalised. SUBPARTNERS: SubPartners' plans to build a Perth-Singapore link were first revealed in CommsDay in August last year, but in just over twelve months the new company has made rapid progress. In particular, the company has added Carlos Trujillo – the ex-Telstra exec who oversaw its Endeavour and Asia-America Gateway cable investments – as its commercial director. It scored a key strategic win in March this year by signing a non-binding memorandum of understanding with Telstra for capacity on the planned Perth-Singapore APX-West cable. And more recently it has signed a supply agreement with experienced cable builder TE SubCom. One of the key reasons it was able to pull in Telstra as a partner is also one of the key differentiators it will use to compete against the other schemes: its ownership
model. Rather than leasing a fixed amount of capacity at a fixed rate, customers will buy a portion of one of the fibre pairs themselves, as per the Telstra MOU. “For a number of the large players, the model that we’re providing them with is effective… infrastructure ownership on a cable build economic model rather than a capacity purchasing economic model,” Slattery says. “So for us to get someone like Telstra, and I dare say other providers, across the line, what we’re saying here is ‘there’s no need for you to build, we can… give you effective cable ownership with a significantly smaller portion of capital commitment’.” SubPartners is also in talks with a range of other prospective customers, which Slattery segments into domestic and international telcos, research and education players and major cloud and content providers. He’s bullish about getting the support he needs to go ahead with both APX-West and its sister cable, APX-East, which is planned to link Sydney and the US. TRIDENT: The newest player of the three competing schemes is Trident Subsea Cable, which was first revealed by CommsDay in July this year. The newly-formed company is headed up by former Vocus executive director of strategy Mark de Kock along with a team that has extensive carrier experience with the likes of Telstra and Verizon. While Trident is yet to name any carrier partners, it has announced an initial US$320 million financing deal with the Beijing Construction and Engineering Group, which has support from the China Development Bank. Further equity will need to be raised for the US$400 million project, with de Kock planning a roadshow to tap into Australian institutional investors for the remaining 20% of the project funding. It has also signed Fujitsu as a
Swissotel, Sydney: Monday 18 and Tuesday 19 November 2013
OFFICIAL PROGRAM: A Communications Day/Communications Alliance workshop addressing national broadband policy implementation challenges and priorities for industry, government and other stakeholders Day 1 – Monday 18 November
Session 1 Key Objectives and Challenges
9.00: Opening Remarks – Mr Michael Lee, Chair, Communications Alliance 9.05: Keynote Address – The Hon. Malcolm Turnbull, MP, Minister for Communications (invited) 9.45 Keynote Address – Executive Chair of NBN Co Dr Ziggy Switkowski 10.15: Industry Overview - John Stanton, CEO, Communications Alliance
10.45: Break Session 2 11.10: Service Provider Panel
● Bill Morrow, CEO, Vodafone Australia ● Stuart Lee, Group Managing Director, Telstra Wholesale ● Jules Rumsey, CEO, CloudPlus ● Moderated by: Grahame Lynch 12.40: Keynote: Federico Guillen: President, Alcatel-Lucent Fixed Networks 13.00 Lunch Break Session 3 14.00: The Roll-out Challenge
● Phil Smith - Opticomm ● Gary McLaren – CTO, NBN Co ● Paul Brooks – Layer 10 ● Graham Mitchell – CEO, Crown Fibre (invited) ● Moderated by: Matt Healy – National Executive, Industry & Policy, Macquarie Telecom 15.20 Keynote:
● Mike Galvin – Managing Director, Network Investment, BT Openreach
15.45: Break Session 4 16.00: Commercial Constructs:
13.00: Lunch Break
Session 3
14.00: Finance & Co-Funding Issues:
● Informa Asia-Pacific analyst Tony Brown
● John de Ridder – telecommunications economist
● Bronwyn Howell – NZ communications economist
● David Kennedy – Ovum
● Mark McDonnell – BBY
● RSP Representative – TBC
● Infrastructure Banking expert – TBD
● Moderated by: Phil Dobbie
● Moderated by: Andrew Barlow, PricewaterhouseCoopers
17.20: Wrap-Up of Day One proceedings
Evening: Cocktail Reception
Day 2 – Tuesday 19 November Session 1 9.00: Access Technology Issues
15.00: Over the Horizon - The Longer Term Evolution of the NBN: Bob James 15.20: Break Session 4 15.40: Closing Roundtable:
● Greg Tilton – Chairman, Dgit ● Sean O’Halloran – MD, Alcatel-Lucent Australia ● Kevin Bloch – CTO, Cisco Australia ● Dr John Cioffi – CEO ASSIA ● Moderated by: Petroc Wilton – Editor, Communications Day
● David Epstein – VP Corporate & Regulatory, Optus ● James Spenceley – CEO, Vocus ● Jane van Beelen – Chief Regulatory Officer, Telstra ● Steve Dalby - iiNet Chief Regulatory Officer – TBC ● Moderated by: Grahame Lynch
10.30: Shadow Minister for Communications (TBC)
11.00: Break Session 2 11.40: Optimising networks to meet regional challenges
● Tony Malligeorgos – Ericsson Australia ● Jason Ashton – CEO, Big Air ● Paul Sheridan (Optus Satellite) and Matt Dawson (NBN Co Satellite) ● Moderated by: TBD
BOOK YOUR SPOT ONLINE NOW AT
strategic partner for project management and delivery. Fujitsu was a contractor on the existing SEA-ME-WE 3 cable linking Perth to Singapore and also has major plans to drive its cloud networking strategy via the Trident cable, which is expected to run into its Perth data centre. Unlike ASC International and SubPartners, Trident plans to have a landing point for its network at Onslow and Karratha in the resources-rich Northwest of the state. This is intended to give it a much wider customer base to tap into. The other key differentiator is a link with existing cable network Matrix Networks, which already has connectivity between Jakarta and Singapore. According to de Kock, the tie-up with Matrix Networks will save Trident both time and capital and give it a lead against potential competing cables. Matrix has been around since 2008 and has four fibre pairs between Jakarta and Singapore, with Trident to take two fibre pairs. He estimates it will save $70 million worth of capex and about nine months' worth of build time. “But more importantly,” says de Kock, “Matrix already has all of the Indonesian permits and approvals in place and this is the hurdle that some of the other guys trying to build a cable system are facing.” The link into the mining and energy areas of the Northwest has also been favourably received. De Kock said that the likes of the Pilbara Development Commission, the State Commerce Department and the Department of Regional Development have all been supportive of the project in
terms of helping with domestic permits and providing introductions to potential customers. Fujitsu Australia national oil and gas director and acting state manager Chris de Josselin says that resources companies see the
“The difference with the Trident project is that it is going to connect the Pilbara, so what we're hearing from the mining and oil and gas community is that it is going to be of more interest because it’s going to pick up that region” domestic links as important. “Every six months in the WA market people are talking about doing that Perth to Singapore link, but the difference with the Trident project is that it is going to connect the Pilbara, so what we’re hearing from the mining and oil and gas community is
that it is going to be of more interest because it’s going to pick up that region,” he said. THE VERDICT: Most observers agree that only one of the three potential Perth to Singapore cables is likely to get built. ASC International's strengths are its Leighton connections and the ability to provide Singapore to Sydney connectivity via NextGen Networks. SubPartners has the benefit of a team that knows how to get submarine cables built and initial backing from Telstra. Trident, on the other hand, has a head start with the Indonesia to Singapore leg and a new approach that taps into the potential of the resources sector. Then again, Western Australia has been in this position many times in the past as it has sought to become Australia’s “front door” to Asia. Perhaps in six months’ time there will be another new submarine cable being presented.
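Slattery’s ownership-model pitch boils down to simple unit economics: part-owning a fibre pair at build-cost rates can work out far cheaper per unit of capacity than leasing waves at market prices. A minimal sketch, with every number below invented for illustration (none comes from the ASC, SubPartners or Trident proposals):

```python
# Illustrative only: all figures are hypothetical, not from any cable proposal.
def cost_per_gbps_ownership(build_cost, fibre_pairs, share_of_pair,
                            pair_capacity_gbps):
    """Owner buys a fraction of one fibre pair at build-cost economics."""
    cost = build_cost / fibre_pairs * share_of_pair
    capacity = pair_capacity_gbps * share_of_pair
    return cost / capacity

def cost_per_gbps_lease(price_per_10g_per_year, years):
    """Traditional model: lease 10G waves at an annual market rate."""
    return price_per_10g_per_year * years / 10

# A carrier taking half a pair on a hypothetical 4-pair, $400M build
# where each pair lights 1,600 Gbps:
own = cost_per_gbps_ownership(400e6, 4, 0.5, 1600)   # 62,500 per Gbps
# versus leasing 10G at, say, $200k/year over a 15-year cable life:
lease = cost_per_gbps_lease(200e3, 15)               # 300,000 per Gbps
```

The gap between the two figures is the “cable build economic model” being sold; real outcomes would of course depend on utilisation, operating costs and how wave prices move over the cable’s life.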
TELECOMMUNICATIONS EXPERTS WITH BROAD NETWORKS
Technical Professionals Sales and Marketing Executive Search
Contact us on 1300 452 986
Or visit www.launchrecruitment.com.au
Customer experience: Australian telcos search for that competitive edge Several large Australian telcos are overhauling their customer experience strategy as they seek the next competitive edge. What are they doing, and how do their efforts stack up alongside those of their international peers? David Edwards reports
Telecommunications operators have copped a lot of stick over the years for their lack of human touch. And there’s no denying that millions of customers from all over the globe – this reporter included – have at one stage or another felt frustrated, helpless and at the complete mercy of their telco. Whether it’s the agony of dealing with an automated phone system – as opposed to a real-life call centre agent – or opening a phone bill only to experience unexpected ‘bill shock’, the telco customer experience in Australia has, historically speaking, frequently been a negative one. And in early 2011, following a particularly rough January-March quarter in which the Telecommunications Industry Ombudsman recorded 60,000 complaints, the Australian Communications Consumer Action Network branded the telecoms sector as the country’s “most hated industry.” But Australian telecommunications companies are finally cottoning on to the potential benefits that flow from providing a good customer experience, and customer care competition is hotting up. Telstra has placed a huge focus on improving customer service over the past few years. Optus has just rebranded its entire organisation to “connect with people in a way that brings service back to the very core of what we do.” iiNet, perhaps more than the rest of the industry, has proudly kept its own customer service credentials front and centre as a key brand strength for many years; it won the Commitment to Customer Service trophy at the 2013 telco industry ACCOMs awards. Telstra head of customer service Peter Jamieson tells CommsDay that over the past 12-18 months, the firm has worked hard to improve the “advocacy” of its customers through the use of Net Promoter Score. The metric is now employed by each of Australia’s network operators, although iiNet claims to have the only positive score in the industry at 56.7%. “We’ve put a big focus on eliminating detractors and working on how we create more promoters at Telstra through the implementation of a new way of operating… and it enables us to capture real-time feedback from customers and do something with that to our frontline team,” says Jamieson. “It also creates opportunities for us to learn more broadly about how to redesign processes and systems so they’re more focused on customer outcomes.” Informa principal analyst Julio Püschel tells CommsDay that while the NPS is a good metric for measuring customer experience, it must be linked to other key performance indicators in order to understand what factors are driving the score. In the call centre domain, for example, he says that operators in both developed and emerging markets are facing the challenge of learning to treat call centres as an investment – rather than a cost – to
generate more loyal customers. “The way operators are evolving their call centre is to integrate the different KPIs coming from the different parts of the business (web self-care, retail stores, networks, etc). In this way, they will be able to have a holistic view of the client and understand, for example, whether the customer is a promoter or a detractor and how many call drops the client had over the last week period,” Püschel says. Alcatel-Lucent solutions marketing director Greg Owens says that telcos can boost their help desk efficiency by providing customer service representatives with access to the relevant information they need in a “single integrated interface.” “Some CSRs need to access data from dozens of different systems, depending on the issue that needs to be resolved – for example, one system for billing queries, another for device-related questions. Having a single, unified view of every customer touch point will result in shorter calls, fewer transfers/escalations, reduced costs (for the service provider) and improved customer satisfaction,” he says. Owens adds that service providers can also provide consumers with self-service applications – a trend he says started with younger customers who would “rather do anything than call the help desk” – and that by cutting down calls to help desks, telcos are able to reduce overall costs. Owens says Alcatel-Lucent’s own research has shown that “every single point increase in self-
Are you a network operator looking for proven IP communications solutions? Symbio Networks is Australia’s largest supplier of VoIP managed services and wholesale carriage. Wholesale Carriage
VoIP Managed Services Hosted SIP End-Point Services
Call Termination for Voice & Fax
Hosted SIP Trunks
Toll Free & Free phone
ISDN Primary Rate Replacement
DIDs & Number Portability
We give you the ability to scale and evolve to make the most of this fast growing market. Why Symbio?
SIZE: Australia’s 5th largest voice interconnected network
CAPACITY: Carries over 3 billion minutes of billed voice every year
INNOVATION: Our 10 years of in-house R&D experience
Find out more Email: firstname.lastname@example.org Web: www.symbionetworks.com
service usage yielded a $250,000 value every month.” However, Amdocs APAC sales and business leader in customer management Erwann Thomassain tells CommsDay that despite the proliferation of devices worldwide, mobile devices are not yet being effectively used by telcos as a self-service tool. He quotes a recent Coleman Parkes consumer survey which found that 78% of mobile self-service app users found it “hard to use and difficult to find what they needed.” “[It also found that] consumers would prefer to use mobile self-service applications over other channels. 67% of consumers would prefer to use mobile self-service than call the contact centre, while 63% would prefer to use mobile self-service than use web self-service from a computer,” he adds. In general, for Australia, there’s clearly still some way to go. A recent KPMG report on prepaid customer experience ranked Australia 18th of 25 countries. It indicated that Australian and US telcos are less likely than their emerging market counterparts to offer customers “advanced options” such as bank applications and micro-payment systems. Owens says that when it comes to micro-payments, such advancements are “much more pronounced in countries where formal or traditional financial establishments are less available.” “It’s not really a matter of falling behind, but about focusing on what’s important and has value for users in a certain market.” “One example in Australia, as in Canada, the US and Western Europe, is the emergence of shared data plans, providing customers with greater flexibility and transparency in how they manage their data use across devices or users,” he says. Telstra has introduced shared data plans this year to give customers the ability to spread data allowance across eligible devices on the same single bill. Jamieson
adds that helping customers understand their data usage is also critical to maintaining a quality customer experience. “[So we’ve] introduced data alerts on our mobile network to advise customers when they’re at 25%, 50%, 75% and 100% usage of their data allowance. And we’ll continue to look at other options to help customers manage their data usage… it’s clearly one of the areas where customers want help,” he says. The Australian Communications and Media Authority’s ‘Reconnecting the Customer’ inquiry – launched in 2010 following a groundswell of customer service and complaint-handling
67% of consumers would prefer to use mobile self-service than call the contact centre issues – led to the creation of an industry-wide Telecommunications Consumer Protection Code. Officially registered by the ACMA in September 2012, telcos must now meet certain standards in areas such as advertising, provision of information, publication of critical information summaries, unit pricing disclosure, credit management processes, complaint handling and bill shock. However, Jamieson says that the industry “clearly needs more than the regulatory stipulations in the TCP Code.” “What’s much more important to us is responding to customers and that goes well beyond the codes of practice.” “Yes we meet all of our regulatory requirements, always, but making sure we respond to customers is far more important… we’ll provide them with many more alerts at different levels than the code because that’s what our customers say they need,” he says. But while cashed-up companies like Telstra can afford the costs required to comply with the TCP Code, it’s a different story
for the smaller players. Telcoinabox CEO Damian Kay believes that overly prescriptive regulation runs counter to effective regulation, with some of the conditions placed on small service providers under the TCP Code “just costs for the sake of costs.” He adds that the biggest cost for small service providers comes in the form of setting up mandatory notification systems. “You’ve got to pay to have the notifications around usage or whatever it is… [50%, 85%, etc] for the customer – and to do that in your systems, that can cost up to $20,000-30,000,” he says. Nonetheless, Amdocs’ Thomassain says that Australian service providers in general are now “well aware” of the issues around customer experience, with many having taken action over the past year to improve things. For example, in July Optus released an update to its OptusNow app to collect information from mobile users regarding network coverage and signal strength to determine which areas need upgrades, while Vodafone recently launched its ‘Red’ mobile plans, which offer reduced global roaming fees, increased data and a guarantee that Australian-based call centre staff will answer support calls. And while it will likely take time for this customer-centric culture to kick in, Thomassain believes that we will soon start to see a real improvement in Australia’s telecommunications customer experience as Australian service providers focus on differentiating the customer experience. Still, for many local operators, there could be an element of playing catch-up in global terms. “As we look internationally, most of the US telco operators have had customer experience programs running for quite a while, and have had a head start over Australia of at least five years,” he says.
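The Net Promoter Score the operators cite is a simple survey statistic: respondents rate “would you recommend us?” from 0 to 10, and the score is the percentage of promoters (9-10) minus the percentage of detractors (0-6). A minimal sketch, with an invented sample of ratings:

```python
def net_promoter_score(ratings):
    """NPS from 0-10 'would you recommend us?' ratings:
    % promoters (9-10) minus % detractors (0-6); passives (7-8) count
    toward the total but neither add nor subtract."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / n

# 6 promoters, 2 passives, 2 detractors out of 10 respondents:
sample = [10, 9, 9, 10, 9, 9, 8, 7, 3, 6]
print(net_promoter_score(sample))  # 40.0
```

Because the score nets detractors off against promoters, it can swing sharply with small shifts at either end of the scale, which is why analysts like Püschel caution against reading it in isolation from other KPIs.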
In search of the next generation internet A public/private initiative backed by the US National Science Foundation is approaching the broadband age from the top down. Instead of focusing on building super fast networks, US Ignite wants to drive the development of next generation applications. Tony Chan reports.
US Ignite founder and CTO Dr Glenn Ricart is quick to point out that his new venture, which was formed as a result of a meeting organised by the National Science Foundation at the White House in 2011, is not exactly like other tech start-ups. US Ignite is, he says, a public-private not-for-profit partnership. According to official company literature, its mission is to bring together community, industry and academia to “knit together cities and towns across the country with access to high-speed networks, creating a critical mass of individuals and organizations that can develop and experiment with next-generation applications that can’t run on today’s public Internet.” Funding for US Ignite comes through sponsorships from its partners, including big corporate names like Cisco, Ciena, Juniper, Extreme Networks, HP, NEC, and many others, as well as major US carriers like Verizon, AT&T, and Comcast. US Ignite’s ecosystem also extends to academia with partners such as Internet2, other non-profits like the Mozilla Foundation, and to US municipalities – San Francisco, Santa
Monica, Philadelphia, Kansas City, to name just a few. A lot of what it does involves coordinating access to cutting edge technology platforms for those working on developing next generation internet applications, facilitated by connections at the NSF and its partners. Ricart does draw a salary, but he says it’s basically enough to
Dr Glenn Ricart
keep him on airplanes going around the US and getting the message of US Ignite out. In other words, he is not in it for the money. What is undeniable is
that Ricart is passionate about US Ignite. So what gets someone who has spent more than 40 years in the industry and whose career includes the writing of the first batch of TCP/IP code and the setting up of one of the first internet exchange points (the Federal Internet Exchange, which eventually became MAE-East) excited? One area that Ricart is passionate about – and there are several – is the idea of the gigabit internet. But instead of looking to build a gigabit-enabled network, Ricart is turning the tables and starting with the fundamental question: what can you do with a gigabit of bandwidth? Behind US Ignite’s mission is the drive to push the development of gigabit level applications, demonstrate their benefit within gigabitenabled communities, then work to get other communities around the US onboard to adopt similar infrastructure strategies. 1 GBPS: So what can you do with a gigabit broadband connection? Ricart points to his left arm and moves his right hand up and down like a doctor examining a diabetic ulcer on the arm of an
elderly person. Except he is demonstrating a telemedicine application and his right hand is the camera the patient is using to show a remote doctor his ailment. “If you do this remotely over today’s video conferencing technologies, even high definition telepresence, there will always be a delay between the doctor telling the patient to stop, and when the patient actually stops,” he explains. “What we found is that if we get rid of the compression on the video, and you can do that with a gigabit connection, then the latency goes away.” It is a concept US Ignite calls high symmetric bandwidth, which “allows for things like uncompressed high definition video transmission… as it minimizes delays in things like video conferencing… particularly in areas like healthcare and education.” Another application Ricart describes involves a combination of software-as-a-service and remote desktops being explored for Kansas City libraries. The “software lending library” application basically allows remote users to access high-end software on a time-share basis, much like borrowing a library book. “The idea is that expensive software, like Photoshop or video editing software for example, might be too costly, or is beyond the performance capabilities of end-user computers, or simply only needed for a short period of time that doesn’t justify a purchase,” he said. “This application allows these users to access those software packages when they need to use it, and take advantage of features and capabilities that would otherwise be very costly.” And it is not only students and inner-city residents that the application is targeting. Entrepreneurs and researchers, who might also have budgetary constraints when it comes to high-end commercial software purchases, could also benefit from the model.
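Ricart’s uncompressed-video point can be sanity-checked with back-of-the-envelope arithmetic: raw bitrate is just pixels per frame, times bits per pixel, times frame rate. A rough sketch (it ignores chroma subsampling and framing overhead, which real systems would change):

```python
def uncompressed_bitrate_mbps(width, height, bits_per_pixel, fps):
    """Raw video bitrate in Mbit/s: every pixel of every frame on the wire."""
    return width * height * bits_per_pixel * fps / 1e6

# 720p at 30 frames/s, 24-bit colour: fits (just) inside a gigabit link
print(uncompressed_bitrate_mbps(1280, 720, 24, 30))   # 663.552
# 1080p at 60 frames/s needs roughly 3 Gbit/s uncompressed
print(uncompressed_bitrate_mbps(1920, 1080, 24, 60))  # 2985.984
```

The arithmetic shows why the argument is tied to gigabit-class access: below that, compression is unavoidable, and it is the encode/decode pipeline, not the network, that contributes much of the latency Ricart describes.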
“This project simply would not be possible without gigabit fibre connectivity,” say the developers of the application, pointing specifically to Google Fibre’s network. LOCAVORE: Gigabit level connection is not the only technical parameter behind US Ignite’s idea of the next generation internet. Advancements such as software defined networking, distributed cloud resources, and virtual networks are also key elements to creating a networking environment that can enable next generation applications. In this respect, Ricart is pretty clear on what the next generation internet should look like, how it should behave, and the services that it needs to support. In addition to just more bandwidth, another critical element for Ricart is latency, not just at the network level but also at the application level. “Today we think that more
“Today’s internet only works well because the utilisation is so low” bandwidth is better, and I think that’s going to change. The new version is going to be ‘more responsive is better’… and to get to more responsiveness, we need to decrease latency, decrease jitter, and a major step to be able to do those two things is to be responsive by having local facilities,” he explains. “I use the word locavore. That word is used in the food industry to mean eating foods grown near you, and I’m using it to mean consuming computer cycles and networking located near you.” Instead of massive data centres that centralise compute resources, he envisions a model where clouds will be distributed, with processing and content located much closer to end-users to improve the responsiveness of applications. “How soon will this happen? Well, interestingly enough, it’s already happened,” he says. “AOL has begun putting in what they
call micro data centres… to be closer to the end-user so they can be more responsive. And studies have shown that people buy more and that people will be able to stay with your service longer if you’re more responsive.” VIRTUALISATION: Another way to ensure application responsiveness is to make sure that each application is getting its share of bandwidth. Citing the multitude of connected devices in his home in Salt Lake City, Ricart points to the “tragedy of the commons,” because all those devices “are contending for the bandwidth on my internet connection.” “Today’s internet only works well because the utilisation is so low. If we had high utilisation, it wouldn’t work well at all,” he says. The situation gets worse when applications ‘cheat’ to get more bandwidth, which can degrade the quality of adjacent applications. “Things like Google Search will go and violate the rules of sharing of the road. It will go and grab more than the bandwidth that it’s entitled to. Voice over IP services will try to grab more than they’re entitled to. Things that think they are better and want to be more responsive than other applications will grab more than they’re entitled to,” says Ricart. “And there are entire companies, like Bideo and OpenClove, which are based on the proposition of making video work well even when they’re having to fight and claw for the bandwidth they need.” Here is where SDN and network virtualisation come in by mirroring what is happening within data centres, which now dedicate resources for each application. “They would go and have a virtual server per application. Instead of sharing this and putting them all on one server – we used to call that time sharing – they now go and they have a server per application, a virtual server per application,” says Ricart. “What
will happen on the network? I think it will also go virtual… instead of a virtual server per application, you have a virtual network per application. The network configuration is matched to your application. You allocate as many of these as you need to run your application dynamically, based on the load. It might not be the end customer that pays. It might be somebody who is bundling the service.” The ability to dynamically partition a network connection into clearly definable parts also enables different operating models, he points out. End-users could add highly reliable virtual networks to support critical applications like remote dialysis control. They could access bandwidth on demand for specific tasks such as a high-definition video consultation with their doctor. Service providers or governments might even want to partition out part of private Wi-Fi networks as a public resource – for example, to provide extra bandwidth to first responders during emergencies. “If you go driving down your street, is there any chance that you will not be within Wi-Fi range of at least one access point? No chance, right? Everybody’s home has one,” says Ricart. “But you may not be able to access it. What if there were a requirement for a second SSID which is used only for public safety purposes, which is bound securely, perhaps cryptographically, to an application that is provided as part of the community emergency response system that could attach to any Wi-Fi that it can see from any of the major providers in that neighbourhood, and therefore be able to do that access?” 60 APPS: Obviously, not all of
these network features are commercially available today. That is why a big part of US Ignite’s task is working with academia and research networks, as well as the research and development resources for its corporate partners, to serve as the foundation for interested application developers. “We have three goals at US Ignite. The first is to create 60 compelling, transformative applications for things you couldn’t do today, based on new technologies… software defined networking, local cloud computing, taking gigabit to the end-user, reducing latency, things that are going
US Ignite has identified 15 transformative applications that will change how Americans work, learn and play. to change the way the internet works today,” Ricart says. “Second, that we get 200 community test beds, 200 communities who are eager and willing to adopt these new technologies and their applications. And third, to coordinate best practices among these communities, among the industry partners, and to make sure that government, industry and communities are working together to make this goal happen.” According to US Ignite’s website, it has identified 15 “transformative applications that will change how Americans work, learn and play.” These include: “collaborative adaptive sensing of the atmosphere,” which aims to connect sensing radars to high capacity networks to improve hazardous weather warning and response; the self-explanatory “virtual reality training for surgeons”; and
“big blue button” – an application that gives students multiple HD camera angles, high-quality audio, and synchronised slides. On top of that, there are another 20 or so applications resulting from an affiliated NSF initiative dubbed the Mozilla Ignite Challenge. These include the “software lending library”; “simulation-as-a-service for advanced manufacturing”; several 3D-enabled applications, and “KinectHealth,” which leverages Microsoft’s Xbox Kinect gesture-based control system to “achieve fitness goals with peers and trainers from anywhere,” among many others. At the same time, Ricart notes that at least 28 communities have joined the US Ignite program. It is a bit too early to gauge the success of US Ignite. So far, a big portion of the applications submitted to both US Ignite and the Mozilla Ignite Challenge have been developed by researchers and academics – although there are some actual start-ups involved. Most proposals are still in the preliminary stage of development and none has proven to be a major commercial success at this moment. So while US Ignite might have all the ingredients for developing applications for the next generation internet, there is no guarantee that it will find the next generation Facebook or Twitter. On the other hand, US Ignite’s approach of “what can be done over a gigabit connection,” instead of “build it and they will come,” is a refreshing reminder that all the bandwidth in the world doesn’t help unless you find a way to use it. As Ricart blogged following the launch of US Ignite, “it’s the applications, stupid.”
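The “virtual network per application” idea Ricart describes can be caricatured in a few lines of code. This is a toy sketch, not any real SDN controller API: a controller partitions a link’s capacity into per-application slices so that applications get guaranteed bandwidth instead of contending for it.

```python
# Toy sketch of per-application bandwidth slicing; the class, method names
# and figures are all invented for illustration.
class LinkPartitioner:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.slices = {}          # app name -> reserved Mbit/s

    def allocate(self, app, mbps):
        """Reserve a guaranteed slice; refuse rather than oversubscribe."""
        if mbps > self.free():
            raise ValueError(f"only {self.free()} Mbit/s free")
        self.slices[app] = self.slices.get(app, 0) + mbps

    def release(self, app):
        self.slices.pop(app, None)

    def free(self):
        return self.capacity - sum(self.slices.values())

link = LinkPartitioner(1000)              # a 1 Gbit/s access link
link.allocate("dialysis-telemetry", 10)   # small but guaranteed
link.allocate("hd-video-consult", 700)    # bandwidth on demand
print(link.free())                        # 290 Mbit/s left for best effort
```

The design choice worth noting is admission control: the partitioner refuses an allocation it cannot honour, which is the opposite of today’s best-effort internet, where every application simply “grabs more than it’s entitled to” and quality degrades for everyone.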
The art of selling 4G spectrum in ANZ The outcome of the 4G spectrum auction in Australia earlier this year shocked onlookers, with a billion dollars’ worth of spectrum left on the shelf. Now, New Zealand’s government is trying to avoid the same result – but with LTE in its infancy, will it succeed? David Edwards reports.
Auctioning off 700MHz spectrum – once widely billed as ‘waterfront property’ for the next generation of LTE mobile services in the ANZ region – can be harder than it looks. Just ask former Australian comms minister Stephen Conroy. The previous Australian Labor government failed to sell one-third of the 700MHz digital dividend spectrum it put up for auction earlier this year, with total revenues for the combined 700MHz and 2.5GHz spectrum auction coming in at A$1.96 billion. In the 700MHz band, which was sold at the reserve price of A$1.36/MHz/pop, Telstra claimed 2x20MHz spectrum, with Optus taking home 2x10MHz. However, the remaining 2x15MHz – worth around A$1 billion – went unsold. Vodafone Australia did not place a bid, preferring to rely on its existing 2x20MHz spectrum holdings in the 1800MHz band for its 4G rollout. Some say that the result reflected a recent shift in focus amongst operators towards capacity, for which high-band spectrum is better-suited; the 2.5GHz lots on offer, albeit priced at a fraction of the 700MHz allocations, all sold. In any case, for Conroy, who once bragged to a US audience in 2012 that he had “unfettered legal power” over spectrum auctions – and could even instruct bidders to “wear red underpants on your head” should he so desire – the outcome was
particularly sobering. But now it’s New Zealand’s turn – and at first blush it appears that it has learned from its trans-Tasman rival. The NZ government, which spent some NZ$157 million clearing the band, has announced a reserve price of NZ$22 million for each of the nine lots of 2x5MHz spectrum in its own 700MHz digital dividend. That works out at around NZ$0.50/MHz/pop – a figure that, when compared to Australia’s A$1.36/MHz/pop (NZ$1.60), seems more likely to stimulate competition among bidders and ensure that all the spectrum is sold when it goes under the hammer on 29 October. That’s certainly the opinion of Ovum analyst Nicole McCormick, who tells CommsDay that she expects all three mobile network operators – Vodafone NZ, Telecom NZ and 2degrees – to emerge from the NZ auction with 2x15MHz spectrum. “The Australian NZ$1.60 per MHz per pop price for digital dividend spectrum was more than three times the NZ$0.50 per MHz per pop price for spectrum in New Zealand. I do not envisage there being any unsold,” she says. IDC New Zealand senior telecoms analyst Glen Saunders explains that while the NZ$22 million reserve price per 5MHz is in fact lower than what the market had been expecting, it provides an overall balanced approach for the medium-term use of the spectrum. “By that, I mean that the
whole build cost, of which the spectrum purchase can be a big chunk, is not totally upfront and can be spread over a number of years,” he explained. Saunders says that in the case of New Zealand, the government has clearly balanced out the economic benefits of the auction (which it has estimated at around NZ$2.4 billion over the next 20 years) in terms of covering the costs of freeing up spectrum while encouraging further improvements in network infrastructure. The auction terms will provide successful bidders with a deferred payment option over five years, subject to payment of a commercial interest rate. Bidders will initially be restricted to a maximum of three lots of 2x5MHz spectrum, which can increase to four after the initial auction; those who purchase three or more lots will be required to build a “number of new cell sites” in areas that do not yet have any mobile coverage. Meanwhile, those bidders who do not have an existing mobile network will have five years to deploy 4G services to at least 50% of NZ. The conditions also require operators to upgrade 75% of existing 2G and 3G rural sites to 4G using the 700MHz spectrum over the same period, up to a maximum of 300 cell sites. Saunders says that the auction terms provide a “practical and arguably sensible approach that may speed up a broader investment in 4G technologies.” “The structure of the auction terms allows operators to make their own decisions as to how much spectrum they want. For 2degrees the decision they will need to make is one of whether they want the full allocation of 15MHz pairs or whether 10MHz pairs will be enough. The result of this decision will then determine whether Telecom/Vodafone (or another party) have the opportunity to bid for an extra 5MHz pair to take their share to 20MHz pairs (we are assuming Telecom and Vodafone will want their full 15MHz pairs allotment),” he says. “It is this extra 5MHz pair that is likely to be of greater value as it gives the operator with 20MHz pairs a small advantage in terms of potential capacity (though LTE-Advanced with TDD allows a greater range of options for managing capacity with existing spectrum resources).” LTE IN NZ: In February this year, Vodafone NZ became the first mobile network operator in the country to launch 4G services, giving the company a minimum eight-month head-start on its competitors, assuming that Telecom’s 4G launch is imminent. Vodafone now offers 16 4G-capable devices on its network, with entry-level handsets starting from NZ$399. A Vodafone spokesperson told CommsDay that the digital dividend spectrum will enable the firm to go “deeper into rural New Zealand.” In its June submission to the NZ ministry of business, innovation and employment on the 700MHz auction rules, Vodafone urged that the government introduce changes to ensure that 4G coverage extends to rural areas in a timely fashion. The firm told CommsDay that the recently announced auction rules recognise this need. While Telecom is yet to launch 4G services on its existing 1800MHz spectrum, the firm is planning to have close to half its national mobile network live
with 4G by mid-2014. It plans to launch later this year in Auckland and expand to Wellington and Christchurch before Christmas. A Telecom spokesperson tells CommsDay that 4G is “absolutely a priority for Telecom” and that the company will be providing customers with a “very competitive offer upon launch.” The spokesperson explains that the 700MHz spectrum is mostly relevant for rural areas where the economics of 1800MHz “don’t stack up,” as it would require the building of many additional and expensive cell towers.
“We submitted to the government that we believe the wider economic benefits of a broad, competitive roll-out of 4G LTE services across New Zealand are much larger than the dollars that would be raised in any auction. Every dollar we spend in the auction is a dollar that we don’t have available to spend on rolling out the 4G network,” she says. But of course, not everyone is happy with the 700MHz reserve price. 2degrees CEO Stewart Sherriff describes the total cost (NZ$198 million) as a “premium” compared to the amount spent clearing the band – not to mention well ahead of the NZ$119 million valuation by Treasury. “[It’s] about double the price we paid recently for 15MHz of 1800MHz spectrum [from Telstra] – and that’s beachfront spectrum, which we can use now. Paying a premium right now – even with terms – for spectrum we can’t use for some time, is a challenge for all players, but particularly 2degrees as a late entrant,” he says. 2degrees has announced that
it will launch 4G services in early 2014; however, the firm is yet to detail precise timing or rollout locations. Sherriff tells CommsDay that 2degrees’ 3G network, designed by Huawei, is “LTE-ready”, enabling a relatively straightforward transition “involving card changes rather than major changes to cell sites and antennas.” Sherriff says that the 700MHz auction is all about maintaining three-player competition and providing “equal allocation of a scarce resource at a price and with terms that encourage investment by all players.” However, he believes there is already an imbalance in spectrum holdings, with Telecom and Vodafone each possessing a greater portion of the “crucial sub-1GHz spectrum than 2degrees.” Sherriff says that the worst result for New Zealand would be for the auction to go the way of the Australian outcome, in which some A$1 billion of 700MHz spectrum went unsold. “One of the challenges with the 700MHz auction is that the government is selling spectrum the industry can’t deploy right now. No one in the Asia Pacific region has deployed a 700MHz network and there are no 700MHz handsets available. Yes, we’ll all need the spectrum in future, but the focus now is on deploying 4G at 1800MHz. Selling something we can’t use at a time when we are investing in other spectrum is extremely challenging,” he says. Telecom believes that it is unlikely that global device manufacturers will have 700MHz-capable devices until “late-2014 at the earliest” – a view that is backed by most analysts. “NZ will have to ride on the back of devices built for other markets such as Australia. While Australia has already held their digital dividend auction, that spectrum doesn’t become available for use until January 2015,” the Telecom spokesperson says.
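The reserve-price comparisons running through this piece all come down to the same dollars-per-MHz-per-pop arithmetic. A minimal sketch of that calculation, using the lot prices quoted above but with rough mid-2013 population figures assumed (the populations are not stated in the article):

```python
# Back-of-envelope check of the per-MHz-per-pop figures quoted in this piece.
# Population figures are assumptions (rough mid-2013 estimates), not from the article.

def price_per_mhz_pop(lot_price, paired_mhz, population):
    """Reserve price in $/MHz/pop for a paired 2x(paired_mhz) lot."""
    total_mhz = 2 * paired_mhz  # paired spectrum counts both the uplink and downlink halves
    return lot_price / (total_mhz * population)

NZ_POP = 4.4e6   # assumed ~2013 New Zealand population
AU_POP = 23.0e6  # assumed ~2013 Australian population

# NZ: nine lots of 2x5MHz at a NZ$22 million reserve each
nz_rate = price_per_mhz_pop(22e6, 5, NZ_POP)

# Australia: 700MHz sold at the A$1.36/MHz/pop reserve, so the unsold
# 2x15MHz prices out at roughly the "around A$1 billion" cited above
unsold_value = 1.36 * (2 * 15) * AU_POP

print(f"NZ reserve: NZ${nz_rate:.2f}/MHz/pop")          # ~NZ$0.50/MHz/pop
print(f"AU unsold 2x15MHz: A${unsold_value / 1e9:.2f}bn")  # ~A$0.94bn
```

At the assumed populations, the NZ$22 million per-lot reserve reproduces the NZ$0.50/MHz/pop figure, and the unsold Australian 2x15MHz comes out just under the "around A$1 billion" quoted in the piece.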
LTE IN AUSTRALIA: Putting the 700MHz auction hiccup aside for a moment, IDC Australia telco research manager Siow-Meng Soh tells CommsDay that 4G is nonetheless taking off in Australia – using the 1800MHz band, in the first instance – as the telcos expand their coverage across the country and keep prices largely in line with their 3G offerings. That, at least, should give those operators who’ve shelled out for 700MHz some reassurance that the pricey asset will be put to good use. “We saw a spike in 4G shipments in Q4 2012 with the launch of iPhone 5 (about 1.6 million 4G devices shipped in that quarter),” says Soh. “Whilst the sale of iPhone has slowed down a little, other vendors such as Samsung, HTC and Nokia have launched 4G phones. If we look at the telcos' reporting, there were about 3.9 million 4G devices/users in June 2013. Our forecast is that by 2017, over 60% of subscribers will be using 4G,” he says. Soh says that Telstra remains the clear leader in terms of 4G adoption given it was first-to-market. “Optus is catching up, but Vodafone still has a bit of work to do to increase its share of 4G users. In the longer term, with all three operators offering 4G, the differentiation will be in network performance/quality.” “As more data is being carried across the network, having more spectrum will give competitors an edge – operators need to increase network capacity by using more spectrum, increasing the number of base stations, or using Wi-Fi offload,” he says. In Australia, Telstra is currently working with Ericsson in a bid to commercially launch LTE-Advanced later this year on a combination of 1800MHz and 900MHz spectrum; the company is also trialling integrated small cell networks – or hetnets – in a bid to expand its existing network capacity in heavily populated areas. Telstra’s 4G network now covers 66% of the population, according to CEO David Thodey, with the firm on track to reach 85% by the end of the year. A recent Deutsche Bank report estimated Telstra’s 4G mobile handsets base at around 2.8 million as of the June 2013 half, with Optus’ base at around 1.08 million. REGIONAL CONTEXT: Looking across the region, Ovum’s McCormick says that operators in Asia are migrating to LTE earlier than originally planned, in order to reduce costs. She adds that all Asia-Pacific operators – with the exception of Australia – have priced their LTE services at a “moderate premium.” “In terms of competition, most operators in a particular market have launched LTE within a fairly short window of each other,” she says. “I expect longer term that typical market shares will eventuate in a given market, if all operators launch LTE within a relatively short space of time.” McCormick says that being first to market with LTE is strategically important for incumbents with high ARPU customers – a large proportion of which will be “technology first movers.” She provided CommsDay with an extract from her upcoming report on LTE in South Korea, an advanced market in which 4G penetration has exceeded 40%. “In [South Korea], LTE’s ARPU premium of 22% has resulted in blended ARPU growth, but this is not due to telcos… charging a premium for LTE; rather, it is the result of mobile users, accustomed to unlimited data on 3G, spending more when migrating to LTE to ensure they have sufficient data.” However, McCormick says that these LTE data allowances
are underutilised, which in turn creates a challenge for telcos – one they have so far “chosen to ignore.” In her opinion, South Korean operators should encourage users to downgrade to less expensive plans with more realistic data allowances. “Telcos need to also set data limits that reflect realistic usage limits and encourage a realistic upsell opportunity (this has not been the case in Korea),” she adds. PICKING A STRATEGY: Comparing the South Korean LTE market to New Zealand is obviously fraught with danger, given the latter is only in its infancy. Vodafone NZ, the sole LTE network operator at the moment, announced in May that it is testing LTE-Advanced with the aim of deploying it on the 700MHz/1800MHz frequencies following the digital dividend auction, but a launch date will be dependent on spectrum blocks and device availability. But “world firsts” aside, McCormick says that when it comes down to it, Ovum prefers a “more staged LTE rollout strategy, where networks are built according to demand and where subsidies (which reportedly have reached as high as $630 [in South Korea]) and marketing costs are kept at more reasonable levels.” New Zealand has the chance to adhere to this staged LTE rollout strategy; all three operators will be operating a 4G network sometime in 2014 – and each has indicated its interest in acquiring 700MHz digital dividend spectrum. The auction rules have been largely well received and the reserve price seems fair, according to most. But while analysts do not foresee any curveballs in the looming NZ 700MHz auction, Saunders says that there is always the possibility that operators may choose not to bid, for a variety of reasons. “The price and distribution of course may be impacted by the bidding of a fourth party,” he adds.