CommsDay Magazine March 2012


March 2012 • Published by Decisive • A CommsDay publication

5 bright new ideas that will revolutionise the network

• A chat with Australia's convergence regulation advisor
• America to Iceland and Portugal, via Ireland
• Why Australia is an attractive tech investment destination
• A new submarine cable bubble or is it now different?
• The threat to telecoms from solar storms



COMMSDAY MAGAZINE

ABOUT COMMSDAY MAGAZINE
Mail: PO Box A191 Sydney South NSW 1235 AUSTRALIA. Fax: +612 9261 5434. Internet: www.commsday.com
Published up to 10 times annually. Complimentary for all CommsDay readers and customers.

GROUP EDITOR: Petroc Wilton
EDITOR: Tony Chan at Tony@commsdaymail.com
FOUNDER: Grahame Lynch
WRITERS: Geoff Long, David Edwards, William Van Hefner, Grahame Lynch, Dave Burstein, Bob Fonow
ADVERTISING INQUIRIES: Sally Lloyd at sally@commsdaymail.com
EVENT SPONSORSHIP: Veronica Kennedy-Good at veronica@mindsharecomms.com.au
CONTRIBUTIONS ARE WELCOME. ALL CONTENTS OF THIS PUBLICATION ARE COPYRIGHT. ALL RIGHTS RESERVED.
CommsDay is published by Decisive Publishing, 4/276 Pitt St, Sydney, Australia 2000. ACN 13 065 084 960

CONTENTS

First
• Equinix eyes mobile opportunity
• Tata profits from voice consolidation
• Low latency takes hold globally

Interviews
• Glen Boreham
• Raymond Sembler, Emerald Networks

Features
• Has sub cable broken out of its boom-bust tendency?
• The threat solar storms pose to telecoms
• Why Australia is an attractive tech investment locale

Cover Story
• 5 bright new ideas that will revolutionise the network

Columns
• Tony Chan
• William Van Hefner



FIRST

Equinix eyes mobile opportunity

Equinix is looking to replicate its success in facilitating the interconnection of wired networks for the Internet and carrier world in the wireless space. According to Jim Poole, general manager, Global Networks and Mobility at Equinix, wireless operators are increasingly seeking out the same type of interconnection solutions as their fixed line counterparts.

"It is getting to the point with 4G where traditionally, they [wireless operators] were individual operators buying network access individually, to the Internet and each other," Poole said. "Now all the operators are building out backbones, trying to get more efficient ways to manage their traffic, more efficient ways to talk to the top talking sites."

While the need for and solution to interconnecting networks remain the same – to exchange traffic between networks – the practice is only starting to gain traction in the mobile space. "It is the same problem that existed on the wired side before. Oddly enough, the guys in the wireless business don't necessarily think that way," Poole pointed out. "The wireless guys are very good at understanding RF issues, customer experience issues, but they are not necessarily strong in the backend – the wired side of things. It is not arcane science, it's not hard, but if we get in front of that… where you start to show them they can deploy their network more optimally by interconnecting, and show them the benefit they can get from that, then they go 'oh, I didn't think about that'."

The obvious reason for mobile operators to seek interconnection is cost, as data becomes a major component of their business. "They never quite thought about it that way – they just bought upstream from their island," Poole (pictured) said. "Part of that is the changing nature of how [wireless] carriers make their money. If your margins are incredibly high, you didn't particularly care; now that margins have come down considerably, and more of the traffic is data… it is inevitable that you will look for ways to decrease costs."

According to Poole, there is clear momentum suggesting the mobile industry is now getting the message, with mobile-only players now coming to Equinix seeking interconnection services. "What we are picking up now is more of that mid-tier, regional, wireless player where they have always used an upstream, they have never really had direct exposure to other networks. I've had experiences at shows where literally, the third or fourth largest wireless operator in a developing country just walks in the door and says 'we were told we need to be in your facility'."

Interestingly, while the interconnection between networks is the same, mobile operators do come with their own particular set of parameters around performance, and even commercial terms. Because of the inherent latency of the RF component of mobile networks – and its potential impact on customer experience – where mobile operators seek interconnection matters, Poole explained. "In terms of how content gets distributed to the RF side, [it] may require more widely deployed content access, or caching engines… to make the experience better than you would have, say, for the wired side of things."

At the same time, mobile operators are also bringing their customers to bear when negotiating traffic exchange deals. Instead of the basic traffic volume, or the number of network connections, typically used in the wireline universe, wireless operators are now seeking to leverage the eyeballs on their networks as bargaining chips. "We have seen several operators essentially adopt – the logical term is – paid peering, essentially selling access because they have wireless eyeballs… and they want to monetise that."

Tata profits from voice market consolidation

Despite some gloomy projections for international carrier voice, a combination of scale, internal efficiencies and new commercial business models has yielded comparatively healthy profit growth for Tata Communications' voice business, says Michel Guyot, president of Global Voice Solutions at Tata Communications. "Our business is good. This year, our traffic grew – our fiscal year ends in March, but so far, we are at 12.5% growth in volume. We are increasing our market share… We are going to close the year with 45 billion minutes – last year, it was 40 billion minutes. On wholesale, there's nobody that has that level of traffic," Guyot (pictured) told CommsDay.

That's not to say Tata has grown its revenue correspondingly. In fact, its topline revenue for voice is expected to be flat for the year. What is impressive is that Guyot expects a 5% gain in margin for the period. "The issue with the topline – which is not really our focus for voice – is that it's pretty flat. Even with this kind of growth, the topline is flat, but we were able to increase our gross margins by about 5%," he said.

According to Guyot, there are a number of variables contributing to Tata's voice margin gains. One of the key factors is market consolidation. "The good thing with the OTT players and all this tough market, it will drive consolidation. Consolidation means that people will probably exit, or outsource their business more and more," Guyot said. "We saw that three years ago, but it didn't happen at the speed that we were expecting. But since we signed the big BT deal two years ago, we were expecting a real snowball effect; it's slower than expected, but we have signed 15 of those deals this year." As such, there are now at least 16 carriers that effectively hand all their voice minutes to Tata on a committed basis.

"Something we are really proud of is that by signing those deals – because voice wholesale in the past was a pure spot business… it was like a commodity kind of business – but with these outsourcing deals, today, 48%-49% of our traffic – of the 45 billion minutes – is committed for at least six months or more," he said. "The traffic that we carry, we own it, at least half of it, for six months or more – this is stable traffic; that's, I believe, why we are successful."

In addition to owning half of its voice minutes, Tata has also gone to great lengths to drive efficiencies on its network. A platform it introduced called the BestValue Routing Engine, which optimises routing for voice minutes, now handles 73 million transactions daily. With this intelligent routing platform, it is now possible for Tata to find termination points directly even in markets that have introduced number portability, Guyot explained. "We now have 20 destinations where we can know exactly where to terminate the call, when a call is directed to your mobile, if you have number portability in your country," he said. "For the carriers, it is always cheaper if you know where the guy is and the company he is with now, to terminate directly, instead of terminating elsewhere and then finding that guy after – even if you save that .002 cent – we have that capability."

The last area where Tata is driving margin growth is through quality of service. "The mobile players, even the OTT players – people would believe that people don't mind about the quality, but people want to have quality when they go off-net," he said. "Instead of selling budget routes, we are selling quality routes; people pay a premium for that, and you get longer call duration, better quality, and everybody wins." According to Guyot, its Prime class of service, which guarantees direct termination with Calling Line ID (CLI) for customers, now makes up between 60% and 70% of its traffic, compared to just 10%-15% five years ago.

Any telco looking at the international voice business might simply want to turn right around and walk away. After all, the entire market operates on ultra-thin margins, with slowing growth and intense competition from over-the-top free services. According to the latest figures from TeleGeography, international voice traffic grew at around 3% last year, down significantly from the historical annual growth rate of around 12%, while prices continue to fall – by 5% last year. At the same time, competition from free services such as Skype continues to grow, with TeleGeography estimating 45% growth in international Skype voice traffic in 2011 – more minutes added than all international carriers combined over the same period.


Despite these gloomy trends, one carrier that seems to be excelling in the market is Tata Communications, now the undisputed largest carrier of international voice minutes in the world.
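To make the economics of direct termination concrete, here is a hypothetical sketch of a least-cost routing decision with a number-portability lookup, in the spirit of the engine described above. The carrier names, rates and tables are all invented for illustration; Tata's actual BestValue Routing Engine is not public.

```python
# Hypothetical sketch of least-cost call routing with a number-portability
# (MNP) lookup before route selection. All names and rates are invented.

PORTED = {"+14155550101": "carrier_b"}    # MNP database: number -> current network
HOME = {"+1415": "carrier_a"}             # dialling prefix -> original home network

RATES = {                                 # $/minute for direct termination (assumed)
    "carrier_a": 0.01100,
    "carrier_b": 0.01098,
}
REROUTE_COST = 0.00002                    # the ".002 cent" saved by going direct

def best_route(number):
    home = HOME[number[:5]]
    actual = PORTED.get(number, home)     # resolve where the subscriber is *now*
    direct = RATES[actual]
    indirect = RATES[home] + REROUTE_COST # terminate at home network, then forward
    return (actual, direct) if direct <= indirect else (home, indirect)

print(best_route("+14155550101"))   # ported number: terminate direct on carrier_b
print(best_route("+14155550199"))   # not ported: home network is already cheapest
```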

Low latency takes hold globally

There are plenty of reasons for operators to salivate over the low latency market. When the segment began – primarily in the US, between the high volume trading exchanges in Chicago and New York – operators with the shortest path and best latency performance were able to command a significant premium for their circuits. The market was so lucrative that at one point an operator reportedly paid to drill a hole through a mountain – rather than go over or around it – to reduce the latency on the route; every little fraction of a second counted to the target customer base of those routes, the high frequency algorithmic traders. For those who commanded the lowest latency, the rewards were network leases that outstripped standard market prices more than tenfold.

Now that mentality is taking root in the global landscape. Today, a number of global carriers are building low latency routes across continents in anticipation of growing demand from high frequency trading systems seeking access to markets in Asia and Europe. Perhaps the most extreme example of low latency networking comes from Hibernia Atlantic's new cable system across the Atlantic, where the operator says a handful of anchor clients in the high frequency trading sector already justifies the business case for the system. The reason: the Hibernia Express cable cuts a few milliseconds off the latency between North America and Europe.

That same situation is also playing out across the Pacific, albeit not with a new cable system. NTT Communications has launched a new Singapore-based data centre as part of a global low latency platform that includes facilities in Hong Kong and Tokyo, as well as trans-oceanic cable systems – PC-1 and its upcoming Asia Submarine-cable Express (ASE) system. The entire NTT Communications low latency platform will allow users from North America to reach the three key Asian markets in the shortest possible time, with a thousandth, and even a millionth, of a second being calculated into the equation. In one scenario, NTT Com says it now plans to land the new ASE system directly into its Hong Kong data centre, which itself has been intentionally located across the street from the trading platforms of the Hong Kong bourse. By bypassing a cable landing station near the shore, the operator is looking to shave additional fractions of a second off the latency of the connection. According to a presentation by Kempei Fukuda, director of Technology for Network Services at NTT Communications, there are about 30 to 40 companies in the high frequency trading market that now demand low latency services. However, the returns on such an investment don't seem as lucrative for global carriers: the routes with the best latency performance now command only up to a 30% price premium compared to standard services, Fukuda said.

Another global operator preparing a low latency network is Tata Communications, which says it will now partition off part of its global Ethernet platform for the purpose. According to Tata, it will look at its global infrastructure, pick the shortest paths between markets, and offer the lowest latency achievable, including selecting and configuring the equipment for minimum delay. Tata's ambitions stand out in that it is not just targeting a single region, or a particular path, but the entire globe. Tata's low latency network will initially link London, New York, Hong Kong and Singapore, with Sydney and Tokyo to be added soon after, said John Hoffman, head of Ethernet Product Management at Tata Communications.

Indeed, it is very difficult nowadays to find a new network announcement without some reference to latency. But the reason might not be entirely due to demand from high frequency traders. According to industry commentators, the emergence of cloud computing, in particular private clouds, now demands much better performance as enterprises increasingly rely on the network to access mission-critical data and applications. At the same time, latency sensitive applications, such as video and online gaming, are also calling for low latency networks globally.
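For a sense of the scale involved, here is a back-of-the-envelope propagation calculation; the distances are approximate illustrations and the refractive index is a typical value, not figures from Hibernia or NTT Com.

```python
# Back-of-the-envelope fibre latency: light travels at roughly c/1.47 in
# glass, so route length dominates end-to-end delay. Distances below are
# approximate illustrations, not operator figures.

C_KM_PER_MS = 299_792.458 / 1000   # speed of light in vacuum, km per millisecond
FIBRE_INDEX = 1.47                 # typical refractive index of optical fibre

def one_way_ms(route_km):
    return route_km * FIBRE_INDEX / C_KM_PER_MS

for name, km in [("New York-London, great circle", 5_570),
                 ("New York-London, typical cable route", 6_200),
                 ("Tokyo-Hong Kong", 2_900)]:
    print(f"{name}: ~{one_way_ms(km):.1f} ms one way")
```

A few hundred kilometres of extra route length thus costs roughly a millisecond each way, which is why shaving distance, even drilling through a mountain, translates directly into a sellable latency advantage.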



TONY CHAN

So where is the Internet?

Here's a proposition: the Internet isn't really anywhere, but it's everywhere at the same time. After all, most people buy Internet access, but nobody buys access to any particular place on the Internet. That's why it should be pretty valuable to carriers to know what people are actually doing on the Internet, what content they are accessing, and perhaps where to put their investment going forward to support those users.

Traditionally, telcos have looked at their networks in geographic terms – for example, how much traffic their users are generating to access US servers across the Pacific from Australia. They then adjust their network planning in the same vein, and hope to balance those extra costs with supplemental revenue growth to squeeze out a margin.

Imagine, on a purely theoretical basis, that you knew Rebecca Black's "Friday" video would be the most viewed file on YouTube during 2011: how much bandwidth would a carrier save if it simply cached that particular file on a local server? While traffic from a single file won't make that much of a difference, traffic to an entire site will likely have the potential to impact a carrier's bandwidth cost.

So what should you know about the Internet? Every year, a company in Sweden called Pingdom issues its version of the state of the Internet. It uses a number of reputable sources, so the figures should be more or less reliable.

Pingdom's 'net numbers for 2011:
• 3.146 billion email accounts
• 555 million websites
• 220 million registered domain names
• 2.1 billion Internet users
• 591 million broadband subscriptions
• 2.4 billion social networking accounts
• 1.2 billion active mobile broadband subscriptions
• 5.9 billion mobile subscriptions

Social media stats for 2011:
• 800+ million Facebook users
• 100 billion photos hosted on Facebook
• 225 million Twitter accounts (100m active Tweeters)
• 250 million average tweets per day
• 1 trillion YouTube video playbacks
• 51 million registered users on Flickr (4.5m uploads daily)

The latest figures should come as no surprise to anyone. For starters, it is increasingly clear that content on the Internet is becoming concentrated in a few discrete places. For example, the number of Facebook accounts now equals nearly a third of all Internet accounts, while sites owned by Google, including YouTube, now serve up 43% of global video views.

What does that mean for carriers? Knowing that close to 40% of their traffic will go to Google means they can optimise their Internet cost structure accordingly. Instead of buying transit in high priced markets, they can opt to buy a link directly to Google's data centres – where, according to industry insiders, Google will peer (exchange traffic freely) with everyone.

Obviously, the accountants will have to do the final calculations, but if you take the accepted figure that video is now some 80% of all Internet traffic, and 43% of that comes from Google sites, a carrier in a developing market paying extremely high transit prices could offload close to 40% of its transit cost by buying an international circuit to Google's US server farms. Each carrier will have to look at its own traffic patterns to make its own determination, but a more thorough understanding of the Internet is probably a good place to start.

IN REGION ACCESS: Conversely, for content providers, understanding where Internet users will be can also help to optimise their distribution infrastructure. According to the Pingdom stats, citing Internet World Stats, Asia is the place to be if you want to invest in content distribution. Asia, with 922 million users, is already home to 44% of the world's Internet population, but it has a penetration rate of only 24%. This compares to North America, home to 271 million users, which accounts for only 13% of the world's Internet population but has a penetration rate of over 78%. It's no surprise, then, that Google is building out a pan-Asian data centre infrastructure to access Asian users. The good news for Asian carriers is that they can now connect in-region to Google instead of going over the Pacific. Which brings up an interesting question: how will this impact the transit market in Asia, where international carriers have so far been able to monetise a lot of Asian traffic going to Google sites in the US?


WHERE AUSTRALIA'S TELECOM LEADERS MEET

Westin Hotel, Sydney, Australia • Tuesday 17 April – Wednesday 18 April 2012

OVER 30 SPEAKERS INCLUDING

Shadow comms minister Malcolm Turnbull

Primus CEO Tom Mazerski

Telstra GMD Consumer Gordon Ballantyne

ACMA chairman Chris Chapman

Huawei Australia director Alexander Downer

Optus director, digital media Austin Bryan


AARNET CEO Chris Hancock

AAPT CEO David Yuile

Market Clarity CEO Shara Evans


Internode MD Simon Hackett

BigAir CEO Jason Ashton

Comms Alliance CEO John Stanton

PLUS SPEAKERS FROM Telstra Wholesale ● Equinix ● Broadcast Australia ● Vocus Comms ● Truman Hoyle ● Genband ● Oracle ● Pitney Bowes ● Ciena ● Overture Networks ● Tellabs ● Qualcomm ● Emersion ● AMTA ● Norton Rose ● Pottinger ● Eaglecomms ● Ericsson ● Ovum ● Adtran and more





Tuesday 17 April

9AM MORNING SESSION KEYNOTES
• Telstra group managing director consumer Gordon Ballantyne
• Internode MD Simon Hackett
• Primus CEO Tom Mazerski
• Optus director, digital media Austin Bryan

11.00AM MORNING PLENARY
• Communications Alliance CEO John Stanton
• Huawei Australia board director & former Foreign Minister Alexander Downer
• Equinix AP business development director Andrew Oon: "Data Centres – At the Intersection of Cloud, Networks & Content"
• Pitney Bowes Software general manager, APAC Customer Analytics & Interaction Chris Lowther
• Ericsson CEO Australia NZ Hakan Eriksson

1PM LUNCH

1.50PM AFTERNOON PLENARY
• Market Clarity CEO Shara Evans
• Qualcomm president SE Asia/Pacific John Stefanac
• Broadcast Australia strategy and corporate development director Brett Savill
• Tellabs director of market strategy & analysis Mike O'Malley
• Adtran president & CEO, Bluesocket Business Group Mads Lillelund

3.45PM DAY 1 CLOSING PLENARY
• Emersion Software Systems CEO Paul Dundas: "Cloud OSS: are you ready for the changing times?"
• AMTA CEO Chris Althaus
• BigAir CEO Jason Ashton
• Norton Rose partner Nick Abrahams
• Super panel: THE STATE OF THE INDUSTRY IN 2012 featuring BigAir CEO Jason Ashton, Overture Networks APAC MD Graeme Bellis + more

5.35PM DRINKS
7PM DINNER SPONSORED BY VOCUS COMMUNICATIONS

Wednesday 18 April

9AM MORNING SESSION KEYNOTES
• AAPT CEO David Yuile
• ACMA chairman Chris Chapman
• Telstra Wholesale executive director, sales Glenn Osborne
• Shadow Communications Minister Malcolm Turnbull

11.00AM MORNING PLENARY
• AARNET CEO Chris Hancock
• Oracle director, sales consulting Simon Wong
• Genband director voice solutions Chris Koehncke: "The future of voice"
• Ciena APAC CTO Karl Horne
• Huawei head enterprise business Gavin-Milton White

1PM LUNCH

1.50PM AFTERNOON PLENARY
• Truman Hoyle partner Richard Pascoe
• Pottinger CEO Nigel Lake
• EagleComms Advisory leader Dr Nguyen Duc: "Bridging Government NBN Policy to End-User Experience"

CLOSING KEYNOTE: Ovum analyst David Kennedy

4.00PM CLOSE

COMMSDAY SUMMIT OFFICIAL DINNER 7pm, Tuesday April 17
• A great networking opportunity with nearly 300 industry luminaries
• Plus the release of the 2012 CommsDay Industry Pulse
• With live musical entertainment


Accounting for over $2 billion of investment last year, Australia’s data centre industry is shaping up as the critical enabler for domestic adoption of the cloud as well as a lucrative industry in its own right. This peak event, the day before the CommsDay Summit, features the sector’s major players in a one day format designed to impart maximum information and networking opportunities.


Australian Data Centre Summit Where connectivity, capacity & the cloud meet

Westin Sydney, Monday April 16 2012

SPEAKERS INCLUDE

NSW deputy premier Andrew Stoner

Vocus CEO James Spenceley

Metronode GM Malcolm Roe


Cloud Plus CEO Jules Rumsey

Baker & McKenzie Partner James Halliday

OzHub chairman Matt Healy

THE PROGRAMME
9.00am Official welcome & keynote: NSW deputy premier Andrew Stoner (invited)
9.25am Vocus Communications CEO James Spenceley
9.55am OzHub chairman Matt Healy
10.20am Metronode GM Malcolm Roe
10.45am Morning tea

LEGAL SESSION
11.10am Herbert Geer partner Paul Noonan
11.35am Baker & McKenzie partner James Halliday
12.00pm Corrs Chambers Westgarth partner James North
12.25pm Equinix Australia sales director Jeremy Deutsch
12.50pm LUNCH

CLOUD SESSION
1.40pm CloudPlus CEO Jules Rumsey
2.00pm OrionVM CEO Sheng Yeo
2.20pm Arbor Networks ANZ country manager & Asiapac business development manager Nick Race
2.40pm Warren & Brown's Mike Heins
3.00pm Extreme Networks senior director, data centre marketing Marty Lans
3.20pm Tellabs director, market strategy Mike O'Malley
3.40pm Afternoon tea
4.10pm CLOSING PANEL SESSION

REGISTER ONLINE AT http://bit.ly/vEAYj8


Where Australia’s telecom leaders meet

COMMSDAY SUMMIT 2012 + AUSTRALASIA SATELLITE FORUM 2012 & AUSTRALIA DATA CENTRE SUMMIT 2012

Speakers from: Telstra • Optus • Internode • AAPT • Primus • BigAir • Vocus Communications • NewSat • NBN Co • AARNet • OzHub • Pacnet • AMTA • Communications Alliance • Huawei • Ericsson • Genband • Equinix • Ciena • Pitney Bowes Software • Truman Hoyle • Overture Networks • Tellabs • Qualcomm • Pipe Networks • Emersion • Broadcast Australia • Arbor Networks • Adtran • Mallesons Stephen Jacques • Pottinger • EagleComms • Intelsat • SES • GE - Satellite • Thuraya • Gateway Teleport • Cisco Systems • Arianespace • Newtec • Comsys • Speedcast • PacTel Int • TC Communications • Loral • Hughes Network Systems • Thales Alenia Space • ASC • ND SatCom • Baker & McKenzie • OrionVM. Plus more to be added.

REGISTER NOW: COMMSDAY SUMMIT 2012, APRIL 17 & 18
[ ] Both days of Summit including lunches, drinks A$997+GST
[ ] Both days + admission to CommsDay Summit Dinner A$1155+GST
[ ] CommsDay Summit Dinner only A$170+GST
[ ] AUSTRALASIA SATELLITE FORUM, APRIL 16 only A$747+GST
[ ] AUSTRALIA DATA CENTRE FORUM, APRIL 16 only A$397+GST
[ ] I want to send a group, contact me to discuss

I WANT TO STAY AT THE WESTIN HOTEL AT SPECIAL RATE OF A$275
[ ] Night of April 15 [ ] April 16 [ ] April 17 [ ] April 18
CommsDay will contact me with more details of special offer

Name ____________________________________________________________
Company _________________________________________________________
Phone No ________________________ Email ___________________________
Address ___________________________________________________________
__________________________________________ Postcode _______________
I want to pay by: [ ] Mastercard [ ] Visa [ ] Amex [ ] Diners [ ] Invoice me
Name on card ______________________________________________________
Card Number ______________________________________________________
Expiration Date _______________________ Signature _____________________

TO REGISTER: Fax this form to 02 9261 5434 (+612 outside Australia) • Phone Sally Lloyd at 02 9261 5435 • Email Sally at sally@commsdaymail.com • Register online for CommsDay Summit at http://bit.ly/vEAYj8


5 bright new ideas that will revolutionise the network

With networks filled to the brim with data, the telecoms industry is on the cusp of major innovations that will not only add capacity to the global information infrastructure, but potentially change the way networks are conceived, built and operated. Tony Chan takes a look at some radically new ways of thinking about both fixed and mobile networks now being proposed by network vendors and operators.

For the past year, there has been an undercurrent of urgency in the telecommunications industry. On the one hand, traffic continues to grow on both fixed and mobile networks, calling for ongoing investment by operators already facing shrinking margins and fickle customers who want more, and more, and more, for the same price, if not less. On the other, the explosion of services, exacerbated by the surge in smartphone usage and the growing acceptance of cloud computing, is accelerating the complexity of networks, driving up management costs and stretching the limits of existing networking technologies. Almost everywhere in the networking industry there is a sense of foreboding, a sense that things will need to change or they will reach breaking point, both commercially and technically.

Inside the data centre, across the WAN, in the radio access network, that sense of an approaching crisis has helped spur the industry in new directions, giving birth to new ideas and new ways of thinking about how networks should be designed, built and operated. Arguably, for the first time since the introduction of IP, the networking sector is abuzz with creativity and innovation. This time around, the focus is not just on sending more bits down a wavelength or packing more ports inside a router, but on the core architectural elements in a network, the manageability of the entire infrastructure, and the efficiency of operation. There are now initiatives across the board, led by vendors and service providers alike, that aim to expand the scale of networks and services, increase the flexibility and management capabilities of networks, and optimise limited resources, such as spectrum.

1. SOFTWARE DEFINED NETWORKS

An area where networks are coming under intense pressure is inside the data centre. As cloud computing takes root globally, the scale and complexity of data centres, and of the networks needed to run them, are escalating exponentially. One initiative looking to solve that challenge is OpenFlow, backed by some of the biggest data centre operators and service providers in the world – Google, Facebook, Microsoft, Yahoo, Verizon,


Deutsche Telekom – and numerous vendors, including Cisco, Juniper, Brocade, Ericsson, HP, IBM and others. In a nutshell, OpenFlow separates the control plane for routing decisions into a separate software plane, enabling what Ananda Rajagopal, senior director of Product Management for High-End & Service Providers at Brocade, dubs software-defined networks. The entire network is treated and managed as a single pool of resources from software. This type of network allows the operator to use the software controls provided by OpenFlow to configure and direct the network paths within a particular networking environment.

While this doesn't seem like a big deal for conventional networks, the capability is critical when it comes to massive data centre environments, or what Rajagopal refers to as "hyper-scale" data centres. As data centres scale up in size, consisting of "thousands of racks", each with "20 to 40 servers" running some 20 virtual machines apiece, data centre environments are now outpacing the capabilities of existing data centre networking technologies, Rajagopal said. With traditional data centres, each individual network element manages and maintains its own "state" as servers are turned up, turned down, or moved. But when the data centre scales to millions of VMs per site, traditional networking models cannot scale the same way without significantly pushing up management complexity, he explained. The key advantage of OpenFlow is that all the "states" are now managed centrally in software, allowing operators to easily see and manage resources.

And as cloud computing evolves into multi-site and multi-region environments, OpenFlow can also work on the backbone. In effect, OpenFlow will allow network operators to virtualise their backbone networks into a single pool of resources. By setting up and managing specific data paths within that pool, service providers can create separate, distinct, logical networks for different applications or customers on the backbone. More importantly, it will be able to speed up provisioning times to adjust for increased loads, or dynamically reroute traffic in the event of outages. At the very least, OpenFlow will enable flow optimisation for service providers – essentially traffic engineering within the resource pool. OpenFlow would allow service providers to define and set up specific paths for certain types of traffic, either for quality or service differentiation purposes.
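To make the idea concrete, here is a toy sketch of the software-defined model in Python: one central controller owns the forwarding state of every switch and programs an explicit path in a single call. This illustrates the concept only; it is not the OpenFlow wire protocol or any vendor's API, and all names are invented.

```python
# Toy sketch of a software-defined network: a central controller owns the
# forwarding state ("flows") of every switch, so an operator programs a
# path once instead of configuring each box by hand.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []                      # (match, action) pairs

    def install_flow(self, match, action):
        self.flow_table.append((match, action))  # pushed by the controller

class Controller:
    """Central control plane: one view of the whole network."""
    def __init__(self, switches):
        self.switches = {s.name: s for s in switches}

    def install_path(self, match, hops):
        # hops = [(switch_name, out_port), ...] along the chosen path
        for name, out_port in hops:
            self.switches[name].install_flow(match, f"output:{out_port}")

net = [Switch("edge1"), Switch("core1"), Switch("edge2")]
ctl = Controller(net)
# Steer one tenant's traffic along an explicit path in a single call:
ctl.install_path({"dst": "10.1.0.0/16"},
                 [("edge1", 3), ("core1", 1), ("edge2", 2)])
print(net[1].flow_table)   # [({'dst': '10.1.0.0/16'}, 'output:1')]
```

The design point is the one the article describes: forwarding state lives in one place, so provisioning a new logical network or rerouting around an outage is a software operation rather than per-device reconfiguration.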


2. TRANSPORT ORIENTED NETWORKS

Another area where networks are being pushed to the brink is the complexity, and the associated cost, of managing IP traffic. Karl Horne, chief technology officer of Ciena Asia Pacific, says that transporting IP packets at Layer 3 of the OSI stack is no longer efficient as traffic volume escalates. "We are at a point where service providers are really being forced to look for alternative architectures," Horne said. "We have all seen these very aggressive bandwidth growth curves for IP traffic… end-user demand is growing in terms of the number of them, and they are also growing in terms of the size of them – IP transactions are just getting larger and larger, and more sustained over time."

In his view, networks will have to go through a process he calls 'bifurcation', in which the addressing intelligence will be separated from the actual transport layer, which will take on the role of connectivity at a lower cost basis. "We absolutely need IP and service handling – IP is the global address scheme; with an IP address, I can find anything I need in the world," he said. "[But] the way to find the service end points to carry out the IP transactions is not necessarily the best way to provide the connectivity and the transport between those endpoints – if you come at it from a cost point of view, or a complexity point of view. Essentially, if we do this bifurcation in the packet ecosystem… there are other technologies that are around now to build that connectivity for transport of IP services." That's where packet optical switching technologies, such as Carrier Ethernet and MPLS-TP, come in, Horne added. "From my experience, operators will use one or more of those depending on what their specific needs are."

Taking the idea one step further (or lower on the OSI stack) is Ed McCormack, vice president & general manager, International Accounts and Submarine Systems, at Ciena. "The next step is to actually take real cost out of the network on an end-to-end basis," McCormack explained. "Ten years ago, when we talked submarine, we were talking about beach to beach. The demand now is driven by consumers, and consumers don't care about submarine networks... They care about end-to-end service, device to device, without interruption. I think you've got to look at development of submarine and terrestrial networks differently to accommodate that view." In other words, more of the network should be designed like subsea cables to take advantage of the cost efficiencies of pure transport. Instead of the traditional shore-to-shore configuration of subsea cables, there is now an opportunity to see the entire network as a single system – with links architected from POP to POP, without stopping off at landing stations.

3. LIQUID MOBILE NETWORKS

The call for innovation is even more pronounced on the mobile side. Constrained by spectrum and pressured by explosive data traffic growth, mobile operators are now under severe pressure to control costs while adding capacity to ensure a good customer experience. While new network models such as pico and femto cells, spectrum refarming, and traffic offloading to Wi-Fi networks do present incremental steps toward solving the capacity problem, some vendors and operators are now proposing new, and sometimes radically different, approaches to spectrum utilisation, radio network resource management, and mobile architecture.

Perhaps the most pronounced trend in the mobile industry is the notion of the cloud-based network. Nearly all the major mobile network vendors have some strategy to separate the antenna element of the radio access network from the baseband processors required to handle the traffic. Nokia Siemens Networks calls it Liquid Radio. Alcatel-Lucent calls it lightRadio. Others have their own terms for it, but the idea is to have multiple antennas link back into a centralised processing unit, often referred to as a baseband hotel. Work is still ongoing in the area, but according to Mike Murphy, head of technology for NSN Asia Pacific, the vendor can now pool up to 9 baseband units in a single 'hotel' supporting up to 10Gbps of traffic. The advantage of this architecture is that the processing resources of those baseband units can be shared across all the antennas, allowing the network to dynamically allocate resources to the areas that require them – much like a cloud computing model.

(Pictured: Alcatel-Lucent's lightRadio fits in a hand)

There are challenges for this model to work, such as its heavy reliance on fibre connectivity between the antennas and the baseband units, and an ongoing debate about how much intelligence to move off the antenna. But once those are worked out and the model gets adopted, it will radically change the mobile network. It will eliminate the need for costly site real estate, since the size of the antennas can be reduced dramatically – to a package the size of a deck of playing cards in Alcatel-Lucent's case. Combined with features like Self-Organising Networks (SON), it has the potential to improve the efficiency of mobile networks, directing resources to where they are needed dynamically. At this point, the model operates on a single technology, such as 3G, but work is already underway to support multiple networks – 2G, 3G, LTE – from a common pool of baseband resources, a capability Huawei is developing with its GigaSite concept, which aims to consolidate the infrastructure of multiple technologies into a single system.
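The pooling arithmetic can be sketched in a few lines. The sites and demand figures below are invented for illustration; the 10Gbps figure echoes the per-hotel number quoted above, and nothing here is vendor code.

```python
# Toy model of a 'baseband hotel': shared processing capacity shifts
# between cell sites as their demand changes, instead of one fixed unit
# per site. All sites and figures are illustrative assumptions.

class BBUPool:
    def __init__(self, capacity_gbps=10.0):   # e.g. one hotel of pooled units
        self.free = capacity_gbps
        self.alloc = {}                        # site -> Gbps currently granted

    def demand(self, site, gbps):
        """(Re)allocate capacity toward a site's current demand."""
        current = self.alloc.get(site, 0.0)
        grant = min(gbps, self.free + current)
        self.free += current - grant
        self.alloc[site] = grant
        return grant

pool = BBUPool()
print(pool.demand("cbd_tower", 6.0))   # busy-hour city site takes 6.0
print(pool.demand("suburb_a", 5.0))    # only 4.0 left in the pool
print(pool.demand("cbd_tower", 2.0))   # evening: city demand shrinks to 2.0
print(pool.demand("suburb_a", 5.0))    # freed capacity shifts to the suburbs: 5.0
```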

4. HETNETS

Integrating multiple technologies is another way to boost the performance of mobile networks. One operator, SK Telecom, has developed a way to offer simultaneous service across multiple network technologies. By the second quarter of 2012, the operator will be offering its subscribers a service that actively accesses 3G and Wi-Fi networks at the same time. So instead of connecting via Wi-Fi alone when the handset senses a hotspot, SKT's solution uses both the 3G and Wi-Fi networks for downloading content. Rather than simply freeing up 3G capacity to ease congestion, like most examples of Wi-Fi offload today, SKT's service will use Wi-Fi as an additional bandwidth resource to boost speed – up to 60Mbps with the 3G and Wi-Fi combination, and up to 100Mbps when the system is upgraded to LTE and Wi-Fi in the future, the operator said. So far, this approach appears proprietary, based on technologies developed in Korea. However, SKT is pushing for standardisation of the capability as a "Heterogeneous Network Integration Solution" through the 3GPP and the ITU.
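As an illustration of the multi-network idea, here is a hedged sketch of splitting one download across two radio links in proportion to their throughput, so the aggregate approaches the sum of both. The rates and the splitting rule are assumptions for illustration, not SKT's actual (and proprietary) mechanism.

```python
# Sketch of downloading one file over two radio links at once: assign
# byte ranges in proportion to each link's throughput, so the aggregate
# speed approaches the sum of both links. Rates are illustrative only.

def split_ranges(file_size, throughputs):
    """Assign a byte range per link, proportional to link throughput."""
    total = sum(throughputs.values())
    links = list(throughputs.items())
    ranges, offset = {}, 0
    for i, (link, rate) in enumerate(links):
        # the last link takes the remainder so every byte is covered
        share = file_size - offset if i == len(links) - 1 \
            else round(file_size * rate / total)
        ranges[link] = (offset, offset + share - 1)
        offset += share
    return ranges

# e.g. HSPA+ at ~20Mbps plus Wi-Fi at ~40Mbps gives a ~60Mbps aggregate
print(split_ranges(150_000_000, {"3g_mbps": 20, "wifi_mbps": 40}))
```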

5. ALL-LTE NETWORKS

Even more radical, perhaps, is the notion of unifying the entire spectrum portfolio under a single technology. Yoshioki Chika, chief technology officer at Wireless City Planning, the company backed by Softbank, says that the current model of operating multiple networks – GSM, 3G, LTE – is inherently inefficient. "It used to be that bands were used for each system – for example, 900MHz for GSM, and 3G means 2.1GHz. In each system, they would have macro cells, micro cells, pico cells, but they are using the different types of cells within the band, creating a lot of noise and interference with each other," Chika said. "But if we can use LTE in all bands, then we can use the most efficient model – maybe the lower frequency will be used only for the macro cells and the higher frequency can be used only for micro and pico cells for capacity – then we can get more frequency efficiency than ever before."

Those efficiencies will become increasingly important as the amount of data traffic grows. Illustrating a scenario where traffic doubles every year, Chika noted that traffic on such a network would increase some 1,000-fold over a span of 10 years. Given such traffic requirements, the industry will have to come up with new ways to squeeze as much efficiency out of its spectrum resources as possible. "Because of the capacity issues, because of the ecosystem issues, I would say that, in the next ten years, we will only have LTE in the world for the wireless system."

Eventually, even different versions of the same technology, namely TDD and FDD LTE, can be configured for specific roles within the network to squeeze out more capacity. Specifically, networks will take advantage of the frequency efficiency of TDD on the downlink. "If you are looking at the FDD frequency usages, around 20%-30% 'guard band' is required between the uplink and the downlink. In other words, 20%-30% of the frequency is useless. Also, because of the asymmetric traffic trend, there are a lot of idle frequencies on the uplink," Chika said. "If you are looking at TDD, we do still need 'guard time', but it is only around 10% of the entire frequency, and also the bandwidth on the up and down link is flexible, so we can easily understand that the frequency usage is much, much better than FDD."

Even more bright ideas

These are but five examples of where networking is heading in the near future. There are plenty of other approaches now being investigated, and invested in, by different players. Juniper Networks, for example, is pushing a strategy to application-enable the network, giving developers access to its networking APIs to drive innovation in that area and enabling services such as security and customer management to be integrated into the network. "The challenge that exists is not only are the applications changing, the users behave and consume, and they experience applications and information, very personally. While they all share stuff, they consume individually, so they behave individually, and individual behaviour cannot be aggregated, so you almost have to be able to make these applications personal for every user," said Luc Ceuppens, vice president of product marketing, platform systems division at Juniper. "Today, the application world and the network world are strictly separate; there's a big wall between them. But in order to improve the user experience, and again, improve the operational efficiency of running a service provider business, you actually have to have the application and network interact more, and that's where network programmability comes in."

Similarly, Akamai is seeking to leverage its 100,000 servers across the world as an application platform, which will soon allow services, such as real-time conversion of media files, to be conducted on its network instead of on the service provider network or the end-user device. No doubt, everyone else in the industry is also seeking answers to the challenges that are so apparent for today's networks. Some will work better than others; some will get traction in the industry, some will not.


Regulating for convergence: A chat with Glen Boreham Australia’s Convergence Review is a government initiative to define a regulatory framework capable of keeping pace with the unprecedented convergence of media, as well as the other effects of rapid technological change. Unsurprisingly, the process is attracting a great deal of interest both within the APAC region and internationally. While the Review Committee’s final report isn’t due until this month, the recommendations contained in its interim report have already sent shockwaves through Australia’s telecoms industry—including a call to replace or rebuild the country’s current communications and media regulator. Petroc Wilton caught up with chairman Glen Boreham.

PW: This is obviously an interim report… how will this relate to the final version? Are most of the key recommendations bedded down here already?

GB: We've already taken 280 submissions, over 2,600 pages... that's like four telephone books! So these interim findings we haven't made in isolation; we've made them having taken lots and lots of feedback from the industry. So my expectation is... the topline recommendations will be what you'll see in the final report. But where I expect lots of really healthy debate, valuable input to us, will be around implementation plans, transition plans, definitions, etc, that I think will be really important to us in framing the final report.

PW: The first thing that leapt off the page—and indeed the first thing that's flagged in your media releases—is the idea of a new independent regulator. That will raise some eyebrows; first of all, why do we need a separate regulator, and won't this dilute the existing role of the Australian Communications and Media Authority?

GB: First of all, I want to make it clear that that recommendation shouldn't be seen in any way at all as criticism of the ACMA. The ACMA was formed in 2005, and I think there's broad agreement that in some aspects of their operation they were given a very limited toolkit. In 2005, there were no smartphones, no internet-enabled smart TVs, no tablet devices; the world's changed. Most of the skills that currently reside in the ACMA would be valuable to the new regulator, but the new regulator would have a bigger remit, and the most significant thing is: today the way the industry is regulated is largely via prescribed black-letter law. And what we're moving to is an environment of managed regulation where it is a really different discussion, where the new regulator would take a policy framework from the government. It would be able to interpret that, it would be able to make its own rules, and it would be able to adapt to technological change etc. occurring over time. That's the first thing. And one of the things that I think comes out of that is, why at arm's length to government and why independent? The real reason is speed and flexibility. You're going to have a new regulator with this ability to interpret, adjust, make rules; you want that to be able to move quickly. So the idea that it is independent, that it is at arm's length to government, and can operate quickly without having to revert everything back into the machinery of government, is why we're recommending that.

PW: You've made the point that you would very carefully leave the Australian Competition and Consumer Commission's existing activities—in particular, in the structure of the telco industry—unchanged. In the wake of this new independent regulator, where would that leave the ACMA?

GB: The ACMA would be consumed by the new regulator; the ACMA would not exist in its current form. It would form a foundation part of the new regulator... this should be seen as taking the ACMA to its next level, with a much higher capacity and a different role. Rather than just implementing prescribed black-letter [regulation], it would be an organisation that could actually interpret, make rules, and evolve.

PW: It sounds like there are parallels with where the Australian Competition and Consumer Commission operates now, where it'll have a framework under legislation but it'll be up to the ACCC to consult with industry on the interpretation of different issues.

GB: Yes, that's correct. The way we thought about it was, we looked at successful regulation in Australia, and I think there would be a pretty much unanimous view that the way the financial services sector has been regulated over the last few decades has been very successful. If you look at that sector, you've got a strong independent industry-based regulator in the Australian Prudential Regulation Authority, you've got them interacting with the ACCC and also with the Reserve Bank of Australia. That model of managed regulation has served the finance sector well in an environment where things have changed that nobody could have ever predicted, and Australia had the flexibility to manage its financial sector through that, I think, demonstrably better than anyone else in the world. We're taking that thinking and saying 'look, in this content and communications space, none of us know what the next big technology thing will be, none of us know what it'll look like in ten years time, none of us know what shocks may emerge'. So take that concept of a powerful independent regulator, working in concert with the ACCC, that can respond to futures that you and I can't predict—this is the best way forward.

PW: And the ACMA has laid out itself, in its 'Broken Concepts' paper, that there are things that it—that the regulation itself—is no longer equipped to deal with.

GB: That's a very good point. In many ways, the recommendation to create a new regulator was very largely based on the good work the ACMA did in that 'Broken Concepts' paper. And it's a mindset; rather than trying to add a new division to an existing body, or add another schedule to existing legislation, I guess our mindset has been that in order to position Australia for decades ahead, the thinking should be 'let's start with a clean sheet'—let's get the best skills, best ideas, and start there, rather than renovate existing structure. I think that leads to more profound conclusions, and it leads there faster. It's like renovating a house or building one from scratch: often, you're better off starting from the green field.

PW: It strikes me that if there is an overarching theme that runs through the report, it's the idea of decoupling content from platform. You've got this idea of the 'content services enterprise'—so you don't have [specific] regulation for TV broadcasters, you don't have regulation for IPTV, rather you have a set of criteria for someone being labelled a content services enterprise, and once you're in that zone then you're regulated accordingly. Is that correct?

GB: That's correct—there may be some gradation within that, but broadly, yes. The idea that you can be an identical enterprise for all ways and means but you're regulated because you use spectrum, or you're not regulated because you use the internet—that's clearly false. What we're saying is, define an enterprise, set the bar reasonably high—so we're looking at what most of us would recognise today as the big branded content services enterprises—set the bar at that level, and if you look, feel and smell like one of those, you fall into the category for Australian content requirements, diversity requirements, and content standard requirements. And I think that's the only way that you can make a framework that's enduring—because this is technically neutral, platform neutral, it's based on what you're doing rather than on how you deliver it.

PW: You've got a lot of telcos now who are signing up IPTV partnerships… I guess that means that, in a few years' time, they could find themselves regulated on the exact same terms as a traditional broadcaster, for example?

GB: Yes. We deliberately haven't defined those thresholds in this report, it is a work in progress—but the short answer is yes. Again, if you look, feel and smell like Channel 7, you should probably be subject to the same type of public good requirements.

PW: You do acknowledge, in the paper, the difficulty of regulating overseas enterprises on the content side; I'm thinking in particular of the guys who are purely over the top players, someone like a YouTube. How do you make sure that someone like that has the incentive to submit to regulation on an equal basis to an Australian broadcaster—or even, say, a telco who's just launched their own video service?

GB: Again, if you come back to the concept that a content services enterprise is likely to be one of the big brands, they have an Australian presence. They may host their content overseas, but they have Australian subscribers, or Australian advertising revenue or registered offices. So there is an Australian entity that you can regulate—which is one of the issues in the internet world, that I can't really regulate an entity in Prague. With where we're targeting this, we expect there would be an Australian entity. Also, it's targeted at brands, and I know from my corporate life that big brands care about their reputation and their image. So part of it is 'here are the rules'—but part of it is 'here's my image, and I don't particularly want to be seen to be flouting the laws of the land'. And the other element, which is more a sanction than a positive, is in other industries—and I'm thinking here in areas of breaches of content and community standards, like showing inappropriate material that could be accessed by minors—when people care about their brands, 'name and shame' regimes have been used pretty effectively. So you may not, in some cases, be able to prosecute the creator of the content in Australia… but by naming and shaming the provider, they'll say 'well, we don't want our brand to be associated with that'. And they'll therefore act accordingly.

PW: So you start to incentivise a co-regulatory environment?

GB: That's exactly how it would play out, I think.

PW: On your spectrum management guidelines, you're talking about more flexibility for future spectrum use, with market-based pricing etc. Could that, in theory, see additional spectrum freed up for a mobile operator—perhaps by the sort of ministerial power that you're talking about?

GB: Potentially. Basically, what we want to do; first of all, I think it's to the commercial free-to-air TV networks' benefit to break the nexus between broadcast license and spectrum license. It is conceivable, a long way out, that a commercial FTA broadcaster would look to use alternate delivery mechanisms… a TV station might say, many years ahead, that it's actually more efficient to use [broadband-based] delivery systems and hand back some of its spectrum. That would therefore derive a better use of the public spectrum, which is a scarce public asset, and may then be used for other services in different spaces. So I think it's actually to the benefit of the commercial FTAs to break that connection between the spectrum and broadcast licenses; then make the spectrum license pricing regime as it is with telcos—there's market-based pricing. We're really conscious, and this is something that's existed in the telco space, that you need to provide some security of tenure to existing operators. We don't want to create an environment where you give a free-to-air a 15-year license and in the last five years of that no bank will lend to them because they're not sure of their security of existence… we're thinking that through; maybe it's a license with more security, for which you need to re-apply every 15 years to make sure that you're ticking all the boxes, the price is reset based on the market, etc. So we're looking at taking some of the learnings from the existing telco-based market, but applying the broad concept that it is a scarce public resource and should be used as efficiently as possible. If, in the future, an FTA station said it was willing to give some back to pay less, that's the flexibility you'd want; if they actually wanted to trade amongst themselves, I think that's something that should be explored. But it's creating a market to derive the best use of that public resource.

PW: You talk about your new regulator being able to deal with content-related competition issues; the first thing that leapt to my mind there was the vertical integration of content with networks. Is that the sort of issue you're talking about?

GB: Those are examples that the existing metric-based black-letter rules don't cover—and we think they should be covered. If a big online global player wanted to buy a commercial FTA station, should we be concerned? I don't know, but it's certainly something a regulator should explore. If a big telco wanted to buy a commercial FTA station, will that lessen competition? It's something that should be explored.


Evolution in sub cables: an end to the boom-bust cycle? The sheer number of new ventures underway in subsea cables invites comparisons to the dotcom boom era of more than a decade ago – and corresponding speculation as to when this latest bubble might burst. Petroc Wilton reports.

Given the economic instability in some of the world's largest markets, new cables have to contend with cost constraints that simply aren't going away, and old systems that keep postponing retirement dates and cutting bandwidth prices. And there's hardly a dearth of capacity on the major routes. But beyond the raw supply-demand numbers, the need for international bandwidth is arguably broader and more nuanced than ever before. It's the changes here that could see the resurgent sub cable industry hold back the bust – along with some entirely new elements of the business model that, if they test out, could bring in entirely new sources of revenue.

Speaking at PTC this year, TeleGeography analyst Tim Stronge counted 22 new cable projects currently underway. And while some, like the Africa-US-Europe cable WASACE, are exploring new routes, others will duplicate part or all of current systems – or even each other. The Auckland-Sydney cable planned by China Communications Service unit Axin will overlap not just the aging Southern Cross system on the trans-Tasman leg, but also the new Pacific Fibre Australia-New Zealand-US system. The ASSC-1 Perth-Singapore link, tipped for commercial launch next year, looks set to go head-to-head against Leighton Contractors subsidiary Australia-Singapore Cable, which is already working on a cable with a very similar route, design spec, and timeframe.

It's hard to say with any certainty whether raw demand growth alone will provide enough revenue headroom for these new and sometimes competing links to succeed along established routes, particularly with the ongoing erosion of bandwidth costs. TeleGeography reports that the annual rate of international bandwidth growth is dropping off, from 60% in recent years to 47% last year, and expects it to relax further – down into the mid-30% region – over the next few years. And while Stronge noted that trans-Pacific routes now account for almost 80% as much traffic as the traditionally dominant trans-Atlantic systems, he also put utilisation at under 20% on most routes. Meanwhile, the price of international capacity continues to fall; Southern Cross, for example, has just slashed prices to the US from both New Zealand and Australia by 44%. And specific limiting factors mean that cable construction costs are unlikely to see a corresponding decline.

[Pictured: TeleGeography's Tim Stronge]

"The fundamentals of building cable systems – the cost of steel, the cost of oil both for the vessels and to make polythene, the cost of copper, the cost of materials to build repeaters – that isn't going down. All those indices are going up," remarked Huawei Marine Networks CEO Nigel Bayliff. "We can try and space things out more, we can try and squeeze more out of each cable – but fundamentally, those costs have an endpoint."

Of course, cables are built not just to meet new demand; they're also laid to replace older systems. But design lives are not necessarily set in stone, further complicating the business case for new cable entrants who are compelled by logistics to plan their builds two or three years in advance. Southern Cross Cable went live in 2000, but its life has been extended by a series of large capacity upgrades taking it far beyond its original 120Gbps: current total lit capacity is 1.4Tbps, according to sales and marketing director Ross Pfeffer, and he anticipates 6Tbps by December – higher than the initial design capacity of Pacific Fibre. "The quality of our network also allowed us to confidently extend customers' capacity contracts from 2020 to 2025," Pfeffer added in his address to PTC. "I expect that opportunity will arise again in 2015, when there is a strong likelihood that the commercial life of the Southern Cross Network will be able to be extended until at least 2030."

Still, cable vendors, at least, are keeping their cool. "I think even if people do keep [cables] for 25 years, looking at... the dynamics of the market and the diversity of routing, I still think this industry's going to be okay; I still remain bullish," said TE SubCom CEO David Coughlan.

Indeed, route diversity looks set to be one of the key drivers that could help sustain the flurry of new builds. Ciena submarine networks industry marketing director Brian Lavallée says that, with roughly one submarine cable cut somewhere in the world every three days, an increasing number of operators are moving away from linear or ring redundancy configurations to strong mesh networks to ensure that services stay up. Resiliency has always been a selling point, but Lavallée suggested that current technology trends are only increasing its importance – even making it a differentiator for end-users. "We keep talking about the cloud, and it's beautiful, but I just imagine a bank, a government department, a very large enterprise... if you don't have access to your business systems or your data – and we're talking outages here, for some of these cables in the earthquakes, of 35 days – you could lose a huge customer base in one shot," he said.

With the international bandwidth glut and price erosion, more carriers can afford more bandwidth on more cables than in the past, buying into enough links to build resilient meshes. And that's good news for the cable operators, since demand growth in the context of mesh systems could have a compound effect on revenues across the market – ideally more than offsetting the price falloff on individual systems, and making better use of unlit capacity. "We're working with a lot of individual enterprises... [who] have individual contracts with maybe fifteen different cables... and they're actually meshing the network themselves," said Lavallée.
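Lavallée's point lends itself to a quick illustrative calculation. The sketch below uses the worst-case 35-day outage figure quoted above, and assumes each path fails independently; the numbers are for illustration only, not Ciena data.

```python
# Illustrative availability arithmetic for meshed cable routes.
# Assumes each path is down 35 days a year and fails independently
# of the others (both assumptions made for the sketch).
DOWN_DAYS_PER_YEAR = 35
per_path_availability = 1 - DOWN_DAYS_PER_YEAR / 365   # ~90.4%

for paths in (1, 2, 3, 4):
    # The service fails only if every diverse path is down at once.
    availability = 1 - (1 - per_path_availability) ** paths
    print(f"{paths} diverse path(s): {availability:.4%} available")
```

Even on these rough numbers, a second diverse cable lifts availability from roughly 90% to 99%, and a third to better than 99.9%, which is why enterprises are buying capacity on many systems at once.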

[Pictured: Telstra's Jim Clarke]

Telstra International is one example of a firm doing just that; global carriers SVP Jim Clarke sees a business case for two or more new cables crossing the Pacific from South-East Asia. "When the Japanese earthquake happened in March last year, I think the industry as a whole was quite pleased with the way that the various diverse networks responded to many of the outages that occurred… previous quakes in years gone by had taught the industry how to mesh the networks, how to make sure there was enough built-in resiliency and redundancy so that when such a catastrophe occurred, they could easily reroute traffic," he said. "But, in the conversations we had with our various carrier partners and the like, it also made us all sit up and say 'well, how much is enough?' We probably need more systems across the Pacific, particularly to support some of the 4G rollouts that are happening on both sides of the Pacific and just driving traffic exponentially. That needs to find a home somewhere. Yes, there is still a lot of unlit capacity across the trans-Pacific, that's a given – but people are still looking for greater levels of diversity and redundancy in their networks."

And if the case for resiliency is changing, so too is the market for straightforward consumption. NextDC CEO Bevan Slattery – previously MD and co-founder of Pipe Networks, which built the PPC-1 Sydney-Guam link – not only sees traffic consumption in the booming datacentre market exploding, but also shifts in bandwidth pricing opening up new opportunities for submarine cables. "Organisations are now saying... 'I can now run a 100 meg link, or a gig link, for not a lot of money'," he said. "I think that's one of the biggest sleepers; sometimes, when you change your bandwidth pricing structure... it can actually enable something completely different. I see the pricing coming down to the point where... corporates will take a gig link from Hong Kong to Singapore to Australia to wherever, because pricing is becoming really attractive; it's becoming a real enabler."

Predominantly fibre-based national broadband networks are frequently cited as a future demand driver as well – although, as the Japanese example shows, it's difficult to predict accurately when a very high speed customer access network will actually translate into concomitant bandwidth consumption. Still, Internode regulatory and corporate affairs GM John Lindsay has flagged encouraging signs from the early days of Australia's nascent national fibre-to-the-home network. "Internode has hundreds of customers on the NBN... the average traffic to those end-users is at least double what we see in our DSL market," he said. "That's because the lowest speed that's usable on that network is 12Mbps [downstream], and then [the next service] is 25Mbps; when you have a 25Mbps service, when you have a definite 12Mbps – not 'maybe 8Mbps' – you click the HD link when you go to a movie preview site, because you know that you're not going to be waiting around for it."

Finally, some of the new submarine cable entrants are looking at new ways to sell their systems. Latency is emerging as a possible competitive differentiator; NTT Communications director of technology for network services Kempei Fukuda told delegates at the Carrier Ethernet APAC event that some 30-40 companies in the high frequency trading market demand low latency services where every millisecond, and even microsecond, counts.
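For a sense of scale, the floor on any route's latency is set by physical distance and the speed of light in glass. The sketch below is a back-of-envelope check, not a figure from any operator quoted here, and the 14,000km route length is a hypothetical round number.

```python
# Rough propagation-delay floor for a submarine cable route.
# Real systems add equipment, regeneration and terrestrial backhaul delay
# on top of this figure; the route length below is a hypothetical example.
C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
FIBRE_INDEX = 1.47           # typical group refractive index of silica fibre

def one_way_delay_ms(route_km: float) -> float:
    """Propagation delay in milliseconds over a fibre route of route_km."""
    speed_in_fibre = C_VACUUM_KM_S / FIBRE_INDEX   # ~204,000 km/s
    return route_km / speed_in_fibre * 1000

print(f"one-way: {one_way_delay_ms(14_000):.0f} ms")        # ~69 ms
print(f"round trip: {2 * one_way_delay_ms(14_000):.0f} ms")  # ~137 ms
```

Because light in fibre covers only about 200km per millisecond, the chief way to shave milliseconds off a route is simply to lay a physically shorter cable, which is exactly the pitch of the low-latency builds that follow.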

Hibernia Atlantic is looking to cut a few milliseconds off existing latencies with its new cable across the Atlantic; when announced, Pacific Fibre was touted as the fastest Australia-US cable; and Arctic Fibre is aiming for 168ms or less from Japan to London. The question, though, is how much of a premium an operator can extract for a low-latency cable. Arctic Fibre president Douglas Cunningham, himself a veteran of the investment scene, is cautious. "These low-latency services are coveted by the financial services community... [but] we are not building this on the basis of trying to achieve any premiums relative to the other revenues that are available today," Cunningham told PTC'12. "I believe that in the not-too-distant future, the regulators at various stock exchanges around the world are going to increase the transparency of markets to prevent financial 'front-running', or trading ahead by using data advantages. That is to say, they will stop the clock at the top of the hour, allow a burst of data, and so on. So we're not predicating a business model on any premiums from that side of the equation."

[Pictured: TE SubCom's Ekaterina Golovchenko]

Also coming out of PTC'12 was the suggestion from TE SubCom product management MD Dr Ekaterina Golovchenko that new cables could be built to bear undersea sensors such as TE SubCom's own Subsea Power and Data Port. This could give cable operators an entirely new revenue stream from a scientific community which currently finds it difficult to collect data from the ocean floor. Indeed, the firm is already working with Pacific Fibre on the opportunity to add sensors to the latter's network. It remains to be seen whether TE SubCom and other vendors will attract the extra investment required to make co-located subsea sensors a reality, though – and whether they'll be able to navigate the already complex legal web of treaties and protections around subsea cables crossing exclusive economic zones, continental shelf areas and the high seas.

There are undoubtedly interesting times ahead for the submarine cable market over the next couple of years. Particularly in the current economic climate, the possibility remains that some of the array of new systems will either fail once in operation, or simply never attract the funding they need to begin construction in the first place. But with a greater breadth of demand than ever before, new potential business drivers and revenue streams, and a very different resiliency paradigm from that of a decade ago, the submarine cable may just have enough options to ride out the current surge and avoid repeating dotcom-era history.


Solar storms represent a clear & present danger to networks

While prophecies of gloom and doom for the year 2012 will hopefully prove overblown, there is a very real threat to the world's telecommunications infrastructure that begins building towards its peak this year and will persist well into 2015: the solar maximum. William Van Hefner reports

Just as the seasons on Earth carry with them the potential of damage to infrastructure from rain, wind and natural disasters, the sun experiences its own natural cycles, culminating every 11 years in what is known as the solar maximum – a period when solar activity is at its peak. During this time, the danger of the earth being struck by a solar flare or coronal mass ejection (CME) is at its highest, bringing with it the threat of significant disruption to worldwide communications, navigation and power distribution.

While solar storms have been observed to cause disruption to communications networks and damage to equipment since at least 1859, society's increasing dependence on electronics and telecommunications services makes such incidents more noticeable and potentially hazardous with every passing year. A major solar storm in 1859 may have caused only minor damage and disruption to the telegraph service of the time, but a storm of the same magnitude today would likely destroy key electric and telecommunications infrastructure that would take months, if not years, to repair.

While solar flares and CMEs can affect the earth's atmosphere in a number of ways, their greatest destructive potential is in the form of geomagnetically induced currents. Much like lightning, these electrical currents find a path to ground through any material capable of conducting electricity. Unlike lightning, however, the energy is invisibly induced into conductive materials at speeds far too fast to be stopped by modern surge protectors or lightning arrestors.


Most vulnerable to these currents are electrical cables, transformers, telephone wiring, satellites and communications towers. Underground and undersea cables are not immune, either: surges of thousands of volts are capable of flowing for miles, both underground and under the sea.
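To put rough numbers on the induction mechanism, the sketch below shows how a storm-driven geoelectric field of a few volts per kilometre, accumulated along a long conductor, produces surges of the scale described here. All figures are assumptions chosen for illustration, not measurements from any particular storm.

```python
# Rough, illustrative geomagnetically induced current (GIC) estimate.
# Field strength, line length and resistance are assumed round numbers.
E_FIELD_V_PER_KM = 5.0       # severe-storm geoelectric field (assumed)
LINE_LENGTH_KM = 400.0       # a long grounded conductor, e.g. a power line
LOOP_RESISTANCE_OHMS = 4.0   # line plus earthing resistance (assumed)

# A quasi-DC voltage accumulates along the full length of the conductor...
induced_voltage = E_FIELD_V_PER_KM * LINE_LENGTH_KM          # 2,000 V
# ...and drives a near-DC current through grounded equipment at each end;
# currents of this order can saturate transformer cores, which protection
# designed for brief lightning impulses does nothing to stop.
gic = induced_voltage / LOOP_RESISTANCE_OHMS                 # 500 A

print(f"induced: {induced_voltage:.0f} V, quasi-DC current: {gic:.0f} A")
```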

Fibre To The Rescue?

The replacement of highly conductive copper cabling with fibreoptics certainly improves the odds of a telecommunications network surviving such a solar event. Unfortunately, though, a chain is only as strong as its weakest link. In the case of fibreoptics, the weak link is that most telecommunications equipment must run on electricity. While fibreoptic cables themselves may not conduct electricity, electronic circuitry and switching equipment does. This is complicated by the fact that most of these circuits are connected to the electrical grid at some point – and the grid remains the most vulnerable part of our infrastructure. Any damage to the electrical grid will eventually cause fibreoptic communications to fail, unless carriers can generate enough electricity of their own to keep their equipment running until power grid repairs are completed. Proper planning for such an event would include provisions for powering not only long-haul transmission facilities but also end-user telephone equipment for extended periods. If backup batteries are located at the premises, adequate shielding would have to be provided to avoid damage.

Wireless Problems

Certain forms of wireless communications are also highly vulnerable to solar storms. Most at risk are communications satellites, which can suffer permanent damage to their communications capabilities as well as to attitude control. Even if a satellite continues to function, atmospheric interference can temporarily disrupt reception by satellite phones and fixed earth stations. GPS receivers are especially vulnerable to interference, as GPS satellites transmit at very low power levels. All forms of radio communications are also subject to severe atmospheric interference, or to complete communications blackouts. Lower frequencies tend to be hit hardest, with UHF and microwave transmissions less prone to interference than HF and VHF communications. Atmospheric interference and communications blackouts can last anywhere from minutes to days, depending on the severity of the storm. Being relatively new, cellular and high-speed data networks have yet to be tested under severe geomagnetic interference. However, the equipment's typically low transmit power and dependence upon cellular towers could prove troublesome, as the towers themselves depend upon interconnected power lines and backhaul services to continue operating.
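The common thread in both sections above is power autonomy. As a rough sizing illustration, with all figures assumed for the sketch rather than drawn from any carrier's plans, the fuel needed to ride out a month-long grid outage at a small site looks like this:

```python
# Rough power-autonomy sizing for a small carrier site.
# All figures are assumptions for illustration only.
site_load_kw = 10.0        # assumed average draw of a small exchange hut
outage_days = 30           # grid down for a month, per the scenario above
diesel_l_per_kwh = 0.3     # approximate consumption of a small genset

energy_kwh = site_load_kw * 24 * outage_days
fuel_litres = energy_kwh * diesel_l_per_kwh
print(f"{energy_kwh:,.0f} kWh -> about {fuel_litres:,.0f} L of diesel on site")
```

Even a modest site on these assumptions needs thousands of litres of fuel on hand, which is why on-site generation and refuelling logistics feature in the mitigation advice below.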

Nowhere To Hide

Telecommunications networks, electric utilities and wireless communications at the most northern latitudes generally run the greatest risk of service interruption during solar storms. However, destruction of electrical equipment and radio interference have been recorded as far south as South Africa. Exposure also increases with altitude: facilities located at high altitude are more likely to suffer from magnetic and radio interference, as well as from induced electrical currents and radiation.

What Are the Odds?

According to a 2010 U.S. National Academy of Sciences report titled Severe Space Weather Events, the odds that earth will be hit by a solar storm as severe as that of 1859 are classified as "low, but real". Dr Richard Fisher, director of NASA's heliophysics division, was quoted as saying: "It's very likely in the next 10 years that we will have some impact like that described in the National Academy report. Although I don't know to what degree." The report stated that a major solar flare, such as the one that occurred in 1859, would have "catastrophic" consequences were it to hit the earth today.

Risk Mitigation

At present, most electrical grids and telecommunications networks are ill-prepared for the type of disaster that a major solar storm could bring. Electric utilities are generally more vulnerable than telecommunications systems, but without power from the grid most telecom networks will eventually fail. Carriers using both copper and fibreoptics can greatly improve their networks' odds of survival by adequately shielding cabling and electronics. Even greater protection can be had by locating equipment in hardened facilities that shield interior equipment and cabling from electrical induction. Probably the most important safeguard that carriers and IT facilities can take to prevent disruption of service is to maintain their own on-site power generation equipment: grid power may be unavailable for weeks or even months following a major solar storm. Carriers relying on satellite-delivered communications are in a much tougher position than those running land-based networks, since repairing satellites physically damaged by solar activity is difficult, if not impossible, and building and launching replacements takes years of work and planning. Keeping a "hot spare" satellite in orbit may not be an economically attractive solution, but it is the only option that currently provides satellite operators with any reasonable redundancy.

The Future

Both electric and communications networks of all types are becoming increasingly interconnected as technology continues to dissolve the divide between them. While the many benefits of network interconnection may be obvious, the risk that comes with increasing network interdependence is not. Scientific observation of solar activity covers only a relatively brief period, but historical records are replete with examples of violent storms that would do immense damage to modern networks were the earth to experience them today. Unfortunately, the relative lack of solar activity in recent years has lulled society into a false sense of security about such dangers. Unless carriers and network designers plan ahead for such events, our first encounter with a major solar storm in the 21st century could provide a very expensive lesson in underestimating the immense destructive power of our own sun.

Storm effects upon telecommunications services

1859: Telegraph communications across Canada and the northern United States are rendered "utterly impossible" by electromagnetic interference from intense solar activity – likely the result of the greatest coronal mass ejection in recorded history.

1882: Solar activity interrupts telegraph traffic across the U.K. and the United States. A Western Union switchboard in Chicago catches alight, and telegraph keys melt at some locations. Notably, buried cables and above-ground cables are equally affected.

1903: An aurora display that stretches "half way around the globe" temporarily disrupts telephone and telegraph service in many parts of the United States, Canada, Mexico, the U.K., Switzerland and France. Telegraph lines are unusable, reportedly charged to as much as 675 volts. In Switzerland, both telephone and electrical service are disrupted. Disruption to communications via trans-Atlantic cables is also reported.

1909: Solar activity causes severe electrical and magnetic disturbances across most of the United States and Europe. Currents absorbed by telegraph wires cause them to fuse together. Compasses on lake ships in the United States are rendered completely inoperable, sending numerous vessels off course.

1921: A particularly large solar flare causes a complete communications blackout from the U.S. Atlantic coast to the Mississippi River. Induced currents cause fires at a telephone company in Sweden and at a New York Central Railroad facility. Telephone service throughout Europe is also affected.

1940: Almost all telegraph and long distance telephone lines in the United States are rendered inoperable. A trans-Atlantic cable ceases to function after recording power surges of up to 2,400 volts. Long distance lines between the United States and Canada are later found fused together by the high voltages.

1972: A Class X2 solar flare causes the long distance circuits of AT&T and Canadian telephone operators to shut down after 60-volt surges are recorded on their cables. Various telephone system components, "from noise filters to carbon blocks", are destroyed.

1989: A massive cloud of superheated gas from the sun bombards the earth, and the resulting ground current surge cripples the electrical grid of Hydro-Quebec. Six million Canadians awake to find themselves without electricity; the outage causes an estimated US$2 billion in damage. In the United States, over 200 incidents of power disruption are reported.




America to Iceland & Portugal, via Ireland

Emerald Networks has embarked on an ambitious project to build a 40Tbps cable system across the North Atlantic. CommsDay talks to Emerald Networks CEO Raymond Sembler about the business case for constructing a new system on a route widely regarded as having plenty of latent supply, and as being under pressure from continual price erosion.

TC: What is the Emerald Networks project about?

RS: We are building a trans-Atlantic route, backhauled from New York City right through to London. On the wet plant side, we are going to go from Shirley, Long Island, which is about 8 kilometres from NYC, across the Atlantic to hit the west coast of Ireland, coming into a small town called Belmullet. From there, we will do terrestrial backhaul across the whole of Ireland, through Dublin, into the Irish Sea, then into the UK and down into London. On both of the terrestrial sides we will have dual routes: a primary route that will focus on low latency, and a backup route to make sure that people get some assurance of the network on the terrestrial side. Off the trans-Atlantic, as you head eastward, we will have a branching unit that will run up to Iceland, and a branching unit that will come off and come down to Portugal. So we are going to service the trans-Atlantic with a low-latency, high-capacity route of 40Tbps across the ocean, and we are going to have a 10Tbps branching unit into Iceland and a 10Tbps branching unit coming off to go down into Portugal.

TC: What is the strategy behind the build?

RS: The strategy is twofold. We want to service the southern European market by coming into Portugal. We know that Africa is underserved – it has a hard time getting to North America – so what we are planning to do is connect right in with the ACE system and other systems that come out of Africa into Portugal; from Portugal, traffic will go north into Europe and France, then come back into the UK, forming an almost circular route in southern Europe.

Iceland is the other part of our twofold strategy. We know there is an emerging data centre market in Iceland, and that's really based upon the energy there – the geothermal and hydro energy, which is 100% green. The most important thing about this green energy is that it is probably one-fifth to one-tenth of the cost of carbon-based supplies in Europe and North America. So we have seen an onslaught of data centre activity in Iceland, and we know that quite a number of large Fortune 500-type technology companies are looking to place some services there, which should in turn attract the Fortune 500/1000 to put their data centres in Iceland. What is unique about Iceland now, with the energy play, is that it sits somewhere between Europe and North America, so we are starting to see what we call the Emerald triangle: if you imagine the map, from New York to London, with the bridge up to Iceland, people who have data centres in Europe may initially put backup facilities or DR sites in Iceland, or start to move some supercomputer sites there to take advantage of the power.

At the same time, by coming into the west coast of Ireland, people are saying: 'I'm in Dublin, and right now from Dublin I go eastward to London and then jump onto one of the other trans-Atlantic routes and come around back towards North America.' What we are getting a lot of attention on is: 'But if I'm in Dublin, why don't I shift my traffic west to Belmullet, jump on the Emerald Express and hit North America that way? It will be much more efficient.' Ireland has done a great job with the data centres in Dublin – there are about 25 Fortune 500 companies there – and it is looking to attract services to the west coast to take advantage of its proximity to North America.

TC: Where does the project stand today?

RS: Right now we have some Round A funding, and Wellcome Trust is one of our investors. We also got US$500,000 out of Iceland, basically to reassure us that Iceland is very much in the game. We have three focuses right now. The CFO and myself are focusing on raising Round B money to the tune of about US$15-US$20 million, and we are very hopeful that by the end of February we will have locked down that additional equity investment. We've got Jefferies & Company, a world-class investment banking firm out of New York City – we've been working with them since October.

We have finished our business plan, and we've got an executive summary that we are handing out to initial investors now – it's basically a three-pager with our cashflows and ROIs for the early equity investor. Jefferies are going to launch Round C in the March-April timeframe, in the hope that we will raise another US$80 million in equity in May. Following on from that, there will be US$100-US$110 million in debt, which will be structured finance. The initial route from New York to London, with the backhaul and so forth, is about a US$220 million cost, and as we add the branching units to Iceland and Portugal, it will be another US$70-US$80 million. So we are on target, and we feel pretty comfortable right now that we will be in the water on the New York to London route in late 2013, provided we continue to raise the money that we anticipate. Our presale is a big part of that: we are hitting all the Tier-1 and Tier-2 carriers right now in the process of doing pre-sales with them, and that will help fund the debt. With the schedule that SubCom is putting together for us, we have a September 8 [2013] date for RFS – that would basically be when the lights turn on – and the plan is to come back in the spring of the next year to finish those branching units.

TC: How will Emerald Networks compete in what is a pretty competitive market?

RS: The last cable that went in was in 2003, so by the time we light this up in 2013, the youngest cable besides ourselves will be at least 10 years old. In the conversations we are having with the large carriers – they are members of TAT-14, and there are other opportunities out there – one of the strategies we are really pushing hard is the 15-year IRU. Basically, we feel that people can get five to seven years out of those systems, so if carriers go back and purchase additional capacity on them, they will only get five to seven years at most; for the same amount of money, they can get a newer system, with 100G speed, newer technology, better fibre and a 15-year IRU. Our take is not that the Atlantic is out of capacity; our take is that the Atlantic's capacity is older – out of technology rather than out of capacity. We are really focused on being new, with new technology, on tying into Portugal to bring African traffic up to Europe as well as North America, and on the data centre play. The business model we are showing investors looking to make serious investments in our company is basically this: we are going to use the Atlantic to help pay the bills – we are not trying to flood the market, we are not trying to give away capacity – and we will use the Atlantic revenue to fuel what we believe is the real cornerstone of our business model, which is the data centre play with Iceland and Ireland. The big Fortune 500 companies we are talking to, which we think will move their European data centres towards Dublin and into Iceland, will want to connect to America as well, and that really creates the triangle between North America, Iceland and Europe. So a big chunk of our revenue model in 2015/16/17 is primarily focused on the data centre triangle. We think we have a dual strategy that is somewhat unique among the cables.

TC: Iceland seems to be playing a major part in your business plan – what's the connection there?

RS: We've got a founder who is Icelandic, we have a board member who is Icelandic, and we have ins with the government up there. We've been up there many times – one of our founders still lives there. By the time we complete Iceland in 2014, two years from now, there will be big pushes: Verne Global [a data centre operator in Iceland] just received a huge investment from Data Catalyst and Wellcome Trust, which furthers their business model. We have been chatting to what we call the Fortune 50 firms, who we know have been to the island and who are now talking to us about packaging our capacity with their services for data centres. So while Iceland has been thinking about this for a while, we think that by the time we put the system in the water, they will be ready for us. They have done a lot of work on the tax side: they are not going to charge you for the equipment and services that you bring in for the data centres. It is almost as if they have cordoned off a free-trade zone. They haven't really done that, but essentially they have put some tax incentives in place; and with the cost of power, and with the rules coming down on CO2 taxes and the cost of coal-based services, we think the timing is right for Iceland to make a major play to use its power for something other than the aluminium smelters, which are basically the biggest users of power on the island right now.

TC: What about the ongoing price erosion that is expected in the trans-Atlantic market?

RS: Over the last three years we've seen prices flatten out, which is good for us. There is still a little bit of a hangover from the 1999/2000 era. It is not our intent to flood the market with capacity and drive prices down. What we have projected in our business model is a couple-percent drop in prices over the years; that has really helped investors feel comfortable that we are not projecting a significant increase in pricing on the Atlantic. As our cable goes in, and it is newer, we do expect to receive a little bit of a premium for the newer route, and a premium where we can give someone a 15-year IRU – the other cable providers just can't offer 15 years when they have already been in the water for 10-15 years. The real premium that we will get on pricing will come from the data centre market.

TC: Have you sorted out the landing parties yet?


RS: We are in negotiations with one of the biggest carriers on the east coast of the US to perhaps partner with them on a cable landing station, so we are in the midst of having conversations there. We don't have Portugal yet, but given that is 2014, we have some contacts there and we don't see that as a problem. Our initial focus is New York: we've picked a site and we are still in contract negotiations, because we are looking to do a swap for some bandwidth – let's use the cable station, let's look at the backhaul up to NYC. In Grindavik, Iceland, we are only 10-15km out of town, and there are a couple of backhaul providers; our founder up in Iceland is working with people who have a ring around the island, so Iceland is pretty easy – we've already picked a spot there and are securing land. In Ireland, we have a very friendly Irish government, who have attracted us to the island. They have basically helped us with the permitting process: what we were told would be a six to nine month permitting process was done in about three weeks. They are also giving us access to their oceanographic ship for three weeks, basically at no cost, for the marine survey off the tip of Belmullet. The biggest challenge now is probably getting the permitting done in New York.

TC: How will the project generate returns for investors?

RS: We think we have a strong operations team, a strong finance team and a strong sales team, and basically we take one foot forward in sales, one foot forward in finance, and one foot forward in operations. So we are marching along together towards that 2013 calendar date, because we obviously need more money to build this out: sales will help drive funding, and funding will allow us to make the payments to SubCom that we need to start making in short order, so they can start ordering the long lead items. We have looked at a lot of the studies – we subscribe to TeleGeography, as other folks do, and Cisco has come out with some information – and what we are looking at is a 9-fold increase in data traffic from now to 2017.

This cable is going to have a life of 25+ years. We have run our cashflow out to 15 years, and the way the model works right now, by year five, six or seven we'll have 100% paid off our debt. We're going to give early investors 35%-40%, and on the debt side – given the nature of the capital markets right now, which are very conservative – we are putting forth an 8% interest rate. And given the traffic increase that we're seeing – 70%-75% of the Internet is still served up by the US, and the Atlantic has yet to see real Internet traffic – while there is still capacity there from the other carriers, people are going to migrate to new technology, to safer routes, to lower latency routes.
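Two back-of-envelope checks on those numbers, using assumed round figures inside the ranges Sembler quotes rather than Emerald's actual model:

```python
# 1. What annual growth does a 9x traffic increase from 2012 to 2017 imply?
cagr = 9.0 ** (1 / 5) - 1
print(f"implied traffic CAGR: {cagr:.0%}")            # ~55% a year

# 2. Annuity check: the annual payment that retires US$110m of 8% debt in
#    six years (principal and term are assumptions within the quoted ranges).
principal, rate, years = 110e6, 0.08, 6
payment = principal * rate / (1 - (1 + rate) ** -years)
print(f"annual debt service: US${payment/1e6:.1f}m")  # ~US$23.8m
```

On those assumptions, the stated payoff window corresponds to roughly US$24 million a year in debt service, which gives a rough sense of the presale revenue the project needs to lock in.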


Australia a more attractive global destination for tech spend?

As volatility continues in the US and European markets, Australia is emerging as an increasingly attractive option for global telco market players looking to refocus some of their domestic spend overseas. David Edwards reports

Australia enjoys a strong position as a gateway market for APAC, with many telco firms large and small looking to invest in the region more than ever before – if only to offset declines elsewhere. A new Gartner global report, covering 37 industries in 45 countries and polling more than 2,300 CIOs with a combined IT budget of US$321 billion, found there would be an average IT budget increase of just 0.5% across the board in 2012. But while IT spend in North America and Europe was expected to decline 0.6% and 0.7% respectively, APAC spending was tipped to rise a healthier 3.4%.

Norton Rose partner Nick Abrahams says that because Australian tech budgets have not been "squeezed as hard as budgets in Europe and America" since the 2008 Global Financial Crisis, the result has been a significant rise in US companies launching startup tech businesses in Australia. "The Aussie dollar is doing well, our technology budgets are still pretty good, and Australia is seen rightly or wrongly as a stepping stone to Asia, as an English-speaking outpost in this part of the world," he says.

Ovum analyst Kevin Noonan cites two reasons for the renewed interest in Australia. The first is Australia's current reputation as a resilient economy, which is enticing companies into spreading their national risk beyond the traditional European and US markets. Secondly, like Abrahams, Noonan says that Australia offers overseas companies a natural launching pad – both regionally and culturally – into Asia. "Gaining experience in Australia first has indeed been a very good way of [eventually] moving into Asia, particularly since the Asian markets tend to value relationships and trustworthiness even more," he explains. "A good example is companies that do badly in Asia tend to be the ones that just want to do a deal, and are simply saying 'what sort of a deal do I need to do to make you buy?' Australian markets tend to be sensitive to that issue as well, but are far more upfront in educating companies about their shortcomings!"

According to BT Australasia MD Paul Migliorini, the idea of Asia being critically important to telcos is hardly a new one. But since the GFC, he says, traditional multinationals from Europe and the US have been quickly discovering that Asia is now a key – and, in some cases, their only – source of top-line growth. "And that drove a real change in decision making… [from being] an organisation where the [global] CIO has traditionally been an implementer of global architectural decisions in Australia, or decisions were made in the US. Today the Asian CIO is responsible for enabling services that will drive top-line growth of 10-30% a year in Asian markets – and all of a sudden the Asian CIO has the pen on what decisions get made for his region."

"There are a lot of multinationals who are looking at Australia now and saying 'Australia is economically in the best shape it's been in a long time, arguably ever'," he continues. "And I think Australia does compete with Singapore and Hong Kong for those regional hub-type investments as well, and the quality of our infrastructure locally is a factor considered heavily also."

That's not to say that Australia is taking all the regional business, however. AT&T ANZ MD Fred Girouard says the US-based telco is still seeing its own customers venture elsewhere in APAC for their regional hubs. "Although Australia is an important regional market for our customers, it is not a current trend for Australia to be the regional headquarters. For our customer base, the more frequent locations are the ASEAN markets or greater China," he says. "We have not noticed a significant increase in the number of global MNCs setting up business in Australia. However, we are seeing growing demand from Australian-headquartered MNCs – especially in the mining and resources sector – driven by strong economic growth in emerging countries like China."

Nevertheless, Abrahams says that the number of firms coming into Australia from overseas, particularly Europe, is currently unprecedented. "[These entrants] are all related to tech and telco – whether it's utilising cloud in some way or software associated with the comms sector. If you look at last year, we had a very large number of venture capital US companies come in that previously had never been interested in Australia; before then you'd have to move to Silicon Valley… your whole business over there to get their attention. Last year NTT came here and bought 70% of Frontline Systems, a cloud provider… Vodafone Global bought a software company called Quickcomm. There's also M2M communications and remote meter reading… and previously, coming to Australia wouldn't have been a first option for these types of companies."

[Pictured: Norton Rose's Nick Abrahams]

Genband is one US-based company that has recently stepped up its ANZ investment. The global IP infrastructure solutions provider bought Nortel's carrier voice and applications services unit in 2010 and has since integrated the company's VoIP products into its portfolio; Genband now provides service offerings to Tier 1 carriers – including Telstra, Optus and TNZ – and smaller Tier 3 carriers, such as iVoisys. APAC VP for sales Dave Shier says that the growth rate and momentum of the Australian economy "remains much higher than in other countries such as the US and many parts of Europe." "We are focused on the growth of our Australian team. For example, we are doubling our investment and focus on Tier 1 carriers, as we know there are substantial network transformation business opportunities to assist them in… [transitioning] to all-IP converged networks," he says.

As for companies establishing a presence in Australia as a gateway to Asia, Shier says that all service providers are finding they need to play on a global scale to counter the borderless nature of the internet. "In Australia, as we watch Telstra push out [overseas], we see other national operators push in, like SingTel, BT and Verizon," he says. But the strong Australian dollar will not play a significant role in Genband's future expansion plans, according to Shier. "While [it] does make our resources in the region more expensive, many of our contracts are also in Australian dollars, which helps minimise the impact," he explains.

Rather, in terms of driving future investment in Australia, BT's Migliorini and Shier both see the country's government-funded National Broadband Network project opening up possibilities for overseas-based firms, the latter describing it as an opportunity to "reinvigorate the communications space here." Ovum's Noonan adds that regional Australia, in particular, could be the big winner in terms of foreign investment in an NBN-enabled world. "The market is set for growth in regional Australia; there are already universities, attractive and cheap living conditions – so there's a challenge for regional Australia, in that councils are looking to use this opportunity to attract business to the region," he says.

For now, Abrahams expects this pattern of strong overseas IT and telecommunications investment in Australia to play out over the next few years as the US and Europe shake off their GFC hangovers. "Australia is still very strong and the banks are very profitable… there's the mining boom, governments are still raking in good tax dollars and can afford to spend," he says. "And organisations can't continue to hold back on tech spend any longer. They've held back during the GFC, and now upgrading has gone from being discretionary to critical."


WILLIAM VAN HEFNER

How to destroy the internet

Bills introduced into the U.S. House of Representatives and U.S. Senate in late 2011 had the potential to all but cripple much of the Internet had they eventually been passed into law. A last-minute flurry of grassroots opposition, spurred on by the support of companies such as Google, Netscape and Wikipedia, managed to temporarily derail passage of the bills, but it is almost certain that they will be reintroduced at a later, more politically expedient time.

Known collectively as SOPA (Stop Online Piracy Act) and PIPA (Protect IP Act), the two bills were sponsored by legislators with close ties to the entertainment industry. In theory, the power of these new laws would have been used to combat international copyright infringement. In the process, the legislation would also have completely bypassed one of the most fundamental aspects of United States law: the right to due process. Essentially, SOPA and PIPA would do away with one's right to be presumed innocent until proven guilty in a court of law. A mere complaint to the proper authorities in the United States would be enough to unleash a scorched-earth campaign against the accused, no matter where in the world they resided. While most U.S. laws do not normally affect those outside of the United States, the effects of this legislation would have been felt across the entire Internet, with little or no recourse for those in other countries.

While SOPA and PIPA may have had some effect against copyright violators, the potential collateral damage to both international businesses and individual Internet users would make any losses felt by the entertainment industry pale by comparison. In essence, these laws would put direct control of domain name resolution (DNS) and the blocking of IP addresses in the hands of bureaucrats with little or no concept of how the Internet actually works. The potential for incompetent government officials to accidentally cause worldwide disruption to major portions of the Internet seems never to have been contemplated, nor does any consideration appear to have been paid to the effect it might have on those outside the U.S.

At best, SOPA and PIPA would place the very existence of entire international corporations in the hands of media outlets and copyright lawyers. At worst, the laws could be used by anyone with a fax machine and too much spare time on their hands to bring much of the Internet to a grinding halt. Since neither law places any accountability on government employees or copyright holders for simply following the letter of the law, it would be entirely within the boundaries of the law to file a complaint that could shut down key domains and IP addresses used to direct Internet traffic. For example, a copyright complaint targeting google.com or opendns.com would have the effect of cutting off DNS service to literally tens of millions of Internet users across the world, since many carriers, ISPs, government agencies and private companies rely upon them for DNS resolution. In fact, the government would only need to block international IP traffic to a total of four IP addresses to render these two leading commercial DNS services completely inoperable. All it would take is a mere allegation of copyright infringement.

Rather than a web of interconnected networks crossing the globe, in many respects the Internet more closely resembles a chain that is most vulnerable at its weakest link. Perhaps the most troubling aspect of all this is not so much the possibility (if not inevitability) that a scenario such as the above could occur, but the fact that a single government could, and would, build the infrastructure necessary to disrupt worldwide Internet traffic at the mere push of a button. To hackers, terrorists and anarchists alike, such a system would prove an almost irresistible target. To politicians and private business interests, the ability to use or abuse such a system would likely prove equally enticing.

The United States already has a reputation for using "gunboat diplomacy" to impose its political will across much of the world. The mere fact that it is contemplating legislation with the potential to disrupt worldwide communications should concern not only those of us in the United States, but the citizens of every nation.



