
December 2011 – January 2012 • Published by Decisive • A CommsDay publication

Are data centres now essential economic infrastructure? • The new face of innovation at AT&T • Behind the new Connecting Africa cable • LightSquared tips the boundaries of politics & tech • Amazing new developments in optical technology

Early Bird Registration now open for Australia’s premier telecom event PLATINUM SPONSOR

Westin Hotel, Sydney, Australia • Tuesday 17 April & Wednesday 18 April 2012


Shadow comms minister Malcolm Turnbull

Telstra GMD Consumer Gordon Ballantyne

ACMA chairman Chris Chapman

DINNER SPONSOR Primus CEO Tom Mazerski

AAPT CEO David Yuille

Internode MD Simon Hackett


Comms Alliance CEO John Stanton

AARNET CEO Chris Hancock

Ovum analyst David Kennedy

Market Clarity CEO Shara Evans

Mallesons Stephen Jaques’ Neil Carabine

“Where Australia’s telecom leaders meet”

Plus many more—full programme in January

Early bird registration available now: Stay at the Westin for convenience and value. Special offer. Register here:


ABOUT COMMSDAY MAGAZINE Mail: PO Box A191 Sydney South NSW 1235 AUSTRALIA. Fax: +612 9261 5434 Internet: COMPLIMENTARY FOR ALL COMMSDAY READERS AND CUSTOMERS. Published up to 10 times annually.

Columns KEVIN MORGAN Will the poor really pay more for fibre?



WILLIAM VAN HEFNER The missing of politics & telecoms


EDITOR: Tony Chan at GROUP EDITOR: Petroc Wilton FOUNDER: Grahame Lynch WRITERS: Bill Bennett, David Edwards, William van Hefner, Grahame Lynch, Dave Burstein, Bob Fonow



EVENT SPONSORSHIP: Veronica Kennedy-Good at






CommsDay is published by Decisive Publishing, 4/276 Pitt St, Sydney, Australia 2000







ACN 13 065 084 960

Beyond Broadband – Fluid Experiences on Liquid Networks (“Worms ate my network”)

Australia is entering the “post-broadband” era. Operators no longer control the value chain, and devices and independent applications are important as customers are connected over a range of networks – a mobile network on the way to work, Wi-Fi at work and the fixed network at home. This is a whole new “can of worms”. Operators must come to terms with surges of demand as customers pile into trains, pull out the smartphone that was released just last month and start watching the “Over-The-Top” streaming video service that just arrived in Australia. The demand changes continuously, and customers experience how the network is coping on their devices minute by minute as the train gobbles capacity from base station after base station, like Pac-Man or the Worms of the last generation’s handheld games.

Customers expect fluid experiences as they move from place to place and even network to network. One passenger on the train picks up reading a book from one device to the next – opening automatically at the same page, synchronised via networks. Another carries on a long conversation as the phone shifts from cellular to Wi-Fi at work. Yet another finds the movie that they started watching on the media player and large-screen TV at home can be finished on a tablet on the train. It is becoming a world of fluid experiences on liquid networks.

Access network capacity has to be more liquid than ever before, and the customer relationship and experience will only be fluid if the core network is designed around this goal. Kalevi Kostiainen, Managing Director for Nokia Siemens Networks, Australia and New Zealand, will describe the changed environment and how infrastructure has to be even more carefully architected to accommodate dramatic changes in unpredictable requirements.

The fixed and mobile access networks need to deliver the speed and gigabytes, but the operator’s core network is taking on a much greater role in maintaining the customer relationship and growing revenues beyond access. “Honey, I lost my telco!” was the new concern last year as operators started to grapple with the changes around devices. This year there is an added concern as demand surges and changes. “Worms ate my network” could be the CTO’s complaint in 2012 if action isn’t taken quickly.

Copyright Nokia Siemens Networks. All rights reserved.


Will the lower paid really pay for fibre?


According to the latest Australian Bureau of Statistics (ABS) 2010-11 survey, around 1 million Australian households now use mobile broadband as their ‘main type’ of broadband connection – 16% of Australia’s 6.2 million broadband households. This marks a significant increase since the 2008-09 survey, when only 7% of broadband-enabled households were using wireless (mobile) as their main broadband connection. The surge in the use of mobile was to be expected given the bureau’s mid-year ‘Internet Activity Survey’, which found there were now more mobile broadband than DSL connections.

This rapid growth in mobile connectivity has implications for the NBN, as the Greenhill Caliburn review of the corporate plan stressed twelve months ago. The NBN’s business case rests on demand for fixed line services being maintained, with no more than 13% of occupied premises being wireless-only by 2025. Both the government and NBN Co are sensitive to increased interest in mobile broadband and have been at pains to stress that the sheer volume and growth of data being downloaded demands fibre. Their argument that fibre and wireless are complementary has obvious merit, and there seems little question that the ability to bundle mobile and fibre-delivered broadband will be critical to ISP success in the NBN era.

So should these latest findings from the ABS be of concern to NBN Co? Yes, but not necessarily solely because of the threat of wireless-only households. Just because 16% of broadband households currently identify as mobile only doesn’t mean they are locked in as wireless-only households. There may be a number of factors at play, not least of which is the impact of what CommsDay founder Grahame Lynch has elsewhere described as the six-year capital strike that has prevented investment in fixed broadband. Households unable to get ADSL because of broadband blockers, especially on the urban fringe, may have been forced to use mobile services. Once fibre is available they will probably take it up, especially if they have school-aged children, the demographic with the highest broadband take-up. Overall, the impact of mobile may, as I’ve previously suggested, be more subtle in a large part of the market than direct substitution.

The real question is whether households – and not merely those using mobile as their main broadband connection – will be willing to give up their wireless service to pay for the more costly higher speed services which are critical to the NBN’s success. Many households may choose a complementary package of an entry-level or moderate-speed NBN service and wireless, but may not be able to afford very high speed fibre-based broadband and wireless – although the two may be complementary, they are competing for a relatively fixed household telecom budget. ABS household expenditure surveys show that, whether high or low income, households spend around 4% of their budget on telecommunications services, and that hasn’t changed significantly since mobile voice became a mass market in the late 1990s.

What has occurred in the last decade with fixed line demand is that as the price of voice has fallen, expenditure on broadband has increased. This is reflected in industry figures, which show total fixed line revenues for basic services have been flat at around $11 billion a year. Unless there is a compelling value proposition in very high speed broadband, consumer spending may not grow for fixed line services given the attractions of mobility. Therefore, whilst it is reasonable to assume that with telecoms expenditure of around $240 a month, high income households – those with yearly income above $120,000 – will have little difficulty in paying for premium NBN services, middle and low income households may be squeezed by their spending on mobile broadband and choose not to move too far up the NBN value chain.

There are currently 2.2 million households, or 22% of Australian households, with incomes of less than $40,000 a year. If half of these have no interest in the NBN then it would reduce expected annual revenues by 2021 by at least $380 million, taking into account continued growth in the number of households. At 6% of revenues this may not be fatal to the NBN, but coupled with a possible shortfall in demand for higher speed services from middle income households, it could create problems given the sensitivity of the NBN’s business case to any shortfall in revenue. Consequently, despite what the government might claim, it seems the rise of mobile broadband will pose a challenge to the NBN, and it’s a challenge that needs to be monitored.
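The column’s headline numbers can be cross-checked with simple arithmetic (a rough sketch only: the household count, the 50% opt-out assumption, the $380 million loss and the 6% revenue share are quoted above; the per-household revenue is derived, not stated in the article):

```python
# Sanity check of the column's NBN revenue-shortfall arithmetic.
# Inputs marked "(article)" are quoted above; the rest is derived.

low_income_households = 2_200_000   # households under $40,000/yr (article)
opt_out_fraction = 0.5              # "if half of these have no interest" (article)
annual_revenue_loss = 380e6         # "at least $380 million" by 2021 (article)
loss_share_of_revenue = 0.06        # "at 6% of revenues" (article)

opting_out = low_income_households * opt_out_fraction   # 1.1 million households
per_household = annual_revenue_loss / opting_out        # ≈ $345/yr
per_month = per_household / 12                          # ≈ $29/month
implied_total_revenue = annual_revenue_loss / loss_share_of_revenue

print(f"{opting_out:,.0f} households x ${per_month:.0f}/month "
      f"≈ ${annual_revenue_loss / 1e6:.0f}m/yr forgone")
print(f"Implied total revenue base: ${implied_total_revenue / 1e9:.2f}bn/yr")
```

The implied average of roughly $29 per household per month corresponds to an entry-level service, which is consistent with the column’s point that these are the households least likely to buy premium tiers.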

• National Coverage • Supporting Infrastructure • Speed to Market

For more information contact our Site Sharing & Services Team at or call (02) 8113 4666.

The coming data centre age As the world turns digital, the physical facilities that serve as the repositories of data will play an increasingly key role in the future growth of Asia Pacific’s leading economies. Can any market afford to ignore a data centre strategy as part of its future plans? By Tony Chan


Every successful economy is built on a set of basic, and common, foundations. Roads need to be built so goods can be delivered; schools and education systems need planning to ensure a skilled workforce. Airports, train stations, container terminals, hospitals, business districts, housing developments, water and power utilities infrastructure – the list goes on. The simple fact is that for any market to thrive and prosper, it requires a basic set of infrastructure that helps facilitate investment in business, and enables its people to be more productive. Where would New York be without Wall Street? Where would London be today without Heathrow? Where would any major market be without the essential infrastructure to support business activity and the well-being of its people?

For the last century, the emphasis has been put on physical infrastructure – roads and highways, buildings and offices, and other such projects. But as the world enters the digital realm of the 21st century, those concepts simply won’t be enough. In a world where information is the most popular and often most valuable form of interaction, where data is the dominant commodity, it is what nurtures, stores and manages digital data that will be key to economic success for any market. In other words, any economy that hopes to succeed in the future will need a development strategy that takes into account support for digital data. That means data centres, with all those attributes – power, connectivity – that are needed to support a healthy data centre ecosystem.

For any country to stay competitive, there needs to be a view that data centres are “essential infrastructure, critical infrastructure, so that [they] become the nerve centre of a country’s financial system, a country’s technology system, and a growth platform going into the future,” according to Kris Kumar, APAC head for global data centre operator Digital Realty Trust.

“Whether you like it or not, it doesn’t matter what services you sell out there, they originate from a server, from a network device, they originate from a computer or storage box, and they need a home, and that home is a data centre. And just like as a logistics business grows, warehouses grow... data centres have to grow.

“And these are significant growth drivers for businesses, and the data centres down at the bottom are just the enabler, but a very important enabler in this full ecosystem.”

SINGAPORE’S DRIVE
Not surprisingly, governments around the world are starting to recognise the importance of data centres, not only as a source of growing investment into their markets by global players but also as key platforms to facilitate economic activity. One example of a government that sees data centres – in fact all aspects of information and communications technology – as a key national asset is Singapore. In addition to its government-led national fibre roll-out, the Singapore government is now actively looking to attract data centre investment to its shores.

“They are providing guidance on Singapore as a destination for data centres, through the economic development board, through the IDA – they are out there talking to customers, our customers and other potential customers, every day,” said Kumar. “They are setting standards, they are setting goals; they are looking at data centres as being critical infrastructure, essential infrastructure, like sub-stations, desalination plants, reservoirs, any other critical infrastructure that a country should have. More importantly, the government is neutral to bids and business models when allocating land, incentives, licenses, and ground uses.”

Other markets around the Asia Pacific region are increasingly starting to take the same attitude towards data centres, added Anthony F. Balinger, VP at HKCOLO, the operator of two data centres in Hong Kong. “I think that in developing countries right across the region, there is a general recognition that data centres are vital economic drivers,” said Balinger. “That means traditional economic centres like Hong Kong are increasingly being targeted by emerging economies who want a piece of the action by building out data centre capacity, just like they have put up container terminals, or airports, in the past.”

“We tend to think of Singapore as being the traditional competition to Hong Kong, but I have to say that there are developing markets that are appearing, such as Manila, where economic and public infrastructure would provide additional competition,” Balinger continued. “Shanghai is looking increasingly interested, Singapore of course – we know they are always there. But don’t discount the Philippines – they have an English-speaking workforce over there, they have expandability in the Central Business District, and they certainly have expandability in data centres as well. It could be anywhere around the region where we [will] find people who will take that opportunity [from Hong Kong] if we let them.”

REGIONAL DEMAND
Data centre demand will ramp up at a CAGR of 21% out to 2016, said Mark O’Brien, regional CEO for Asia Pacific at Global Switch. While increased consumer usage will drive a large chunk of that growth, there is growing evidence that shifts in corporate IT operational models will also contribute a large part of the growth. According to O’Brien, wholesale data centre providers like Global Switch now play a key role in enabling corporate and business growth.

“Especially for the financial sector, they are starting to grasp that they can focus on their fundamental sectors and that they can outsource the nuts and bolts of data centres to companies such as ourselves – that has certainly been increasing in the last several years, gaining general acceptance,” said O’Brien.

While corporate IT might not entirely outsource core applications and IT operations to outsiders, they are starting to decouple the underlying data centre infrastructure – the facilities, power management and so on – from the actual IT operations themselves, added Kumar. He is already seeing this happening, not only in financial institutions, but also across all sectors of business.

“[It was] the days of old when, in immature markets, most companies would try to stack up value and vertically integrate – even if they were right at the top of the cloud layer. The IBMs of the world thought they needed to own their data centres, and vertically integrated that model through the value stack to extract the maximum value of revenue,” said Kumar. “That is an expensive strategy, and as data growth continues, with smartphones, mobile phones and data growing and storage growing the way it is, it is not a feat for the faint-hearted – to be able to invest the kind of capital that you need.”

This is both good news and bad news for governments. While there is clear demand identified by the industry, billions of dollars will be needed to meet that demand with new facilities. Kumar estimates that between US$5 billion and US$8 billion of investment is needed to build 4 million square feet of data centre space – estimated by Frost & Sullivan to be the amount that Hong Kong will need in the next five years. Add on the hardware and software that are needed to fill out those data centres, and the cost mushrooms to US$15 billion to US$20 billion.

THE STAKES
Failure to attract that kind of investment for data centres would leave markets vulnerable to competition from neighbours. And failure to have enough data centre capacity would ultimately stunt business growth, added HKCOLO’s Balinger. “Data centres play a genuine role in the city’s ability to grow. If the distance to the data centres is

Fender didn’t anticipate Hendrix Great inventions never get old. They constantly evolve to spawn new great inventions. Just like Fender’s electric amplification of guitars eventually paved the way for rock & roll, a genre that remains the engine of global youth culture today. We believed from the start that Mobile Broadband was such an invention. That it would come to revolutionize how the world communicates. Today, 20 years after coining the term and filing the first patents, we think we’re finally there. Mobile Broadband has become a whole new way of doing business, and it’s time to explore its full potential.

UNPLUG! Let’s Unplug Mobile Broadband - visit

too far from the common centres, then it creates other challenges… backhaul engineering, infrastructure considerations, cost considerations, and even resource and logistics costs mount up and effectively have a negative impact on the location,” said Balinger. So a place like Hong Kong “needs to keep its supply of data centres available in step with commercial business growth. Failure to do that would simply open up the door for other locations in the region to provide these solutions.”

And it is not just the data centre investment that markets will be missing out on, but all the investment in the businesses that data centres support. “I cannot emphasize strongly enough how essential this is, because any weakness or shortfall in infrastructure... has a knock-on effect as you use up reserves. The gestation period for data centres is not short, so if there is a shortfall, you will have to wait [until] further along before that capacity is available,” said Balinger. “We need to consider the intermediate and longer term strategy; what does Hong Kong need in 30-40 years’ time, what could be the environment that it will find itself in.”

FUTURE
For the Asia Pacific, the situation is more pressing because there wasn’t an abundance of facilities to start with – and those that are in operation are rapidly running out of capacity. “In Asia Pac, the growth of ICT is obviously a uniform driver globally, but what is more important is we [Asia] didn’t have the dotcom boom and bust. We didn’t have the luxury of having data centres sitting around to be acquired, converted and redeveloped into fit-for-use for newer purposes – they just don’t exist in Asia Pac,” said Kumar. “It is a new-build environment, where we have to create these from scratch, greenfield or conversion of existing facilities; that’s what it takes to satisfy this overwhelming demand from all aspects of business.”

Highlighting a survey by Tier 1 Research, Kumar noted that many of the top data centre markets in the Asia Pacific will be at, or approaching, full utilisation in the coming two years. Hong Kong, Sydney and Melbourne will be out of capacity by 2013, according to the research. “You can see that Hong Kong’s utilisation rate in 2013 is going to reach 99% of the existing capacity in Hong Kong, including new capacity that is coming on stream. We typically see… a data centre that is 80% utilised as being full. So what that tells you is that we’ve gone beyond the capacity that is currently there in Hong Kong,” said Kumar. “Singapore, pretty much the same; Sydney is the same; Melbourne is the same. Asia Pacific is ready for some phenomenal growth. We are on the cusp for data centres in the region.”

Similarly, the emergence of cloud computing represents another pressure point for the data centre industry in Asia Pacific.

CLOUD GROWTH
“Cloud computing… there is huge growth from that,” said O’Brien. “One of the things we are looking at is how we actually provide data centre space which meets the very flexible model that is necessary for cloud computing.” Those requirements are now outpacing the last generation of facilities in both features and capacity. “The standard problem is that the specifications for those data centres were worked out 10 years ago. They were fine at the time – we thought we were looking forwardly – but they are not the right specifications for the next ten years,” said O’Brien. “They are approaching capacity in terms of space and power… and there’s not a lot you can do to upgrade a data centre once it is built. You can tinker at the edges and improve the PUE, but not at the level of new data centre builds… new data centres are needed.”

GOV’T SUPPORT
There is increased governmental recognition of the importance of information technology to economic growth, clearly evident in national initiatives to install broadband fibre in Australia, New Zealand, Singapore, Malaysia, India, Thailand and many other markets. However, as noted by the data centre industry, few governments – perhaps with the exception of Singapore – have included data centres as part of their broadband strategy. And even in those Asian markets that do offer incentives for data centre operators as part of larger sets of incentives to attract foreign investors, it is also clear that those measures are falling somewhat short. As Global Switch’s O’Brien illustrates with an example from Hong Kong: “There has been some discrimination. There have been some good plots of land in Tseung Kwan O, for example – we would have loved to have gone in there – but basically, we were excluded due to the fact that we did not meet the criteria.”

Data centres, both in real estate and technology terms, require their own regulatory environment; one that recognises the huge amount of investment required and the long-term nature of that investment, while providing investment protection for those willing to get in the game. Singapore is leading the way with its Data Centre Park initiative, where it is designating an area for data centre development and putting in the supporting infrastructure – including the commissioning of a new power plant – for future data centres. Australia seems to be catching on to the idea, with the government now pondering the exemption of data centres from its carbon tax program. The question is, where is everyone else?














Connecting Africa A new cable system connecting the east coast of Africa is introducing global fibre optic communications to an untapped market. CommsDay spoke to the CEO.


Mark Simpson, the new CEO of SEACOM, is no stranger to the subsea market. Having served as CEO for C2C and then Pacific Crossing, both of which owned major private subsea cable systems, he brings decades of experience from the telecoms industry to SEACOM. This time around, however, his role is slightly different. Instead of taking the helm of distressed assets – both C2C and Pacific Crossing had gone through Chapter 11 – and trying to restructure the companies, Simpson is taking charge of a brand new cable system focused on growth. But while SEACOM is a first mover on its route along the east coast of Africa, it is also operating in a market where basic connectivity is not always readily available, where traffic demands are only starting to be understood, and much of the infrastructure that is required to take advantage of the international connectivity SEACOM offers is still absent. So while there’s clear potential in demand, the pieces needed to serve that demand are not always immediately evident.

CommsDay spoke to Simpson about the opportunities and challenges for SEACOM going forward and the strategic direction in which he is looking to take the company.

CommsDay: What is the focus for SEACOM now that the cable system is up and running and serving customers? Mark Simpson: Interconnecting Africa, and doing that on a reliable and resilient basis, is probably more of the challenge than anything else. We have backhaul solutions in places where we land, in Mozambique, in Tanzania and in Kenya, plus obviously global connectivity. We have a pretty solid IP network, and we’re able to do IP Transit on an international basis. It is basically about continuing to build out resilience and reach on the network, so you can get beyond the initial landing points and get to a range of other countries, like Rwanda, Namibia, Zambia, Uganda, any of those countries that are not immediately accessible from the coast, and then be able to do that on a reliable and cost effective basis. There’s a lot of work going on

with various providers and potential partners around terrestrial fibre discussions. There is a lot of infrastructure that needs to be built for IP, not only in terms of the core networks, but obviously the edge devices, and all the things that get [data] in and out of a customer premise at the end of the day. This is happening in different ways in different countries. In some cases it is government-led development; in some cases, it is private investment that is doing some of that. It is all in various stages, so sometimes you are still going into places where you are relying on satellite, microwave, or potentially some not-so-resilient fibre installations that are not well buried, and solely on a linear route and subject to all sorts of risks. What do you see as SEACOM’s role in the development of Africa’s connectivity? A lot of the development is that connectivity into and out of Africa is more interesting than before. I think what SEACOM has been about is building the infrastructure and encouraging infrastructure to be built in Africa, so

that to get content from Nairobi over an IP network in Kenya, you don’t have to go to London and bounce back. So you can develop local routes, local content, local repositories of content, data centres and all those sorts of things. And you get much better latency performance, and drive out some of the costs as well, [by] not having to bounce all over the place. We will continue to invest in our core network, through partnerships as well as direct investment, in some elements of terrestrial networks and some infrastructure as well, like data centres. Getting content to the right exchanges in Africa and then hopefully also helping to bring various applications a lot closer to African businesses, and then enabling our customers to effectively be Africa-based providers of those services.

Since SEACOM’s launch, a number of other systems, EASSy and TEAMS for example, have also launched on the east coast. How do you see the competitive landscape?

It’s a growing competitive market. I think that is pretty good for us. Obviously it gives people choices in terms of resilience and redundancy, which is important. It puts pressure on us to have some of those routes as well, to be able to provide the same resiliency and redundancy for our customers. A market where only SEACOM is serving the market would not really help in the development of ICT in Africa. So we are pleased to see those people on board. Competition will keep us on our toes, and it will continue to keep the price reductions coming. The strength of SEACOM will be in our ability to work very closely with our wholesale customers to execute with absolute certainty, something that a private cable certainly does much better than some of the consortia cables.

There’s been a lot of news on SEACOM upgrading its capacity. Is that the case?

The fact that we are doing upgrades shouldn’t be surprising. As you know, when you build a system, you initially light a certain amount of capacity and as you consume that capacity, you come back and continuously reinvest and light more capacity. That has been, and will be, the case in the subsea world for quite a while. Every 24 months, you’d expect to add more capacity on the network, building up your inventory and then selling off on that. Obviously, with the improvements in technology, most of us will be breaking through our design capacity using 40G and potentially 100G in the future.

Will you be looking at 40G then?

I think from what we’ve seen, 40G is likely to work over our longest segments, so it’s one of the things that we need to do when we do our next upgrade – to consider who can deliver, with reliability and appropriate economics, the best technology for the many segments that we’ve got. We have a mixture of quite short segments on the network and much longer ones going up from Djibouti through to Marseille, where we land.

There were reports that SEACOM has entered into discussions, even MoUs, with system operators on the west coast, e-Five Telecoms for example, to create a ring around Africa. Any update on those plans?

There is no MoU signed with e-Five, just discussions. From our perspective, strategically, it is important for us to have an Africa ring topology. With our strength on the east coast, we certainly want to balance that out with reach and capability, from a diversity as well as a growth perspective, in terms of getting capacity on the west coast. We certainly expect to find some solution in that regard, but nothing definitive as yet. There are a number of discussions going on.

From your business so far, where is the traffic going to?

It’s going to Europe mostly, but we are trying to understand how much of that is actually bouncing back into Africa. That is one of the challenges of Africa being served appropriately: so much content and so many routes are dominated by US and European routing. You get high latency – it would be much better served within Africa. As we develop our IP capability, we will ensure we find the shortest route for our customers.

Does that mean you will also be rolling out data centres to host that content?

We have limited coverage at the moment, but certainly as we look at our strategy, obviously we will look at how we can participate through data centres and bring in services – cloud-based services. We certainly have quite a few strategic discussions underway with potential partners about bringing SaaS, PaaS and IaaS to Africa and getting it based here, which might see us participating in partnerships and joint venture models in data centres, so we can serve those customers.

Any plans to extend your network into Asia?

Most of our connectivity continues to be through Europe at this time. Realistically, our IP network traffic is just starting to be understood. I would hope that in the coming six months, we’ll have connectivity into Asia, for transit as well as at the private line level, and be able to sell products there for people who need to get into Africa. Work on that is certainly well progressed. I think we’ll be looking at potentially setting up interconnection points in Singapore and Hong Kong.

Mobile technology is transforming the way we all live, learn, work and play.

ONE BIG IDEA. UNLIMITED POSSIBILITIES. As the world leader in next-gen mobile technologies, Qualcomm is focused on one big idea — accelerating mobility around the globe.

© 2011 Qualcomm Incorporated. All Rights Reserved. Qualcomm is a publicly traded company on the NASDAQ Stock Market under the ticker symbol QCOM.


The new face of innovation at AT&T Let’s face it. Telcos have simply been out-innovated in recent years by nimbler rivals. The grandparent of industry innovation wants to do something about it. By Tony Chan


Innovation has always been the driving factor for the technology industry. The prime example is the success of Apple, with its late helmsman Steve Jobs billed as one of the leading innovators of his time. Look at the leading companies today and all were founded on innovative ideas, built on cultures of innovation. It can be argued that telcos are no different. After all, some of the greatest inventions of the past century have come out of telco-backed organisations – Bell Labs, for instance. However, innovation in the telecoms sector has been overshadowed by the activities of hungrier, often more nimble, internet start-ups, who have taken ideas such as search, social networking and user-generated content, and made them global phenomena, as well as billion-dollar businesses. The result is that telcos have been left behind when it comes to exploring new opportunities, rolling out new services, and introducing new business models. Now AT&T is trying to change that by introducing a widespread program to foster innovation amongst its technology partners, its own labs, and even its employees.

“I would say that the number one objective is to accelerate the pace of innovation in AT&T… by enabling partners and small technology companies to leverage our network and leverage our assets in ways that they haven’t been able to in the past,” said AT&T Applications and Service Infrastructure VP Jon Summers.

“That means that we have to expose our services, we have to improve the way we engage, we have to change the model for engagement with these companies.” FOUNDRY PROGRAM While AT&T continues to drive innovation through AT&T Labs, where it has over 1,000 engineers
focused on researching new technologies and services for its infrastructure, the operator is willing to go much further. It is opening up its infrastructure to outside developers, who are now being encouraged to bring their ideas to AT&T’s network. “It’s really about driving innovation and enabling innovation on top of the AT&T network and doing that effectively. Over the last two to three years, AT&T has developed a productive strategy focused on opening APIs: network and billing APIs, as well as platform capabilities to support applications development and enabling technology innovation partners,” said Summers. “As a part of that work, there’s been quite a lot of focus architecturally to open the network in a way that allows developers to take more advantage of capabilities that exist in our network.” To do that, AT&T has invested in a program called Foundry, in which it has set up research and development facilities with three technology partners: Ericsson, Alcatel-Lucent, and Amdocs. “We have invested close to US$90 million to deploy these innovation centres in Palo Alto, California; in Plano, a city just outside of Dallas; and then the
third location is in Tel Aviv,” said Summers. “These centres are strategically placed in various technology centres around the world. Specifically, in Palo Alto, our objective was to have a location right in the heart of Silicon Valley.” FAST-PITCH SESSIONS In addition to providing a platform to work with the partners, the Foundry centres also take the form of incubators for new services and technologies. “These centres are unlike other innovation centres, which focus on the demonstration of capabilities, or proof-of-concept kind of work. Our facilities in the Foundry program are very much focused on producing the next wave of applications, platforms, and services,” continued Summers. “In the past, the traditional project model for us would be to define our requirements, scout the landscape for technology companies to meet that specific need, and perhaps conduct our own selection process, then ultimately engage a partner. That process can take months to execute.” “The way we have adapted the model is through what we call fast-pitch sessions.” “On a fairly regular basis, we will bring in companies and invite groups to come into these Foundry centres and each of these companies will have 15-20 minutes to present their concepts of new products, new platforms, new services, or capabilities to a group of AT&T executives. Rather than going through the whole process to evaluate, we will, similar to a venture capital company, make quick decisions regarding the concepts that are presented to us, and quickly engage with these companies.” OVER 10% ENGAGEMENT RATE This year alone, AT&T has seen just over 500 companies through the Foundry centres,
with an engagement rate of between 10%-15%. To date, AT&T has 106 projects that have origins in the Foundry program, four of which are already in beta testing. “That’s to say 10%-15% of these small technology companies that we see, we’ll end up engaging with in various different types of projects and then ultimately, we’ll see those projects through to market,” Summers said. “The really nice thing about this program is it has really accelerated the pace of delivery. A typical product delivery at AT&T and in the industry might have taken 18 months. Through the Foundry model, our objective is to decrease that time by two-thirds, so essentially our objective is to execute from concept to market in six months or less.” So has AT&T found the next Facebook, or Twitter, or YouTube? Not quite. According to Summers, the projects so far have been focused on adding new capabilities to AT&T’s infrastructure, rather than consumer-focused applications like those produced by over the top players. Not surprisingly, they are also related to the solutions of its technology partners. One example is a video billing solution developed through a company out of the Foundry centre with Amdocs in Tel Aviv, which allows AT&T to introduce automatic video-enabled bill presentment for its U-verse broadband TV platform. Another example is a new platform for AT&T to expose various network services, including SMS, MMS, billing, and so on, to web developers. Through the Foundry facilities, both of the examples were brought to beta testing in six months. OUTREACH AND IN-HOUSE PROGRAMS Outside the Foundry program, AT&T has also implemented a less-intensive outreach program, where it is working with partners
without a formal investment in an official facility. “We’ve created a new partner engagement model. We essentially have a sponsorship model that allows partner companies to engage with us through the innovation model.” “In addition to the three Foundry partners, we are also partnering with Juniper, Cisco and Microsoft, and those companies are very much engaged with us as we evaluate new projects and new companies.” While these types of partnerships don’t offer the same level of engagement as the Foundry program, they still add value by allowing AT&T to tap into those companies’ own developer programs. “They are helping us screen and identify technology companies. So they all have their own active programs to reach out and evaluate technology, so we deepen the collaboration in that regard, and it gives us access to a broader range of technology, and companies that are innovating in our space,” continued Summers. And the program doesn’t stop at technology partners, he noted. AT&T is also working closely with venture capitalists. By engaging the technology incubators, AT&T is actively looking at companies in their portfolios to get a head start on new innovation that might be taking place outside its immediate partnership scheme. Lastly, AT&T is also hoping to tap innovation that might be available internally by making it possible for its employees to contribute ideas. The internal program, called the ‘technology innovation pipeline,’ now allows any employee to submit ideas to a company website. “Ultimately what happens is the top ideas coming out of this process are presented to an internal angel investor group that we created.” “We ultimately will fund new ideas and provide the opportunity to drive those ideas to reality.”
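The article does not detail the web-developer platform, but exposing network services such as SMS over an API typically looks like a simple authenticated JSON-over-HTTPS call. A purely hypothetical sketch — the endpoint, field names and `build_sms_request` helper below are invented for illustration and are not AT&T’s actual interface:

```python
import json

# Hypothetical illustration of a carrier-exposed SMS API. The endpoint and
# field names are invented; they are not AT&T's actual interface.
def build_sms_request(to_number, body, api_key):
    """Assemble the pieces of an authenticated JSON-over-HTTPS SMS request."""
    return {
        "url": "https://api.example-carrier.com/v1/sms",  # invented endpoint
        "headers": {
            "Authorization": "Bearer " + api_key,
            "Content-Type": "application/json",
        },
        "payload": json.dumps({"to": to_number, "body": body}),
    }

req = build_sms_request("+15551230000", "Your bill is ready", "demo-key")
print(req["payload"])
```

The point of such APIs is that a third-party web developer never touches the SMSC or billing systems directly; the carrier mediates everything behind a single authenticated HTTP surface.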

Coherent detection to open the floodgates for optical capacity The introduction of advanced modulation techniques to optical networking will enable vendors to pack more bits than ever into the same amount of light on a fibre. But as Ciena and Infinera illustrate, coherent detection can offer many more benefits than simply fatter pipes. By Petroc Wilton and Tony Chan


Even as the industry prepares to upgrade both terrestrial and subsea networks to more advanced wavelength capacities such as 40G and 100G, vendors are already working on the next generation of optical transport technologies. According to technologists inside optical vendors Ciena and Infinera, the industry is now on the cusp of some serious breakthroughs in transmission capacity, with roadmaps that already chart out capacities of up to 25Tbps per fibre. Key to this is coherent detection – a technique whereby the phase and polarization of a light electric field can be measured and used to carry information, rather than just its power. “If you look at the normal progression of the telco world, which was factors of 4 every 5-10 years – 622Mbps to 2.5Gbps, 2.5Gbps to
10Gbps, 10Gbps to 40Gbps – the only tool that we had at the time was to just turn the clock rate up. So we went from 10GHz clocks to 40GHz clocks,” explained Ciena CTO Steve Alexander. “And the problem with doing that is, it moves you out of the classic silicon world, and what you would call surface-mount fabrication manufacturing, into the world of microwaves – especially in the early 2000s, which is when the first 40Gbps systems were built.” “Using 40GHz clocks is a problem for silicon; it takes you into the world of gallium arsenide, indium phosphide, silicon-germanium, other technologies, and the cost points are different. But when you get to doing coherent, the trick on the costs is because you run your modulation at a higher level of complexity, [you’re dealing with] the old bit
rate/baud rate tradeoff,” he continued. “If I can keep my clocks at, say, 10GHz but send four bits per symbol, I now have the economics of silicon and the speed that I’m after, which is 40Gbps. Now the only way to get that modulation to work really well at distance is to do a coherent system. So the cost benefit of coherent is that it allows you to stay in what you would call normal silicon electronics, with reasonable clock speeds... so it’s moving the telco world back onto that Moore’s Law silicon curve. And that’s a pretty important set of economics, and you keep doing that until it gets uneconomic.”

A graphic from Ciena explains what coherent means by analogy

BEYOND 100G How, then, to push beyond the 100Gbps barrier? “Right now, the most economic place to run is in the 40-100Gbps [range], but using coherent technologies and keeping the clock rates in the 10-30GHz range. So to go faster… in a ‘superchannel’, you take combinations of, say, 100Gbps systems, increase the modulation complexity to get 200Gbps out of it, and then get five of them. Now you’re at 1Tbps,” explained Alexander. “What’s interesting in that approach is that you no longer think about classic ITU channel width. What you think about is creating an optical capacity, an optical bandwidth, and then filling it as efficiently as possible… and that gets into spectral density. Once you’ve spent the money to light up the fibre... what you want to do is signal along that fibre in the most efficient way you can. And that speaks to the coherent technologies and the superchannels; you’d be running 100Gbps at distance, 200Gbps at distance, 400Gbps at distance… then you’ll have multiple 200Gbps and 400Gbps [links] making up terabits.”

“So you’re going to have building blocks that you assemble these ‘flows’ out of. Then you recombine them to whatever you need; so if you need a terabit, it’s a combination of five 200Gbps flows, if you need 400Gbps it’s two of them. So it’s a good way to keep the fibre utilisation at its peak efficiency.” In this context, some of the most important ‘smarts’ in Ciena’s own offerings are on the silicon itself, in the form of the firm’s WaveLogic coherent optical processors. “Once you’ve got the WaveLogic chips, the modulation complexity is software programmable. So the same physical hardware that gives you 100Gbps will give you 200Gbps and 400Gbps,” said Alexander. “Now that doesn’t mean that as silicon technology improves, there [won’t] be a WaveLogic 4 and then a WaveLogic 5, because again you’re back onto silicon cost curves – which is a good place to be, because the worldwide demand for high-density silicon is huge. So that’s actually a very good place to operate from a technology point of view.” There are some compelling value propositions in Ciena’s strategy. By putting the innovation in the coherent layer, the company can take advantage of the commoditised economics of 10G lasers, while enabling its customers to squeeze more throughput out of their fibres. And with the programmable nature of silicon, it can now offer different transport capacities – through different coherent detection schemes – with minimal, if any, change to the hardware. PIC APPROACH While coherent detection is also playing a key role in its roadmap, Infinera, the developer of the Photonic Integrated Circuit (PIC), sees it more as a complement to its overall product strategy.

Because it integrates the lasers inside its PICs, Infinera aims to continue to push up the clock rates to 40G and 100G, while introducing more advanced modulations that offer another step improvement in overall fibre capacity. According to Leigh Wade, director of technology at Infinera Asia Pacific, the company is now looking at continuous improvements in per-fibre capacity, with a roadmap that will bring to market 25Tbps-per-fibre solutions with its upcoming DTN-X platform. “The thing that is kind of interesting about DTN-X is with 500G, you’re going to different modulation formats; you are not doing the on/off any more. You will get modulation where you get multiple bits per clock cycle – with BPSK you get one, with QPSK two, with 16QAM four – so there’s an extra dimension to it. So you’ve got QPSK, BPSK, and in the future we’ll have QAM and different modulations,” Wade said. The first generation of Infinera’s DTN-X system, featuring a 500G PIC – a photonic unit with 5 x 100G lasers pumping out a 500G superchannel – will offer a per-fibre capacity of 8Tbps. Subsequent systems with more efficient modulation schemes and densely packed waves will boost total per-fibre capacity to as much as 25Tbps. Like Alexander, Wade also points to the ITU frequency grid as a limitation, one which coherent detection can overcome to offer higher capacities. “The limitation today is in the ITU grid… to scale beyond 500G superchannels, you’ve got two options. You can go to 400G wavelengths, in which case you’ve got to try to get all the electronics to run faster. “[Or] our approach is to do multiple parallel [links] using the PIC as the platform, so the 1Tbps PIC will do multiple parallel [links]. If you actually remove the ITU grid, and then run QPSK, 8QAM, 16QAM, you’re then into terabit chunks that you can add, which means you can slightly expand the C-band and get up to 25Tbps per fibre,” Wade said. “The limitation beyond 8Tbps is the ITU-T grid… By doing away [with it] – we call it gridless – you suddenly start scaling beyond 8Tbps into 25Tbps [per fibre].”

Ciena’s Steve Alexander

FLEXCOHERENT Interestingly, the approaches of Infinera and Ciena share a common characteristic – the ability for essentially the same hardware to support different modulation schemes. While Ciena’s strategy seems to be to leverage common hardware – line cards, basically – to allow operators to match capacity with demand, the scenario is slightly more complicated for Infinera, because a lot of the hardware is integrated inside its PIC. To take advantage of the reprogrammability of coherent detection, Infinera has developed something called FlexCoherent, which allows each laser inside the PIC to be programmed with different modulation schemes depending on operator requirements. “You can tune the modulation on each of those lasers, so you can switch them all from QPSK to BPSK to get a longer distance. You can actually tune two lasers to
do BPSK and everything else can do QPSK,” Wade said. “The reason why customers like that approach is that, instead of having multiple modules with different modulation schemes, they have one module, and they just turn the dial to whatever they need. This makes it really simple and reduces the TCO.” TIMELINE So when will the demand for speeds beyond 100Gbps really start to manifest? “You can argue the demand for it is there today; it depends upon what the end user customer needs,” said Ciena’s Alexander. “The highest classic port rates that are available today are 100GigE; that’s probably going to stay that way for a couple more years, would be my guess. But that doesn’t mean that people won’t want to start multiplexing 100GigEs together to get more efficient use of the fibre!” “Because one of the things that this coherent technology is letting us do is unlocking the port speeds that are at the edge of the network from the line rates that are in the core of the network. “From 2000 until around 2006-07, when coherent 40Gbps products first shipped from us, there was a lock at 10Gbps: you did 10Gbps ports at the edge, you did 10Gbps wavelengths in the core, then you used a ROADM to manipulate them. Once you can start to speed the core up, and say ‘I’m not just doing 10Gbps waves, I’m doing 20Gbps, 40Gbps, 100Gbps, 1Tbps waves’, then you can optimise your core and edge separately, which is a nice capability. You optimise your core around the most efficient utilisation of that fibre, regardless of what your service speeds are. “If the best answer for you as a service provider is to run at 400Gbps, that’s what you would do – even if your business said you were primarily offering gigabit Ethernet services.”
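The per-fibre figures Wade quotes can be reproduced with simple arithmetic. The channel count below is our assumption chosen to reproduce the quoted 8Tbps first-generation figure, not an Infinera-published number:

```python
# Illustrative per-fibre capacity arithmetic. Bits per symbol per modulation:
BITS_PER_SYMBOL = {"BPSK": 1, "QPSK": 2, "8QAM": 3, "16QAM": 4}

def fibre_capacity_tbps(superchannels, gbps_each):
    """Total fibre capacity from a number of identical superchannels."""
    return superchannels * gbps_each / 1000.0

# First-generation DTN-X: assume 16 x 500G superchannels fill the C-band.
first_gen = fibre_capacity_tbps(16, 500)
print(first_gen)  # 8.0 Tbps

# Denser modulation (16QAM carries twice QPSK's bits per symbol) plus a
# slightly expanded, gridless C-band is how the 25Tbps roadmap figure scales.
assert BITS_PER_SYMBOL["16QAM"] == 2 * BITS_PER_SYMBOL["QPSK"]
```

The trade-off FlexCoherent exposes is also visible in the table: dropping a carrier from QPSK to BPSK halves its bit rate in exchange for longer reach, which is exactly the dial Wade describes operators turning.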

Competition in Australia turns 20. Something to celebrate? Australia’s government says that its planned open access National Broadband Network will propel the telecoms sector into a new era of competition that surpasses that of the past two decades. Kevin Morgan asks if the benefits are worth the costs.


Australia stands on the cusp of a new era in the telecom sector with the promise of heightened competition in the fixed line market, marked by lower prices and innovative new services. The NBN company is now two and a half years into its mission and all the legislative and almost all the regulatory building blocks are in place. It just remains for the Australian Competition and Consumer Commission to sign off on NBN Co’s Access Undertaking and Telstra’s Structural Separation Undertaking. Given the political weight behind the NBN, the ACCC will probably accept both, since its remit to consider the real competition policy issues underlying the NBN has been denied by legislation. Now it’s down to NBN Co to deliver the wholesale-only, open access platform that is the key to structural change, and if the policy is to succeed then NBN Co
clearly needs to gather pace on the mass rollout of fibre so it can meet the government timetable for reform. Unfortunately for NBN Co, the rollout schedule was not of its choosing but one determined by the implementation study. Should NBN Co fall behind, it would be perfectly understandable, as no company has achieved these rates of network deployment in a comparable market. Indeed, with iiNet’s acquisition of TransACT, at least one major ISP seems to have taken a $60 million punt on it not all going to plan. Obviously any significant delay in the rollout will hold back the reforms and also pose some complex regulatory challenges. Telstra is supposed to structurally separate by migrating its copper and HFC broadband customers to the NBN and cease providing services to retail customers on a network it controls by the designated date of 1 July 2018, when
the access undertakings for copper are also due to fall away. Even if the NBN rollout goes to schedule and the hoped-for 8.5 million premises (including greenfield households) have been passed, there may still be between 1.5 and 2 million copper lines that Telstra will have to divest. But if the rollout is significantly delayed, what then happens as millions of Telstra lines may have to be sold? Given that Telstra cannot hold more than a 15% stake in a network it offers retail services over, Telstra would be forced into a fire sale unless the minister was prepared to grant an extension. And there may be little alternative other than to grant an extension, given there might not be a buyer for the fragmented islands of copper access that remain, especially as divestment may only apply to the subloop – there may not be a large demand for that part of the network on a stand-alone basis. In
that case the access undertakings that competitors believe are deeply flawed will remain in force and little will change. COMPLEXITY Whether or not Telstra is actually forced into divestiture remains to be seen, but this provision, the key to gaining Telstra’s agreement to the NBN deal, highlights how extraordinarily complex the policy is. Such complexities suggest the future is far more challenging and uncertain than the deregulatory path set out upon twenty years ago. Twenty years ago the mission was relatively simple – create a more competitive market by unwinding a publicly owned monopoly that effectively held 100% of revenues and profits. It was a policy that could be readily enacted through arbitrage, given the massive margins that existed in international and long distance traffic. The scope for arbitrage was extended, and room for open competitive entry post-1997 was secured, by continued tariff rebalancing – principally increases in line rentals that created new opportunities such as the resale of local calls. And while Telstra’s market share was gradually eroded in fixed lines, real inroads into its monopoly were made by the granting of two GSM licences to competitors. Finally, as fixed line margins were eroded, local loop unbundling provided new scope for encouraging entry, stimulating a competitive broadband market, at least in urban areas. Without discounting the commercial and technical skill entrants displayed, a market of known dimensions and dynamics was being opened up, and the policy was relatively straightforward, although subject to legal challenge and gaming. UNPROVEN MODEL Now the dynamics and the challenge are very different, because the hoped-for ideal of far greater, more equal competition is predicated on an unproven model: structural separation. Under this model retail prices are being set on the assumption that the NBN’s wholesale pricing is sustainable and that the NBN’s revenues can fund the massive task of renewing the nation’s access infrastructure. If all goes well they may, but any slight variation in either NBN Co’s costs or revenues could destroy the promise of lower prices. But irrespective of the underlying wholesale price service providers will have to pay, they will now have to fight it out in a market where the economic safety net provided by the vertically integrated incumbent is no longer available. Perversely, despite the
advantages it may currently enjoy, Telstra is ultimately constrained in its competitive response, and not merely by regulation or its universal service obligations and uniform tariff. Telstra has been wedded to its massive margins on copper and has been reluctant to forgo them. Now it has been forced to accept that these margins are dissolving and will ultimately disappear. Already, by spending to buy back market share, Telstra has signalled that margins can and will be foregone, although sacrificing margins won’t be the only source of the cash needed to buy back customers. The NBN deal provides buckets of cash that can be used anti-competitively to secure market share, and these taxpayer-guaranteed marketing subsidies will be made even more valuable by the advantages Telstra holds as the largest player in the NBN age, able to reach all elements of the NBN access network through its own backhaul network. With these advantages Telstra’s dominance may actually increase rather than diminish, especially if it wages a price war that its competitors cannot sustain. TELSTRA ADVANTAGES Indeed the new market has other pluses for Telstra, as it enjoys a critical advantage that only Optus can in part match: that of being an established fixed line and mobile operator. Given the complementary nature of entry-level NBN products and wireless mobility, Telstra can bundle products and even subsidise fixed line market share from its wireless earnings, which will continue to grow significantly. One simple difference between 1991 and today suggests that the NBN’s impact on Telstra will at best be minimal and that the outcome may not be as expected. As noted, in 1991 Telstra effectively held 100% of industry revenues – it could only lose. Today, given the NBN’s focus on residential and small business voice and broadband services, less than 25% of Telstra’s revenues – the $6 billion in fixed line revenue generated by Telstra’s consumer and business groups – will be impacted by competition on the NBN. Past liberalisation policies came effectively at no cost to the taxpayer and generated benefits as Telstra lost its monopoly. Now, seeking to further squeeze Telstra in a $6 billion market segment by spending at least $27.5 billion in taxpayer funds, and incurring an interest bill of some $14 billion by 2028, does not look quite as cost-effective. It could be asked whether more competition is really worth it.

QFABRIC: RETHINKING THE ARCHITECTURE TRANSFORMS TCO. QFabric requires 34% fewer devices, it can be managed by a single administrator and as a result, it radically lowers the cost of management and operations in the data center.


INTRODUCING QFABRIC. The demands on the data center have gone exponential. And with that, the connected world now demands new levels of power, processing, savings and control to meet the needs of this inescapable and constant new reality.

QFabric, our breakthrough new technology, changes the game in the data center by fundamentally solving the challenge of cost and complexity. Power and cooling is 79% lower than the industry standard, total rack space is up to 90% lower and overall, the system uses 1/5 the carbon emissions of current large scale solutions. Juniper has spent countless dollars and hours, dedicating the best minds in the networking industry for the past three years to solve what we feel is the most pressing issue in the network today. The result is QFabric and it changes everything.



The mixing of politics & telecoms


Telecommunications and politics would seem to make for strange bedfellows. Increasingly, though, the two seem to be inextricably linked. Few would dispute that Australia’s NBN would not exist, had the wheels of politics not been set into motion to bring it into being. Not to be outdone by the Aussies, the United States of America seems headed for its own political showdown over the launch of a revolutionary new LTE service that represents either a major advancement in telecom technology or a clear threat to the nation's security, depending upon whom you ask. At the center of this political debacle is a relatively obscure company named LightSquared. Based in Virginia, it plans on deploying its own L-Band LTE network in 2012, providing seamless coverage across the United States using both satellite and ground-based transmitters. There's just one problem. Portions of the L-Band are already in use by GPS receivers. Although the company and its experts claim that interference to these devices can be mitigated, experts working for the GPS industry aren't convinced. Neither are sources inside the military, which depends on GPS for everything from providing aircraft navigation to helping guided missiles hit their targets.

Then there is public safety. GPS is not only an invaluable tool for search and rescue operations, but is a critical part of the nation's air traffic control system. LightSquared claims that its system can be tuned to prevent interference in 99.5% of existing GPS receivers. But with an average of 87,000 flights being made each day across the country, that 99.5% figure doesn't seem very comforting. With so much at stake, you would think that the company would be forced to spend years in testing to prove that no possibility of interference with GPS receivers exists. Instead, interested parties will need to submit their own test data to the Commerce Department by November 30th. That request was made in late September, which provides labs with less than 90 days to perform full-scale testing. LightSquared could get the green light to begin deploying towers as early as December. So, what does any of this have to do with politics? Unfortunately, a lot. One of the earliest investors in LightSquared was Barack Obama. Although it has been reported that Obama has since sold his initial investment in the company, he once owned $50,000 in LightSquared stock. The investment was made at a time when Obama was making a salary of approximately $174,000 per year as
a U.S. Senator. It also turns out that many of LightSquared's principal investors also happen to be either close personal friends or major fundraisers for Barack Obama. Four of the company's major investors have donated more than $150,000 each to Obama political action committees. It gets worse. At a recent congressional committee hearing, two witnesses, Air Force Space Command four-star General William Shelton and National Coordination Office for Space-Based Positioning, Navigation and Timing director Anthony Russo, claimed that they were pressured by the White House to alter their testimonies in order to mislead Congress as to the potential dangers LightSquared's new system poses to GPS and military operations. Mysteriously absent from the hearing was FCC Chairman Julius Genachowski (an Obama appointee), whom the committee had scheduled to question about the FCC’s fast-tracking of a LightSquared application. Whether this turns out to be a genuine scandal or a political witch hunt, one fact is most certainly clear. With only a year until the next Presidential election, the future of LightSquared will almost certainly be decided by political will, rather than by its potential benefits to society or its potential threat to public safety.

CommsDay magazine Dec2011/Jan12  

The latest issue of CommsDay examines data centres, coherent detection, telco innovation, Australian telco competition and more
