Local opportunity for IMS is lucrative

Establishing an in-house IT team has its pros and cons, but these days the cons outweigh the pros. So, do you want full-time staff tending to routine tasks like patch management, printing services, deploying and maintaining desktops, and managing software updates? Or would you rather have a lean team that spends much of its time thinking about, and supporting, innovation? The rest of its time could be spent on significant tasks such as planning, monitoring SLAs, and compliance. By outsourcing the management of routine tasks, one can focus on innovation, transformation, and the core business. That's what most successful businesses are doing today.

While companies in the West have long outsourced infrastructure management to countries like India, the trend is also catching on within India itself: companies based in India are increasingly outsourcing their infrastructure management. Here are a few examples. Tata AIG outsourced 80 percent of its IT operations to Wipro's Global Service Management Centre, enabling centralized remote management. This resulted in flexible cost structures by aligning the costs of IT operations and management to business requirements. Back in 2009, Max Healthcare chose Perot Systems (now part of Dell) to manage its various IT operations, including infrastructure management, data center hosting, applications portfolio management, the project management office, clinical transformation, and implementation of Electronic Health Records (EHR). Organizations like LG Electronics and Maruti Suzuki have also accepted that managing IT infrastructure should not be their core focus. LG Electronics outsources 90 percent of its IT operations: a group company manages its applications, while infrastructure management is outsourced to various partners. Even its in-house data center is managed by partners.
The market for Infrastructure Management Services (IMS) in India is poised for growth. According to a recent survey by Forrester Research, the IT Managed Services market in India is expected to reach USD 3.8 billion by 2013, growing at a CAGR of 23 percent. NASSCOM has projected this opportunity to be USD 13-15 billion by 2013. So while India has proven global expertise in managing infrastructure, the local opportunity is also lucrative.

For this month's cover story, our principal correspondent Ayushman Baruah explores how the market for IMS is growing in India amid changes in the delivery model, nature and complexities of work, and duration of the deal cycles. He also explores the impact of the U.S. slowdown and the Eurozone crisis on the IMS industry. In her story on managed services, Vinita Gupta explores how organizations are breaking up their IT infrastructure management engagements into specific areas and outsourcing them to vendors that specialize in handling a specific function.

I hope you find these stories (and the rest of the issue) insightful and informative. I also welcome you to read our daily news updates, online features, and blogs at www.informationweek.in
By outsourcing the management of routine tasks, one can focus on innovation, transformation, and the core business
Follow me on Twitter
informationweek july 2012
u Brian Pereira is Editor of InformationWeek India. firstname.lastname@example.org
contents
Volume 1, Issue 09, July 2012
24 cover story Metamorphosis of infrastructure management services in India The market is growing in India amid changes in the delivery model, nature and complexities of work, and duration of the deal cycles
Specialization drives IT managed services market Specific niche areas, such as printing, security and storage, are emerging as strong growth drivers for the managed services market in India
Why CIOs are opting for IT infrastructure management services CIOs of Indian companies that have outsourced their IT infrastructure management point to the pressing need for internal IT resources to focus on core IT functions, and to increasing pressure to save manpower and infrastructure management costs, as the key factors motivating them to avail of infrastructure management services (IMS) from technology vendors with expertise in this domain. Four eminent Indian CIOs share their perspective on the benefits of IT infrastructure management outsourcing with Amrita Premrajan of InformationWeek
How India is poised to be the next IMS hub Indian players are a strong force in the RIM play worldwide — all industry verticals are now outsourcing their infrastructure and application services to India
‘Cloud is changing infra management landscape’
Reliance Communications dials in to open source for competitive advantage One of the biggest players in the Indian telecom sector is setting a precedent for other telecom companies by aggressively adopting open source
BOI uses SIEM to reduce false positives and boost security The proliferation of devices in the Bank’s data center yielded thousands of logs; it was impossible to manually decipher those logs and make logical conclusions about threats and attacks. So, the Bank of India opted for a solution that correlates various logs, analyzes them, and offers a single dashboard
How the Future Group transformed its supply chain The implementation of a WMS from Infor has helped the firm in ensuring minimum shrinkage and in maintaining a high level of inventory accuracy
The advent of cloud is making fundamental changes in three key components of infrastructure management: people, processes and technology
Do you Twitter? Follow us at http://www.twitter.com/iweekindia
Cover Design : Deepjyoti Bhowmik
Find us on Facebook at http://www.facebook.com/informationweekindia
How CRIS revamped its data center power and cooling technologies To meet the ever-increasing IT needs of Indian Railways, the Center for Railway Information Systems has revamped its data center and designed it on Tier III parameters with 99.99 percent reliability

If you're on LinkedIn, reach us at http://www.linkedin.com/groups?gid=2249272
THE BUSINESS VALUE OF TECHNOLOGY
interview 38 ‘Customer demands for infrastructure management are more complicated’ Maninder Singh Narang, Vice President & Global Head - End User Computing, Application Operations & Shared Services, HCL Technologies
interview 40 ‘There is a lot of ISV uptake on cloud’ Dharanibalan Gurunathan, Executive, Offerings Management and Development, Global Technology Services, IBM India/South Asia
50 feature Where cloud works Six Flags and Yelp reveal how they’ve made the public cloud work for their businesses
52 feature IPv6 arrives, but not everywhere Last month marked a major milestone as IPv6 went live on the Internet — a look at some potential security hurdles for enterprises
VMware launches Project Serengeti to virtualize Hadoop
RIM to set up BlackBerry Innovation Zone in India
Microsoft introduces Surface

GE India to invest ₹ 300 crore in new R&D labs
Microsoft confirms USD 1.2 billion Yammer buy
Wipro, SAP partner to launch app

Tata Communications launches global low latency network

India plays key role in Red Hat's global plans

India hosts 30 percent of global top R&D companies: Zinnov

news analysis..............................................17
event: cloudconnect.................................56
opinion........................................................58
analyst angle..............................................64
technology & risks......................................66
practical analysis........................................69
down to business........................................70
VOLUME 1, No. 09 | July 2012
print online newsletters events research
Managing Director : Sanjeev Khaira
Printer & Publisher : Kailash Pandurang Shirodkar
Associate Publisher & Director : Anees Ahmed
Editor-in-Chief : Brian Pereira
Executive Editor : Srikanth RP
Principal Correspondent : Vinita Gupta
Principal Correspondent : Ayushman Baruah (Bengaluru)
Senior Correspondent : Amrita Premrajan (New Delhi)
Copy Editor : Shweta Nanda

Design
Art Director : Deepjyoti Bhowmik
Senior Visualiser : Yogesh Naik
Senior Designer : Shailesh Vaidya
Designer : Jinal Chheda, Sameer Surve

Marketing
Deputy Manager : Sanket Karode
Deputy Manager—Management Service : Jagruti Kudalkar

Online
Manager—Product Dev. & Mktg. : Viraj Mehta
Deputy Manager—Online : Nilesh Mungekar
Web Designer : Nitin Lahare
Sr. User Interface Designer : Aditi Kanade

Operations
Head—Finance : Yogesh Mudras
Director—Operations & Administration : Satyendra Mehra

Sales
Mumbai : Marvin Dalmeida, Manager—Sales, email@example.com (M) +91 8898022365
Bengaluru : Kangkan Mahanta, Manager—Sales, firstname.lastname@example.org (M) +91 89712 32344
Delhi : Rajeev Chauhan, Manager—Sales, email@example.com (M) +91 98118 20301
Head Office UBM India Pvt Ltd, 1st floor, 119, Sagar Tech Plaza A, Andheri-Kurla Road, Saki Naka Junction, Andheri (E), Mumbai 400072, India. Tel: 022 6769 2400; Fax: 022 6769 2426
Production
Production Manager : Prakash (Sanjay) Adsul

Circulation & Logistics
Deputy Manager : Bajrang Shinde

Subscriptions & Database
Senior Manager—Database : Manoj Ambardekar, firstname.lastname@example.org
Assistant Manager : Deepanjali Chaurasia, email@example.com
Printed and Published by Kailash Pandurang Shirodkar on behalf of UBM India Pvt Ltd, 6th floor, 615-617, Sagar Tech Plaza A, Andheri-Kurla Road, Saki Naka Junction, Andheri (E), Mumbai 400072, India. Editor: Brian Pereira, Printed at Indigo Press (India) Pvt Ltd, Plot No 1c/716, Off Dadaji Konddeo Cross Road, Byculla (E), Mumbai 400027.
Associate Office - Pune
Jagdish Khaladkar, Sahayog Apartment, 508 Narayan Peth, Patrya Maruti Chowk, Pune 411 030. Tel: 91 (020) 2445 1574 (M) 98230 38315 e-mail: firstname.lastname@example.org
Editorial index: Person & Organization
A S Pillai, Sify Technologies........................................30
Alok Goyal, SAP..............................................................18
Alpna Doshi, Reliance Communications..............42
Anna Gong, CA Technologies ..................................57
Anurag Srivastava, Wipro Technologies...............60
Atul Patel, SAP Analytics............................................17
Chandrashekhar Kakal, Infosys................................30
Chidambaran Kollengode, Nokia ...........................56
Daya Prakash, LG Electronics ...................................33
Dharanibalan Gurunathan, IBM ..............................40
Eric Yu, Huawei Enterprise ........................................57
HS Shenoy, Kaseya India.............................................26
Jay Pultz, Gartner...........................................................64
Jayabalan Subramanian, Netmagic ......................57
John Landau, Tata Communications ...................58
Jugal Kishore Dhulia, CRIS ........................................48
KP Unnikrishnan, Brocade ........................................57
Krishna Ramaswami, Zensar.....................................25
Maninder Singh Narang, HCL Technologies ISD .................................................38
N Nataraj, Hexaware ....................................................57
Neena Pahuja, Max Healthcare................................32
Neeraj Athalye, SAP India...........................................20
Nikhil Madan, EMC........................................................29
P Sridhar Reddy, CtrlS .................................................57
Rahul Joshi, Infosys ......................................................35
Rajesh Uppal, Maruti Suzuki.....................................33
Rama Murthy Prabhala, Infosys ..............................35
Rishikesh Kamat, Netmagic Solutions..................30
Sameer Ratolikar, Bank of India...............................44
Samrat Das, Tata AIG Life Insurance Company..33
Samson Samuel, Future Supply Chains................46
Shekhar Agrawal, HP IPG............................................29
Sumed Marwaha, Dell ................................................25
Vaibhav Tewari, Microland.........................................36
Vamsicharan Mudiam, IBM .......................................56
Venguswamy Ramaswamy, TCS .............................57
Vijay Mhaskar, Symantec ...........................................62
Vishal Awal, Xerox ........................................................29
Vishal Gupta, Seclore...................................................22
VM Kumar, Microland..................................................26

International Associate Offices
USA: Huson International Media
(West) Tiffany DeBie, Tiffany.email@example.com, Tel: +1 408 879 6666, Fax: +1 408 879 6669
(East) Dan Manioci, firstname.lastname@example.org, Tel: +1 212 268 3344, Fax: +1 212 268 3355
EMEA: Huson International Media, Gerry Rhoades Brown, email@example.com, Tel: +44 19325 64999, Fax: +44 19325 64998
Japan: Pacific Business (PBI), Shigenori Nagatomo, firstname.lastname@example.org, Tel: +81 3366 16138, Fax: +81 3366 16139
South Korea: Young Media, Young Baek, email@example.com, Tel: +82 2227 34819, Fax: +82 2227 34866

RNI NO. MAH ENG/2011/39874
Important Every effort has been taken to avoid errors or omissions in this magazine. In spite of this, errors may creep in. Any mistake, error or discrepancy noted may be brought to our notice immediately. It is notified that neither the publisher, the editor or the seller will be responsible in respect of anything and the consequence of anything done or omitted to be done by any person in reliance upon the content herein. This disclaimer applies to all, whether subscriber to the magazine or not. For binding mistakes, misprints, missing pages, etc., the publisher’s liability is limited to replacement within one month of purchase. © All rights are reserved. No part of this magazine may be reproduced or copied in any form or by any means without the prior written permission of the publisher. All disputes are subject to the exclusive jurisdiction of competent courts and forums in Mumbai only. Whilst care is taken prior to acceptance of advertising copy, it is not possible to verify its contents. UBM India Pvt Ltd. cannot be held responsible for such contents, nor for any loss or damages incurred as a result of transactions with companies, associations or individuals advertising in its newspapers or publications. We therefore recommend that readers make necessary inquiries before sending any monies or entering into any agreements with advertisers or otherwise acting on an advertisement in any manner whatsoever.
Why you should push your servers to the limit The key server challenge in the data center today is the optimal utilization and management of servers at lower TCO. To address this challenge, technologies such as server virtualization, consolidation and converged infrastructure are gaining importance http://bit.ly/KTVdvy
Cameron Larson @AstoundCom tweeted:
Good article on why you should push your servers to the limit: http://bit.ly/LMmAW5 #in
‘Social media is gaining acceptance in understanding the sentiment on a stock’ Pravin Lal, Director, Sapient Global Markets, discusses how players in the capital market industry can tap into the wave of business opportunities presented by social media, and shares examples of companies that are leveraging social media platforms to drive business growth http://bit.ly/Kt97F4
Sean&Vaun Coleman@twinsmoneytips tweeted:
‘Social media is gaining acceptance in understanding the sentiment on a stock’ http://bit.ly/KfgCxT
Juan Manuel Diaz @JuanMDia35 tweeted:
InformationWeek – Data Center - Why you should push your servers to the limit http://www.informationweek.in/Data_Center/12-06-08/Why_you_should_push_your_servers_to_the_limit.aspx?itc=edit_stub via @iweekindia
Radware @radware tweeted:
The importance of #virtualization is seen in the capabilities of your servers. Push them to the limits: http://bit.ly/Kx9M1I #cloud
How to embrace the shift to the cloud Hexaware’s cloud journey has not only helped it in driving productivity, but has also resulted in a reduction of nearly 2.7 million pounds of carbon and a positive environmental impact http://bit.ly/L4Uoy5
Liquid Accounts @liquidaccounts tweeted:
Interesting piece by Information Week (@iweekindia). How to embrace the shift to the cloud:
http://bit.ly/NxM8sf #saas #cloud
Karl Jones @forex4newbies tweeted:
‘Social media is gaining acceptance in understanding the sentiment on a stock’ http://bit.ly/MAsqKL
Managing backups in virtual environments With virtualization emerging as a top priority for IT administrators, the backup and recovery strategies need to change. Vijay Mhaskar of Symantec recommends five steps to embrace new technologies http://bit.ly/NAIaig
Sanchit Vir Gogia @s_v_g tweeted:
Good advice, lots of virtual journeys fail bcos of this “@iweekindia: Managing backups in virtual envronment
Hardik Shah @HardikShah81 tweeted:
Managing backups in virtual environments: With virtualization emerging as a top priority for IT administrators, ... http://bit.ly/L6oLFJ
James Stanbridge @Stanbridge1 tweeted:
Cloud computing is coming and businesses need to be ready #li #cloud http://www.informationweek.in/
Josette Rigsby @techielicous tweeted:
#saas #paas How to embrace the shift to the cloud InformationWeek India http://bit.ly/OwDY13
Follow us on Twitter
Follow @iweekindia to get news on the latest trends and happenings in the world of IT & Technology with a specific focus on India.
Social Sphere
Join us on facebook/informationweekindia
InformationWeek India Facebook says it had an average of 526 million daily active users in March 2012, an increase of 41 percent from a year ago. It had registered 125 billion “friend connections” as of March 31 and 3.2 billion “likes” and comments
InformationWeek India General browsing tops the average hours spent by Indians with 12.9 hours, which is followed by social networking (9.7 hours), e-mails (6.1 hours), financial activities (4.7 hours) and uploading/downloading (2.9 hours)
Fan of June
Like, Tag and Share us on Facebook
Sk Ekram Hossen lives in Haldia, West Bengal and follows technology-related news. He has been following InformationWeek India on Facebook and has actively shared various news stories.
Get links to the latest news and information from the world of IT & Technology with a specific focus on India. Read analysis, technology trends, interviews, case studies, whitepapers, videos, blogs and much more…
Participate in quizzes and contests and win prizes!
The Month in Technology

VMware launches Project Serengeti to virtualize Hadoop
RIM to set up BlackBerry Innovation Zone in India
VMware is launching an open source project, Serengeti, to come up with changes to Hadoop that enable it to be deployed on virtual as well as physical servers. Hadoop is gaining ground as a distributed system for handling Big Data; however, deployment and operational complexity, the need for dedicated hardware, and concerns about security and service level assurance prevent many enterprises from leveraging its power. By decoupling Apache Hadoop nodes from the underlying physical infrastructure, VMware aims to bring the benefits of cloud infrastructure — rapid deployment, high availability, optimal resource utilization, elasticity, and secure multi-tenancy — to Hadoop. VMware is also working with the Apache Hadoop community to contribute extensions that will make key components “virtualization-aware” to support elastic scaling and further improve Hadoop performance in virtual environments. “Apache Hadoop has the potential to transform business by allowing enterprises to harness very large amounts of data for competitive advantage,” said Jerry Chen, Vice President, Cloud and Application Services, VMware. “It represents one dimension of a sweeping change that is taking place in applications, and enterprises are looking for ways to incorporate these new technologies into their portfolios. VMware is working with the Apache Hadoop community to allow enterprise IT to deploy and manage Hadoop easily in their virtual and cloud environments.”
Research In Motion and Startup Village announced plans to launch the first BlackBerry Innovation Zone in India. Located at Rubus Labs in Kerala-based Startup Village, the Innovation Zone will be the first of its kind in the Asia-Pacific region, and will showcase the latest in BlackBerry technologies. Rubus Labs will host regular developer activities such as BlackBerry Hackathons and Bar Camps. Training sessions will be conducted across the 126 engineering colleges in Kerala under the BlackBerry BASE (BlackBerry Apps by Student Entrepreneurs) program by leveraging the campus outreach network of Startup Village.
Microsoft introduces Surface

After days of speculation about a Microsoft-crafted Windows 8 tablet, the software giant introduced its own line of tablet computers at a press event in Los Angeles. Microsoft unveiled Surface — one tablet for Windows 8 RT running on NVidia’s ARM processor, and one for Windows 8 Pro, running on Intel’s Ivy Bridge (using the i5 core). The Surface for Windows 8 RT weighs 676 grams and is 9.3 mm thick. Using a 10.6-inch ClearType HD display, this Microsoft tablet includes USB 2.0, Micro HD, and microSD options for connectivity. The 2x2 MIMO antennas give it the strongest WiFi capability of any tablet, according to Microsoft’s Windows Chief Steve Sinofsky. The Surface for Windows 8 Pro is heavier, at 903 grams, and thicker at 13.5 mm. It also includes the 10.6-inch ClearType display, but Microsoft’s spec sheets say it is “Full HD.” It includes USB 3.0 along with microSD, and a Mini DisplayPort Video — so users can send video to a full screen or other compatible display. Microsoft claims the tablet is scratch and wear resistant, and designed to be both sturdy and light.
GE India to invest ₹ 300 crore in new R&D labs

GE India announced an investment of over ₹ 300 crore towards setting up new R&D labs and other expansion activities at GE’s India Technology Center in Bangalore. The investment is focused on expanding this center to include experimental labs showcasing research and engineering in areas of critical importance, including cancer treatment, radiochemistry and other technology applications in healthcare, locomotive engines, heavy earth-moving equipment and equipment for the energy sector. John Flannery, President & CEO, GE India said, “The new investment will further strengthen our efforts to partner the State of Karnataka and provide best-in-class products and localized solutions for the people of India.”
Software
Microsoft confirms USD 1.2 billion Yammer buy

Rumored for more than a week, Microsoft’s plan to acquire Yammer became official on June 25. Early business news reports turned out to have just about everything right, down to the USD 1.2 billion price. Microsoft did confirm that Yammer will be integrated into the Microsoft Office division, making it part of the same product family as SharePoint — one detail that hadn’t been clear previously. Some analysts thought it would be more likely to be positioned as an adjunct to Microsoft Dynamics CRM, as a counter to Salesforce.com’s Chatter service, which is similar to Yammer. Instead, Yammer is to be integrated with SharePoint, Dynamics, and other products but will also have a life of its own. Kurt DelBene, President of the Microsoft Office Division, praised Yammer as “best in class enterprise social networking” and the company behind it as providing “rapid innovation in the cloud that will benefit Microsoft customers.” Yammer operates on a freemium business model where individuals can sign up to start collaborating with others in the same business domain. Organizations that find the collaboration useful can convert to a paid account with administrative features and other upgrades. Microsoft CEO Steve Ballmer said Yammer’s sales model was one of the things he found most attractive. Yammer is “really unique, maybe very unique in the viral adoption model. You can throw the words ‘enterprise’ and ‘social’ on a bunch of different stuff, but you can’t find anybody [else] that has really built a customer base of enterprise IT customers, virally — with great respect from the IT department and with great love from the customers. I think Yammer is very unique in that.” Microsoft stressed that Yammer
would continue to operate as a standalone service, even as the company looks for ways to deepen its integration with SharePoint and other Microsoft products. DelBene did not provide details about how Yammer would mesh with Microsoft’s efforts to boost the social software capabilities built into SharePoint, saying it was too early. “We’re very excited about the social feeds and capabilities of Yammer and very committed to continuing it as a standalone business,” DelBene said. At the same time, Microsoft will obviously be looking for synergies, particularly with Office365, he said. Yammer will be less relevant to the strictly on premises environment of SharePoint, he said. “SharePoint is a tremendous success and has a bright future, and Yammer is a tremendous success and has a bright future,” Ballmer said. “One of the keys, of course, is really getting the integration right.” When asked whether the model for integration might follow that of Skype, which has been allowed a degree of autonomy after its acquisition, Ballmer said, “sure” but did not elaborate. — Source: The BrainYard
Wipro, SAP partner to launch app

Wipro Technologies has launched the first ever mobility app for the manufacturing sector — Wipro m-eXecute. Developed in collaboration with SAP, the app caters to the niche space of manufacturing operations. The solution is built on the Sybase Unwired Platform (SUP) and enables the functionalities of SAP’s proprietary ‘Manufacturing Execution 6.0’ solution for iPhone and iPad users. The company says other platforms will be added in later releases. “Wipro m-eXecute is based on two principles: faster decision making and timely execution of transactions, and we have found that these are the two most important objectives for any manufacturer. We have designed it to enable collaborative decision making across the manufacturing value chain,” said N.S. Bala, Senior Vice President and Global Head - Manufacturing & Hi-Tech, Wipro Technologies. Discrete industries like Hi-Tech, automotive, semiconductor, etc., are likely to embrace this app. “Wipro m-eXecute provides extended value to our customers by enabling mobile access to the shop floor user. This innovation demonstrates the ability of Wipro to accelerate delivery of key SAP initiatives through collaboration with SAP,” said Mike Lackey, VP of LoB Manufacturing Solutions, SAP. Wipro is a strategic partner for stakeholders across the entire manufacturing ecosystem and offers a range of solutions across various domains in discrete and process manufacturing and the Hi-Tech industries. —InformationWeek News Network
Tata Communications launches global low latency network

Tata Communications has launched a low latency network (alias LOLA), which aims to seamlessly connect major financial capitals in Asia, the United Kingdom and the United States. The network is the first low latency service that offers a pure multipoint Ethernet platform to the financial services sector and other global industries, accelerating global high frequency trading and other low latency applications. Designed for companies that require a secure, reliable and fast low latency solution, the network will enable financial firms to execute a high frequency trade between locations, such as London and Hong Kong or New York and Singapore, in milliseconds, through a single network and single supplier model. In a pre-briefing call with InformationWeek, just a few hours before launch, John Hoffman, Head of Ethernet Product Management, Tata Communications, said LOLA connects six major global financial centres in London, New York, Chicago, Tokyo, Hong Kong and Singapore. Hoffman said Tata Communications is also launching LOLA in India, South Africa, and other emerging markets where it has the shortest path of cables already existing, or other differentiators. “Banking and Financial institutions do trading on low-latency networks. The transactions are done in a few milliseconds and they make a small amount per dollar invested. But because the volumes of the transactions are high they may be making 1 – 2 million dollars per trade, per second. And let’s say they trade 15 times a day, making 15 million dollars a day. But they are investing hundreds of millions of dollars at once. The only two reasons they can do this is that they have invested in a low latency network and also in software that is automatically seeking out the
differences between exchanges and instituting buy and sell orders almost instantaneously, to take advantage of that arbitrage,” said Hoffman. The new network enables customers with low latency needs to work with a single global supplier instead of multiple country-specific point-to-point network providers. Customers can build multipoint low latency networks that communicate from city-to-city rather than exchange-to-exchange to serve applications for which latency is crucial, regardless of the software or trading platform used. Hoffman informed us that there is a “tremendous amount” of electronic trading going on; in some exchanges 92 – 95 percent of the trades are done electronically. “These electronic trades are done by computers running sophisticated algorithms, supervised by intelligent people. But the networks have to be fast enough for people to see opportunities,” he said. Apart from Banking and Financial institutions, Hoffman sees demand for low latency networks from the Pharmaceutical and Telecom industries
and from companies that do Internet-related work. And when we asked him what the differentiator is, he said it is about offering customers the shortest path and also the best software algorithms. “The special sauce is the specialized trading software with algorithms for that trading firm. This is written by certain software firms. They are very secretive about how they write these algorithms,” revealed Hoffman. Hoffman said that, unlike the competition, Tata Communications is also ready to offer low latency networks to other industry verticals. While Tata Communications prides itself on being the first company in the world to fully own a fiber cable network that runs around the world, most of its competitors simply purchase capacity from telecom companies and build offerings on top of it. The Tata Global Network includes one of the most advanced and largest submarine cable networks, a Tier-1 IP network with connectivity to more than 200 countries across 400 PoPs, and nearly 1 million square feet of data center and collocation space worldwide.
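The cross-exchange arbitrage Hoffman describes (software that spots price differences between venues and fires near-simultaneous buy and sell orders) can be illustrated with a minimal sketch. Everything here is hypothetical: the function name, the venues, the quotes, and the one-cent cost threshold are invented for the example, and real trading systems are far more sophisticated.

```python
def find_arbitrage(quotes_a, quotes_b, cost_per_share=0.01):
    """Hypothetical illustration of cross-exchange arbitrage detection:
    buy on the cheaper venue and sell on the dearer one whenever the
    spread exceeds the estimated transaction cost."""
    orders = []
    for symbol in quotes_a.keys() & quotes_b.keys():
        ask_a = quotes_a[symbol]["ask"]  # price to buy on venue A
        bid_b = quotes_b[symbol]["bid"]  # price to sell on venue B
        if bid_b - ask_a > cost_per_share:
            # Pair the orders: buy where it is cheap, sell where it is dear
            orders.append(("BUY", "A", symbol, ask_a))
            orders.append(("SELL", "B", symbol, bid_b))
    return orders

# Invented quotes: a 5-cent spread against a 1-cent cost triggers a pair
venue_a = {"XYZ": {"ask": 100.00, "bid": 99.98}}
venue_b = {"XYZ": {"ask": 100.07, "bid": 100.05}}
print(find_arbitrage(venue_a, venue_b))
```

The latency of the network sets how stale these quotes are by the time the comparison runs, which is why the article's traders pay for the shortest path between exchanges.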
As of now this service is being offered directly by Tata Communications, but the company will eventually partner with local distributors, especially those with strong telecom connectivity in their regions. Hoffman did not give us pricing details. Service level agreements (SLAs) for the low latency network include near-real-time latency guarantees. Latency is measured every five minutes on a 24/7 basis, up to two decimals after the millisecond range, from point-of-presence (PoP) to PoP. All data is stored for trend analysis. —Brian Pereira
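The measurement regime described above (a sample every five minutes, kept to two decimals past the millisecond, retained for trend analysis) can be sketched in a few lines. This is purely illustrative: the class, the retention window, and the sample values are assumptions, not details of Tata Communications' actual tooling.

```python
from collections import deque
import statistics

class LatencyMonitor:
    """Illustrative sketch of PoP-to-PoP latency trend tracking:
    periodic samples, two decimal places past the millisecond,
    retained in a rolling window for trend analysis."""

    def __init__(self, window=288):
        # 288 five-minute samples cover 24 hours of monitoring
        self.samples = deque(maxlen=window)

    def record(self, latency_ms):
        # Keep two decimals after the millisecond, as the SLA states
        self.samples.append(round(latency_ms, 2))

    def trend(self):
        # Summary statistics over the retained window
        return {
            "mean_ms": round(statistics.mean(self.samples), 2),
            "min_ms": min(self.samples),
            "max_ms": max(self.samples),
        }

# Simulated London-New York round-trip samples (values are hypothetical)
monitor = LatencyMonitor()
for sample in [63.41, 63.39, 63.46, 64.10]:
    monitor.record(sample)
print(monitor.trend())
```

A real SLA monitor would of course timestamp each sample and alert on threshold breaches; the point here is only the sampling-and-trend shape the article describes.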
News: Open Source
India plays key role in Red Hat’s global plans
Open source poster boy Red Hat recently celebrated the 10th anniversary of its flagship solution, Red Hat Enterprise Linux. The firm has played a key role in the development of the open source ecosystem, and has been credited with launching the first-ever open source software subscription business, spurring the adoption of Linux across the enterprise. While the company continues to strengthen its footprint across the world, India is a vital cog in the company’s global success. Red Hat confirmed India’s importance by announcing the expansion and launch of two “Engineering Centers of Excellence” in Bangalore and Pune, India. What’s significant about the launch of the Engineering CoEs is the fact that the Pune center, at 50,000 sq ft, is Red Hat’s largest engineering facility outside North America. With the expansion in India, Red Hat hopes to incubate, sustain and support local talent and maintain a high quality of contributions to the open source
community locally and internationally. “India is central to our global goals. Besides playing a key role in our R&D operations, our entire product line is supported from India. We have grown here exponentially, and as we add people globally, our operations in India will grow proportionally,” states Paul Cormier, President - Products and Technologies, Red Hat. Red Hat is extremely bullish on the power of India’s software development talent, and Cormier cites the example of Gluster, an open source storage software company based in Bangalore — a company Red Hat acquired last year. Red Hat believes that Gluster’s technology can help customers leverage open source for cloud computing on commodity storage. India’s steady rise as a hub of software development is corroborated by Evans Data’s Global Developer Population and Demographic Survey conducted in 2011, which states that by 2015, India will surpass the United States with more than 3.5 million
Paul Cormier, president, Products and Technologies, Red Hat
professional software developers. Besides the development community, Red Hat is also bullish on India as a market for its products, and Cormier cites the examples of large organizations such as NSE, Airtel, Reliance and IDEA, which are using Red Hat’s products. — Srikanth RP
Software
India hosts 30 percent of global top R&D companies: Zinnov
India is fast emerging as a major hub for global research and development (R&D). According to research firm Zinnov, currently 30 percent of the top 1,000 R&D spenders across verticals have a presence in India. As per the report titled Global R&D Benchmarking Study: FY2011, there is a total installed talent pool of 2,20,000 in MNC subsidiaries in India, and these MNCs have spent USD 7-7.5 billion on this headcount in FY11 alone. The study also found that the opportunity areas for India to attract R&D investment span 13 sectors, with software being the most invested-in sector. “India definitely has the right potential to become a key R&D hub, not only in software, for which it has gained recognition globally, but also in other verticals such as Aerospace, Automotive and Defence,” Sidhant Rastogi, Director - Globalization Advisory, Zinnov, said. The Zinnov report, which analyzed trends in R&D activities and investments globally, stated that after a decline in global R&D spending in FY10, fiscal 2011 has seen
informationweek July 2012
an increase in spending, and the potential for India to be among the top nations for R&D has become stronger. Global net sales and global R&D spending have grown at a rate of 13.55 percent and 8.2 percent respectively. The contribution of R&D spend is divided across the North America, European Union and Asia Pacific regions at 36 percent, 34 percent and 7 percent respectively. Though the APAC region’s contribution as a percentage of global R&D spend is seemingly small, within the region R&D investments have increased a significant 28 percent as compared to the previous year. India is a key contributor to the APAC region, which hosts one-third of all global 1,000 R&D spenders. “The sentiment on the role of R&D in driving the future continues to remain positive across geographies. Global R&D investments have grown by 8.2 percent as compared to the previous year, FY2010. This growth has been primarily driven by organizations in the semiconductor, industrial & consumer hardware and electrical & electronic sectors,” said Rastogi. —InformationWeek News Network
SAP prepares to tap SMAC opportunity
Here’s how SAP is realigning its entire product portfolio to move its customers from a “system of record” to a “system of engagement” so that they can do “business in the moment”
By Brian Pereira

IT vendors have been closely monitoring business models and processes, and the way executives capture, store, analyze and apply business data. Software products are designed accordingly, strategies are aligned, billions of dollars are spent to acquire companies, and there are heavy investments in R&D. SAP, a maker of enterprise applications, has moved cautiously over the years, making a late move into areas like cloud and mobility. Unlike its competitors, which have been acquiring numerous companies of all sizes, most of SAP’s acquisitions in the mobility and cloud space were made in the last four to five years. And SAP has been picky about the companies it acquires — the major ones being Sybase (mobility), Business Objects (BI), SuccessFactors, and Ariba (cloud). Apart from this, SAP has strategic alliances with large vendors like Microsoft and smaller ones like NetBase. Now in its 40th year of operations, SAP has a huge base of ERP customers (190,000 customers worldwide, including 4,900 in India). It also has a base of 2 million developers. SAP wants to move its customers from a “system of record” to a “system of engagement” so that they can do “business in the moment.” And its strategy to integrate its social, mobility, analytics, and cloud offerings (see box ‘Businesses adopt SMAC for faster and accurate decisions’) is the key to achieving this objective. SAP has a leadership position in ERP, with a 47.20 percent share of the Indian ERP market (IDC). So when competitors like Ramco Systems and Salesforce.com came along and started offering SaaS-based enterprise software, it started thinking about cloud, and
eventually brought out its Business ByDesign (on-demand) portfolio. Today, SAP has solutions for on-premise, on-demand (cloud) and on-device (mobility). In fact, SAP’s business is built on five pillars: cloud, mobility, database & technology, analytics, and core business applications. In the first 30 years it focussed only on core business applications. So, it has achieved a lot in the last 10 years. For this story, we report on SAP’s cloud, mobility, and analytics strategy (SMAC).
Analytics, HANA and ERP
SAP believes the combination of mobility and analytics will be the next killer app in the enterprise. That’s why it spent USD 5.8 billion to acquire Sybase in 2010. Core to its analytics strategy is HANA, an in-memory database appliance that processes records at incredible speed. Atul Patel, Vice President, APJ, SAP Analytics says, “We just did a benchmark for a 100 terabyte database with 100 billion sales and distribution records — and we ran that on 16 IBM X5 servers costing USD 600,000. We achieved 20 times compression of the database in memory. It took less than a second to do a BW analytics query. Analytics slicing and dicing took less than a second to two seconds.” Analysts believe HANA will be
the game changer for SAP. It will be the centerpiece in the integration of SAP’s on-demand, on-premise, and on-device solutions. However, there was a time when SAP did not own its own database, and its customers had to integrate its ERP with solutions from Oracle. Eventually, SAP created Business Information Warehouse (BW), which combined data warehouse functionality with a business intelligence platform. Patel informs us that BW is mainly used for reporting, and customers were looking for a solution that extracts selective data from the database (specific views) to create data marts for specific LOBs (lines of business) like HR, Sales, Operations, etc. As SAP did not have all the components in the enterprise applications stack, its customers began to use a mix of solutions from other vendors, including some of their own customized applications. And that generated a lot of non-SAP data, posing another challenge. “There was a need to combine non-SAP data with SAP data and create an ODS (operational data store), to extract a report. It has also been a practice for businesses to separate their transactional and analytics data — and to have separate databases for each. It was perceived that mixing the two would
impact performance. All this resulted in a number of layers, and now SAP wants to replace all these layers with HANA,” says Patel. HANA will connect directly with SAP ERP in the back-end, which will further integrate with the mobility and cloud platforms. And this combination will create what SAP calls a “system of engagement” — enabling executives to pull selective data from the transactional database and do “business in the moment.” According to SAP, HANA can accept data from legacy (non-SAP) databases and also offers unlimited scaling. It can process both structured and unstructured data. And it offers both OLAP and OLTP capabilities.
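The speedups SAP quotes come largely from keeping data in RAM in a compressed, columnar form, so an analytic query touches only the columns it needs. The toy below contrasts a row-store and a column-store scan for one aggregate; it illustrates the access pattern only, not HANA's actual engine, and the table and figures are invented.

```python
# Toy contrast of row-store vs column-store access for one aggregate query:
# "total revenue for region 'APJ'". The column store scans just two columns;
# the row store touches every field of every row. Illustrative only.

rows = [
    {"order_id": 1, "region": "APJ", "revenue": 120.0, "product": "A"},
    {"order_id": 2, "region": "EMEA", "revenue": 75.0, "product": "B"},
    {"order_id": 3, "region": "APJ", "revenue": 30.0, "product": "A"},
]

# Columnar layout: one array per attribute, same positions across arrays.
columns = {
    "region": [r["region"] for r in rows],
    "revenue": [r["revenue"] for r in rows],
}

def total_row_store(rows, region):
    return sum(r["revenue"] for r in rows if r["region"] == region)

def total_column_store(columns, region):
    # Scan only the two relevant columns, in position lockstep.
    return sum(rev for reg, rev in zip(columns["region"], columns["revenue"])
               if reg == region)

print(total_row_store(rows, "APJ"), total_column_store(columns, "APJ"))  # 150.0 150.0
```

On billions of rows the columnar layout also compresses far better (repeated region strings encode cheaply), which is how a 100 TB set can shrink to fit in memory.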
Mobile analytics enables one to analyze key metrics and uncover data trends, on-the-go. So, one can instantly share business insights with others. For instance, a retailer will be able to view sales of a particular product for a specific quarter, and then compare the monthly margins for various products. He can also drill down using data from the transactional system in real-time. This is done using real-time analytics from HANA in the backend, running
SAP BusinessObjects Mobile. “People want to see analytics on-the-go, so I think mobile analytics will be a killer application. The mobile analytics solution is available on the app store,” says Patel. SAP BusinessObjects Mobile presents information from BI or web intelligence reports. It also has the capability to use information from HANA. For instance, many SAP customers use Crystal Reports, and all these scenarios are now available on its mobile analytics platform. So all Crystal Reports or web intelligence reports built on-premise are now available on the mobile device. What’s more, SAP has embedded analytics in ERP. “Customers who buy our ERP solution today also get Business Objects. SAP is offering
embedded content — with more than 100 reports, mashups and dashboards created using Business Objects. So, if you are in an ERP screen, you will see a Business Objects element embedded. This is called embedded analytics. In addition, you can use this framework to build extra reports on top of ERP,” informs Patel.
Apart from analytics and BI, SAP also wants to extend certain enterprise functions to the mobile platform. It recently acquired a company called Syclo for solutions that address mobile asset management, field service, inventory management and approvals/workflow. These will now be integrated with the Sybase Unwired Platform. SAP is also working closely with
Businesses adopt SMAC for faster and accurate decisions
The tech community and tech-savvy business users are now talking about the latest buzzword, abbreviated SMAC (Social, Mobility, Analytics and Cloud). Here is a lowdown on SMAC and how it is impacting business today. Business models are changing and the pace of decision making has increased. Executives across the ranks need instant access to the latest and most relevant business information. And they need to have this information on their personal devices. Remember how push e-mail revolutionized business communications? Well, the same thing is now happening for business data. Ten years ago, all business information resided on-premise, in a centralized database, and was accessed using enterprise applications through a client-server model. Employees had to use desktop computers to log in to an ERP, CRM or SCM system. Before that, they had to go through a training process to familiarize themselves with a complicated interface, processes and workflows (remember all those SAP training sessions?).
Today, the dull battleship grey interfaces in ERP are making way for colorful interactive interfaces with data visualization tools. ISVs are building customized, industry-specific templates, layered on top of those dull ERP interfaces. This improves the user experience, and enables executives to quickly access the latest business information. With analytics capabilities integrated, one can access the most relevant and updated business information — to make on-the-spot decisions. Using social or collaborative tools on the cloud, groups of users can review this information and collaborate on those decisions. And since this information is also accessible on mobile devices, one need not be in the office (or behind a desktop) to access it. This combination of Social, Mobility, Analytics and Cloud technologies (abbreviated SMAC) is what businesses are adopting today, for the cutting edge. Just look at how SAP, Oracle, Microsoft, IBM and others are acquiring companies or developing applications to address the demand for SMAC.
three partners for mobile solutions: Adobe, Appcelerator and Sensor. But enterprise users also want to build their own applications to enable mobile workers to access enterprise data. For that, SAP has the Sybase Unwired Platform (SUP). It says there are more than 2 million developers who have committed to the platform. Sybase and RIM have been working together for some time to create applications, and now SAP wants to have similar partnerships with Apple and Samsung. “At the end of 2010 we had just two mobility applications. But at the end of 2011, this number shot up to 150 plus. A bulk of these apps came from partners. When 2 million developers create apps, the power of this platform will be realized. Ever since we acquired Sybase, we have been working very aggressively with partners,” said Alok Goyal, COO, SAP Indian Sub-continent.
There is also the trend of users bringing in their own devices and trying to connect these to enterprise networks. That presents IT with a new set of challenges, like securing data and preventing data leakage. What’s more, managing different devices and different mobility platforms (Android, RIM, iOS, Windows Phone, Symbian) is a nightmare for IT managers and CIOs. SAP has addressed this through Sybase Afaria, a software platform that delivers centralized control of all mobile devices and tablets — including iPhone, iPad, Android and BlackBerry. SAP Afaria offers enterprises the flexibility to deploy on-premise or partner-hosted. “We want to provide the best applications and the best underlying platform for user enterprises. One platform will manage heterogeneous devices — everyone carries a different device, so how do you manage all these different devices in the enterprise in a secure and robust manner? And that’s what SAP Afaria will address,” said Goyal.
SAP CIO Oliver Bussman said SAP has itself been using different mobile devices internally and has tested its Afaria and Unwired platforms. “We are the second largest user of iPads globally, with 17,000 iPads used by SAP. We also have 13,000 iPhones deployed. And we have started deploying different Samsung Android devices. So we have a mobile device agnostic strategy,” said Bussman.
SAP also wants its large base of customers to come and experience its Afaria and Unwired platforms, and to “play” with all the applications. It has latched on to the consumerization of IT trend, and wants to take the same approach as Apple. Prospective or existing Apple customers can walk into an Apple store, use its various products, and explore the features and applications. There is a special section in the Apple store called the Genius Bar, where Apple experts are on hand to address technical queries from users. SAP is trying to create the same experience and give a ‘consumer-like’ feel for its mobile applications. It recently unveiled a Mobile Solutions Center (MSC) in Mumbai. The MSC is
dedicated to helping customers experience SAP’s mobile innovations as well as understand how to integrate mobile solutions into their business strategy. It plans to launch more MSCs in other Indian cities, and also in other countries. A press release notes that the MSC will connect businesses with SAP mobile industry experts, from whom they can learn best practices to develop or expand their mobile strategy. Echoing the Genius Bar concept in Apple stores, companies or customers visiting the MSC can interact with SAP solutions in the experience zone, where they can gain hands-on experience of real-time mobile scenarios across industries. And of course, they can play with the wide range of SAP mobile business apps.
Cloud + Social
SAP’s Business ByDesign portfolio brought applications that were meant for large enterprises to the SME sector. The move from CAPEX (licensing models) to OPEX (pay-as-you-use) is an attractive proposition for SMEs that are unable to make huge upfront investments in software licenses. But SAP insists that its cloud offerings are not meant to replace on-premise applications — rather, on-demand will complement on-premise; an SAP cloud application will pull data from an SAP application on-premise and make it available to users on the cloud. So users will have specific views of this data on the cloud, and collaborate on decisions and workflows, using social media interfaces. SAP has also acquired companies like SuccessFactors and Ariba to take its plans for cloud forward. “We see a lot of uptake for cloud
ERP (OPEX) from the SME sector. It is a TCO value proposition. But with large enterprises, it is an innovation value proposition. Some of our large enterprise customers have chosen our supply chain solution on-premise, but are also looking for a cloud-based reverse auction platform or cloud-based sales force automation. So the investment in on-premise is extended to the cloud or mobile platforms,” said Neeraj Athalye, Head - Cloud Business, SAP India. SAP customers want to collaborate on the decision making process, on the cloud. And that’s where SAP StreamWork, a collaborative decision making software, comes in. It connects to the SAP backend, so you can pull your purchase history or account history from the ERP into a collaborative decision making environment, explains Athalye. StreamWork is aimed at the “Facebook generation” of users, and it allows them to create an activity and add people, who become part of the collaborative discussion. Users receive an e-mail with a link to the discussion in StreamWork. Users can add files, agendas, tool catalogs, etc. The solution includes time management, coordination tools, voting, video and discussion tools too. And there are many partner tools embedded, like Mind Mapping, ConceptDraw, Box.net, etc. SAP uses StreamWork internally for collaboration among its employees and partners. Records of discussions are stored on the cloud, not on the client, so anyone joining the discussion later can pick up the history. If the event is captured from start to end, one can audit it in the future. So it doesn’t matter if people move on or teams change. There are iPad and BlackBerry apps available for StreamWork. But again, analytics will be the killer app for collaboration on the cloud. SAP is reselling a social media analytics solution from NetBase, and is also integrating it with SAP Business Objects.
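The StreamWork behaviour described here — server-side activities whose full history is kept on the cloud so that a latecomer can replay it and an auditor can review it — can be modelled as an append-only event log. The class below is a conceptual sketch of that idea, not StreamWork's actual API; all names are invented.

```python
# Conceptual sketch of a collaborative "activity" with server-stored,
# append-only history, so participants who join late can replay everything.
# This models the behaviour described in the text, not StreamWork's real API.

class Activity:
    def __init__(self, title, creator):
        self.title = title
        self.participants = {creator}
        self.events = []                    # append-only audit trail
        self._log(creator, "created activity")

    def _log(self, user, action):
        self.events.append((len(self.events), user, action))

    def add_participant(self, inviter, user):
        if inviter not in self.participants:
            raise PermissionError(f"{inviter} is not in this activity")
        self.participants.add(user)
        self._log(inviter, f"added {user}")

    def post(self, user, item):
        if user not in self.participants:
            raise PermissionError(f"{user} is not in this activity")
        self._log(user, f"posted {item}")

    def replay(self):
        """Full history for anyone joining later (and for future audits)."""
        return list(self.events)

act = Activity("Q3 supplier review", "asha")
act.add_participant("asha", "ravi")
act.post("ravi", "purchase-history.xlsx")
print(len(act.replay()))  # 3 events: create, add, post
```

Because the log lives on the server and is never rewritten, "people move on or teams change" without losing the record, exactly the property the article highlights.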
Predictive analytics is the next thing that customers are demanding. Here’s one scenario of how it can be used in the retail industry. Data collected from stores can be used for retail store segmentation analysis. The analysis can be performed in the database in HANA, and also in data sources such as Business Objects Universe and in CSV files. The solution used here is SAP BusinessObjects Predictive Analysis. In addition to the algorithms for predictive analysis, the product also includes new data visualization tools. Until recently, SAP was reselling a predictive analytics solution called SPSS. But IBM bought the company in 2009, so SAP either had to acquire a predictive analytics company or build its own solution. It decided to use the IP acquired from Business Objects (its BI platform) and created SAP BusinessObjects Predictive Analysis. “We have a big Business Objects customer base (of 46,000 customers worldwide) and they are demanding predictive analytics. Secondly, we believe that, with the help of HANA, we can create an in-memory data mining experience,” says Patel. The solution lets users create new predictive models, do data visualization, and share the results in PMML (Predictive Model Markup Language) with other tools in the market. In addition, SAP has its own models stored in the HANA Predictive Analytics Library (PAL), and also those in the open source system R. SAP is now competing with SPSS, Cognos, SAS and some smaller players in the predictive analytics market. But its late entry into this space has also allowed it to observe the deficiencies in competing products and address them in its own predictive analytics solution. The large library of predictive models is a case in point.
Patel informs us that SAP has customers like AirAsia in Malaysia using this solution to study the customer experience, and compare this with data from other airlines. Then there are a couple of consumer product companies in Singapore evaluating it. An Australian bank wants to gauge what customers are saying about its products and services, and wants to sign up for this tool.
Look closely at SAP’s proposition and you’ll see that analytics and HANA are the centerpiece of its SMAC strategy. With its large ecosystem of developers and a wide base of SAP ERP users across verticals, SAP is poised to revolutionize the way business is done today. Competitors like Oracle, IBM, EMC, Salesforce.com and Microsoft are keeping a close watch on SAP,
and there are smaller competitors like Roambi, Workday and NetSuite. But will prospective customers who have invested in, say, Oracle switch over to SAP because of the benefit of speed, integrated products and a better user experience? That may be so only with the larger enterprises that benefit from the full spectrum of solutions — the ones that look at transformation in a big way. The CIO of a medium-sized company told us recently that his product margins aren’t large enough to justify the ROIC (return on invested capital) for SAP. He is making do with customized solutions for now. SAP will want to first get such companies on to its on-demand platform. Meanwhile, the battle for SMAC is going to get heated in the next few months, and all eyes are on SAP, SAS, EMC, Oracle and IBM. Brian Pereira email@example.com
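A footnote for technically minded readers: the retail store segmentation mentioned in this story is typically a clustering job. Below is a minimal k-means in plain Python, standing in for the idea only; SAP's product runs such algorithms in-database via HANA's Predictive Analytics Library or R, not like this, and the store figures here are invented.

```python
# Minimal k-means for retail store segmentation (an illustrative stand-in
# for in-database predictive algorithms). Each store is (weekly_sales, margin).

def kmeans(points, k, iters=20):
    # Deterministic init: spread initial centroids across the sorted points.
    pts = sorted(points)
    centroids = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[i].append(p)
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(cluster):
    n = len(cluster)
    return tuple(sum(p[i] for p in cluster) / n for i in range(len(cluster[0])))

stores = [(120, 0.31), (130, 0.29), (125, 0.30),   # high-volume stores
          (40, 0.18), (35, 0.21), (45, 0.19)]      # low-volume stores
centroids, clusters = kmeans(stores, k=2)
print(sorted(len(c) for c in clusters))  # two segments of three stores each
```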
Can IRM solve security issues related to BYOD?
Unlike the device-centric strategy advocated by major vendors to solve the BYOD problem, security vendor Seclore is proposing a simple content-centric IRM approach that it claims will work on any device
By Srikanth RP

As more and more employees bring their devices into the workplace, the first reaction of IT has been to apply a policy similar to the one followed for corporate-owned devices. In some organizations, the IT function allows only specific devices to be part of the network, as it has the capability to manage only certain types of devices. However, this is easier said than done.
BYOD in a disparate world
Traditionally, the IT function had to deal with just a homogeneous desktop PC environment. But the Bring Your Own Device (BYOD) trend is forcing it to manage mobile devices from multiple vendors running on disparate operating systems, such as iOS, Android and Windows Mobile. The introduction of new unsecured devices is creating a security hole for organizations. A Mobile Device Management (MDM) policy focused on specific devices will defeat the true purpose of BYOD. Given the diversity of mobile platforms, few organizations are prepared with a security strategy for this emerging world. Some enterprise companies have even suggested the use of logical partitions — one for personal and the other for professional use — wherein the IT function has complete control over the professional partition. Other enterprise companies are using MDM features such as remote locate, track and wipe if a device is lost. However, in an era where the thin line between work and home is rapidly vanishing, and a number of companies are giving their
employees the option of working from home, it is extremely difficult for any organization to control how employees consume or use information. “Most of the current MDM systems cannot even prevent the copying and transfer of information between one logical partition and another logical partition of the same device. From a data security perspective, this is an absolutely basic requirement,” says Vishal Gupta, CEO, Seclore. Gupta argues that controlling the end device will not work in an era where the form factor could range from a mobile device to a tablet or a kiosk. “MDM systems allow contextual and policy-based access to information. However, they do not differentiate between the right and wrong use of information. For example, what happens if a rightful owner of information downloads information on his tablet and copies it to another personal device? If this employee leaves the organization, the information leaves with him,” says Gupta. The BYOD issue also brings into focus the company’s insistence on managing personal devices. Most users object to such an approach, as they do not like the company controlling, and installing device management software on, a device they have bought with their own money. This is also risky, as a remote wipe initiative can inadvertently wipe off personal information.
A new approach to BYOD
Previously, a company’s information network ended at its firewall, and its valuable data remained relatively secure within that network. But today, information is no longer contained within the four walls of the business,
and the network today ends with the user, and ultimately with the device that the user uses. Security, hence, has to go where the information goes. This can be enforced using Information Rights Management (IRM), which embeds security in the information itself. Hence, unlike an MDM policy, which permits only ‘X’ or ‘Y’ mobile device to work, an IRM solution can ensure that enterprises adopt a BYOD policy without device restrictions, and have personal devices accessing corporate information. IRM allows organizations to set rules regarding who can access data. Prevention of screenshots and of copying and pasting, together with a clear definition of who can access the data, makes unauthorized replication of the data extremely difficult. “With IRM, an enterprise can do away with the need of controlling devices. There is no need for partitioning either, as security is built into the content itself,” explains Gupta. To showcase the capability of IRM on mobile devices, Seclore recently launched an IRM solution for Apple’s iOS platform. The solution will enable enterprises to collaborate across enterprise-managed devices and (personal) iPads and iPhones without worrying about information breaches. The application can be downloaded from the Apple App Store. This is an entirely different approach to the vexing problem of BYOD, and offers enterprises a promising alternative that they must consider before they decide on an MDM policy. Srikanth RP firstname.lastname@example.org
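The content-centric idea (the access policy travels with the document rather than with the device) can be illustrated with a small wrapper that refuses to release content unless the embedded policy allows the requested action. This is a conceptual sketch only; Seclore's actual product also encrypts the payload and enforces client-side controls like screenshot blocking, which a few lines of Python cannot show, and the names here are invented.

```python
# Conceptual sketch of Information Rights Management: access rules are
# embedded in the protected object itself, so enforcement follows the
# content to any device. Real IRM also encrypts the payload and blocks
# copy/paste and screenshots client-side; this sketch shows only the
# embedded-policy check. All names are invented for the example.

class ProtectedDocument:
    def __init__(self, content, policy):
        self._content = content       # in a real system: ciphertext
        self.policy = policy          # travels with the document

    def open(self, user, action="view"):
        allowed = self.policy.get(user, set())
        if action not in allowed:
            raise PermissionError(f"{user} may not {action} this document")
        return self._content

doc = ProtectedDocument(
    "Q3 forecast figures",
    policy={"asha": {"view", "edit"}, "ravi": {"view"}},
)
print(doc.open("ravi"))          # allowed: ravi can view
# doc.open("ravi", "edit")       # would raise PermissionError
# doc.open("ex-employee")        # no rights once removed from the policy
```

Because the check belongs to the document, not the device, the same file is equally protected on a managed laptop, a personal iPad, or a kiosk, which is exactly the argument Gupta makes against device-centric MDM.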
Infrastructure Management Services
Metamorphosis of infrastructure management services in India
Specialization drives IT managed services market
Why CIOs are opting for IT infrastructure management services
Metamorphosis of infrastructure management services in India
The market is growing in India amid changes in the delivery model, nature and complexities of work, and duration of the deal cycles By Ayushman Baruah
India may still form only a small slice of the pie in the global infrastructure management (IM) services market, but its share and momentum of growth are fast increasing. The current size of the global IM services market is around USD 370-380 billion, of which close to 40 percent is outsourced, i.e. around USD 150-180 billion. Today,
Indian service providers deliver about USD 4 billion worth of services in the IM services space. For most Indian providers, the bulk of their revenues come from the U.S., while India contributes only a small share. For instance, Microland, a pioneer in remote infrastructure management (RIM), generates 83 percent of its revenues from the U.S. and 17 percent from India.
For smaller players like Zensar, India contributes less than 2 percent of their global IM revenues. These figures are, however, changing as India begins to contribute more revenue. According to Zensar, a multi-shore end-to-end IM services company, India is a growing destination for IM services, but it is a highly commoditized service market, strongly competing on
price and dominated by a few larger players. The key driver of the market and the core differentiator for providers would be in offering niche and specialized IM service projects with high-end skills. “These are the kind of projects being awarded to service providers like us. More and more companies in India are looking at total IM outsourcing with the objective of generating significant cost savings,” says Krishna Ramaswami, Senior Vice President, Global Infrastructure Management Services, Zensar. The mode of delivery of IM services has also changed and evolved over the years. While in India, onsite delivery of IM services has been predominant, the trend is changing to a more hybrid model today. According to Zinnov, there were two reasons that made Indian customers deploy their personnel on-site. First, it gave them a comfort factor. The second reason was the lack of appropriate IT infrastructure and tools in Indian corporations to perform support remotely. However, as organizations are global today, with multiple offices across geographies, the hybrid approach seems to be working out better in many ways. Most of the providers are experiencing an equal mix of requirements. “While there are multiple requirements for the on-site “fix on fail” and “hands and eyes” support, there is an equally large demand to provide remote services, especially for server and network support. The domestic opportunity will largely be on high volumes and lower billings and will form a significant base for growth for IMS companies,” says Ramaswami of Zensar. Players such as Dell Services are witnessing more and more customers asking them to decide the best model
for delivery of services to them. “We have capabilities to deliver services both from customers’ premises, as well as from our remote delivery centers. Both these models have their own advantages and a final decision is taken based on what works for the customer based on their requirements and strategic priorities,” says Sumed Marwaha, Country Manager - IMS, Dell Services India. “In India, we are focusing on some key customers where we already have existing relationships. We focus on value-added SLA-based services rather than just the provision of manpower-based services. We also have a customized go-to-market model for India. This model brings in the global best practices to India at a market price point.” Given that globally, RIM is the preferred model, Indian providers are utilizing this opportunity in a big way. RIM is the remote management of a company’s IT infrastructure by a service provider: its workstations (desktop PCs, laptops, notebooks), communications and networking hardware and software, as well as applications. Remote monitoring and management is undertaken through global delivery centers, where skilled staff monitor and manage the infrastructure, ensuring uptime and availability. With a growth of over 30 percent CAGR during the initial years

“In the next three to five years, all enterprises will run a hybrid infrastructure environment and the role of IM service providers will evolve”
Vaibhav Tewari, VP & Business Head - Cloud Services, Microland
and sustaining above 15 percent CAGR for the past couple of years, RIM has contributed significantly to the country’s overall IT services exports. One significant innovation that occurred much earlier in RIM than in application development management (ADM) services is the disruption of linearity between revenues and effort: companies were quick to move to an outcome-based pricing model and to offer efficiencies through automation. All industry verticals are now considering India a preferred location for outsourcing their infrastructure and application services. While some of this takes the form of setting up captives (predominantly among BFSI organizations), others are adopting more conventional outsourcing models. “While initial interest in India as a RIM service provider was mainly due to the significant cost arbitrage, this was soon overtaken by other benefits such as labour arbitrage, productivity improvements through improved operations, productivity from better tools and incremental value from value-added services,” says Rama Murthy Prabhala, AVP, Practice Head, Manufacturing at Infosys. Kaseya, a pioneer in automated managed services (AMS), says the recent trend points to an increase in adoption of AMS among small- and medium-sized businesses in India. Enterprises with highly distributed, complex IT environments, such as those in the retail, BFSI, healthcare and education verticals, have adopted AMS. “Kaseya’s low-cost, highly flexible AMS solution can ensure very low downtime, superior visibility and control of IT through a single-pane-of-glass interface, enhanced security of data and assets, ease of IT management and more,” says HS Shenoy, Director of Marketing, Kaseya India.

RIM services have been a major growth driver for the Indian services industry in the past few years, and have the potential to become the third-largest revenue contributor to the IT services industry by 2020 (Source: NASSCOM)

july 2012 i n f o r m at i o n w e e k 25
Cover Story
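The remote monitoring model described above, with delivery-center staff watching uptime and availability against SLAs, reduces at its core to repeated reachability checks rolled up into an availability figure. Below is a minimal illustrative sketch in Python; the hostnames, probe history and the 99.9 percent target are invented for illustration, and are not drawn from any provider mentioned in this story:

```python
# Illustrative sketch of availability roll-up as used in remote
# infrastructure monitoring. All hosts, data and targets are hypothetical.

def availability(results):
    """Percentage of successful checks in a list of True/False probe results."""
    return 100.0 * sum(results) / len(results)

def sla_met(results, target=99.9):
    """True if measured availability meets the SLA target percentage."""
    return availability(results) >= target

# Simulated probe history for two managed servers (True = reachable).
history = {
    "app-server-01": [True] * 999 + [False],      # 99.9 percent uptime
    "db-server-01":  [True] * 995 + [False] * 5,  # 99.5 percent uptime
}

for host, results in history.items():
    status = "OK" if sla_met(results) else "SLA BREACH"
    print(f"{host}: {availability(results):.2f}% uptime [{status}]")
```

In practice a probe would be an ICMP ping, an SNMP poll or an agent heartbeat, and results would be windowed per SLA reporting period; the roll-up arithmetic, however, is this simple.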
DEALING WITH SLOWDOWN
With the U.S. and the eurozone being the industry’s major revenue generators, the economic slowdown in the U.S. and the eurozone crisis have had a partial impact on the IM services industry. Infrastructure, especially the “run” part of it, is non-discretionary spend, and companies are bound to keep investing to keep their basic infrastructure running. Transformational infrastructure projects, however, which form part of discretionary spend, are being delayed or cancelled, and this is hitting the industry hard. “We are seeing a fairly robust demand environment. Enterprises need to keep their IT infrastructure running and therefore it is not a discretionary spend. Offshoring/RIM can help them cut costs and hence there is continued strong interest. Clearly, pricing becomes an important variable where customers expect yearly discounts during the term of a contract. Microland’s focus is to improve our productivity using automation and better processes to be able to support that expectation,” says VM Kumar, Chief Marketing Officer at Microland. The durations of IM services contracts have also shortened. Clients that earlier signed contracts
for seven years now prefer terms of about three years, with a maximum of five. Decision-making cycles have become longer, and most clients are not comfortable signing deals longer than five years. Apoorva Singh, Senior VP, iGate, says the primary reasons for shorter deals are business uncertainty and the need to keep IT aligned to the business. “Also, the buyer is much smarter now and wants to evaluate the sourcing decision in shorter time periods. The buyer also wants greater transparency and ‘skin in the game’ from the service provider, which is more easily obtained with shorter deal periods.”
THE CLOUD IMPACT
Cloud will change the way infrastructure management is looked at. “While the underlying pieces of infrastructure will remain the same even with the advent of cloud, the increased business focus in the new environment will make significant changes in the way we approach infrastructure management across all the components. IT as a service is the new delivery paradigm, and infrastructure management will continue to evolve to handle this new paradigm effectively,” says Vaibhav Tewari, VP & Business Head - Cloud Services, Microland. Cloud will eat into the IMS market only to the extent of software-as-a-service (SaaS), wherein the providers manage all the layers. But this, Tewari says, is only a small segment of the business. “The bulk of enterprises will retain a portion of their physical infrastructure or move to the private cloud, where they would still require somebody to manage it for them and help them re-architect it with the changing business environments.
Going forward, in the next three to five years, all enterprises will run a hybrid infrastructure environment, and the role of IM service providers will grow and evolve.” Most companies are preparing themselves for the cloud wave coming their way. For instance, Microland, a pure-play IM services provider, today offers consulting services to companies on their journey from traditional IT to a cloud environment. “We enable enterprises to transform their delivery and consumption of IT services to an on-demand model leveraging hybrid cloud environments, through our consulting, solution integration and IM service offerings. We work with global enterprises, helping them with their cloud roadmap across different stages. It’s easier for us because we understand our customers’ infrastructure requirements best, as we have been managing their physical infrastructures for a long time,” says Tewari. As companies are not likely to move their entire physical infrastructure to the cloud, IM service providers will still have their share of business. The nature and complexity of their work will, however, change. Customers will look for a single point of contact for all their infrastructure management, be it physical, virtual or cloud. Infrastructure management and monitoring tools will have automation, self-service and chargeback as key components, and providers will have to evolve accordingly. The focus will change from managing IT to delivering business value.

Ayushman Baruah
India’s revenues from RIM exports are expected to touch USD 12.5 billion by 2015 (Source: NASSCOM)
Remote Infrastructure Management Opportunity in India
• Remote Infrastructure Management (RIM) is a strong growth area for India-based providers, with revenue growing more than 40 percent on average
• The RIM industry in India is expected to reach USD 13 billion - USD 15 billion by 2013 (Source: McKinsey and NASSCOM)
• The market has the potential to create 3.25-3.75 lakh jobs by 2013 (Source: McKinsey and NASSCOM)
• A well-crafted approach to RIM offshoring can yield as much as 25 percent savings from total infrastructure spend budgets; savings from labor alone are in excess of 50 percent (Source: McKinsey and NASSCOM)
• By 2020, RIM is expected to contribute more than one-third of total IT services revenues for India
• The BFSI sector has the greatest traction in RIM services, contributing 52 percent of RIM exports, followed by emerging verticals like telecom, retail, and media & entertainment
• Drivers for RIM: scalability, improved risk mitigation and cost reduction; developments in the vendor and offshore supply environment; increased focus on managing labor costs and productivity
• India can sustain its leadership by ensuring 24/7 uptime of technical infrastructure, building a large pool of talent through investments in training, ensuring clients’ compliance requirements are met, and implementing security best practices (Source: NASSCOM)
Specialization drives IT managed services market
Specific niche areas such as printing, security and storage are emerging as strong growth drivers for the managed services market in India. By Vinita Gupta
While the overall IT Managed Services market continues to grow rapidly in India, specific niche areas are emerging within it. Analysts are observing a trend where organizations break up their IT infrastructure management engagements into specific areas and outsource them to vendors
that specialize in handling a specific function. Hot niche areas that are emerging in the managed IT services space include Managed Print Services, Managed Security Services and Managed Storage Services.
Perhaps, the most established niche is the Managed Print Services (MPS) space, which is dominated by vendors such as HP, Canon and Xerox. Reports
by independent research and analyst firms confirm this demand. A report by Forrester Research estimates that the MPS market in Asia-Pacific (excluding Japan) will grow to USD 1 billion by 2012. The Indian market is expected to be the fastest-growing in the region, with a CAGR of 22.6 percent. Similarly, a report by the Photizo Group estimates that the managed print services market in India is growing at a CAGR of 51 percent, and
is expected to be worth Rs 1,530 crore in 2012. While CIOs have traditionally looked at squeezing costs out of their IT infrastructure, printing is one function that has usually been overlooked. Specialist players in this space are evangelizing clients on the huge cost efficiencies that can be gained by availing of Managed Print Services. For example, pre- and post-analysis of some of HP’s managed print services customers’ imaging and printing operations reveals energy savings ranging between 30 percent and 80 percent, and reductions in paper consumption to the tune of millions of pages. “Over the years, organizations have realized that delegating their printing and imaging portfolio will help them save 15-30 percent in cost,” opines Shekhar Agrawal, Director - Managed Enterprise Solutions, HP IPG. A similar opinion is shared by Vishal Awal, Executive Director - Services, Xerox South Asia. He reveals that enterprises spend up to 2 percent of their revenues on running printers, copiers, scanners, fax machines and other devices that are often energy inefficient. “An office print optimization strategy can reduce this expenditure by as much as 15-25 percent while also decreasing energy consumed and waste generated. Additionally, employee productivity can be enhanced due to efficient management of print operations, and via innovative features like mobile and cloud-based printing.” In India, there are several examples of firms that have shown
that MPS delivers proven ROI. For example, by outsourcing its printing needs and management to Canon, the mid-sized firm MindTree achieved more than a 50 percent decrease in paper consumption and a 25 percent reduction in printing costs, as well as savings of over USD 62,000. Similarly, KPO firm Evalueserve has been able
to save approximately 40 percent on paper usage through forced duplex printing enabled by MPS. The firm has also managed to reduce the number of printers from 54 to 25, as the new arrangement provides secure printing, which in turn allows printers to be shared between multiple groups. The new printers consume less energy than the old ones, leading to 70 percent savings on electricity costs.

The IT managed services market in India is expected to hit USD 3.8 billion in 2013
“Organizations have realized that delegating their printing and imaging portfolio will help them save 15-30 percent in cost”
Shekhar Agrawal, Director - Managed Enterprise Solutions, HP IPG

In an always-on environment,
enterprises cannot afford even a single moment of downtime. In addition, as storage volumes grow at an exponential pace, enterprises have gradually started exploring options such as storage-as-a-service and DR-as-a-service. “Due to the tremendous amount of growth in data, it’s predicted that data will grow to 2.3 billion petabytes by 2020. Today, we see that more and more enterprises in India are looking at outsourcing their storage and backup,” states Nikhil Madan, Director - North, EMC, India. To tap this opportunity, EMC has partnered with vendor Tulip Telecom to offer managed on-demand storage services and Backup-as-a-Service (BaaS). Similarly, IBM is betting big on cloud-based DR and BCP services. As a growing number of Indian enterprises virtualize their infrastructure, they are looking at options to back up their virtual machines. To address this market, IBM is providing its SmartCloud Virtualized Server Recovery service, with a range of options: an always-available virtual machine, a virtual machine available only for use during disaster and testing, and a virtual machine for importing server images and backup data from storage media. Should disaster strike, users can directly access their data on the cloud via a portal, eliminating the need to travel to the offsite location. DR-as-a-service is a promising niche area showing good potential for adoption. For organizations struggling to monitor and maintain their Recovery Point Objective (RPO) and Recovery Time Objective (RTO) as per industry requirements, DR-as-a-service can prove an attractive option. If an organization outsources these services to a service provider, then as per the SLA, the vendor has to make sure that it meets the RTO, RPO and other business objectives. As most organizations do not have the
right IT infrastructure and expertise to maintain and meet the RTO and RPO requirements, this is fueling the need for outsourcing the requirements to specialist vendors.
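The RPO and RTO objectives described above lend themselves to a simple programmatic check: RPO bounds the tolerable data loss (time since the last good backup), and RTO bounds the tolerable restoration time. The sketch below is purely illustrative; the objectives, timestamps and function names are hypothetical, not drawn from any vendor’s tooling:

```python
# Illustrative RPO/RTO compliance check; all figures are hypothetical.
from datetime import datetime, timedelta

def rpo_breached(last_backup, now, rpo):
    """Data loss exceeds tolerance if the gap since the last backup > RPO."""
    return (now - last_backup) > rpo

def rto_breached(outage_start, restored_at, rto):
    """Recovery took too long if restoration exceeded the RTO window."""
    return (restored_at - outage_start) > rto

now = datetime(2012, 7, 1, 12, 0)
last_backup = datetime(2012, 7, 1, 7, 30)   # last good backup 4.5 hours ago
print(rpo_breached(last_backup, now, timedelta(hours=4)))   # True: 4.5h > 4h RPO

outage = datetime(2012, 7, 1, 9, 0)
restored = datetime(2012, 7, 1, 10, 30)     # service restored in 1.5 hours
print(rto_breached(outage, restored, timedelta(hours=2)))   # False: within 2h RTO
```

A provider operating under such an SLA would run checks like these continuously and alert before a breach, which is precisely the monitoring burden the article says most organizations would rather outsource.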
“Due to the tremendous growth in data, more and more enterprises in India are looking at outsourcing their storage and backup”
Nikhil Madan, Director - North, EMC, India

Security as a service
This is another market that is on the verge of exploding. With the growing complexity of attacks, enterprises are forced to regularly upgrade their security infrastructure. This is prompting Indian enterprises to tie up with specialist providers to better protect their infrastructure and ensure compliance. A case in point is iYogi, a firm that provides subscription-based tech support for PCs, connected devices and peripherals. The firm has partnered with Verizon to enhance the protection of its network and online payment system. Today, Verizon provides iYogi with a range of managed security services, such as compliance with PCI DSS and ISO/IEC 27002, vulnerability assessment, and investigative response services. Similarly, Religare has partnered with Tata Communications to better protect its critical network infrastructure and business applications.

HP’s MPS customers have recorded energy savings ranging between 30 and 80 percent in imaging and printing operations, and reductions in paper consumption to the tune of millions of pages

Cloud as a delivery model
The huge popularity of cloud-based technologies is also encouraging service providers to come up with new delivery models. “With cloud computing, the managed service offering is moving from a physical to a virtual environment. The organization has the option of outsourcing the whole infrastructure or a part of the IT environment to optimize the IT spends,” says A S Pillai, Senior Vice President - IT Services, Sify Technologies. For example, a bank can outsource specific IT workloads like Internet banking, asset management, mutual fund products and other small projects, but not the core banking platform. While the cloud is well-suited for delivering a commodity service to a large number of customers, delivering a specialized service is complex. “Specialized managed service is complex and hence there is a need to define the exact scope, roles and responsibilities (SLAs),” says Rishikesh Kamat, Senior Product Manager - Infrastructure Management & Managed Security Services, Netmagic Solutions. Organizations like Infosys have developed specialized offerings to target this rising opportunity. “We have developed IT assets, which enable our customers to maximize their returns on their IT infrastructure, as well as enhance performance and improve compliance,” states Chandrashekhar Kakal, Senior Vice President, Global Head of Business IT Services, Member, Executive Council, Infosys. The company has technology platforms in the market such as Infosys IT Asset Performance Management, ProcureEdge and TalentEdge, which are hosted, operated and managed in the cloud and are offered to organizations on an outcome-based pricing model. Driven by the increasing pressure on companies to control costs in the wake of the global economic slowdown, and the mammoth presence of the SMB market in India, analysts believe that the IT managed services market will experience huge growth in the domestic market.

“Specialized managed service is complex and hence there is a need to define the exact SLAs”
Rishikesh Kamat, Senior Product Manager - Infrastructure Management & Managed Security Services, Netmagic Solutions
Vinita Gupta
email@example.com
Why CIOs are opting for IT infrastructure management services
CIOs of Indian companies who have outsourced their IT infrastructure management highlight two key motivating factors: the pressing need for internal IT resources to focus on core IT functions, and increasing pressure on companies to save manpower and infrastructure management costs. These are driving them to avail of infrastructure management services (IMS) from technology vendors who are experts in this domain. Four eminent Indian CIOs share their perspectives on the benefits of IT infrastructure management outsourcing with Amrita Premrajan of InformationWeek.

Dr. Neena Pahuja, CIO, Max Healthcare
We entered into a contract with Dell Services in July 2009. It started with our requirement to implement Electronic Health Records (EHR) in our hospital for driving better quality of service to our patients. To implement EHR, it was also important to have an equally stable IT infrastructure environment. Our infrastructure then was not very stable; we had no redundancy and no remote manageability. In terms of infrastructure, we wanted to avail of complete data center hosting services, network services, server management, and support management of all our servers and network lines, to provide the right kind of industrial product and to be able to do remote check-ups of how we are working. So we partnered with Perot Systems, which was later bought by Dell. The combined contract to take care of infrastructure,
as well as the implementation of EHR, with support for our Hospital Information Service (HIS) was given to Dell Services. We created a private cloud, as well as a redundant clustered environment to ensure high uptime and scalability at the data center of Dell Services. And that has helped us scale up without any issue to our new hospitals. When we finalized the contract, we were at eight locations, then we started our hospitals at four new locations: Shalimar Bagh, NCR; Mohali, Punjab; Bhatinda, Punjab; and Dehradun, Uttarakhand. Today, we have been able to scale up to more than 100 percent from an IT point of view. And I am pretty sure if we want to open up more hospitals, we can scale up the same environment. What we have today is much more stability and an environment that has much more redundancy. If something goes down, there is another environment that automatically takes over and provides seamless service to our consumers and customers.
Cloud powers infrastructure management In September 2009, Max Healthcare chose Dell Services as a technology partner to manage various IT operations, including infrastructure management, data center hosting, applications portfolio management, project management office, clinical transformation and implementation of EHR. After a year, Dell Services converted the information technology infrastructure of Max Healthcare facilities into a private MPLS (Multi-Protocol Label Switching) cloud, running remotely from Dell Services data center in Noida. As Max Healthcare adds more hospitals to their network, the cloud deployment gives a near plug-and-play capability for information technology deployment.
Samrat Das, CIO, Tata AIG Life Insurance Company
The key driver to choose infrastructure management services was the need to improve IT services to business, which is my core function. Three key reasons led us to opt for IMS provided by Wipro: making the business services more predictable, being able to measure them objectively, and embarking on a regime of variable costing. Both our primary data center and the disaster recovery data center are in the scope of Wipro’s services. We operate around 400 branches across the country, and the end-user management is
run by Wipro. The SOC, NOC and Building Management Services are managed by Wipro. We run 35 applications, each with a triple-A, double-A or single-A rating from a business continuity perspective. IT cost is fixed in nature, and in any industry, trying to variablize that cost is business prudence; we built objective methodologies to ensure that business and economic cycles are managed by variablizing the cost. It may not be completely variablized, but it is variablized to the extent that it is not absolutely fixed, irrespective of the business or economic cycle. I think from a company perspective it is already a big win for us. Being able to variablize cost, and to objectively measure the benefit that accrues from it, is a huge plus.
Rewards of outsourcing infrastructure management Tata AIG has moved management of 80 percent of its IT operations through Wipro’s Global Service Management Centre (GSMC), enabling Centralized Remote Management. This has resulted in flexible cost structures by aligning costs of IT operations and management to business requirement. Wipro’s scope of service also included implementing tools for monitoring IT infrastructure and application availability and performance; automating identified IT processes through systematic SIPs; and managing application development and messaging services using in-house resources.
Daya Prakash, CIO, LG Electronics
LG has had a philosophy since inception: keep the core and outsource the rest. As far as infrastructure and applications go, the majority of our operations are outsourced. I would say that over 90 percent of our IT operations are outsourced today, and this has been so since the beginning as a fundamental structure of the IT team. So we have a group company, which actually helps us manage the whole application piece, and there is another organization that helps us manage the infrastructure piece, with infrastructure outsourced to various partners. We have the data center in-house, but it is managed by the partners. The benefit is that it helps us focus on the core business. Our internal IT resources are more aligned towards the business. In the process, we don’t end up addressing issues of skill shortage or skill upgradation; for these, we depend heavily on the partners and the partner ecosystem. Cost optimization is another crucial factor: keeping, rotating and upskilling staff not only raises costs, but also poses a lot of operational challenges. Outsourcing IMS is the most efficient way to overcome all these challenges.
Rajesh Uppal, Executive Director (IT) & CIO, Information Technology Division, Maruti Suzuki
Long ago, we decided that managing IT infrastructure is not our core activity; my team should be concentrating on planning, SLA monitoring and control, and should not be involved in the actual execution. Infrastructure management as an activity is outsourced to companies that are the best in these streams, who can invest in managing these activities with their core competencies. So our complete infrastructure, whether it is our data center at MSIL Gurgaon or the hired data center at Bangalore for our dealers, is outsourced to a managed service provider. The IT infrastructure belongs to us, but the whole support team scales up based on the business requirement, which is then factored into the contract. We pay them on the contract terms.
How India is poised to be the next IMS hub Indian players are a strong force in the RIM play worldwide — all industry verticals are now outsourcing their infrastructure and application services to India
Over the past two decades, Indian IT services providers have established a strong presence in ADM, systems integration, and enterprise software and consulting. Today, they are among the best in the world, adopting cutting-edge processes, systems and technology to deliver world-class software to clients. Indian providers now have a new opportunity in Remote Infrastructure Management (RIM) services, a relatively new area estimated to be a USD 500 billion opportunity worldwide, growing at a CAGR of 4 percent.
In the past, India-based vendors were not considered for large end-to-end infrastructure deals by multinationals, due to their inability to scale up and their lack of experience in this space. Over the years, India-based providers have invested significantly in building RIM capabilities: in people, technology and infrastructure, and in building innovative solutions to compete with global players. Some of the key reasons Indian providers have been able to break into the IMS/RIM space are:
• Growing maturity of Indian RIM service providers, and the availability of native and custom remote management tools and software to deliver such services securely
• Early adoption of global standards like ISO 20000 and ISO 27001, and best practices like the Microsoft Operations Framework (MOF) and ITIL V3
• Growth in the telecom sector, leading to availability of bandwidth at affordable costs
• Adoption of global best practices in infrastructure management and global delivery models, which has enabled Indian providers to offer
significant cost arbitrage with little impact on quality
• Significant investment in training and nurturing India’s large pool of English-speaking technical talent
While initial interest in India as a RIM provider was mainly due to the significant cost arbitrage, this was soon overtaken by other benefits, such as labor arbitrage, productivity improvements through improved operations, productivity from better tools and incremental value from value-added services. Today, Indian players are a strong force in the RIM play worldwide. They have acquired or built data centers, invested in strong alliance ecosystems and partnerships, invested in building point and generic infrastructure solutions, and taken assets onto their books to break into large deals. In many ways, Indian providers have challenged the status quo in the IMS space, and they continue to evolve service models and challenge the leaders. International players like IBM, CSC, HP and Accenture also leverage India as the preferred center for delivering RIM services to their customers. The RIM segment in India has been in existence for only about a decade. From providing IT help desk services in 2000-02, Indian vendors have moved up the value chain, and today provide
end-to-end services in the space, including infrastructure consulting, which comprises IT transformation, green IT, cloud, etc. Continuing with the global delivery model (GDM), offshore providers are increasingly leveraging tools, accelerators, intellectual assets and innovative commercial models to maintain cost efficiencies. With a growth of over 30 percent CAGR during the initial years, and above 15 percent CAGR sustained for the past couple of years, RIM has contributed significantly to the country’s overall IT services exports. One significant innovation that occurred much earlier in RIM than in ADM services is the disruption of linearity between revenues and effort. By moving to non-linear commercial models early, Indian providers addressed customers’ need for economical services and their own need to reduce the cost of talent. All industry verticals are now outsourcing their infrastructure and application services to India. While some of this takes the form of setting up captives (predominantly among BFSI organizations), others are adopting more conventional outsourcing models. It is relevant to add that organizations are also opening up to outsourcing strategic work, including consulting, enterprise architecture, engineering and design. This also alludes to the fact that Indian
providers have added significant capabilities in delivering these transformational programs. In spite of these trends, some organizations are unable to leverage the RIM opportunity due to regulatory, legal and geo-political reasons. Some of these regulations are in the information security and data management areas (again, predominantly BFSI, federal and defense). We believe that the increasing focus on ensuring data security, and plans to provide services locally, will enable Indian providers to penetrate these segments.

Benefits of outsourcing RIM to India: productivity improvements through improved operations (5-10 percent), productivity from better tools, and incremental value from value-added services, which together add up to total client value
RIM in the domestic market
Domestic IMS started with facility management and end-user support services. These services were provided at client locations in large banks and government enterprises, and were typically staff augmentation roles supplementing full-time employees. However, with the rapid adoption of IT in both the public and private sectors, there is a shift towards total outsourcing in the domestic market. We are all aware of the Indian Railways, India Post and Income Tax deals in the public sector, and of large deals in sectors like telecom and banking. The Indian RIM services market is poised to grow at 20 percent CAGR, and providers are gearing up to meet the domestic demand, which has grown from over USD 238 million in 2006 to about USD 520 million - USD 550 million in 2011. We believe that Indian enterprises will now shift from on-premise computing to cloud computing in the near- to mid-term, taking advantage of this technology disruption. A related significant development is the growing investment in setting up large data centers with cloud capabilities by Indian players like Reliance, Tata Communications and CtrlS. These companies are already servicing the burgeoning demand for hosting, both for enterprises and for individual users. With the Indian government’s IT policy charting out a road map for taking the government to the citizens,
it is certain that large investments will flow into modernizing IT infrastructure and into services to manage and maintain those assets. This is once again an opportunity for domestic and MNC providers to compete for the IMS pie in the country. To conclude, the RIM services segment is expected to clock over 20 percent year-on-year growth in India. To its advantage, India continues to be the leading offshore location for delivering RIM services, with little or no competition from other global locations. With maturing tools, processes and security controls, global enterprises are keen to outsource their infrastructure management services to India. However, to maintain this leadership, providers need to focus on innovation in service delivery, automation and transformational projects. Providers also need to meet client expectations on cost and quality. The ongoing global economic instability may once again force clients to reduce their operational costs, leading to increased outsourcing of infrastructure management. Providers should be ready to grab this opportunity and help clients achieve their cost and quality objectives through innovative solutions. The Indian government should also enable this segment of IT services by implementing forward-looking policies on telecommunications infrastructure and broadband availability. The Indian government should further join hands with developed countries in implementing strict data security laws. This will instill more confidence in multinational corporations choosing India as the location for managing their enterprise infrastructure.
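As a quick sanity check on the market figures cited above (an illustrative calculation, not from the original article), the implied compound annual growth rate of the domestic demand between 2006 and 2011 can be computed:

```python
# Illustrative CAGR check for the domestic IMS market figures cited above.
# Assumes demand grew from USD 238 million (2006) to USD 520-550 million (2011).

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

low = cagr(238, 520, 5)   # lower bound of the 2011 estimate
high = cagr(238, 550, 5)  # upper bound of the 2011 estimate

print(f"Implied CAGR: {low:.1%} to {high:.1%}")
```

This lands at roughly 17-18 percent, broadly consistent with the 20 percent CAGR projection quoted in the article.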
Over the years, India-based providers have invested significantly in people, technology and infrastructure for building RIM capabilities. Today, global enterprises are keen to outsource their infrastructure management services to India
The article has been written by Rama Murthy Prabhala, AVP, Practice Head - Manufacturing at Infosys Limited, and Rahul Joshi, Principal Consultant, Financial Services and Insurance at Infosys Limited.
InformationWeek, July 2012, p. 35
‘Cloud is changing infra management landscape’
The advent of cloud is making fundamental changes in three key components of infrastructure management: people, processes and technology
The biggest change cloud is making to infrastructure management (IM) is re-defining it from being technology driven to running an IM business for end users — evolving the concept of IT as a service. This is a change at the philosophical level, as every existing resource, component and element needs to be re-examined in the light of being a business. Let's see how cloud is affecting the key components of infrastructure management:
Traditionally, infrastructure has been silo-based, and the same has been the case with the teams managing networks, data centers or end users. With cloud in play, managing IT as a service is equivalent to managing a business in a distributed environment — compelling employees to re-skill in the short to mid term. The IM teams will have to be trained in areas such as service management, vendor/supplier management, and end-user experience management.
From a process perspective, ITIL has been the backbone of traditional IT infrastructure management; it was developed to manage primarily in-house infrastructure. The ITIL service management framework establishes processes for ensuring availability, continuity, security and all other aspects of service level management. In the cloud world, IT departments will offer IT as a service within a hybrid ecosystem. So availability, continuity, security, etc., will depend on multiple providers, each with its own level of sophistication — making it much more complex to stitch everything together and drive efficiencies while delivering business value. Cloud also demands a change in the level of granularity required to be managed —
the focus must be more on processes that are critical for business users. The management of service metrics and performance parameters that are offered by the outsourcer also becomes very critical. SLAs (which traditionally have been more technical in nature) will change to business SLAs as you are offering IT as a service. Service uptime will be most important and SLAs will have to be managed accordingly.
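To make the shift from technical to business SLAs concrete, here is a minimal sketch of how a monthly uptime SLA check might be computed (illustrative only; the 99.9 percent target and the metric shape are assumptions, not details from the article):

```python
# Minimal sketch of a business-SLA uptime check (illustrative assumptions:
# a 99.9 percent monthly uptime target and outage durations given in minutes).

MONTHLY_MINUTES = 30 * 24 * 60  # minutes in a 30-day month

def uptime_percent(outage_minutes: list) -> float:
    """Percentage of the month the service was available."""
    downtime = sum(outage_minutes)
    return 100.0 * (MONTHLY_MINUTES - downtime) / MONTHLY_MINUTES

def sla_met(outage_minutes: list, target: float = 99.9) -> bool:
    """True if measured uptime meets the business SLA target."""
    return uptime_percent(outage_minutes) >= target

# Example: two outages totalling 50 minutes in a 30-day month.
print(round(uptime_percent([20, 30]), 3))  # 99.884 -> a 99.9 SLA is missed
```

Even 50 minutes of downtime in a month breaches a 99.9 percent target, which is why uptime-centred business SLAs change how the underlying providers must be managed.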
In the cloud era, we are looking at a single pane of glass for all IM — be it physical, virtual or cloud (private or public). Management platforms will combine self-service with process automation to accelerate compute resource delivery times and improve the IT service delivery experience. The automation platform should allow companies to select from several pre-defined automation workflows, set up different processes for different groups, and customize the service delivery. Chargeback will be another critical component of technology in the cloud world. From a monitoring perspective, dedicated converged infrastructure coverage will be important — auto-monitoring of VMs, detecting new VM guests, auto-applying monitoring, and policy-controlled alerts that distinguish failures from deliberate removals will be some of the critical features to look out for. While the underlying pieces of infrastructure will remain the same with the advent of cloud, the increased business focus in the new environment will make significant changes to the way we approach IM across all the components. IT as a service is the new delivery paradigm, and IM will continue to evolve to handle this new paradigm effectively.
— Vaibhav Tewari is VP & Business Head - Cloud Services, Microland
‘Customer demands for infrastructure management are more complicated’

Maninder Singh Narang, Vice President & Global Head - End User Computing, Application Operations & Shared Services, HCL Technologies ISD tells Amrita Premrajan about the major trends he is witnessing in the space of end-user computing and information technology as a whole, and talks about the fresh challenges CIOs are dealing with today due to the fast evolving IT landscape.

What are the various end-user computing services provided by HCL Infrastructure Services Division, and how are these different from desktop management services?
Our gamut of services under end-user computing goes beyond desktop management services. Desktop, laptop, tablet, PDA or BlackBerry is basically the interface — there is a lot of technology behind it that includes not just the hardware (the devices) but also the application that sits on the device, messaging and collaboration (e-mail, Lync or a chat platform), and Citrix-based VDI or VM-based VDI. So, there is a lot of data center technology and network that sits behind end-user computing. Our service in end-user computing is not just the management of the device but all of the encompassing end-user computing technologies. It also includes user provisioning — you have the Active Directory, so you define policies around users. It is a fairly complex piece but one integrated offering. Today, the user is looking at a great experience with any device, and the enterprise is looking more at provisioning, policy, security, ubiquitous access, control, and compliance. We have put together a holistically integrated service because we want to create the user experience and not just manage the device. So there is a set of people who first address the clients, which is the service desk as an offering. Then you have a Desktop Management team, which manages your applications. After that, there are subject matter experts who enable the entire technology, which involves virtualizing your application delivery to the desktop. Finally, there is messaging and collaboration, because 80-90 percent of our work today happens on e-mail — we are using less of the phone and more of chat, social collaboration, social networking and messaging. We call this entire gamut of services Managed End User Computing.

We started with the mainframe way of functioning, then we moved to the client-server model. How has this model evolved over the years?
If you go back 30 years, the proposition was the mainframe, where everything was centralized, and you had those dumb terminals. But now the world has moved from the mainframe to client-server; the great benefit of client-server is that the desktop workstation takes the processing power away, makes it cheaper, and enables you to communicate on a network. You can keep the graphic interface here, part of the application can reside on the front-end, and the processing can happen there. So, we changed the mainframe — we said it is junk, the technology is outdated, and everything will now be client-server. What has happened now is that we have again moved away from the client-server model towards the mainframe model, saying that app virtualization and device virtualization is nothing but everything done centrally, irrespective of the type of device used for accessing apps. The moment you say virtualization of an app, it means the entire app is actually sitting in the data center now. The technology has become a little more open ended. The browser has become the ubiquitous interface, and there is a lot more evolution that has happened in that technology. Obviously, the user
experience is a lot better. So, if you really look back from our mainframes to the current day, it is basically cannibalization of one after the other. The IT industry has remained where it is because of its ability to cannibalize itself, and then re-invent itself.

What are the major trends you are witnessing that have the potential to drastically change the way IT has been functioning traditionally?
A key trend that we clearly see is the consumerization of IT, which means that the device is no longer enterprise determined. More enterprises are also letting employees buy their choice of device, as it takes away the burden of supporting that hardware. While the user manages the device, the enterprise delivers the applications seamlessly. This is driving BYOD and blurring the lines between consumer and enterprise technology. A lot of activity is happening on the security and access control front. The security policy that governs my personal work is different — I use a different image there. And when I connect to my enterprise, I use a different image. All the security controls and compliance are enforced at the network, firewall and data center levels. To control access, one may implement many policy-driven restrictions. The second key trend is that social networking is getting adopted in peer-to-peer and group communication. About five or six years ago, a majority of our customers would have 90-95 percent voice and 5-10 percent e-mail support for end-user computing. Today, it is about 30-40 percent chat/e-mail support. This dramatic increase is because of the greater acceptability of social networking. The third significant trend is instrumentation and automation. For example, you have a desktop running XP — every time there is an update, Microsoft asks you to apply that patch. Following this, the IT teams at the backend ask you for downtime to apply the patch, which is quite
intrusive from an end-user point of view. Now look at an iPad or an Android device: when a patch is released for these devices, they automatically tell you about the patch availability and auto-update the system the next time you connect. A lot of instrumentation and automation has gone into enabling this technology. The fourth trend that we clearly perceive is the emergence of the global delivery model. We were earlier perceived as an offshore company, but today we have delivery centers all over the world that are servicing clients across geographies. So the global delivery model has emerged, and many of us are seeing that our larger wins are coming from the areas where we are demonstrating our ability to deliver globally to our customers. The fifth key trend is the impact of open source on technology. For example, Android has had a big impact on the operating system market and on the cost of delivery of technology. The penetration of open source technology may be low today, but it will increase dramatically over the years as the open source market becomes a little more consolidated. So, if someone can give me the same level of availability, reliability and maintainability on a Linux desktop, why not use, say, Ubuntu, rather than pay about USD 600 for Microsoft software? The sixth trend is the impact of cloud — a re-invented term, in my view. If you go back to mainframes, it was always cloud and it was all virtual. Cloud will cause a disruptive pattern in the consumption of IT — which today is very definitive. I think it will become undefinitive and very discrete. So, cloud will become a utility. I think we are still far away from a truly ubiquitous model, but the technology available to deliver this service will get there in the next few years.

How have customer demands changed over the years? What are the fresh challenges CIOs are facing today?
The demands from our customers are now more confusing and complicated.
The traditional enterprise control-based model said: ‘I will give you the device and I will tell you which application to use.’ The model has now changed to: ‘You bring your own device, you will manage two images, you will access it anywhere on the planet, and you will use any network.’ The data center will be hosted somewhere else; part of the applications will be cloud-based and some will be internal (on-premise). This new model is posing fresh challenges to CIOs. Apart from this, the business head asks the CIO why he is spending so much money on something he can buy on a shared basis from outside, and asks him to cut the budget by 20 percent. Then the security team tells him that they won't allow certain things because they might compromise IP protection, information security, etc. So, there are many conflicting demands, and at the same time churn is happening. Customers are therefore looking for someone who can first help them in the transformation from classical enterprise IT support to this new model, where there is pervasiveness of devices and consumerization of IT. They have to manage a heterogeneous IT environment, which is now self-provisioned with cloud, along with the transformation of network, security, and controls. This is a difficult task, but most of our propositions today are focused on meeting these three customer requirements: help them manage consumerization; instrument and automate as much as possible so that provisioning becomes faster and management becomes more reliable and secure; and combine public and private. For example, there are many customers where messaging is partly in-house — for their most privileged users. But for dealers and the extended organization, it is all on the cloud — so you marry both of them.
— Amrita Premrajan
‘There is a lot of ISV uptake on cloud’

According to a study by Zinnov, the market for cloud computing services will touch USD 4.5 billion by 2015. Service providers, ISVs, vendors, and data center specialists want to tap this huge opportunity by preparing their offerings for cloud services, such as IaaS or SaaS. And IBM wants to help service providers roll out their offerings quickly and cost effectively. Service providers find the huge cost of infrastructure a stumbling block. Also, there are challenges in managing the middleware and retaining skilled employees. Dharanibalan Gurunathan, Executive, Offerings Management and Development, Global Technology Services, IBM India/South Asia updates InformationWeek on a platform that helps ISVs get around these challenges. Some ISVs are already using this platform to deliver cloud solutions to customers in the banking and HR segments.

Can you update us on cloud adoption from an Indian context?
There are a couple of areas where I see cloud being adopted. Firstly, the service providers are jumping onto the cloud bandwagon. They are looking at offering various services on the cloud. For instance, co-location providers want to get on to the cloud. I also see many managed service providers preparing to offer cloud services. Typical services include the desktop being available on the cloud, servers being made available as VMs, and storage services being provided on the cloud. From a consumer perspective, there is a lot of ISV uptake on cloud. ISVs are looking at leveraging the cloud to get that critical reach to their customers and to make services available in a rapid and elastic model. There is some low-hanging fruit in the cloud area. Most of the providers are quickly taking on Infrastructure as a Service.

How is IBM tapping this opportunity? How is it partnering with ISVs and service providers?
We are running multiple programs. For instance, we work with service providers and have a unique partnership program. We work together on their business plans, and help them in areas that are techno-commercially attractive to them. We take a hard look at their business models and offer them a build, operate and manage cloud model. For instance, we helped a large data center services provider in Bangalore to build the complete platform so that they can start delivering services to their customers. To begin with, they will start off with IaaS and then scale up that model to offer other services. So we offer fairly straightforward private cloud implementations and we added the managed services
element behind it. Then there is the IBM Developer relationship program where we encourage and support ISVs. We help them port their applications and get the application ready on IBM and non-IBM platforms. In India, we have 300 - 400 ISVs who develop programs and offer services on the IBM platform. A number of these providers are aspiring SaaS providers. But they struggle to deliver services as it calls for huge investments in infrastructure. Then there are challenges in managing the middleware and retaining skilled
employees. They also want the ability to scale infrastructure up or down depending on the business. They have peaks and troughs, and they seek help from partners like IBM for the troughs. That's why we created the IBM SmartCloud Enterprise offering. Here we provide IaaS, compute and storage. And we have the ability to expand with images of IBM software on it. So this is a built-to-purpose model. And ISVs are leveraging this to offer services to their end customers. For instance, there is a SaaS provider in Gurgaon that has 50 - 60 customers. Their capability is on the
IBM SmartCloud Enterprise. It offers a pay-as-you-go pricing model and enterprise-grade features like VPN and private IP.

How is IBM helping user companies get on to the cloud? What is the value that it offers?
We helped a co-operative bank in Bangalore through a partnership with an ISV called Infrasoft, which is doing a lot of work in the core banking space. We do a joint sales force exercise and pitch to banks together. We try to sell the IaaS services and the
core banking software comes from this partner. The customer signs up for the core banking features and functionality. And we provide a shared Infrastructure-as-a-Service cloud offering, which is managed and hosted by us. So, it is a good value proposition for the banks as they do not need to spend money upfront for acquiring the equipment and the licenses. They get a per-branch, per-month/year pricing model. The banks can immediately have infrastructure, and when they add branches, they get a predictable IT spend for each new branch that is opened. Apart from banking, we have done a fair amount of work with HR IT providers or ISVs.

What is IBM's data center strategy?
Our strategy is to sign long-term contracts with data center providers, and we have our equipment in a cage. We have dedicated connectivity to that data center, so we are able to manage our infrastructure from an IBM command center.
— Brian Pereira
Reliance Communications dials in to open source for competitive advantage One of the biggest players in the Indian telecom sector is setting a precedent for other telecom companies by aggressively adopting open source By Srikanth RP
Even as Indian telecom companies continue to add subscribers at a fast pace, the increasingly competitive scenario is placing new demands on telecom service providers. They not only have to provide the most cost-effective rates, but also have to innovate to quickly roll out new services. Any plan to roll out a new service calls for balancing high capital expenditure with the flexibility and agility of a technology platform that allows a telecom service provider to do so. Globally, an increasing number of telecom service providers are looking at open source software, as it not only reduces the overall cost of ownership, but also gives them access to the source code and hence the ability to customize and innovate. In India, Reliance Communications has taken the lead and adopted a holistic open source strategy. The firm has been making extensive use of open source software in place of proprietary products to promote open standards and facilitate plug and play in IT operations. Explaining the strategic intent of adopting open source, Alpna Doshi, CIO, Reliance Communications, emphasizes that the company has traditionally been actively engaged with the open source community both as a consumer and as a contributor. “Reliance Communications is required to maintain a large IT infrastructure — both at the data center and at the user desk side — to meet its business requirements. To manage operations of this scale, it is essential to standardize the technology stack. We opted for open standards so that we can benefit from the huge talent pool
in the open source ecosystem, while contributing to it simultaneously.”
Addressing change management
Reliance Communications’ open source journey started two years ago, and is perhaps one of the best examples of how a company can embrace the technology. Acknowledging that the adoption of open source could pose technical and cultural challenges, the firm followed and showcased best practices of change management to identify and mitigate the envisaged risks of the transition. All stakeholders were taken into confidence before the actual cutover, which resulted in fewer end-user issues. Additionally, the firm took a number of initiatives to popularize and increase acceptance of client-side open source software among end users. It conducted a number of face-to-face and web training sessions on open source, promoting the benefits of using open
source source software through screen savers, posters, and in-house campaigns. FAQs were created and circulated to all users. To ensure proper support, the IT helpdesk and engineers were trained extensively. A forum with chat support was created, enabling users to raise queries that were answered by experts. Open source champions were appointed on each floor and certified; these champions then trained other users.
In the next stage, key areas of operation were evaluated for their suitability for deploying open source software. The key areas identified were: client-side software (desktop operating system, mail client, office suite, virtual desktop and project management tools); data center applications (server OS, proxy software and other intranet applications like FTP, DHCP, DNS, file servers and print servers); the Network Management System (NMS) to remotely monitor and manage network links and devices; and web hosting applications for internal and external websites.

“With advances in IT, the advantages of proprietary applications will break down against the storm of innovation brought by open source products. Open source coupled with open standard platforms is going to be the hub of future innovation”
— Alpna J Doshi, CIO, Reliance Communications

The choice of using open source software for user machines was made at a much more granular level. Instead of identifying functions, the role of each individual employee was evaluated for suitability to proprietary versus open source software. The Standard Operating Environment (SOE) image carried only open source software; exceptions were made for a few users with exceptional requirements. For a large company like Reliance Communications, the biggest deployment of open source was on the client side. The firm chose OpenOffice as the office suite and Thunderbird as the open source mail client. Ubuntu was tested for its compatibility with all applications used in the company. Similarly, OpenProj was used by users involved in project management and tracking, while Ulteo was evaluated as a platform for virtual desktops. Among these projects, the biggest challenge was the deployment of OpenOffice. “Adoption of OpenOffice was a big challenge as there was huge inertia from both users and businesses. Businesses were requested to identify users for whom the use of OpenOffice was not practical. For the remaining users, numbering in the thousands, OpenOffice was deployed using Marimba,” states Doshi. The open source solution Scalix was implemented on the server side, coupled with the open source mail client Thunderbird. The firm was able to improve hardware efficiency by using these light-weight applications. On the data center side, various flavors of Linux were used for the server OS, such as RHEL, Fedora, OEL and CentOS. The choice of OS was determined by compatibility with other software. Today, the use of Linux has been well accepted for business-critical applications, with an adequate level of support available for enterprise-grade versions. The firm also maintains several open source proxies for Internet access for thousands of internal users.
The proxy is well hardened and allows
[Sidebar table: Open Source @ Reliance Communications — categories include the office suite (OpenOffice), mail client (Thunderbird), project management tool (OpenProj), and server OS (Red Hat Enterprise Linux, Fedora, OEL and CentOS).]
communication only on selected ports to preclude malicious attacks. URL and content filtering are also enforced. Users are segmented based on their Internet usage requirements. The choice of software in the data center was made by evaluating the software on the parameters of features, scalability, manageability, availability of skills in the market, and compatibility with other software. Further, the open source NMS has been configured for several thousand links and devices. “We have developed additional features for these open source plug-ins, which make them suitable for our business requirements and enrich the product. Extensive use of online forums was made to seek help in development, as well as to contribute back. We also customized it to create a Business Process Management (BPM) view incorporating all devices, such as servers, links and network devices, for mission-critical services. In case of failure, alerts are sent to all stakeholders,” explains Doshi. A large number of web applications were hosted on open source platforms like JBoss.
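The alert fan-out Doshi describes — monitoring devices and notifying all stakeholders on failure — can be sketched roughly as follows (a simplified illustration; the device list, status field and message format are assumptions, not details from Reliance's actual deployment):

```python
# Simplified sketch of NMS-style failure alerting (illustrative only).
# Real open source NMS tools implement this with check plugins, scheduling
# and escalation; this shows only the core fan-out idea.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    kind: str          # "server", "link" or "network device"
    is_up: bool        # in practice, the result of a ping/SNMP check

def failed_devices(devices: list) -> list:
    """Return devices whose last check reported a failure."""
    return [d for d in devices if not d.is_up]

def build_alerts(devices: list, stakeholders: list) -> list:
    """One alert message per failed device, addressed to every stakeholder."""
    return [
        f"ALERT to {who}: {d.kind} '{d.name}' is down"
        for d in failed_devices(devices)
        for who in stakeholders
    ]

devices = [Device("core-router-1", "network device", True),
           Device("mail-proxy-2", "server", False)]
print(build_alerts(devices, ["noc", "app-owner"]))
```

In a production NMS the `is_up` flag would come from periodic polls, and the messages would go out via e-mail or SMS gateways rather than `print`.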
Open source – not a cost arbitrage issue
What’s more noteworthy about Reliance Communications’ strategy is the fact that the organization’s open source strategy has not revolved around cost, but around flexibility and innovation. “While cost saving can be an added advantage, our prime focus is on
propagation of open standards across the organization, which we believe is the way forward for enterprise as well as desktop applications. There have been instances where we have invested heavily in training and migration to adopt open standards,” states Doshi. Due to its focus on open source, the firm believes that in the next few years Reliance Communications will derive multiple benefits of open standards, like collaborative development, access to innovations, and agility. “With advances in IT, the advantages of large, monolithic, proprietary applications will break down against the storm of innovation brought by open source community products. The ecosystem of enterprise support for open source applications is also developing rapidly and is reducing the traditional risks associated with open source adoption in enterprises. Industry initiatives like the ‘Wholesale Application Community’ (WAC) are a testimony to the fact that open source coupled with open standard platforms is going to be the hub of future innovation,” says Doshi. For an industry grappling with falling margins and increased competition, this large-scale adoption of open source is a pioneering move, as it can improve the overall competitiveness of the company.
— Srikanth RP
BOI uses SIEM to reduce false positives and boost security The proliferation of devices in the Bank’s data center yielded thousands of logs; it was impossible to manually decipher those logs and make logical conclusions about threats and attacks. So, the Bank of India opted for a solution that correlates various logs, analyzes them, and offers a single dashboard By Brian Pereira
In recent years, more banks have embraced information technology to offer customers services such as Internet and mobile banking. As cyber attacks grow in sophistication and volume, banks have been compelled to invest heavily in data security solutions. Last year, the RBI issued detailed guidelines on IT governance, information security, and cyber fraud for the Indian banking industry, and SIEM (Security Information and Event Management) tools are one way to ensure compliance. Now every bank is in the process of deploying — or has just deployed — SIEM. The Bank of India deployed a SIEM solution in 2010, becoming the first public sector bank to do so. SIEM tools provide real-time analysis of security alerts generated by network hardware and applications. They are also used to log security data and generate reports for compliance purposes. SIEM solutions are known for their superior log management capabilities and their ability to correlate events. Back in 2010, the Bank was facing many security challenges. As the number of devices in its data center increased, a huge volume of security logs was generated. And because of the different types of devices, there was much diversity in the format of the log files, making it difficult to read the logs and correlate all the recorded incidents. “Our data center and DR site have more than a thousand devices, and each generates a lot of logs. There are various logs relating to systems, access control,
“RSA’s SIEM solution has narrowed down the window between detecting an incident and the time taken to respond to an incident”
— Sameer Ratolikar, CISO & Head - Business Continuity, BOI
security events, etc. So it was becoming increasingly difficult for us to manually monitor the logs of all these devices,” says Sameer Ratolikar, Chief Information Security Officer & Head-Business Continuity, Bank of India. So the Bank looked for a solution that would correlate various logs, analyze these logs, and offer a single dashboard. The other challenge was coping with the growing sophistication of the attacks. Hackers use different modus operandi and there is also mutating malware — so it was becoming difficult to detect or trace the attacks. “At that point, we had a point or siloed approach to detect the attacks, and I was looking for a more intelligent way of doing this. So I would ask peers if they could trace the source of the attacks, if the same hacker or malware was also targeting other institutions, and what is the impact of the attack. We searched the history related to the attack. So all this information relating to the periphery of the attack gave me input in the form of a threat intelligence report,” informs Ratolikar. Apart from this, there were also
many false positives. So there were three main criteria that the solution had to address: threat intelligence, complexity of attacks, and analysis and correlation of logs. The solution had to determine if a particular attack was also directed at other systems, such as the routers, the Internet banking system, the intranet, etc. Two other key criteria were simplicity of the dashboard and the reduction of false positives. Before the solution was deployed, false positives accounted for 40-45 percent of total incidents.
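To make the correlation idea concrete, here is a rough, hypothetical sketch (the log formats, field names and threshold are invented for illustration; this is not Bank of India's or RSA's actual logic): raw lines from two different device types are first normalized into a common schema, then an alert is raised when one source IP appears in several events across devices within a short time window.

```python
import re
from collections import defaultdict
from datetime import datetime, timedelta

# Two invented raw-log formats, standing in for the many device types
# whose diverse log files a SIEM must read and correlate.
FIREWALL_RE = re.compile(r"(?P<ts>\S+ \S+) FW DENY src=(?P<src>\S+)")
APP_RE = re.compile(r"\[(?P<ts>[^\]]+)\] LOGIN-FAIL ip=(?P<src>\S+)")

def normalize(line):
    """Parse a raw log line from either format into a common event schema."""
    for source, pattern in (("firewall", FIREWALL_RE), ("app", APP_RE)):
        m = pattern.search(line)
        if m:
            return {"source": source,
                    "time": datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S"),
                    "src_ip": m["src"]}
    return None  # unrecognized format

def correlate(events, threshold=3, window=timedelta(minutes=5)):
    """Yield any source IP seen in >= `threshold` events across devices
    within a sliding time window -- a toy stand-in for SIEM correlation."""
    by_ip = defaultdict(list)
    for ev in sorted(filter(None, events), key=lambda e: e["time"]):
        by_ip[ev["src_ip"]].append(ev["time"])
        # keep only the events that fall inside the window
        by_ip[ev["src_ip"]] = [t for t in by_ip[ev["src_ip"]]
                               if ev["time"] - t <= window]
        if len(by_ip[ev["src_ip"]]) >= threshold:
            yield ev["src_ip"]
```

A dashboard or report would be built on top of alerts like `set(correlate(normalize(l) for l in lines))`; the point is that events from a firewall and an application server, in different formats, end up comparable in one place.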
After an evaluation process, the Bank opted for RSA’s SIEM solution, enVision. HP’s ArcSight was among the other solutions shortlisted. enVision is a centralized log-management service that enables organizations to simplify compliance programs and optimize security-incident management. “We found that enVision was simple to configure. It was also easy to deploy on various devices,” asserts Ratolikar. Ratolikar did not find it difficult to convince his management about the benefits of this product, and why it was the right solution for the Bank. Managing more than 1,300 events per second (EPS) is a Herculean task — and these alerts come from various devices in the data center. This can only be done by a robust SIEM log management solution. The management at the Bank acknowledged this and gave its approval.
During the implementation there were challenges with router configurations. But with the support of HP (implementation partner) and RSA, these were resolved and the solution was deployed in three months. An expert from RSA was flown in to train five persons at the Bank. RSA’s enVision was first deployed in a non-production/UAT (user acceptance testing) environment, which is an isolated environment.
It has been 18 months since RSA enVision was implemented at Bank of India. Ratolikar and his team are satisfied with its performance. Apart from detecting many attacks, it has also reduced the time between an attack and a suitable counter response.
Also, the number of false positives has decreased drastically, from 45 percent of all incidents to just 5-10 percent. In addition, the team now has a consolidated view of all the threats, with information gleaned and correlated from thousands of logs. “With this tool we detected attacks originating from China, Japan, N. Korea, Nigeria, and other African countries,” informs Ratolikar. “After you detect an incident, the time taken to respond is a crucial factor. With this SIEM solution that window has narrowed down.” But despite all these attacks, Ratolikar has peace of mind. Calmly sipping his cup of lemon tea during an interview with InformationWeek, he says that all systems can withstand the attacks and continue to run smoothly. Should the worst happen, Ratolikar can easily switch over to his DR site. In fact, there are quarterly drills during which all operations are run from the DR site, and the primary site goes offline in a planned manner. Bank of India is now looking for more intelligence in the next version of this SIEM tool, which will offer forensics at the packet level. The tool under consideration is RSA’s NetWitness, a network forensics tool.
The bank was looking for a solution that could address three main criteria: threat intelligence, complexity of attacks, and analysis and correlation of logs. The solution had to determine if a particular attack was also directed at other systems, such as routers, Internet banking system, intranet etc.
u Brian Pereira firstname.lastname@example.org
How the Future Group transformed its supply chain The implementation of a WMS from Infor has helped the firm ensure minimum shrinkage and maintain a high level of inventory accuracy By Srikanth RP
In a dynamic market, every large retailer depends on a well-oiled supply chain network to manage and maintain high volumes of inventory. As retail volumes increase, most retailers struggle to keep track of the millions of Stock Keeping Units (SKUs) across different product categories. Future Group, India’s biggest retailer and owner of numerous successful brands, such as Pantaloons, Big Bazaar, Central and HomeTown, was facing a similar problem. As the volume of retail business handled by Future Group increased exponentially, it was facing a massive challenge in handling a large number of SKUs spread across many different product categories, including fashion, home, food, furniture and consumer durables. Understanding the critical importance of the supply
chain to the company, the firm set up India’s first end-to-end supply chain services company, Future Supply Chains. Over the years, the firm developed expertise in supply chain management of consumer product categories, such as fashion, food & FMCG, home decor and furniture, consumer durables & electronics, and general merchandise. The company has a footprint of over 6 million square feet of warehousing space strategically located across India. As demand grew, the company fine-tuned its systems and designed supply chain solutions. Today, the firm is the first organized intra-city transportation services company in India — carrying out not only B2B deliveries but also B2C deliveries in the form of thousands of home deliveries every day across the country, especially for furniture and consumer durables. Future Supply Chains had
“We are now able to handle a very large number of SKUs and ensure 100 percent stock accuracy. It is also possible to do stock corrections in real time since all entries are online”
warehouses spread across the country with very little infrastructure. As the warehouses were manually operated, it was getting extremely difficult for the firm to keep track of inventory. Retail stores started reporting lost sales because goods could not reach the stores on time. The company responded to this challenge by first rationalizing the number of distribution centers and later consolidating into larger distribution centers. However, the number of SKUs and volumes had increased manifold, which necessitated a state-of-the-art Warehouse Management System (WMS). For Future Supply Chains, having an effective WMS in place was a necessity, not a choice. “For any supply chain company, warehouse management is at the core of supply chain management,” states Samson Samuel, COO and CIO, Future Supply Chains. After evaluating a number of WMS products, the firm zeroed in on Infor, as it believed the product was more scalable and reliable than competing offerings. Infor also had an ecosystem of existing partners, as well as a strong presence across India in terms of manpower, offices and clients. The firm went in for an all-out approach, which ensured that the company was able to implement Infor SCM WM at 18 locations in a relatively short period of just 18 months.
Boosting supply chain efficiencies
Samson Samuel
COO and CIO, Future Supply Chains
Post implementation, the benefits of having a robust WMS are paying off for the firm. Today, all 18 warehouses are
WMS enhances supply chain visibility
Stock replacement and refill rates have reached an efficiency level of more than 90 percent
Ability to ensure 100 percent stock accuracy as the stock is totally visible online
working at efficiency levels of more than 90 percent. A pull system based on actual demand has now replaced a push system that ran on forecasted demand. For example, previously, for categories like fashion, stock consisting of a few pieces of apparel could not be traced, whereas today even the last piece can be traced. The Infor WMS has also helped the company rationalize its warehouse structure design decisions. “The company is now able to handle a very large number of SKUs and ensure 100 percent stock accuracy, as the stock is totally visible online. It is also possible to do stock corrections in real time since all entries are online,” explains Samuel. The speed of processing too has improved. The company now receives goods on a sampling basis. Previously, the staff used to scan every single consignment, as they were never sure of what a vendor was sending or not sending. Even minor errors in SKUs now get corrected online. Labor productivity and efficiency have increased significantly — through the
All warehouses are working at 99 percent efficiency
WMS, Future Supply Chains is now able to measure labor productivity and keep track of what each person is doing or contributing. The WMS also ensures that the company has visibility of stocks across the distribution centers. While shrinkages have gone down, stock replacement and refill rates have reached an efficiency level of more than 90 percent. In addition, Future Supply Chains no longer has to carry out lengthy, manual stock checks or do a wall-to-wall count of what is where in the warehouses. Earlier, it had to do all these activities by stopping the business for a few days; now it can do the stock check effectively without stopping the business. As a result of the WMS, today all warehouses are working at 99 percent efficiency. It has also helped in establishing a process-based model of business, which ensures minimum shrinkage, high accuracy and guaranteed on-time distribution to the customer.
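The pull-versus-push shift described above can be sketched in a few lines (a simplified illustration with invented SKUs and thresholds, not Infor's actual logic): a push system ships whatever the forecast dictates, while a pull system reorders a SKU only when recorded sales drain its stock below a reorder point, so actual demand "pulls" goods through the chain.

```python
def push_replenishment(forecast):
    """Push model: ship what the forecast says, regardless of real demand."""
    return dict(forecast)

def pull_replenishment(stock, sales, reorder_point, lot_size):
    """Pull model: reorder a SKU only when actual sales drain its stock
    below the reorder point. All quantities here are illustrative."""
    orders = {}
    for sku, sold in sales.items():
        remaining = stock.get(sku, 0) - sold
        if remaining < reorder_point.get(sku, 0):
            orders[sku] = lot_size.get(sku, 0)
    return orders
```

With stock of 40 shirts, 35 sold and a reorder point of 10, only the shirt SKU triggers a replenishment order; a slow-moving sofa SKU that stays above its reorder point does not, which is exactly how the last piece of apparel stays traceable without over-stocking.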
Post implementation of the WMS, the company has registered significant improvement in labor efficiency and productivity. Future Supply Chains can now keep track of what each person is contributing
u Srikanth RP email@example.com
How CRIS revamped its data center power and cooling technologies To meet the ever-increasing IT needs of Indian Railways, the Center for Railway Information Systems has revamped its data center and designed it on Tier III parameters with 99.99 percent reliability By Amrita Premrajan
Center for Railway Information Systems (CRIS) was established in 1986 by the Ministry of Railways to provide consulting & IT services to Indian Railways. With Indian Railways’ need for IT services growing by leaps and bounds, CRIS was required to scale up its services. The organization needed to revamp its data center, which was experiencing exponential growth in power, cooling and space requirements.
At CRIS, initially the load of the data center and the facility was served together by a single UPS, which was installed in March 1989. In February 1995, another UPS was installed. In this arrangement, the data center load was disturbed whenever a malfunction developed on the facility load. In order to resolve this issue, a dedicated UPS was deployed for the data center and a separate UPS was deployed for the facility. With ever-increasing load, there was a need to closely look at the power distribution and management technologies in the data center: to evaluate the efficiency of the existing thyristor-based UPS, and to address the lack of systematic arrangement of the server racks, which were placed in a highly scattered manner. In 2006, the process of revamping the UPS arrangement and streamlining the server racks was started, and by September 2008 both issues had been addressed.
Elaborating on the whole process, Jugal Kishore Dhulia, Deputy Manager - Electrical, CRIS, says, “To cater to the electrical load of the data center, our predecessor, Surekha Sahu, Chief Manager, Electrical, CRIS, planned the deployment of 4x120 KVA UPS (with 100 percent redundancy and equipped with an IGBT-based system), which is 98 percent efficient compared to a thyristor-based UPS. These UPSes were installed in N+1 configuration in two groups, and UPS supply was provided through a dual path — up to server rack level by the UPS feeders, sub-feeders and industrial sockets. In this scenario, the redundancy is maintained up to server rack level. The UPSes were divided into two groups, and each group had two UPSes. We planned UPS feeders and sub-feeders in one unit with a separate compartment.” Talking about the overhaul of UPS cabling, he adds, “Previously, power connections were made from inside the rack and the cables were jumbled, but now all network and power cabling lies on a cable tray under the false floor.”
Revamp of cooling arrangement
In order to improve cooling efficiency, CRIS replaced the traditional centralized air conditioning system with a microprocessor-based precision air conditioning system and adopted a hot-aisle/cold-aisle containment arrangement. “Firstly, all the scattered server racks were streamlined in such a way that the racks got cool air from perforated grills provided in front of the racks, forming separate hot aisles and cold aisles. Also, the unused spaces in the racks were covered with blanking panels to improve cooling at the data center. Such restructuring of the cooling arrangement enabled CRIS to achieve a 30 percent energy saving,” informs Dhulia.
The road ahead
With new projects coming up, the organization’s data center requirements are only growing. A separate area in the data center is now being prepared to address the future needs. Sharing insights on the future plans, Dhulia updates, “CRIS is planning to opt for the latest power management and cooling technology for upcoming projects. In this context, modular UPS and In-Row Precision unit according to load and cooling requirements has been planned for the demand that is being generated currently. We are also looking at deploying technologies that would enable rack-level monitoring of power and temperature. Our data center is currently designed on Tier III data center parameters with 99.99 percent reliability and we envision optimizing the data center in line with Tier IV data center for the future IT growth of Indian Railways.” u Amrita Premrajan
Where cloud works Six Flags and Yelp reveal how they’ve made the public cloud work for their businesses By Charles Babcock
Lots of IT organizations are taking a hard look at public cloud services. Two cloud users, Six Flags Entertainment and Yelp, gave us a look at the problems they’re facing in adopting these services and the value they’re getting from them. Picking the right service for your business need is a process loaded with pitfalls. For example, will your public cloud provider’s data service match the needs of your application? Will your provider have enough network bandwidth to guarantee quality service, or will it fluctuate widely when other users’ activities hog the bandwidth? Consider how Six Flags Entertainment is using cloud services. To commemorate the 50th anniversary of its Six Flags Over Texas park this year, the company cut the online ticket price, announcing on its blog: “Yup, less than USD 20 will get you into the park’s first weekend.” (The usual online admission price is USD 40.) To get this price, customers had to buy their tickets on the Six Flags website. Both the blog and the website are hosted on Rackspace’s public cloud service, where Six Flags uses servers shared with other Rackspace customers, pays based on usage, and can quickly add capacity to meet demand. Prior to using Rackspace’s cloud service, a promotion like this would have required expensive newspaper, radio, and TV advertising. Six Flags runs 19 theme parks across the country that more than 25 million people visit a year, and it makes extensive use of the Rackspace public cloud service. Besides its website and WordPress blogging, Six Flags hosts three Facebook applications, e-mail marketing campaigns, and other “public” information there.
Rackspace’s public cloud has a multi-tenant infrastructure where customers pay by the hour. The cloud “has the flexibility to be fired up quickly,” says Sean Andersen, Director of Six Flags’ interactive services. It’s easy to change things and bring in partners, he adds.
What Works In The Cloud
Six Flags uses cloud services to find out what its customers really want from its parks and rides
The company also uses Rackspace’s equipment dedicated to Six Flags’ use, because it considers that more auditable, private, and secure. Transactions, including ticket sales, accounting, and back-office systems, as well as analysis of sensitive business data, are kept at Six Flags’ own data center in Grand Prairie, Texas. Data that must be audited to meet compliance requirements and information that’s deemed sensitive to the USD 976 million-a-year public company’s business are kept out of the multitenant public cloud and held either on its dedicated servers at Rackspace or in
its Grand Prairie data center. But the public cloud has been a boon for marketing. Amusement parks are a highly seasonal business, so “marketing is our lifeblood,” Andersen says. Business picks up in May and June as schools close, reaches a summer peak, and falls off, with a slight bump in October around Halloween. Last May, for instance, Six Flags started the month with about 300 GB of traffic per day. Traffic increased slowly until the week leading up to Memorial Day, when it spiked to about 600 GB a day. To capitalize on these traffic cycles, Six Flags must quickly get its message out about new rides and attractions, then get as many attendees as possible to repeat the message. The public cloud provides the elasticity that a seasonal business needs: When demand spikes in the summer, the cloud supplier provides more servers to meet it. The cloud also allows frequent changes without taking the site down. Much of Six Flags’ social marketing strategy relies on cloud computing. Getting people to talk about events, such as the 50th anniversary celebration, is deeply embedded in the company’s strategy. One of Six Flags’ Facebook applications running in the public cloud lets visitors to the Great Escape park in Queensbury, N.Y., post pictures of their families with Oakley, the Timbertown bear, and then comment on them. The goal is to attract more business as visitors share their experiences on the application. “We try to keep the excitement that fans create going without hitting them over the head to buy, buy more,” Andersen says. On its Facebook page, 2.5 million people have clicked on the “like” button in a Facebook app that tells about each theme park. (That may be
because clicking on “like” is the only obvious way to get to information on the parks—but a couple million “likes” are worth something.) By putting the Facebook applications in the public cloud, Six Flags is able to drive large bursts of traffic to the main website without needing to expand its data center, Andersen says. Six Flags CIO Michael Israel and other top execs back the cloud and social strategies, with Six Flags CEO Jim Reid-Anderson’s name on the May 27 blog touting the SkyScreamer, Dare Devil Dive, and ZoomAzon Falls additions to the parks this year. Six Flags also finds it easier to share systems with partners using the public cloud. Last year it did promotions with event producer Dick Clark Productions, which Six Flags co-owns, and Coca-Cola and Discover Card. It did that by giving those partners access to specific public cloud systems, which would have been more difficult for systems in the Six Flags’ data center, where there are more barriers to bringing in a new partner. Andersen wants to do more application development in the cloud. So far only a small portion of Six Flags’ app dev is done there. “We still do 90 percent of development in Grand Prairie,” Andersen says. Hosted applications are another area where Six Flags expects to do more. Rackspace offers SharePoint 2010 as a multitenant application in its Managed Hosting service, a separate business from Rackspace Cloud. Six Flags runs its own version of SharePoint in-house for BI applications but relies on the Rackspace application for general business purposes. Andersen would like Rackspace to become its hosted supplier of Exchange 2010, as well. If that were to happen, it would get Six Flags IT out of the business of patching, maintaining, and upgrading Exchange. 
“When Microsoft has a big fix, we don’t have to deal with it,” Andersen says. “We have more pressing issues.” But for now Six Flags is stuck on an older version because its Exchange e-mail is tied into other systems, including one for its BlackBerry users and an archiving system for compliance. Six Flags IT has been too busy to upgrade Exchange
and its related systems. Overall, Six Flags proceeds cautiously with cloud services. It’s focused on the public cloud for applications and uses, from marketing to app development, that can benefit from the cloud’s elasticity. But revenue-generating and compliance-related apps will likely stay on-premises.
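The elasticity argument can be made concrete with a toy capacity calculation. The 300 GB and 600 GB daily traffic figures come from the article; the per-server throughput and headroom are invented for illustration, not Six Flags' or Rackspace's actual numbers.

```python
import math

# Assumed (invented) capacity of one web server, in GB of traffic per day.
GB_PER_SERVER_PER_DAY = 75

def servers_needed(traffic_gb_per_day, headroom=0.25):
    """Servers required to carry a day's traffic plus a safety margin.
    In a fixed data center this count must be provisioned for the peak;
    in a pay-per-use cloud it can follow the curve."""
    return math.ceil(traffic_gb_per_day * (1 + headroom) / GB_PER_SERVER_PER_DAY)
```

Under these assumed figures, early-May traffic of 300 GB a day needs 5 servers while the Memorial Day spike of 600 GB needs 10; a seasonal business that owned all 10 year-round would idle half of them most of the time, which is the economic case for renting the difference by the hour.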
Amazon Rates With Yelp
Yelp is a large-scale user of Amazon Web Services’ S3 online storage. By providing storage that’s expandable on demand, Amazon enables the rapid growth of Yelp.com, which serves up user-generated content such as restaurant ratings and reviews. Recently Yelp went further, using Amazon’s EC2 online computing service running Elastic MapReduce to sort and manage its increasingly large quantity of data. The 7-year-old company had 61 million unique site visitors a month in the third quarter of last year. Site visitors have generated 22 million reviews of local businesses, and the site produces several terabytes of data a day. Yelp stores and analyzes its server log data in Amazon S3, thus taking Yelp’s largest computing task and putting it in a cloud that can better adjust to fluctuating demand for storage and analytical processing. Doing so has freed internal resources to focus on its top priority: supporting real-time, online user activity, says VP of Engineering Mike Stoppelman. Yelp hosts its website in its own data center in San Francisco. Its IT group concentrates on managing website traffic and improving site functionality to generate more traffic and encourage visitors to review and comment more. Yelp also is a big user of the Apache Hadoop distributed data processing platform for Big Data sorting. When a Hadoop job runs, it needs multiple servers. No matter how big a company designs its Hadoop cluster, some developer will think of a job that exceeds it, Stoppelman says. However, Yelp developers don’t have a blank check to use as much Amazon computing power as they like. They must justify any use of Elastic MapReduce, using an Excel spreadsheet that shows the projected cost per hour of the job to developers and business managers. A typical job uses a 10- to 20-node cluster. “When we need to collect stats for an important meeting, when the CEO or COO has impromptu requests, we’re able to satisfy them,” Stoppelman says. “You’re not pre-empting another production system. All these folks who need resources don’t have to clobber each other as we launch these jobs.” Creating a large server cluster in-house from scratch would involve a three- to four-month procurement process, he says. Stoppelman was formerly technical lead of Google’s AdSense Traffic Quality team, where knowing which ads to present to which user was a critical skill. “I learned a lot of these lessons at Google,” he says. “I didn’t want Yelp to end up going through the same experience,” where various business units competed for the same internal resources. Using the public cloud is “game changing” for Yelp, Stoppelman says. For now it’s taking a limited approach to the services it uses. But it plans to raise money through a public stock offering and expand its use of Amazon S3 and Elastic MapReduce as business grows.
Yelp IT focuses on spurring user input, not running systems
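The cost-justification spreadsheet Yelp describes boils down to simple arithmetic; a hypothetical version might look like the sketch below (the per-node-hour price is invented, not Amazon's actual rate, and the function name is ours).

```python
def emr_job_cost(nodes, hours, price_per_node_hour=0.50):
    """Projected cost of renting a transient Hadoop cluster for one job.
    `price_per_node_hour` is an assumed figure for illustration only."""
    return round(nodes * hours * price_per_node_hour, 2)
```

At the assumed rate, a typical 20-node job running for three hours projects to USD 30 of compute, a number a developer can put in front of a business manager before the cluster is ever launched, and one that disappears when the job ends instead of sitting idle in a data center.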
Source: InformationWeek USA
IPv6 arrives, but not everywhere Last month marked a major milestone as IPv6 went live on the Internet — a look at some potential security hurdles for enterprises By Kelly Jackson Higgins
June 6 may have been the first official day of IPv6 operation on the Internet, but not everyone is ready to adopt the new protocol. Only about 1 percent of the Internet was running on IPv6 the day after the switch to the new protocol was flipped permanently. But that’s actually a big jump, with 150 percent growth in IPv6 over the past year, according to Google, which estimates that half of all Internet users will be online via IPv6 in the next six years. Gartner predicts that by 2015, 17 percent of users worldwide will use IPv6, and 28 percent of new Internet connections will be IPv6. Vint Cerf, considered the father of the Internet, says he expects faster adoption rates now for IPv6, which has been in the making for more than two decades. “I anticipate rapid growth now that it is turned on and we left it on,” says Cerf, who is Chief Internet Evangelist at Google. “There are no more excuses. You have to be able to run IPv4 and IPv6 all the time, any time. For any ISP or edge provider or clients or servers, if you’re not capable of running IPv6, you are on notice,” Cerf said in an IPv6 Day postmortem press briefing that was conducted via an IPv6-connected WebEx. “You have to get going and get IPv6 running.” For most enterprises, IPv6 adoption is not a done deal yet if they already have plenty of IP addresses and aren’t under any pressure to deploy it. And if it’s not deployed properly, it can incur security risks — another reason for taking it slowly, security experts advise. ISPs and network equipment providers, especially those in the consumer market, have led the charge to IPv6. Among the organizations that officially adopted IPv6 on IPv6 Day were Akamai, AT&T, Bing for Microsoft, Cisco, Comcast, Facebook, Google,
Internode, and Yahoo. So what should enterprises watch out for security-wise when making the switch? Failing to re-configure or upgrade firewalls and perimeter defenses to support the new protocol is one big no-no, according to James Lyne, Director of Technology Strategy at Sophos. He advises organizations to disable IPv6 altogether unless they are truly ready to go there so that attackers don’t exploit devices that run IPv6 by default. And there’s also the inevitable
Organizations should disable IPv6 altogether unless they are truly ready to go there to prevent any attacks on devices that run IPv6 by default
discovery of new vulnerabilities in IPv6, as well as organizations misconfiguring their IPv6 systems and leaving the door open for vulnerabilities and attacks. One example of a dangerous misconfiguration arises when setting up tunneling between IPv4 and IPv6: it’s possible to inadvertently allow external traffic to flow through the tunnel freely. There are some other gotchas that IPv6 pioneers are experiencing. Ryan Laus, Network Manager at Central Michigan University (CMU), is working on an IPv6 rollout that will officially launch this summer. As at many universities, the catalyst for going to IPv6 has been the explosion in mobile devices joining the campus network. “The last three years, we have seen such a huge growth in wireless devices that it was starting to really stretch [our IP address] allocation to the breaking point,” Laus says. CMU already has IPv6 enabled on its edge routers, and is working on ensuring its infrastructure can handle IPv6 on both the router and firewall ends. Its intrusion detection system (IDS) is also IPv6-capable. “We want to make sure we have visibility into the IPv6 network as we’re building it out” for security and performance reasons, Laus says. One big concern is preventing users from tunneling IPv6 traffic through the university’s network. “The biggest thing is visibility,” he says. “We need to see what people might and might not be using and make sure IPv6 is handled in hardware. We can see that with [Lancope] StealthWatch, and can classify traffic on the IPv6 tunnel.” Laus says some organizations actually block IPv4/IPv6 tunneling altogether, but that wouldn’t work for CMU because many Asian countries use only IPv6, and the university needs to allow that traffic for research and operations reasons with users there. “[When] I feel confident that we have the security and monitoring things handled, [we will] roll out IPv6” fully, he says. For now, the internal network is hybrid IPv4/IPv6, and
by the end of the summer CMU’s website and external traffic will be IPv6-enabled. The university has experienced a few security hiccups with IPv6, including an odd incident where a user’s home Windows Vista laptop with the Internet Connection Sharing (ICS) feature enabled connected to the campus network via both its wired and wireless adapters. Internet Connection Sharing lets users share out their machines like a home router, and can answer DNS queries. The machine’s wired adapter had been registered on the campus network, but the wireless one was not. Because Windows Vista and Windows 7 by default select wireless over wired and IPv6 over IPv4, things got interesting. “[Sharing] does funny things to DNS requests,” he says. “It was sharing out its connection, and other machines on the same local network” with IPv6 enabled were directed to the laptop, which received their DNS requests, he says. Because wireless takes precedence over wired here, the machine returned the DNS response provided by the wireless card, which was the URL for CMU’s network device registration page. “Essentially, all wired machines on that local subnet with IPv6 enabled were only able to view the registration page, no matter what URL was typed into the browser. Machines with IPv6 disabled were not affected,” Laus says. But experts say security and other bumps like these come with the new territory. Chris Smithee, Network Security Manager at Lancope, says it’s hard to say whether IPv6 will bring more security overall to the Internet. It seems to be a toss-up: “From a high level, it does appear to be more secure in the way hosts communicate,” Smithee says. “But there are not enough people trying to exploit it” right now to be sure, he says. “I feel anytime you make an advancement with something, it is a little more secure,” he says.
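For administrators wanting a quick read on where a host stands, a small script can check whether the local stack supports IPv6 and whether a name resolves to any IPv6 (AAAA) addresses. This is only an illustrative first check using Python's standard `socket` module; it is no substitute for the firewall, IDS and tunnel-monitoring work described above.

```python
import socket

def ipv6_status(host="localhost"):
    """Report whether this machine's stack offers IPv6 and whether the
    given host name resolves to any IPv6 addresses."""
    stack_ok = socket.has_ipv6  # True if the platform/Python build supports IPv6
    try:
        # Ask the resolver for IPv6 results only.
        infos = socket.getaddrinfo(host, None, socket.AF_INET6)
        v6_addrs = sorted({info[4][0] for info in infos})
    except socket.gaierror:
        v6_addrs = []  # no AAAA records (or resolution failed)
    return {"stack_supports_ipv6": stack_ok, "ipv6_addresses": v6_addrs}
```

A host that reports stack support but no addresses for its own name is exactly the kind of half-configured machine Lyne warns about: IPv6 may be active by default even though nothing has been deliberately deployed or defended.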
If organizations misconfigure their IPv6 systems, they leave the door open for vulnerabilities and attacks
Source: Dark Reading
Networking evolves, getting easier and more flexible New protocols SDN and OpenFlow look like promising ways to cut costs and increase automation By Greg Ferro
Software Defined Networking and the OpenFlow protocol could change the way networks operate, making them more flexible and cheaper. SDN takes the decision-making about how network traffic should flow away from individual switches and routers and shifts it to a centralized controller or set of controllers. These controllers use software to build a picture of existing pathways through the network and tell network devices which pathways to use. OpenFlow is a protocol that implements SDN, describing how a controller communicates with other network devices. SDN potentially provides two major benefits. It could let IT more easily and quickly make changes to network configurations. It also could lower hardware costs if IT can replace expensive, intelligent switches and routers with fast but dumb commodity devices. However, SDN and OpenFlow have yet to be widely adopted, and networking pros may be reluctant to swap reliable, well-understood architectures for a new one. There also are existing protocols designed to address some of the same issues as SDN and OpenFlow without disrupting the networking model. Here’s a look at SDN and OpenFlow’s potential benefits and pitfalls, and how they compare with existing networking practices and protocols.
Networking equipment typically
has three planes of operation: management, control, and forwarding. The management plane handles functions such as device management, firmware updates, SNMP, and external configuration via the command line. The forwarding plane (sometimes called the data plane) governs packet and frame forwarding of data payloads through the network device. The control plane consists of routing and switching protocols.
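The division of labor that SDN proposes, with a central controller programming the forwarding tables of simple switches, can be sketched in miniature (all class and method names here are invented for illustration; this is not the OpenFlow API, whose real messages and match fields are far richer):

```python
class Switch:
    """A dumb forwarding-plane device: it only looks up and forwards."""
    def __init__(self, name):
        self.name = name
        self.forwarding_table = {}  # destination -> output port

    def install_entry(self, dst, port):
        """Conceptually what an OpenFlow flow-mod message does:
        the controller writes an entry into the device's table."""
        self.forwarding_table[dst] = port

class Controller:
    """The centralized control plane: it knows the topology and tells
    every switch on a path which output port to use."""
    def __init__(self, switches):
        self.switches = {s.name: s for s in switches}

    def set_path(self, dst, hops):
        # hops: list of (switch_name, out_port) pairs along the path
        for name, port in hops:
            self.switches[name].install_entry(dst, port)
```

One `set_path` call reprograms every device on the route, which is the promise behind replacing dozens of per-device reconfigurations with a single instruction to a controller.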
control plane. Next, it takes a given configuration from an administrator and logically renders it into OpenFlow entries. Finally, it sends those entries to the appropriate network devices, which add them to their forwarding tables, creating a new path through the network. Today, an administrator might have to reconfigure dozens of network devices to make changes to network paths. In theory, SDN and OpenFlow would let the administrator provide
While SDN and OpenFlow promise a more granular and open system, it remains to be seen whether they can replace existing management tools In a typical operation, the control plane uses routing protocols to build the forwarding table used by the forwarding plane. This forwarding table is delivered to the forwarding plane by the management plane as part of the device operating system. When an Ethernet frame arrives on the switch interface, the forwarding plane sends it to an output port. SDN and OpenFlow aim to replace or supplement this model by providing a prepared forwarding table from a centralized controller (see diagram, above). A controller in an SDN architecture is a software application that performs four functions. First, it presents a network management interface to the IT administrator. Then it maps out the network’s status and current
the desired parameters to a few controllers, which then reconfigure the appropriate devices. This would simplify the administrator’s job and make the network better able to respond to demands, such as prioritizing one type of application over another. In addition, in a conventional network, each vendor uses a different interface to configure its own devices. Thus, while the standards-based Border Gateway Protocol works the same on devices from different vendors, each vendor’s configuration interface can be very different, making the management plane of multiple vendors hard to operate. OpenFlow has a standardized protocol and API between the controllers and the switches, so communication is
consistent across all vendors, making it easier to manage disparate devices.
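The four controller functions described above can be sketched as a toy model. This is illustrative only: the class names and data structures are invented for this sketch, and a real controller would speak the OpenFlow wire protocol to its switches rather than call Python methods.

```python
# Toy model of an SDN controller: it holds the network map, turns an
# administrator's intent (a policy) into flow entries, and pushes them
# into each switch's forwarding table.

class Switch:
    def __init__(self, name):
        self.name = name
        self.forwarding_table = {}          # destination prefix -> output port

    def install_flow(self, dst_prefix, out_port):
        self.forwarding_table[dst_prefix] = out_port


class Controller:
    def __init__(self):
        self.switches = {}                  # network map: name -> Switch

    def register(self, switch):
        self.switches[switch.name] = switch

    def apply_policy(self, policy):
        """policy: {switch_name: {dst_prefix: out_port}} from the admin."""
        for sw_name, flows in policy.items():
            sw = self.switches[sw_name]
            for dst_prefix, out_port in flows.items():
                sw.install_flow(dst_prefix, out_port)


# One policy change reconfigures every affected switch at once.
ctl = Controller()
s1, s2 = Switch("edge-1"), Switch("core-1")
ctl.register(s1)
ctl.register(s2)
ctl.apply_policy({
    "edge-1": {"10.0.0.0/8": 2},
    "core-1": {"10.0.0.0/8": 4, "192.168.0.0/16": 1},
})
print(s1.forwarding_table)   # {'10.0.0.0/8': 2}
```

The point of the sketch is the shape of the change: the administrator states intent once, to the controller, instead of logging into dozens of devices.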
Network Management Challenge
Network management today is a muddle of products that require different modules to address tasks such as LAN switching, provisioning, and wireless management. Each module focuses on its own task without much awareness of other systems, making it difficult for IT to get a clear, real-time picture of the entire network. By using the centralized controller, SDN and OpenFlow provide better control and visibility. Network management also lacks APIs that are properly formatted for software developers. SNMP, the workhorse protocol for network management, just isn’t suitable to provide rich, formatted data for applications to work with, nor is it suitable for configuring network devices. By contrast, OpenFlow provides XML-based data exchange between devices and controllers that can be extended and easily integrated into programming languages. This means network management tools should be able to tap into XML-based data, allowing for better visibility and management.
While SDN and OpenFlow promise a more granular and open system, it remains to be seen whether they can replace existing management tools. The technology is still immature and lacks sufficient adoption to be sure. Other standards might bring similar benefits with less disruption. The relatively new standard Transparent Interconnect of Lots of Links (TRILL) enables Layer 2 multipathing in Ethernet LANs, which helps eliminate choke points, increases usable bandwidth, and better supports the multidirectional movement of virtualized workloads. New tunneling protocols such as VXLAN and NVGRE are addressing the need for virtual server mobility. The soon-to-be-ratified IEEE 802.1BR, commonly known as EVB, will standardize virtual Ethernet bridging, which should let virtual and physical switches communicate about servers, VLANs, and quality-of-service requirements. That would make it easier for admins to respond when virtual machines move from one physical machine to another.

[Diagram: Where OpenFlow fits in. OpenFlow defines a messaging and instruction protocol between the control plane and the forwarding plane; the control plane user interface API is not yet defined.]

There also are standards meant to make it easier to configure devices from multiple vendors. For instance, the IETF has ratified XML-based standards such as NETCONF and YANG, which provide better capabilities for remote administration and can serve up richer data to management software.
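For a flavor of what XML-based device management looks like, here is a small sketch that builds and parses the XML payload of a NETCONF &lt;get-config&gt; request using only the Python standard library. No real SSH session is opened; only the document structure and the NETCONF base namespace from RFC 6241 are used, and the message-id value is arbitrary.

```python
# Build and parse the XML body of a NETCONF <get-config> RPC.
import xml.etree.ElementTree as ET

NS = "urn:ietf:params:xml:ns:netconf:base:1.0"   # NETCONF base namespace

def build_get_config(source="running"):
    """Return the XML text of a <get-config> request for one datastore."""
    rpc = ET.Element(f"{{{NS}}}rpc", {"message-id": "101"})
    get_config = ET.SubElement(rpc, f"{{{NS}}}get-config")
    src = ET.SubElement(get_config, f"{{{NS}}}source")
    ET.SubElement(src, f"{{{NS}}}{source}")          # e.g. <running/>
    return ET.tostring(rpc, encoding="unicode")

def parse_source(rpc_xml):
    """Extract which datastore a <get-config> request targets."""
    root = ET.fromstring(rpc_xml)
    src = root.find(f"{{{NS}}}get-config/{{{NS}}}source")
    return src[0].tag.split("}")[1]                  # strip the namespace

msg = build_get_config()
print(parse_source(msg))   # running
```

Because the payload is structured XML rather than SNMP varbinds, a management tool can generate, validate, and consume it with ordinary XML tooling, which is the point the article makes about richer data for software.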
A Long Road To SDN
SDN and OpenFlow’s potential benefits are significant: more flexible networks, simpler administration, and less expensive hardware. But the benefits are only promises. While several networking vendors support OpenFlow, and startup Big Switch Networks has an OpenFlow-based controller in beta release, the protocol is still in its infancy. Companies with enormous data centers, such as Google and Verizon, will probably be the first to pick up SDN and OpenFlow because they can realize substantial savings in network hardware and administration costs. If the technology proves itself, it will be more widely adopted in mainstream IT environments. Pay attention now, though. Whether you completely revise your network to support a controller-based infrastructure or adopt existing protocols to improve network flexibility and responsiveness, you may soon find that you no longer need to manually configure VLANs and switch ports. Source: InformationWeek USA
India discovers potential of cloud with CloudConnect

CloudConnect, a comprehensive conference-cum-exhibition focused on the entire ecosystem of the cloud, held on May 24-25 in Bengaluru, saw eminent professionals from the industry discuss the role of the cloud as an important business enabler. The speakers discussed the role of cloud as a game changer in an emerging country like India. With four keynotes, 65+ speaker sessions, three parallel conference sessions, real-world workshops, panel discussions and a cloud expo, CloudConnect proved to be the most promising event on cloud computing in India. Excerpts from the sessions of some prominent speakers:
Chidambaran Kollengode Director – Cloud Computing, Nokia
In a session titled ‘Architecting cloud infrastructures for Big Data Challenges,’ Chidambaran Kollengode, Director – Cloud Computing, Nokia, talked about Hadoop as an open source paradigm. He discussed out-of-the-box techniques for solving Big Data challenges. He also gave real-world examples of Hadoop implementation in organizations he has worked with.
Vamsicharan Mudiam Strategy Leader, Cloud Computing Service, IBM India/South Asia
Delivering a keynote session on the topic ‘SmartCloud - Directions 2012. Rethink IT. Rethink Business,’ Vamsicharan Mudiam, Strategy Leader, Cloud Computing Service, IBM India/South Asia discussed how cloud computing has changed the way we think about technology, economies of scale and innovation. He said that organizations are primarily looking at cloud because of its ability to optimize and make IT flexible and enable revenue generation. Mudiam also discussed how IBM has leveraged cloud to achieve business objectives.
Eric Yu President, Huawei Enterprise
Delivering a keynote session on the topic ‘Converging business requirements and the cloud’ on day one of the CloudConnect event, Eric Yu, President, Huawei Enterprise said cloud as a concept helps make life better for CIOs, as it helps address two of the biggest challenges IT is facing today: preventing information leakage and reducing power consumption. He discussed how organizations can build an open cloud environment and deliver on the demands of their business. He answered several questions, such as ‘Which applications should be shifted to cloud?’, ‘How secure is cloud?’, ‘Which cloud (private, public or hybrid) should be adopted?’ and ‘Who to partner with?’
P Sridhar Reddy Chairman and Managing Director, CtrlS
In an interesting keynote session on day two of CloudConnect, P Sridhar Reddy, Chairman and MD, CtrlS, busted some common myths about cloud computing.
Venguswamy Ramaswamy Global Head of iON, TCS
Delivering a session titled ‘How Indian SMBs can transform themselves using the cloud,’ Venguswamy Ramaswamy, Global Head of iON, TCS said cloud is the best technology for SMBs as the whole model is based on OPEX rather than CAPEX, which improves an SMB’s capability to conserve cash and focus on growth. Ramaswamy also shared several interesting examples of how SMBs can use cloud to their advantage.
N Nataraj Global CIO, Hexaware

“Every organization should come up with its own cloud deployment strategy. Infrastructure and applications alone will not give results, hence it’s crucial to differentiate the building blocks (SaaS, IaaS and PaaS) based on the requirements,” said N Nataraj, Global CIO, Hexaware in his session titled ‘Cloud Lessons from the trenches: Our Journey to Cloud.’ Having deployed a private cloud at Hexaware, which has reduced infrastructure maintenance cost by 30 percent and operational maintenance cost by 15 percent, Nataraj shared some important lessons learnt during the company’s cloud journey. “It is important to plan for redundancy in advance, and this can be achieved by having the cloud at multiple data centers rather than one. It is also important to properly plan the different layers, like infrastructure, virtualization, administration and orchestration, and workflow/governance,” advised Nataraj.

Jayabalan Subramanian CTO and Co-Founder, Netmagic

In a session titled ‘Moving to the Cloud: Choose what works for you,’ Jayabalan Subramanian, CTO and Co-Founder, Netmagic addressed the common problem faced by many organizations while considering cloud computing: choosing the right model. He shared in detail how enterprises can evaluate which cloud model works best for their business.

Anna Gong Vice President - Cloud, Virtualization and Service Automation, Asia Pacific, CA Technologies

In her keynote session on day two of the event, Anna Gong, Vice President - Cloud, Virtualization and Service Automation, Asia Pacific, CA Technologies discussed the impact of the consumerization of IT on cloud computing.

KP Unnikrishnan Marketing Director, Asia Pacific, Brocade

In an insightful session on the topic ‘Efficient cloud networks for better economics,’ KP Unnikrishnan, Marketing Director - Asia Pacific, Brocade discussed the role of true Ethernet Fabric in allowing data center teams to create efficient data center networks.
Six misconceptions about infrastructure-as-a-service
Cloud computing is surrounded by many misconceptions. This article busts the six most common myths about infrastructure-as-a-service
On the web: How to embrace the shift to the cloud
Companies have progressed beyond the initial euphoria about cloud into experimenting with some form of cloud in their IT portfolio. This has been accompanied by concerns about the potential teething troubles, reported in the media, that come with this powerful transformation of IT. Many of the perceived problems have been caused by misconceptions about cloud computing. Here are six of the most common myths about infrastructure-as-a-service:
Myth #1: Cloud is too complex
The biggest misconception about cloud computing is to think there is magic to it. There isn’t; it’s just a great industrial approach to computing. With a public infrastructure-as-a-service cloud offering, such as Tata Communications’ InstaCompute, customers have the same secure, industrial-grade platform, whether they are large companies or a three-man shop. They can get solid, robust servers, storage, and networking on demand. With a public cloud, companies have more agility to stand up and scale services as and when they need them, and can spend their time on what really differentiates their business. A public cloud IaaS is a standard, off-the-shelf computing platform. Because it is standard, it limits the options at the compute infrastructure level, but it also takes away a lot of extra bother.
Myth #2: We’re on the same page about what the cloud is
To ensure success, make sure everyone has the right expectations about the type of cloud computing being considered, whether it is Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) or Software-as-a-Service (SaaS). It is important to understand the capabilities and the limitations of the specific cloud computing service you might buy. For example, at Tata Communications, we encourage customers to do a trial first, in order to help ensure everyone understands how to use InstaCompute and what it actually is. Companies need to be clear about which capabilities are included, and which are excluded. Look for a cloud provider with a free trial so a realistic evaluation of the available services and offering can be easily accomplished. For any IaaS cloud computing, it is the application integrator’s and operator’s responsibility to know the architectural advantages and limitations of the specific server-storage-network security platform you will use.
Myth #3: Cloud providers are obliged to help when I encounter problems
Yes and no. Providers for different types of cloud services provide different types of support. An infrastructure-as-a-service which offers server, network, storage, and security services would include help on how to use these infrastructure services as a platform, but the IaaS offer would not include services to integrate and operate a particular application. Not all providers are equal, either.
Myth #4: I’ll just put the application on cloud and it’ll work
Whether the servers are in the cloud or a corporate data center, integrating and deploying applications means working through good, solid, disciplined IT processes. If everything is done systematically, with review points at key stages, there is a high chance of success. Sandboxes can be created in the cloud for multiple cycles of evaluation, including scaling up, running load tests, and using real people and real-life situations in a pilot. In any server environment, the application integration architecture and design can result in suboptimal performance, reliability, and security; so can a poorly configured operating system or application. So, yes, putting an application onto servers in the public cloud will work, but proper architecture, design and testing are needed. A corollary of this is that not every application is well-suited for a public cloud.
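A minimal sketch of the kind of load test one might run in such a cloud sandbox. The `handle_request` function here is a stand-in for a real HTTP call to the application under test; everything about it is invented for illustration.

```python
# Run N simulated requests at a given concurrency and report latency
# percentiles -- the basic shape of a sandbox load test.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    """Stand-in for a call to the cloud-hosted application under test."""
    start = time.perf_counter()
    sum(range(10_000))                       # simulated work
    return time.perf_counter() - start       # latency in seconds

def run_load_test(n_requests=200, concurrency=20):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(handle_request, range(n_requests)))
    return {
        "p50": latencies[len(latencies) // 2],
        "p95": latencies[int(len(latencies) * 0.95)],
        "max": latencies[-1],
    }

if __name__ == "__main__":
    print(run_load_test())
```

In a real pilot one would replace the stand-in with actual requests, vary the concurrency across runs, and compare the percentiles against the SLA before committing to the cloud model.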
Myth #5: Software off the cloud needs maintenance but software on the cloud looks after itself
Buying infrastructure-as-a-service only means buying the use of servers; applications on those servers are still a corporate responsibility. Standard IT processes are required to maintain the health of the application. Software has to be monitored, and the data generated has to be managed. Without proper governance, companies could easily lose track of what’s where. Backups and disaster recovery still need to be implemented, but they are typically much easier to do on the cloud.
Myth #6: Cloud is not secure

The major security risks are not unique risks for the cloud. Infrastructure-as-a-service includes server, storage, network, and firewall services at the platform level. The security risk is predominantly at the application layer, the same as on private premises. Backdoors to the data may be opened through a poorly configured operating system or poorly managed applications with wrong access configurations; neither comes about as a result of being on the cloud. As public clouds are accessed over the Internet, the preferred approach is to encrypt data in transit and in storage, which is equally important for web-facing data on a private network.

One cloud-related risk may be through distributed denial of service (DDoS) attacks, either directly or indirectly. Public clouds have multiple tenants off the Internet. Just as someone might not be able to get to his flat because the neighbor has too many visitors clogging up the common hallway, a denial of service attack affecting another customer could slow things down for a cloud-based business. Such problems can be avoided by selecting the right cloud provider. Look for a vendor with the ability to deal with denial of service attacks so they no longer have an impact. A strong, diverse backbone network, the ability to identify such attacks, and the tools to mitigate them are a must.

In conclusion, transitions are always challenging but they are imperative to stay ahead of the curve. A well-managed public cloud implementation, especially for small to mid-sized companies or departments, can provide powerful solutions in a convenient way. Making sure companies are aware of these myths could take them to the next level and help CTOs seek new ways to optimize budget while ensuring the latest in technology.
John Landau is Senior Vice President - Technology and Services Evolution, Tata Communications
Technology: The shape of things to come
Let’s take a look at the future of technology to see what the landscape ahead presents as opportunities and threats to businesses
On the web: Education 2.0: Mobile technology in education
The nature of technology is changing. There was a time, not long ago, when technology made it possible for doctors, engineers, designers, geologists, scientists, pilots, bankers and other professionals to push the envelope of their practice. Technology gave them tools to work faster. Technology delivered greater accuracy. And, often, it lowered the cost of products and services even as it created or increased safety.

Today, technology is pervasive. It is sweeping across society to make a deeper, broader and more direct connection with people. It is no longer confined to or controlled by privileged professionals. You and I are using technology: to read this paper, to distribute it, to make it searchable, to extract and edit portions of it, and to store it for quick retrieval. It’s interesting to note that we consider this completely normal. But pause for a moment. We can do a great deal more: we can all be producers and global distributors of the movies we make; we can be our own bankers online; we can make excellent lobster bisque just by looking up an expert’s video guide; we can instantly share our knowledge with the world using mobile technologies; we can be a DJ at a friend’s party; and we do not have to deal with ignorant or inefficient customer service officers any more, because we can just use an IVR system.

Capability, expertise and control are moving from the core to the very fringes of society. This dramatic shift in the accessibility of technology is altering everything, from utilities, healthcare, banking, retail, transport, hospitality, media & communication to government and public infrastructure. But things are not getting any simpler. Nanotechnology and IT are coming together in novel ways to create new capabilities. The capacity of our networks appears to be infinite. The ability of social media to generate thought, debate and opinion
that drives loyalty and business is boundless (even revolutionary). And we have barely begun to scratch the surface. If anything, we are moving towards unprecedented complexity. How do we, as technologists, manufacturers and service providers — who also happen to be consumers once we leave office — prepare for this? What does the future landscape present as opportunities and threats to businesses? What are the key developments that we need to reflect on today, so that tomorrow is safer, better and sustainable?
Craving an uncomplicated life
Intrinsically, human nature craves simplicity. It longs to experience an efficient and uncomplicated life. Take the mobile phone: it is a simple device that performs complex operations. The mobile phone’s form factor is unfussy and natural. It has a set of keys that everyone understands. It can communicate using a variety of channels, from voice to text to images and video. It can be an entertainment center and show your location through simple triangulation or using GPS technology. One moment it can become a tool for social interaction, and the very next it can be used for sales and support or to record a crime. It can also be extended into an educational device or a medical gadget. Building simple technologies and applications is where the future lies. But with great simplicity comes great backend complexity.
The Four Vectors of Change
The changes we are witnessing in technology are driving our attention and focus to four chief areas of interest. Firstly, at the intersection where the cyber world meets the physical world, tremendous capacity is building up: increasingly intelligent machines in the public space that have idle or spare capacity can be used once they are networked. Secondly, networks are changing from passive to interactive, offering extraordinary control to users. Thirdly, as technology permeates society, user experience is moving to the forefront and becoming a top priority. And fourthly, as networks and technology deliver new capabilities, the threat to individual and corporate security will be a reason for mounting governmental and societal concern.

Machine2machine communication: Imagine you are sitting in a café and need computing power to solve a problem. Your phone could ping other phones in the vicinity and check if it has permission to use their spare computing capacity. Using a dynamic network of intelligent machines, complex problems can be solved in real time at dramatically lowered costs. Technologies such as these can make products and services more affordable for the masses. Affordability is the key to addressing emerging markets and reaching the bottom of the pyramid. Already, using affordable technology, the cost of delivering employee insurance in India through the Employees’ State Insurance Corporation (ESIC) is a mere USD 1.07 per person per annum. How will machine2machine communication affect costs? Machine2machine communication presents several revolutionary possibilities beyond enhancing and augmenting computing power. Take the case of a bank heist where the alarm system has been disabled. With machine2machine communication, the alarm system does not have to communicate over a phone line. It can use several ambient networks and devices to raise an alarm!

Analytics and the semantic web: When you are in distress or trouble, what is your first response? Chances are you flip out your phone and reach out to a friend or someone in the family. It’s unlikely that you connect to the web, looking for an answer. Can machines respond to human requests based on natural meaning? Today’s analytical engines are inching closer to realizing the semantic web.
The amount of machine-readable data is growing exponentially, bringing us
more rapidly to the reality of a semantic web. Imagine a Close Body Network that monitors your physical condition and relays it over a Bluetooth network to the phone in your pocket, which in turn delivers the data over a 3G network to a remote hospital system. The system can analyze and respond to your own body metrics and sound an alarm if necessary. It can also compare your body metrics with that of similar people over other networks to create new real-time understanding of your condition. Such networks with a layer of analytics can touch end-consumers and have inconceivable impact on individuals in the areas of healthcare, hospitality, banking, financial services
and insurance.

User experience: At the center of use is the experience. The iPod’s success can be attributed to its user experience (and, admittedly, its exquisite design). The iPod is intuitive, simple, and effectively uses the human need to touch and feel everything. Tomorrow’s world will demand an extension of this type of user experience to encompass all five senses: sight, sound, smell, touch and taste. How can the technologies of today help improve user experience? The answer will provide the key to business success. For example, the music player in your car is capable of playing radio, CDs, music from pen drives and from a few other devices. Cars have a typical lifetime of a decade, and over this decade entertainment technology is likely to change dramatically. So what happens when a new media format is introduced? Do you really need to rip and replace your hardware? Or will you be stranded with a music player that is rapidly getting outdated? Is it not possible to turn your “music player” into a simple device that plays any content accessed over the Internet, regardless of the format? In reality, user expectations will change in ways that call for a higher level of engagement combined with simplicity of use.

Security: As networks proliferate and integrate to grow deeper, society will have to face and address a new dimension in security. Networks have no borders and the state has no control over them. Personal data will be stored across devices, social networks, groups and organizations, making security a nightmare. Identities will be stolen, pictures morphed, and presence data manipulated, giving birth to new crimes and criminals. These developments will call for a more stringent regulatory environment, better security standards and a new method of creating and managing watchdog bodies responsible for your digital security.

Some of these scenarios may appear to be trend snapshots. Some may take decades to develop and mature. Others will simply fade out. No one can predict the future accurately. But the emerging picture is doubtless of a technological future that is simplified, yet must be viewed with caution. As technology moves across the masses and deep into the fringes of society, it will create unprecedented innovation. As technology becomes more accessible, we will see the emergence of applications that serve smaller user groups, at lower costs and with the same robustness, connecting enterprises, society, the physical world and governments. The future looks great, but it needs responsible behavior at the consumer end. Like they say, “with great power comes greater responsibility.”

Dr Anurag Srivastava is CTO, Wipro Technologies
Managing backups in virtual environments
With virtualization emerging as a top priority for IT administrators, the backup and recovery strategies need to change
On the web: Virtualization makes DR automation possible
Virtualization has “virtually” changed the IT world in which we all work and play. Why is virtualization so attractive to IT administrators? The answer is easy: there are many uses and benefits that we gain through virtualization. For starters, having a single server’s physical footprint represent many servers on the network is a boon for administrators looking at consolidating space and reducing operation costs. Having the ability to quickly stand up a VM copy of a major application or work server for patch testing is simply a game changer, as it allows administrators to test during business hours.

But what about backup of data in virtualized environments? Let’s take a step back to understand where we stand in the area of backup and recovery today. Symantec recently surveyed more than 1,400 IT professionals on their backup practices and ability to recover information in the event of a disaster. The findings strongly suggest that traditional approaches to backup are broken and a new approach is necessary. Here’s why:

Confidence in backup is lacking, especially virtual backups. The purpose of a backup is to have worry-free data recovery in case of disaster. Yet 36 percent of respondents are not confident that their backed-up data can be quickly recovered when needed. And when it comes to virtualized backup, 42 percent of respondents reported that their virtualization backups don’t work adequately.

Backup SLAs aren’t being met. One-third of respondents indicated that they’re either not meeting backup and recovery SLAs or are unsure if they are. Of those not meeting SLAs, 49 percent said they can’t meet them because they have too much data, due to the size of the backup, lack of bandwidth, and the sheer volume of data.

The current backup and recovery approach is complex. More than one-third of the respondents noted that backup is extremely time-consuming, and nearly as many said the same about recovery.

In this scenario, it is evident that backup and recovery need to change. To address growing backup needs and streamline complex processes, enterprises anticipate making significant changes in the near future. Within the next 12 months, tape-based backups will decrease by one-third, and more organizations will be investigating appliance and cloud-based backup solutions. As many as 72 percent of the organizations surveyed said that they would change vendors if they could double the speed of their backups.

Bring virtualization into the picture, and the paradigm changes significantly. I often speak with administrators who look for ways to simply protect their virtualized assets for the purpose of full recovery in the event of disaster; that is, their backup solution is only working to back up, but does not truly embrace their virtual solution. What if we start taking the approach of having backup software actually use virtualization as a true extension of the recovery plan? Can we take virtualization to be a resource that can be leveraged as the platform for recovery for both physical and virtual servers alike? To answer this, we need to first understand the current environment. Sure, the world is going virtual in a strong way, but this is not something that is going to happen overnight. Although many early adopters have moved forward to become nearly 100 percent virtualized, most administrators are still governing environments that are heavily
comprised of both physical and virtual server assets. As such, administrators need a solution that is not only purpose-built to work for their entire environment, but one that takes advantage of the virtual infrastructure, specifically to allow them to further leverage their IT investments. Furthermore, businesses need to be able to not only leverage their existing virtual infrastructure for items including instant recovery of any physical or virtual server that is protected, but also leverage the cloud to recover, test or migrate any virtual machine in the environment. And for good measure, imagine that when it is time to migrate a physical server to a new virtual body, you simply power on the virtual copy that was created and maintained as part of the standard backup of that physical server! This greatly simplifies the complexity that exists around backup today, and it means that in the near future, we can back up and recover our most important information literally at the touch of a button.

To successfully embrace new technologies and increase confidence in backing up mission-critical data, organizations are recommended to follow these steps:

1. Break the backup window. Eliminate out-of-control backup windows by using solutions that accelerate full backups multifold, without sacrificing recovery time.

2. Unite physical and virtual backups. Using a single solution for both environments will drive down operating costs, reduce storage, and accelerate recoveries.

3. Consolidate backup and recovery tools in a single appliance. Integrating backup, deduplication, and storage in a single solution will drive down operating costs and capital expense while simplifying day-to-day operations in the data center and remote offices.
Administrators need a solution that is not only purpose-built to work for their entire environment, but one that takes advantage of the virtual infrastructure specifically to allow them to further leverage their IT investments
4. Fight infinite retention. Many businesses have tremendous legal risk and cost exposure resulting from over-retention of backup tapes. Identify what information to archive and what to delete, based on relevance to legal discovery or compliance cases. This reduces the time and cost IT spends on eDiscovery and eliminates the need to keep backups forever.

5. Stop putting tapes on trucks. Combine deduplication with disaster recovery to transmit data over the network from the production site to the DR site instead of loading tapes onto trucks.
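Deduplication is what makes network-based DR replication (step 5) feasible: only chunks the remote site has not already seen need to be stored or sent. A minimal, illustrative sketch of content-addressed deduplication — the function and variable names here are hypothetical, not any vendor's API:

```python
import hashlib

# Illustrative sketch of the deduplication idea behind steps 3 and 5:
# data is split into fixed-size chunks, each chunk is indexed by its
# SHA-256 digest, and a chunk is stored (or transmitted to the DR site)
# only the first time it is seen.
CHUNK_SIZE = 4096

def dedup_store(data: bytes, store: dict) -> list:
    """Store data chunk by chunk; return the recipe of chunk digests."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:     # only new chunks consume space/bandwidth
            store[digest] = chunk
        recipe.append(digest)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original byte stream from its chunk recipe."""
    return b"".join(store[d] for d in recipe)
```

Two backups that share most of their content then share chunks, so a second, slightly changed backup adds only the changed chunks — which is why transmitting deduplicated backups over the network can replace putting tapes on trucks.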
Vijay Mhaskar is Vice President, Information Management Group, Symantec
july 2012 informationweek 63
10 key actions to reduce IT infrastructure and operations costs
IT infrastructure and operations costs represent 60 percent of total IT spending worldwide. Here are 10 key actions that can help you bring down the expenses by 10 percent in a year
During the recession, many major IT infrastructure and operations (I&O) upgrade projects were deferred, slowed or cancelled. Many IT organizations believe these projects need to be resurrected soon, whether to meet business needs or to ensure that I&O does not create serious downtime situations. But although growth in the demands placed on I&O organizations has nearly returned to pre-recession levels, I&O budgets have not, nor are they expected to anytime soon. With I&O representing about 60 percent of total IT spending worldwide, and IT budgets remaining tight, it is no wonder that pressure to cut I&O costs remains intense.

When it comes to reducing I&O costs, there is no single area where businesses should focus their efforts. The best results can be achieved by performing, as fully as possible, the 10 key actions recommended below. We predict that, by 2014, organizations that perform these actions fully will be able to reduce their I&O expenses by 10 percent in 12 months, and by as much as 25 percent in three years.
On the web: Why you should push your servers to the limit
informationweek july 2012
Defer Non-Critical Key Initiatives
I&O leaders need to re-examine their key initiatives to determine which ones to focus on as short-term priorities. There are three major questions to ask:
• Does the I&O key initiative strongly support a high-priority business initiative that needs to be completed in the short term?
• Does the I&O key initiative lower the I&O cost structure in the time frame required?
• Does the I&O key initiative lower risk by upgrading I&O to prevent major outages or severe performance deterioration?
Review Networking Costs
When it comes to I&O spending, the data center and the network claim the lion’s share. As nearly half the network expenses go to telecom service providers, network managers must renegotiate contracts with these providers to ensure their contracted rates are market-based. Substantial steps can also be taken to optimize network costs by refining the design and sourcing of networks.

Consolidate I&O

I&O consolidation is closely related to standardization, integration and virtualization. In the past, the rise of distributed computing and other trends drove the decline of large data-processing sites. Now, however, data centers are growing in importance, and we expect this trend to continue throughout this decade as server rationalization, hardware growth and cost containment drive the consolidation of enterprise data-processing sites into larger data centers.

Virtualize I&O

Servers run at very low average utilization levels (less than 15 percent). Virtualization software increases utilization, typically by four times or more, which means that, for any given workload that can be virtualized, a company can typically reduce its number of physical servers four-fold. Conservatively, this means hardware and energy costs can each be more than halved. As with consolidation, virtualization can be applied to many I&O platforms: Unix servers, storage, networking and client computing.
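The four-fold claim follows from simple arithmetic, sketched below. The 15 percent pre-virtualization figure comes from the text; the 60 percent post-virtualization target is an illustrative assumption:

```python
# Back-of-the-envelope check of the consolidation arithmetic.
physical_servers = 100
avg_utilization_pct = 15       # pre-virtualization average (from the text)
target_utilization_pct = 60    # assumed post-virtualization (four times higher)

consolidation_ratio = target_utilization_pct / avg_utilization_pct
servers_after = physical_servers / consolidation_ratio

# If hardware and energy costs scale with server count, both fall by
# well over half, matching the "more than halved" claim.
cost_reduction = 1 - servers_after / physical_servers

print(consolidation_ratio, servers_after, cost_reduction)  # 4.0 25.0 0.75
```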
Reduce Power and Cooling Needs
In the past, newly built data centers contained huge areas of pristine white floor space, consumed large amounts of power, and had uninterruptible power supplies and water- and air-coolant systems. Given the cost of mechanical and electrical equipment, as well as the price of electricity, this type of design no longer works. Fortunately, new approaches to design mean that new data centers can now use significantly less space and power, and cost much less.
Contain Storage Growth
Computing, networking and storage capacities are growing at double-digit rates annually, with storage capacity growing by far the fastest. Gartner predicts that by 2016, organizations will install 850 percent more terabytes than they had installed in 2011. But throwing terabytes at the problem is no longer a viable solution. With capacity growth far outstripping cost declines, tighter control is required. Multiple approaches need to be taken, including the use of storage virtualization, automated tiering and storage resource management tools.
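One of the approaches named above, automated tiering, demotes data to cheaper media as it cools. A toy policy sketch — the tier names and age thresholds are assumptions for illustration, not any product's defaults:

```python
from datetime import datetime, timedelta

# Toy sketch of an automated-tiering policy: data migrates to cheaper
# storage as it goes cold.
TIERS = [
    (timedelta(days=7), "ssd"),      # hot: accessed within the last week
    (timedelta(days=90), "sas"),     # warm: accessed within the last quarter
]

def pick_tier(last_access: datetime, now: datetime) -> str:
    """Return the tier a piece of data belongs on, given its last access."""
    age = now - last_access
    for max_age, tier in TIERS:
        if age <= max_age:
            return tier
    return "archive"                 # cold: everything older
```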
Push Down IT Support

IT support for end users and the organization typically accounts for about 8 percent of IT spending, and most I&O organizations have at least four tiers of support, each with a different cost point and level of expertise. To reduce costs, organizations need to drive support calls down to the lowest tier that can satisfactorily resolve users’ issues.

Streamline IT Operations

I&O accounts for approximately 50 percent of the total enterprise IT head count, and most I&O staff are involved in operational processes of a day-to-day and tactical nature. To contain head count and associated costs, these processes need to be streamlined and made as efficient as possible. This typically entails implementing ITIL, the de facto standard framework for IT operations. The principal goal of ITIL is to improve service management and quality, but it has also been known to reduce operating expenses.
Enhance IT Asset Management
By itself, IT asset management (ITAM) does not reduce I&O costs. However, it is a very effective tool for identifying and assessing cost reduction opportunities. ITAM can help determine the life of certain assets, defer upgrades, and eliminate or combine software licenses, as well as replace certain maintenance service contracts with a time-and-materials approach. IT asset repositories are generally the most effective tools to help in this endeavor. These tools can maintain dates, manage changes to assets, and send reminder e-mails to ensure the life cycle process is managed proactively.
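The proactive-reminder behavior attributed to asset repositories can be sketched in a few lines; the field names and the 90-day notice window are assumptions:

```python
from datetime import date, timedelta

# Sketch of an asset repository's reminder logic: surface assets whose
# end-of-life date (warranty, lease, or support end) falls inside a
# notice window, so upgrades can be deferred or contracts renegotiated
# in time rather than in a panic.
def assets_needing_action(assets, today, notice=timedelta(days=90)):
    """Return assets whose end-of-life date is within the notice window."""
    return [a for a in assets
            if today <= a["end_of_life"] <= today + notice]

inventory = [
    {"id": "srv-01", "end_of_life": date(2012, 8, 15)},  # inside the window
    {"id": "srv-02", "end_of_life": date(2013, 6, 1)},   # well outside it
]
due = assets_needing_action(inventory, today=date(2012, 7, 1))
# due now holds only srv-01; a real repository would e-mail its owner
```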
Optimize Sourcing

Sourcing is perhaps the most strategic decision I&O leaders face today. The decision is not as simple as whether to outsource or insource all of I&O; IT leaders can make separate sourcing decisions for virtually any I&O component, system or function. The key decision criteria are: retaining control of aspects of strategic and critical importance to the business, playing to the strengths of available staff, defining clear lines of demarcation, keeping the number of vendors small and manageable, and determining what makes solid financial sense.
Virtualization software increases utilization by four times or more, which means that, for any given workload that can be virtualized, a company can typically reduce its number of physical servers four-fold. This means hardware and energy costs can each be more than halved
Jay Pultz is Vice President and Distinguished Analyst at Gartner
Technology & Risks
To patch or not to patch
Keeping the system updated with the latest patches is an important piece of cyber security advice. But have you ever thought about what would happen if the update service itself were compromised?
On the web: Can hackers target a pacemaker?
Cyber security advice generally consists of three things — use a strong password, use up-to-date anti-virus software, and most important, always keep your system updated with the latest patches. And since there is hardly any software that does not need patches, patching has become an accepted and routine task. But have you ever thought about what would happen if the update service itself were compromised?

Well, this actually happened, through a man-in-the-middle attack that delivered a malicious executable signed with a ‘rogue, but technically valid Microsoft certificate’ to spread Flame, the spy malware that infected computers in Iran and other countries in the Middle East for at least two years before detection. The malware spread by impersonating Microsoft Update. The computers implicitly trusted the certificate that had signed the patch and thus allowed it to be downloaded and installed. Microsoft has now revoked that particular certificate, studied the vulnerability that allowed this to happen, and issued a patch. A patch to correct a patch!

An unrelated advisory (reproduced below) from the FBI has warned travelers about malware getting installed on laptops through software updates on hotel Internet connections. “Travelers attempting to set up the hotel room Internet connection are presented with a pop-up window notifying the users to update a widely used software product. If any user clicked to accept the update, malicious software was installed on the laptop.”

There are a number of attack tools that can spoof software update prompts. One of them is the toolkit Evilgrade, which enables attackers to install malicious programs by exploiting weaknesses in the auto-update feature of many popular software titles — it is capable of hijacking the update process of more than 60 popular programs, notably Skype, VMware, Winamp, Java and VirtualBox. The attacker targets programs that
don’t implement digital signatures on their product updates. This allows the attacker to impersonate the source and fool the user into believing that a genuine patch is being downloaded.

If the software vendor has used a cryptographic key that has not been compromised, and if the signature verification process cannot be bypassed by using a tool like Evilgrade, the update process can be trusted. But, as happened in the case of Flame, the cryptographic key itself might be compromised. How do you protect your computer in such a scenario? There is not much you can really do in the worst-case scenario where the trusted certificate itself has been compromised. We can only hope that this does not happen too often. As the news item says, Flame was probably created by a ‘nation state actor’ for cyber espionage and infected less than 1,000 computers in a specific geographic area. Of course, the fact still remains that there was a vulnerability, which was exploited by this ‘nation state actor,’ but it could also have been exploited by other clever attackers.

In the more plausible attack scenarios, we can protect ourselves by taking some precautions:
• Do not use an untrusted network, wired or wireless, to update software.
• Do not respond to pop-ups that mysteriously appear on your screen urging you to update your programs.
• Update software programs only when connected to your trusted network.
• Download software directly from the vendor’s website.
• If you are using an auto-update feature, disable it when you are travelling.

So far we used to be wary of phishing; now we need to be even more worried and careful about patching.

Avinash Kadam is an Information
Security Trainer, Writer and Consultant. He can be contacted via e-mail firstname.lastname@example.org
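The column's core rule — verify an update against something you already trust before installing it — can be sketched as follows. This checksum version is a deliberate simplification: real update channels use public-key signatures, and as Flame showed, even the signing key can be compromised.

```python
import hashlib

def verify_update(update_bytes: bytes, trusted_sha256: str) -> bool:
    """Install only if the download matches a digest obtained from a
    trusted channel (e.g. the vendor's own site)."""
    return hashlib.sha256(update_bytes).hexdigest() == trusted_sha256

genuine = b"patch v1.2 contents"
trusted = hashlib.sha256(genuine).hexdigest()       # published by the vendor

assert verify_update(genuine, trusted)              # genuine patch accepted
assert not verify_update(b"evil payload", trusted)  # spoofed update rejected
```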
The CIO wears two hats: Isn’t IT enough?
Like Rick Roy, a growing number of CIOs now run IT plus another major business function. Here’s why CUNA Mutual Group pointed Roy at procurement, real estate, physical security, and facilities
Blogs: Chris Murphy blogs at InformationWeek. Check out his blogs at:
Rick Roy, CIO of CUNA Mutual Group, sees several similarities between running the IT and procurement departments, both of which he’s in charge of for the financial services company. For starters, your team’s help isn’t universally welcomed. If a department’s leaders are happily buying, say, temp services from one vendor, they’re not necessarily excited to hear that they need to start buying through a centralized group — even if they understand that having one contract probably means leverage to get a better price. That’s a lot like the conversations around shadow IT. “We [in procurement] do run the risk of showing up on someone’s doorstep with a message of ‘we’re from corporate, and we’re here to help,’” Roy says. “But that’s not so different from what we do in IT.”

Roy has been leading procurement, real estate, physical security, and facilities for more than a year, while retaining his CIO duties. It started when the since-retired CFO, Jerry Pavelich, wanted a tighter grip on purchasing and approached Roy about taking over procurement. “He caught me a little bit by surprise,” Roy admits.

Putting IT and procurement under one exec can make sense for a number of reasons, Roy says. One is the reality that if there’s a procurement project moving forward, there’s probably an IT component and IT staff involvement early in the process. That’s particularly true at a financial services company like CUNA Mutual, which provides insurance and investments to credit unions. The IT leadership already is involved in negotiations for a lot of contracts for equipment, software, and services, so it has a level of expertise. That includes knowing the right questions
to ask about data control, security, and privacy, as well as questions around liability and service level agreements related to uptime and other performance factors. Thanks to the new arrangement, Roy thinks CUNA has gained better teamwork in a few areas. For example, the IT security and physical security teams have started working more closely, in particular as they think about wireless network security and places in the building where different people — employees, contractors, visitors — might try to access wireless networks. And yes, of course this kind of teamwork can and does happen without putting one person in charge of two staffs or making any organizational changes, and CUNA Mutual has a strong culture of collaboration. But Roy’s a realist: “We all know how it works: When things are hard-wired in an org structure, you are aligned.”
Isn’t Being CIO Enough?
Roy’s in the minority with regard to procurement — just 12 percent of the CIOs in our InformationWeek 500 last year also have responsibility for procurement (up from 8 percent in 2010). Much more common is for CIOs to have formal responsibility for telecom (64 percent), business process management (32 percent), or innovation (30 percent). Nine percent are in charge of global business services. Anecdotally, we’ve seen a few CIOs recently add a formal “digital” role, usually bringing together the growing opportunities emerging in mobile, e-commerce, and customer analytics.

Among the high-profile IT leaders with procurement responsibilities is John Hinshaw, HP’s VP of global technology and business processes. Hinshaw, a former Boeing CIO, was brought in by CEO Meg Whitman
last year and given a broad portfolio, including procurement, shared services, real estate, and sales operations, as well as IT. Procter & Gamble CIO Filippo Passerini is also President of P&G’s Global Business Services unit, which includes more than 170 different services that are used by business units across the company, from human resources to facilities management.

Such two-hat CIOs are often longtime execs, like Roy and Passerini, who understand the company’s business operations and goals far beyond technology. This situation also signifies a deep IT leadership bench — if the CIO can’t or won’t let go of some of the daily IT operations, neither job will be done well. But does it also signal that the company takes IT operations for granted? Is “only” running IT not seen as a big enough job?

Roy doesn’t think a dual role waters down the importance of IT. Most CIOs take a general manager’s view of the whole company while running IT and working as part of the executive team. However, “there aren’t a lot of people who are paid to work across the entire organization,” Roy says. Shared services like procurement make sense for a CIO to run, because IT itself is one of the biggest shared services, and so the CIO is used to working with every part of a company. However, CUNA Mutual does have a team dedicated to IT strategy and architecture — and that’s essential, Roy says, so that he knows people are focused on the long-term view for technology. “Anytime I start to think IT is on autopilot, I start to get paranoid,” Roy says. “What are we missing?”

Roy highlights three examples where CUNA Mutual is getting benefit from IT working more closely with another shared service.

Physical security: We already mentioned the cooperation related to assessing and securing wireless networks. The IT and physical security teams have not yet linked employee badges that grant building access to network access, which is controlled using RSA tokens. But if
CUNA Mutual moves network access to a software-based network access system instead of tokens, a link between physical and network access controls might be worth considering.

Facilities: The company is now doing more comprehensive disaster recovery testing, Roy says, instead of testing in silos. Individual business units and facilities management had plans for where people should go if a building couldn’t be accessed, but IT didn’t always have plans to get those people access to a network and applications. This spring, CUNA Mutual did a mock disaster drill, and Roy says, “It was very successful, because we found issues.”

Procurement: One-third of CUNA Mutual’s controllable expenses go through this group — not only technology, but also real estate and facilities, marketing costs, employee benefits, travel, and temporary staffing. Looking at those expenses companywide uncovered savings — like one temporary help vendor that had nine different contracts using different rates for essentially the same type of staffing. CUNA Mutual bundled those to negotiate a lower rate.

Placing the CIO in charge of procurement does raise the potential problem of checks and balances when IT is doing the buying. As Roy bluntly puts it: “We’ve had to be careful, because we don’t want the fox in the hen house.” Checks and balances include having the legal department separate from the procurement organization. There’s a clear policy on who must sign off on different types of contracts — often four to six different people. And while the vendor management office reports to Roy, those staffers aren’t a part of the IT organization. “They’re fully authorized to raise their hands and say ‘This deal doesn’t hunt,’” Roy says.
Two-hat CIOs are often longtime execs, who understand the company’s business operations and goals far beyond technology. This situation also signifies a deep IT leadership bench — if the CIO can’t or won’t let go of some of the daily IT operations, neither job will be done well
Chris Murphy is Editor of
InformationWeek. Write to Chris at email@example.com
Can IT be trusted with personal devices?
Mobile device management as a path to security is a fundamentally flawed strategy. You must manage the data
Blogs: Art Wittmann blogs at InformationWeek. Check out his blogs at:
Most IT teams weren’t prepared for the BYOD challenge, and they’re not handling it well. This assertion is borne out by our Mobile Security Survey, which shows that security education is still underfunded and underappreciated, and that there’s an ongoing mismatch between the mobile device management features IT deems important and what’s in end users’ best interests.

To illustrate just how pernicious the wrong BYOD policies can be, here’s a hypothetical: A worker decides to buy an iPad so that, among other things, he can record and store pictures and movies of important events. Perhaps he manages to catch his baby’s first steps or his daughter’s piano recital, or he uses the iPad to store hundreds of family vacation pictures. Being a proactive employee, he brings the iPad into work, to use for sales presentations and such.

The IT organization tells him that before he can put any company data on the device, even what’s freely available on the company website, it’ll need to install some software that will enforce passwords (No. 1 on our list of most critical MDM security functions). The app will also perform remote locking and wiping of the device, offer some malware protection, and deliver security updates (Nos. 2, 3, and 4 on the list). The software will require password changes every few months, enforce minimum standards for length and complexity, lock the device after a given time, and, if too many failed password attempts occur, wipe the device (the top 5 password policies desired by IT pros).

Now, suppose one of the employee’s young children plays with the iPad, exceeds the number of failed password attempts, and the device is wiped. No baby’s first steps, no piano recital, no pictures from the family vacation. While technology can play a part in protecting the company while letting
employees use their own devices for business purposes, most IT teams are creating an insane set of rules for no apparent reason. That same employee could have e-mailed the sales presentation, which probably isn’t encrypted or password protected, to his Gmail account, uploaded some product shots to Dropbox, and used the device for work without IT’s involvement. And there’s often incentive for employees to do just that, because IT’s policies are onerous at best, and at worst downright counter to the employee’s interests.

If software can’t tell the difference between company data and employee data, it has no place on a personally owned device. Further, MDM as a path to security is a fundamentally flawed strategy. You must manage the data. The data is what the company owns and values. But of course, data management involves user training and classification. For too many IT teams, it’s easier to use a blunt instrument.

There’s a bit of good news in our survey: While only 32 percent of respondents have had a security awareness program in place for two or more years, 18 percent have recently added one, and an additional 25 percent say they’ll get one in place in the next 12 months. Plenty of cloud-based backup services can add a layer of protection for both company and personal data. No doubt users represent a security risk, but they’re also the first line of defense — if you take the time to clue them in on best practices. Explain how securing corporate data can help protect them as well; if their smartphone is stolen, they may want to nuke it. But don’t put device-wipe time bombs on their systems unless you want to explain why all their personal data is gone and there’s nothing they can do to get it back.

Art Wittmann is Director of
InformationWeek Analytics, a portfolio of decision-support tools and analyst reports. You can write to him at firstname.lastname@example.org.
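"Manage the data, not the device" can be made concrete with a corporate container: company files are kept apart from personal ones, so revoking access wipes only what the company owns. A minimal sketch — the class and method names are hypothetical, and a real container would also be encrypted:

```python
class Device:
    """Toy model of a personally owned device with a corporate container."""

    def __init__(self):
        self.personal = {}   # the employee's own files: never touched by IT
        self.corporate = {}  # company data: the only thing IT manages

    def store(self, name, data, corporate=False):
        (self.corporate if corporate else self.personal)[name] = data

    def revoke_corporate_access(self):
        """Selective wipe: clears company data, leaves personal data alone."""
        self.corporate.clear()

ipad = Device()
ipad.store("baby_first_steps.mov", b"...")
ipad.store("sales_deck.pdf", b"...", corporate=True)
ipad.revoke_corporate_access()   # the recital and vacation photos survive
```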
Down to Business
IT as profit maker
In defining your company’s core operations, is IT one of them? Many non-IT companies have patented, trademarked, or copyrighted at least one tech innovation
Blogs: Rob Preston blogs at InformationWeek. Check out his blogs at:
The practice of vertical integration has been on the outs for a decade or two, as companies shed or outsourced ancillary operations in order to focus on their “core” expertise. That thinking has extended to IT, especially in the era of cloud computing: Why tie up internal resources building and managing data centers, infrastructure, and applications, the argument goes, when third parties can provide that technology more efficiently and effectively?

The decision comes down to your company’s definition of core business. Is Ford a car and truck manufacturer, or is it the operator of one of the world’s most sophisticated supply chains, requiring a wide range of IT competencies from start to finish? Is Amazon.com an online retailer, or is it a for-profit technology company whose world-class infrastructure underpins a variety of external as well as internal businesses?

Scores of companies are not only innovative users but also committed builders and sellers of IT systems, software, and services. NYSE Technologies, for instance, is pitching a range of transaction, infrastructure, and data services and software, mostly to other financial companies. Union Pacific, the largest railroad company in the U.S., now generates USD 35 million to USD 40 million in annual revenue by selling, leasing, and licensing various technologies it owns and/or develops. For example, it was going to buy communications radios for its locomotives from a specialty manufacturer, but the engineers who work in UP’s technology R&D lab said they could do the custom electronics for less. By developing the 8,000 radios it needed in-house and farming out their fabrication to a contract manufacturer, UP not only saved USD 7 million to USD 8 million, says CIO Lynden Tennison, but the subsequent sale of about 5,000 of those radios to a couple of competitors generated enough money to more than
cover development costs. Such examples, while not the norm, aren’t the rare exception either. Consider that in 2011, 26 percent of InformationWeek 500 companies had patented, trademarked, or copyrighted at least one tech innovation.

If IT truly is intertwined with the business, then in-house IT expertise — whether it’s for sale or for competitive advantage — must be cultivated. It’s certainly wrongheaded to suggest that most IT work is a mere commodity best left to cloud vendors, outsourcers, consultants, hosting companies, dev shops, and other outsiders. The challenge is to find the middle ground — build best-in-class technical competencies and consider off-loading the true commodity work to others. But sometimes it makes financial sense to keep even the commodity stuff in-house.

Before your IT organization jumps into selling its technology, ask several basic questions:
• Have you run pilots to validate the business approach?
• Does that business have adequate development, sales, marketing, and capital support? (Or would you be better off partnering with a company that has this expertise?)
• Will the projected revenue make a material difference? Are the CEO and board committed to your creating and running, say, a USD 5 million-a-year business that distracts you from leading IT for your multibillion-dollar company?
• Is everyone on board that you’re not selling the company’s competitive advantage?

“In most cases, the value proposition of these services needs to be linked to the overall value provided by the company,” says Dave Bent, CIO of office supplies distributor United Stationers. “The combined value needs to be greater than the sum of the parts.”

Rob Preston is VP and Editor-in-Chief of InformationWeek. You can write to Rob at email@example.com.
Published on Jul 1, 2012