

From The Editor-in-Chief

“No. Not storage, please! It’s such a dull subject for a discussion.”

No Room for Legacy

Vendor or technology lock-in is no longer the bogeyman it used to be.

This was what a CIO told me when I brought up the CIO Focus Storage event a few months ago. His point was that since "one box equals another, where's the fight, brother?" Having recently returned from the multi-city event series, I can look him right back in the eye and gleefully say: storage does get people excited; it lends itself to passionate debate; and to some amount of humor too!

Twenty CIOs offered their perspectives on homogeneous and heterogeneous storage infrastructures across the events in Mumbai, Bangalore, New Delhi, Chennai, Hyderabad and Pune. While it was great to see that our panelists didn't pull any punches in putting forth their points of view, the level of audience participation was also heartening. How scary is vendor lock-in? How does one make the call to get rid of legacy and consolidate? Does a mix-and-match approach help lower costs while giving CIOs the flexibility to deal with diverse user requirements? Is TCO really lower with a homogeneous system? These were just some of the issues that found great resonance at the event.

Homogeneous solutions are winning the favour of CIOs trying to reduce complexity and cost.

While all the CIOs agreed that the choice of ecosystem was ultimately a function of business imperatives and organizational requirements, each city offered a unique outlook on the issue, ranging from support to outsourcing to simplicity to interoperability (see Page 50 for details). I must point out that the panelists weighed a tad more heavily on the side of homogeneous solutions (having a single or dominantly single-vendor architecture). I suspect that this is a reflection of how complex, expensive to run and difficult to manage some legacy infrastructures have become. So much so that vendor or technology 'lock-in' was a non-issue for these CIOs, who pointed to the consolidation taking place in the storage space.
And, wonder of wonders, these weren't guys from banks or software services firms or telecom companies — a bunch of them were veterans from organizations that made motorbikes or cigarettes or antibiotics. Many CIOs shared clever storage strategies at the CIO Focus Storage events. They also came up with smart lines. But I prize one statement above all: "Two best-of-breed systems do not necessarily equal a best-of-breed solution." That was Col. Arvind Saxena of Consilium Software. Do you agree with him? Write in and let me know.

Vijay Ramachandran Editor-in-Chief


A P R I L 1 5 , 2 0 0 8 | REAL CIO WORLD


Vol/3 | ISSUE/11


Content | April 15, 2008 | Vol/3 | Issue/11

42 | How to Back Up Without Crashing
As virtual machines proliferate, IT administrators have to be careful about where they park their backups so that they don't bog down network performance or send licensing costs skyrocketing. By Deni Connor

40 | Seven Storage Truths
You've got to be in the know about storage truths to make use of them. By Beth Schultz




28 | Flying in Formation
The storage at this airline company doesn't work harder — just smarter. By Joanne Cummings



22 | Finding Data in an Emergency
Kindred Healthcare CIO Rick Chapman faces a torrent of data, 400 terabytes strong and growing. Storage, he says, has become a hidden cost of healthcare. But he's employing a new cost-cutting weapon: virtual tape library technology. By Thomas Wailgum


36 I FibrE channEl’s FunEral march Experts predict that ultimately iSCSI over 10G Ethernet will dethrone Fibre Channel. Isn’t it time to start planning a migration? By Barbara darrow


30 | One Company, One Vision, One Truth
As Nationwide grew, its data became siloed and scattered, making it increasingly difficult for the company to get an accurate picture of its finances. Here's how it brought all that data into focus. By Thomas Wailgum




(cont.) Departments

Trendlines | 9
Data Recovery | When Hard Drives Go Bad
Quick Take | V. Narayanan on SRM
Voices | Who Wants a Chief Storage Officer?
Security | Plugging Flash Drive Security Gap
Technology | Make Way for Solid-State Flash Drives
Opinion Poll | Picture This
Disaster Recovery | DIY Hard Drive Rescue
Staff Management | Storage Shuffling IT Jobs
Security | Hard Drive Encryption's Achilles Heel
Research | IDC: It's More Than We Thought
Alternate Views | Unified or Diverse

Essential Technology | 56
Standards | Make Interoperability the Goal. Column by Mario Apicella
IT Management | New Backup Tools for You. Feature by Beth Schultz

From the Editor-in-Chief | 2
No Room for Legacy. By Vijay Ramachandran

Now Online | Ajay Kaul, CEO, Domino's Pizza, talks about how IT supports his brand.


For more opinions, features, analyses and updates, log on to our companion website and discover content designed to help you and your organization deploy IT strategically. Go to


Executive Expectations
View From the Top | 44
Ajay Kaul, CEO, Domino's Pizza, says that in a company where a minute too late is money lost, IT is the base on which speed is built. Interview by Kanika Goswami


IT Management
Gifts of Open Source | 15
From piles of open source file systems, file servers, storage networking software, and benchmarking tools, here are the best of the lot. Column by Mario Apicella & Paul Venezia



Advisory Board

Abnash Singh, CIO, Mphasis
Alok Kumar, Global Head-Internal IT, Tata Consultancy Services
Anwer Bagdadi, Senior VP & CTO, CFC International India Services
Arun Gupta, Customer Care Associate & CTO, Shopper's Stop
Arvind Tawde, VP & CIO, Mahindra & Mahindra
Alaganandan Balaraman, Vice President, Britannia Industries
Ashish K. Chauhan, President & CIO — IT Applications, Reliance Industries
C.N. Ram, Head-IT, HDFC Bank
Chinar S. Deshpande, CEO, Creative IT India
Dr. Jai Menon, Group CIO, Bharti Enterprise & Director (Customer Service & IT), Bharti Airtel
Manish Choksi, Chief-Corporate Strategy & CIO, Asian Paints
M.D. Agrawal, Dy. GM (IS), Bharat Petroleum Corporation
Rajeev Shirodkar, VP-IT, Raymond
Rajesh Uppal, Chief GM IT & Distribution, Maruti Udyog
Prof. R.T. Krishnan, Jamuna Raghavan Chair Professor of Entrepreneurship, IIM-Bangalore
S. Gopalakrishnan, CEO & Managing Director, Infosys Technologies
Prof. S. Sadagopan, Director, IIIT-Bangalore
S.R. Balasubramnian, Exec. VP (IT & Corp. Development), Godfrey Phillips
Satish Das, CSO, Cognizant Technology Solutions
Siva Rama Krishnan, Executive Director, PricewaterhouseCoopers
Dr. Sridhar Mitta, MD & CTO, e4e
S.S. Mathur, GM-IT, Centre for Railway Information Systems
Sunil Mehta, Sr. VP & Area Systems Director (Central Asia), JWT
V.V.R. Babu, Group CIO, ITC

Management
Publisher: Bringi Dev
President & CEO: Louis D'Mello

Editorial
Editor-in-Chief: Vijay Ramachandran
Resident Editor: Rahul Neel Mani
Assistant Editors: Balaji Narasimhan, Gunjan Trivedi
Special Correspondent: Kanika Goswami
Chief Copy Editor: Sunil Shah
Copy Editor: Shardha Subramanian

Design & Production
Creative Director: Jayan K Narayanan
Senior Designers: Binesh Sreedharan, Vikas Kapoor, Anil V.K, Jinan K. Vijayan, Jithesh C.C, Unnikrishnan A.V, Suresh Nair
Designers: MM Shanith, Anil T, PC Anoop, Prasanth T.R, Vinoj K.N, Siju P
Multimedia Designers: Girish A.V, Sani Mani
Photography: Srivatsa Shandilya
Production: T.K. Karunakaran, T.K. Jayadeep

Marketing and Sales
VP Sales (Print): Naveen Chand Singh
VP Sales (Events): Sudhir Kamath
Brand Manager: Alok Anand, Sukanya Saikia
Marketing: Siddharth Singh, Priyanka Patrao, Disha Gaur
Bangalore: Mahantesh Godi, Santosh Malleswara, Ashish Kumar, Kumarjeet Bhattacharjee, B.N Raghavendra
Delhi: Pranav Saran, Saurabh Jain, Rajesh Kandari, Gagandeep Kaiser
Mumbai: Parul Singh, Rishi Kapoor, Pradeep Nair, Hafeez Shaikh
Japan: Tomoko Fujikawa
USA: Larry Arthur; Jo Ben-Atar

Events
VP: Rupesh Sreedharan
Managers: Ajay Adhikari, Chetan Acharya, Pooja Chhabra

Advertiser Index
ADC Krone
45, 46 & 47
IFC & 7
This index is provided as an additional service. The publisher does not assume any liabilities for errors or omissions.

Printed and Published by N Bringi Dev on behalf of IDG Media Private Limited, 10th Floor, Vayudooth Chambers, 15–16, Mahatma Gandhi Road, Bangalore 560 001, India. Editor: N. Bringi Dev. Printed at Rajhans Enterprises, No. 134, 4th Main Road, Industrial Town, Rajajinagar, Bangalore 560 044, India.

All rights reserved. No part of this publication may be reproduced by any means without prior written permission from the publisher. Address requests for customized reprints to IDG Media Private Limited, 10th Floor, Vayudooth Chambers, 15–16, Mahatma Gandhi Road, Bangalore 560 001, India. IDG Media Private Limited is an IDG (International Data Group) company.

Corrigendum
In our Ones to Watch Special (March 15, 2008), Veneeth Purshottaman was inadvertently referred to as Head-Technology, Shoppers Stop. His designation should have read Head-Technology, HyperCITY Retail. Also, the name of George Fanthome, VP-Chief Solutions Engagement (Mobility), Bharti Airtel, was misspelt as George Fantome. The errors are regretted.








When Hard Drives Go Bad

Data Recovery | Once a hard drive fails or has been damaged, attempts to fix the device without the proper expertise will likely inflict more damage and put stored information in greater jeopardy, storage experts say. Kroll Ontrack released a list of common hard drive revival gaffes that the data recovery vendor warns against. The list of no-nos includes using a hairdryer to 'dry out' a wet hard drive, cracking open a drive to 'swap out' the parts thought to be bad, and banging the device against a desk.

Although stubbornness and an inquisitive human nature share some of the blame, the effort to save money is the biggest culprit leading untrained individuals to try their hand at data recovery, said Greg Schulz, an analyst at the StorageIO Group. "In the race to save cost, people may forgo a data-recovery service and instead spend time to rebuild and restore a drive that actually ends up costing more in the long run," said Schulz.

Kroll claimed that more than 30 percent of non-recoverable disk drives are the result of human error rather than hard drive malfunctions. The data recovery company said there are usually two types of people who attempt to fix non-functioning drives: novices with no disk drive or storage device knowledge, and highly trained individuals who are "very motivated to fix the problem," said Jim Reinert, vice president of data recovery and software products at Kroll Ontrack.

Reinert said hard drive owners underestimate how complex a spinning hard drive is and wrongly believe its parts can be easily interchanged with off-the-shelf components. He said these higher-capacity storage devices feature new levels of drive-specific customization and factory fine-tuning that are not easily duplicated.

The thing almost nobody does is back up critical data before any work on a suspect computer is started. That is the most common and detrimental mistake all users make.

—By Brian Fonseca

Illustration by Anil T

Quick Take

V. Narayanan on Storage Resource Management

Data begets more data. Like the many-headed Hydra, today's overwhelming data problem cannot be attacked piecemeal, for that will only create new monsters. Sunil Shah asked V. Narayanan, AGM-IT for Tube Investments, if he thought SRM was the key, and this is what he had to say:


Do you think enterprises will invest in storage resource management?
I think it depends on the mindset of the CEOs of these organizations. That said, I also think it is the job of the CIO to impress on the management what is important to the business.

Do you think storage resource management is important?
Very much. With data growing every single day, proper storage and the ability to retrieve data in the shortest possible period have become of utmost importance. Everything depends on how fast you are able to retrieve data for customers and stakeholders. And by everything, I mean business. IT is an enabler of the business.

Do you see any other benefits to SRM beyond the quick retrieval of data?
Safeguarding data would be one of the benefits. By this, I mean the ability to safely retrieve data after a crash. Business continuity is very important. Data cannot be re-created. We, for instance, are currently planning our DR site.


How would you sell SRM to your CEO?
I would show him the pros and cons of SRM and also point to the direction in which the world is moving. I would tell him where our contemporaries and our competition are moving to. I would talk about what we could offer that is cutting-edge vis-à-vis our competitors. Nowadays, information is money. I would ask him to look at the money aspect. I would explain that the faster you can pull out information, the better. For example, every morning, I get an SMS update of what has happened in my company the previous day. This includes production and dispatch data. It is also shot out to other important decision makers. This helps us keep in touch with the business.


Voices

Who Wants a Chief Storage Officer?

Data is exploding at a rate never seen before, and IDC says that CIOs will be responsible for 70 percent of that data. Now, with ever-increasing e-discovery requirements hanging over the heads of CIOs, is it time to give storage its own specialist function? Is it time for the chief storage officer? Sunil Shah asked your peers what they thought.

"Yes, a storage chief is a must-have today, mainly because of the amount of hacking going on. The security of data is very important. E-discovery is also driving this need."

J. Selvaganapathy General Manager, TNSC Bank

“Yes, storage is now a specialist task. It’s one thing to store and another to retrieve. The most important thing is the speed of retrieval. Also, remember the CIO and the CSO were not specialist functions at one time.” Raghuram D. CTO, Advanced Technology Research, Ramco

"A chief storage officer is not needed. At the moment everything we need is already in-built. The data center is already protected. To have someone exclusively for storage is not required."
G. Prakash, Deputy Manager Systems, Surana





Plugging Flash Drive Security Gap

Security | Workers in the state of Washington's Division of Child Support are getting state-owned USB flash drives as part of a move to eliminate the use of unsanctioned thumb drives. External flash drives used by field workers hold the names, dates of birth and Social Security numbers of children served by the agency. They may also hold client tax documents, employer records, criminal histories and passport data.

The state began rolling out 200 SanDisk Cruzer drives late last year after recalling suspect devices used by workers in the agency's 10 field offices. Most of those had been purchased independently by employees, causing myriad problems for the agency, said Brian Main, the division's data security officer. "We do periodic risk analysis of our systems, and one of the things that came up is the use of thumb drives. They were everywhere," said Main. "We had a hard time telling which were privately owned and which were owned by the state."

The Cruzer Enterprise drives provide 256-bit AES encryption and are password-protected, Main noted. The agency also plans to use SanDisk's Central Management and Control software in its Olympia headquarters. The Web-based management software can centrally monitor and configure the miniature storage devices and prevent unauthorized access to them.

Larry Ponemon, chairman of Ponemon Institute LLC, a research firm, said that most organizations are too enamored of the convenience, portability and low cost of USB flash drives to consider security issues. "I think a lot of organizations are asleep at the switch. They don't see this as a huge problem. It obviously has the potential to be the mother of all data-protection issues," Ponemon said.

Main said the agency first looked at Verbatim America LLC's thumb drives but ultimately chose the SanDisk technology because of its support for Microsoft's Windows Vista operating system.
Workers in the agency's training operations are getting 4GB devices to store large presentations and screenshots, while enforcement personnel will get 1GB drives, Main said.

—By Brian Fonseca


Make Way for Solid-State Flash Drives

Technology | Start-up Pliant Technology is building solid-state flash drives for enterprises, joining an emerging market, dubbed 'tier zero' by EMC, that is designed for high-performance applications such as data mining and online transaction processing. The success of flash-based thumb drives has led to innovations that are lowering the cost of flash-based storage, making it suitable for enterprise use, Pliant officials say. Traditional hard drives won't be disappearing any time soon, but "in the next few years I think we'll make significant inroads into the very high-performance end" of the storage market, says Pliant CEO Amyl Ahola, the former CEO of TeraStor and vice president at Seagate and Control Data.

Tier-zero (or enterprise flash, as Pliant calls it) is in the early adoption stage and is the highest-performance, highest-availability, and highest-cost type of storage on the market today, says Deni Connor, head of analyst firm Storage Strategies Now. Customer-facing applications, financial services or any transaction-intensive environment could benefit, she says. "It's going to be expensive, but for a company that needs to be able to respond quickly to customer requests, or find data quickly it's worth it," Connor says.

Pliant will not sell its technology directly to enterprises. Rather, it will license its Enterprise Flash Drive devices through OEM agreements. Pliant's founders and management team have plenty of experience in the hard drive industry. Founders Mike Chenery, Doug Prins and Aaron Olbrich all have experience at Fujitsu, while Olbrich, Pliant's CTO, also worked for IBM. Jim McCoy, Pliant's chairman, co-founded both Maxtor and Quantum.

While the cost-per-gigabyte of hard drives has improved dramatically over the years, McCoy argues that hard drive performance itself is flatlining. At the same time, demands for performance in enterprises are increasing, so IT executives use three or four times more hard drive space than they need for capacity in order to get a higher rate of input/output operations per second, he says. "Now when they overprovision at three or four times, they're paying not only for extra hard drives but for power consumption on those hard drives," McCoy notes.

Pliant is combining commodity chips with proprietary controllers that will let enterprises obtain high rates of input/output operations per second, bandwidth and reliability, while avoiding overprovisioning of storage, according to Ahola. Enterprises taking advantage of this storage would likely use it in combination with hard drives, which cost less and would be used for data that's not as frequently accessed, Pliant executives say. Pliant did not release pricing details for its upcoming product.

—By Jon Brodkin
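McCoy's overprovisioning argument is easy to make concrete with a little arithmetic. The sketch below is illustrative only: the per-drive figures (a 300GB drive sustaining roughly 180 random IOPS) are our own assumptions, not numbers from Pliant or the article.

```python
import math

# Illustrative assumptions, not vendor figures: a single 15K-rpm
# enterprise hard drive holds ~300GB and sustains ~180 random IOPS.
DRIVE_CAPACITY_GB = 300
DRIVE_IOPS = 180

def drives_needed(capacity_gb, iops_target):
    """Return the spindle counts needed to meet the capacity target
    and the IOPS target, respectively."""
    for_capacity = math.ceil(capacity_gb / DRIVE_CAPACITY_GB)
    for_iops = math.ceil(iops_target / DRIVE_IOPS)
    return for_capacity, for_iops

# A workload that fits on 10 drives by capacity alone but demands
# 7,200 random IOPS: the spindle count is dictated by IOPS, not capacity.
for_capacity, for_iops = drives_needed(3000, 7200)
print(for_capacity, for_iops)  # → 10 40: a 4x overprovision, as McCoy describes
```

Since each extra spindle also draws power, that 4x factor compounds into both acquisition and operating cost, which is exactly the gap the flash vendors aim to close.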


Picture This

[Infographic by PC Anoop]

437 billion images were captured by digital cameras in 2006, according to IDC. Some of this content incurs corporate responsibility – for example, images stored on work PCs. … of all digital information we produce and store is created by consumers.

Opinion Poll | How important is storage and information management to your company's future? Source: CIO Research


Do-It-Yourself Hard Drive Rescue

Disaster Recovery | Data rescue specialist Retrodata has announced what it claims is the first device for recovering damaged hard disk platters that can be used successfully by non-experts. Called the System P.EX (for platter extraction system), the 75-kilogram device uses laser-guided positioning to help it accurately extract platters from any 3.5-inch hard drive with minimal user intervention. What's unusual is that such devices normally require highly skilled operators, whereas the System P.EX can be used by a relative novice at a data recovery company.

According to Retrodata, the benefit for corporates is that it will allow smaller data recovery companies to compete against the often expensive services offered by larger companies, which could help drive down prices. The UK-based company won't release photographs of the product until it has been fully patented, but did say that it would work on any drive with up to five platters, possibly more. It would also accommodate drives with internal shock-absorption damping of a type that might physically defeat rival systems. "Only the largest of data recovery companies have tools available that even allow this process to take place," said Retrodata's Duncan Clarke, who also invented the machine.

The System P.EX is slated for release next month, at an approximate cost of Rs 2.78 lakh per unit. This includes a 10-year warranty, excluding the cost of occasionally replacing precision components within the machine.

Asked whether hard disk recovery was really as critical as it once was, Clarke responded that the "age of ubiquitous backup" was a myth. "Let me assure you that even multinational corporates are capable of either forgetting to back up, or their backups are corrupt. There would be no such thing as 'data recovery' if everyone backed up," he said. "Some companies willingly pay to have critical data recovered; factor in emergency turnaround, and this figure can be doubled or trebled."

—By John E. Dunn


Storage Shuffling IT Jobs

Staff Management | The growing flood of data that enterprises create and consume is doing more than giving rise to new storage technologies. It's also changing who is responsible for storage within IT departments. Demand for storage capacity has grown by 60 percent per year and shows no signs of slowing down, according to research company IDC. New disclosure laws, which require more data to be preserved, are also making storage management a bigger job. "With the sheer complexity of some companies' information infrastructures, you wonder whether one person can really get their hands around it all," says Pund-IT analyst Charles King. The job has grown beyond taking care of storage arrays, he says. "It's really requiring storage administrators and executives, including CIOs, to think of it in a more holistic way."

The turning point for some IT departments seems to be the shift to centralized storage. The University of Pittsburgh set up its first SAN and started moving its data out of servers and into its network operations center, says Jinx Walton, director of IT. Until then, every time a group in the IT department set out to meet a need on campus, the university's IT development team would assess how much storage was needed for the project and purchase it. The individual group would then manage that storage. "Whoever was responsible for the project was responsible for the storage," Walton says. But that was inefficient. Buying storage for individual servers and investing in additional disks when the servers filled up was expensive and a distraction, she says. After centralizing most servers in the network operations center (NOC), the university started building a SAN there that was shared by all the project managers. Purchasing and management of storage shifted from the development realm to the NOC.

Pund-IT's King thinks many more companies will face these kinds of challenges as storage grows rapidly in importance as well as in terabytes. "People are still trying to get their heads around how to do this," King says. The falling price of storage equipment only makes things worse, he adds. "Anybody can afford enough data storage to get themselves in trouble."

—By Stephen Lawson


Illustration by MM Shanith



Hard Drive Encryption's Achilles Heel

Security | If you think that encrypting your laptop's hard drive will keep your data safe from prying eyes, you may want to think again, according to researchers at Princeton University. They have discovered a way to steal the hard drive encryption key used by products such as Windows Vista's BitLocker or Apple's FileVault. With that key, hackers could get access to all of the data stored on an encrypted hard drive.

That's because of a physical property of the computer's memory chips. Data in these DRAM (dynamic RAM) chips disappears when the computer is turned off, but it turns out that this doesn't happen right away, according to Alex Halderman, a Princeton graduate student who worked on the paper. In fact, it can take minutes before that data disappears, giving hackers a way to sniff out encryption keys. For the attack to work, the computer would have to first be running or in standby mode. It wouldn't work against a computer that had been shut off for a few minutes, because the data in DRAM would have disappeared by then.

The attacker simply turns the computer off for a second or two and then reboots the system from a portable hard disk, which includes software that can examine the contents of the memory chips. This gives an attacker a way around the operating system protection that keeps the encryption keys hidden in memory. "This enables a whole new class of attacks against security products like disk encryption systems that have depended on the operating system to protect their private keys," Halderman said. "An attacker could steal someone's laptop where they were using disk encryption and reboot the machine ... and then capture what was in memory before the power was cut."

Some computers wipe the memory when they boot up, but even these systems can be vulnerable; cooling the memory chips can preserve their contents for "10 minutes or more," Halderman said.

Hardware-based encryption would probably reduce the risk, Halderman said, but he agreed that "it's a difficult problem." Hard-drive makers Seagate and Hitachi both offer hardware-based disk encryption options with their hard drives, although these options come with a premium price tag.

—By Robert McMillan
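After imaging memory, an attacker still has to locate the key somewhere in the dump. Real tools search for structure such as AES key schedules; the sketch below uses a much cruder, hypothetical stand-in, flagging unusually high-entropy regions of a dump as candidate key material. The window size, threshold and toy dump here are all our own assumptions, not details from the Princeton work.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte of the buffer (0.0 to 8.0)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def candidate_key_regions(dump: bytes, window=1024, threshold=7.5):
    """Return offsets of windows whose byte entropy suggests key material.

    Code, text and zeroed pages score well below 8 bits/byte, while
    cryptographic keys and key schedules look uniformly random.
    """
    return [off for off in range(0, len(dump) - window + 1, window)
            if shannon_entropy(dump[off:off + window]) >= threshold]

# Toy 'memory dump': zeroed pages with 1KB of random bytes planted at 2048.
dump = bytearray(8192)
dump[2048:3072] = os.urandom(1024)
print(candidate_key_regions(bytes(dump)))  # → [2048]
```

Production recovery tools go further: because an AES key schedule is redundant, they can match it even when bit decay has corrupted part of the memory image.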

IDC: It's More Than We Thought

Research | Digital information is being created at a faster pace than previously thought, and for the first time the amount of digital information created each year has exceeded the world's available storage space, according to a new IDC report. The amount of information created, captured and replicated in 2007 was 281 exabytes (or 281 billion gigabytes), 10 percent more than IDC previously believed, and more than the 264 exabytes of available storage on hard drives, tapes, CDs, DVDs and memory. IDC revised its estimate upward after realizing it had underestimated shipments of cameras and digital TVs, as well as the amount of information replication. The 2007 total is well above that of 2006, when 161 exabytes of digital information was created or replicated.

"We're not actually running out of storage space," IDC notes, "because a lot of digital information doesn't need to be stored, such as radio and TV broadcasts that consumers listen to and watch but don't record, voice call packets that aren't needed when a call is over, and surveillance video that isn't saved." But the gap between available storage and digital information will only grow, making it that much harder for vendors and enterprises to efficiently store the information that is needed. In 2011, there will be nearly 1,800 exabytes of information created, twice the amount of available storage, IDC predicts.

EMC's president of content management, Mark Lewis, doesn't think we'll ever hit the point where the world's available storage is exceeded by the amount of information we need to store. "With the price points of storage continuing to decline, I don't think we're ever going to create some kind of storage shortage," he says.

Here's a quick look from IDC at how a few businesses and industries contribute to growing data volumes:
—Wal-Mart refreshes its customer databases hourly, adding a billion new rows of data each hour to a data warehouse that already holds 600 terabytes.
—The oil and gas industry is developing a 'digital oilfield' to monitor exploration activity. Chevron's system accumulates 2 terabytes of new data each day.
—YouTube's 100 million users create nearly as much digital information as all medical imaging operations.

Illustration by Unnikrishnan A.V

—By Jon Brodkin
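Taking IDC's figures at face value, the growth rates they imply are easy to check with a back-of-the-envelope calculation; all the numbers below are the ones quoted in the report.

```python
# Exabytes of digital information created per year, as quoted from IDC.
created = {2006: 161, 2007: 281, 2011: 1800}  # 2011 is a forecast
available_2007 = 264  # exabytes of worldwide storage available in 2007

# The 2007 shortfall between information created and storage available:
gap = created[2007] - available_2007  # 17 exabytes

# Year-over-year growth from 2006 to 2007:
yoy = created[2007] / created[2006] - 1  # roughly 75 percent

# Compound annual growth rate implied by the 2011 forecast:
cagr = (created[2011] / created[2007]) ** (1 / 4) - 1  # roughly 59% a year

print(gap, round(yoy, 2), round(cagr, 2))  # → 17 0.75 0.59
```

In other words, information creation would need to keep compounding at nearly 60 percent a year to reach IDC's 2011 figure, which is why the gap with available storage widens even as storage itself gets cheaper.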


Alternate Views

By Balaji Narasimhan

Unified or Diverse? NUS vs. a SAN-and-NAS Combination

“I don’t think that CIOs should put all their eggs in one basket. They should wait until NUS stabilizes.” K.T. Rajan Director- Ops, IS & Projects, Allergan India


Photos by Srivatsa Shandilya

Storage is like insurance against failure. I would therefore prefer having a system that is based on SAN and NAS in lieu of NUS, because two separate systems can provide me and my organization with a more rugged architecture. I feel that NUS has yet to gain maturity. Right now, I would view it as a single point of failure, and therefore I would seek to avoid it. With a mix of SAN and NAS, I get better fallback. This is important for me because I am responsible for business continuity.

I don't think that CIOs should put all their eggs in one basket. They should wait until NUS stabilizes. Right now, it is still a recent technology, and it is still in the process of evolution. Of course, CIOs still have to choose between SAN and NAS, and the deployment mix.

If you want a real-life comparison, consider a petrol car and a diesel car. If you want a car for yourself to drive, then you would choose a petrol car because it may cost less upfront. However, if you were buying a car for commercial purposes, then you may want a diesel car because you get better mileage. Similarly, CIOs should judiciously adopt SAN or NAS as per their requirements.



“Vendors are doing a lot of research in NUS and releasing products only after testing them. So, I don’t think that CIOs have much to fear.” Pankaj Shah DGM – Systems, Adani Wilmar

Sure, it is risky to choose NUS because it is still new. But, at one level, I think that this decision depends upon a CIO's ability to take risks. I think that it is good to take risks for the betterment of the business. But CIOs should ensure that the risks they take with NUS are calculated ones.

Let me give you an example. In late 2003, we were, I think, the fifth or sixth company in India to choose the Itanium. Many people asked me why I was going in for something new, but I had faith in the vendor, so I went ahead. Even with NUS, this is bound to be true. A lot of vendors are doing a lot of research in NUS, and they are releasing products only after testing them. So, I don't think that CIOs have much to fear.

Ultimately, CIOs should be able to trust their vendors. The best way to get there is to select a vendor that the market trusts. Some vendors are known for the amount of money they spend on research and development, while others are known more for their abilities in marketing. CIOs should judiciously choose vendors with a strong focus on leadership in technology. This is insofar as the vendor angle is concerned. Additionally, I think that NUS, if properly implemented, may provide CIOs with better control over storage. They might be able to manage allocation and provisioning with greater efficiency.

Vol/3 | ISSUE/11


Mario Apicella & Paul Venezia  

I.T. Management

Gifts of Open Source
From piles of open source file systems, file servers, storage networking software, and benchmarking tools, here are the best of the lot.


Illustration by ANIL T

Combining 'Open Source' and 'storage' in the same sentence used to trigger a sardonic grin, but no longer. The availability of free and open software is as true today for storage as it is for operating systems and applications. The future of Open Source storage software looks even brighter, considering recent developments such as Sun's donation of OpenSolaris, along with a wealth of storage technologies, and the Aperi project, a heavyweight-backed effort to create an Open Source suite of storage management applications. Spearheaded by IBM, Aperi also has the support of other key storage vendors, including Brocade, Cisco, Computer Associates, Emulex, Fujitsu, LSI Logic, NetApp, Novell, and YottaYotta. Why would these vendors share their expensive software development efforts with an Open Source community? Sure, there's no question they want something in return — more users, more control over technology developments, more control over standards. Nevertheless, gifts of Open Source are a welcome development in a fragmented market such as storage, where standards work well for hardware but don't seem to apply to software at all. Storage needs fewer technology schools and more real standards. Open Source and community development have the potential to bring that about. Open Source also has the potential to turn the storage marketplace upside down. Despite the plethora of vendors and storage solutions on the market today, you'll find little differentiation in hardware. In fact, many vendors share the same basic hardware and toss their management software
on top of it. Some vendors don't offer hardware at all, opting to use commodity servers as their physical platform. After all, an Ultra 320 SCSI drive isn't exactly rare. If we're at a point where the hardware is nearly immaterial to a solid storage platform, then what's to stop an Open Source storage solution from making a dent in this market? Nothing.

And the Award Goes to
That said, let's get on with our awards. Our first storage Bossie goes to ZFS, or Zettabyte File System, introduced with Solaris 10 and made available to the Open Source community in OpenSolaris. NFS is not quite dead or obsolete yet (in fact, it's still improving, with Version 4 in the making), but eventually NFS has to give way to ZFS.

ZFS has so many innovative features that it may take some time before storage admins can wrap their arms around it. Imagine never having to check your systems for data integrity because it's guaranteed by the file system. Add built-in logical volume management and RAID management, then stretch capacity to a galactic dimension: a one followed by 21 zeroes. In short, ZFS hides many dirty details of storage administration, and it can scale as far as it is physically possible to go. As we have noted, ZFS is the best file system we've ever seen, and it belongs to the Open Source community. Just don't expect to find it in your favorite Linux distro quite yet.

Bossie for Best Server
FreeNAS takes our Bossie for best Open Source NAS server. FreeNAS is far and away the most mature Open Source NAS platform, built on a FreeBSD base and backed by an active community. Providing CIFS, NFS, FTP, iSCSI, RSYNC, and AFP (Apple Filing Protocol) support, not to mention software RAID 0, 1, and 5, FreeNAS covers just about all the bases for storage, and wraps them in an attractive Web management interface. To get in this game, all you need is a server and some disk. Even better, FreeNAS can be easily installed on a Compact Flash drive or a USB key, so none of the core OS actually lives on the storage drives, making it far less vulnerable to hardware failure. Its performance depends on the hardware used, and it's not likely to beat an EqualLogic iSCSI SAN in a head-to-head, but for free it can't be beat.

We also have a Bossie for Open Source storage networking software: AoE Tools. The steady rise in iSCSI deployments will stoke the fires of debate with rival Fibre Channel for years to come.

The AoE Boon
While that discussion heats up, AoE (ATA over Ethernet) is quietly making progress. AoE brings the simplicity of non-routed Ethernet to a storage network, moving data between a host and a target device with little overhead and guaranteed delivery. The AoE protocol also has a mechanism to prevent conflicts among multiple initiators accessing the same data at the same time. AoE Tools, a package of AoE client applications included in most Linux distros, makes AoE free and easy to deploy. Well, you will need some hardware, available from vendors such as Coraid.

Our remaining storage Bossies go to benchmarking hall-of-famers Iometer and IOzone, which win for best block I/O tool and best file I/O tool, respectively. Countless thousands of admins — or even many more, according to SourceForge's download statistics — have used one tool or the other to put vendors' performance claims to the test, or to simulate the impact of a new application load on the storage system. Easy to use and available on multiple platforms, both applications are as indispensable to an IT shop as a measuring tape to a tailor. Finally, there are a few areas we are keeping our eyes on but aren't yet prepared to pick any winners. One of these is backup and recovery, where Amanda is well-known and there are too many up-and-comers to mention. Also on the watch list is NTFS Mount (Solaris)/UFS Reader (WinXP), a project that is still in beta but promises to break down the wall between Solaris and Windows, allowing either operating system to directly access the file system assets of the other. Now that's what community is all about. CIO
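The kind of measurement Iometer and IOzone perform can be illustrated in a few lines of Python: a toy sequential-write test, nowhere near as rigorous as either tool (no queue depths, access patterns or multi-worker loads), but enough to show the principle of timing real disk I/O rather than trusting a vendor's data sheet.

```python
import os
import tempfile
import time

def measure_write_throughput(path, total_mb=64, block_kb=256):
    """Sequentially write total_mb of data in block_kb chunks and return
    throughput in MB/s. A toy version of what block/file I/O benchmarks
    like Iometer and IOzone do far more rigorously."""
    block = os.urandom(block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to disk so we time real I/O
    elapsed = time.perf_counter() - start
    return total_mb / elapsed

with tempfile.TemporaryDirectory() as d:
    mbps = measure_write_throughput(os.path.join(d, "bench.dat"), total_mb=16)
    print(f"sequential write: {mbps:.1f} MB/s")
```

The `os.fsync` call is the important part: without it, the test would measure the speed of the operating system's write cache, not the storage underneath.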

Mario Apicella is a senior analyst at InfoWorld. Paul Venezia is senior contributing editor at the same magazine. Send feedback on this column to


Steve Duplessie  

Applied Insight

From Marginalized to Virtualized
Weighed down by data, IT isn't moving at business-speed and is being treated like the fat boy no one wants on the basketball squad. Virtualization can get you back on the team.


Illustration by BINESH SREEDHARAN

We've all heard the line about 'realigning IT with the business', which is sort of like saying we want our 'pivot to make better passes to the shooters' — duh. But as crazy as that sounds, it's reality — and it isn't getting better, it's getting worse. Business thinks IT is slow and unresponsive. IT departments know that the business is totally unappreciative of the fact that while they want to support it as much as possible, they are effectively doing so wearing handcuffs and chains. IT is like Houdini — the fact that it can get anything done at all is magic to me. Every year for 15 years the gap between the two has widened. Now it threatens to fracture forever. The issue du jour is that, instead of just complaining about IT, business units are making decisions and acting completely outside of IT with regard to information access applications and tools — and then expecting IT to quickly provision and support those applications. Information access applications include every business-facing application — from Word to a trading system to CRM to e-discovery. Priorities such as regulatory compliance and legal are especially hot now. Business-critical applications — those designed to extract incremental value from existing information — are taking a backseat to the application of spit and chewing gum. IT shops are starting to remind me of those poor men in the engine room of the Titanic. The result is that IT is becoming further marginalized in the eyes of the business. IT is forced to say no to business requests, as it simply cannot bring new applications online in any short-
term window because of legacy issues. As 'hot' applications are brought online, they further stress IT resources, as they tend to be implemented in stovepipe fashion — the business unit cares only about that application, not about the impact it may have on other back-end IT operations. The business unit is therefore acquiring these tools and services, and handing them off to IT to support after the decisions have been made. The situation today is becoming flammable. The business wants to be able to react to requirements quickly without having to be overly concerned about IT and its ability to deliver. The business unit wants known costs for known services in a known time frame — and the ability to add or delete service levels based on costs and requirements. The business unit believes it is mandated to act, so as IT pushes back, the business unit moves ahead regardless. IT wants to be able to fulfill all the requirements of the business unit, but it must attempt to do so within the encumbrances it has — from people to power and cooling to space. IT has been addressing the independent acts of the business unit in one of a few basic ways:

1. IT attempts to support the timeline demands of the business by creating yet another stovepipe operation — intentionally keeping the infrastructure, data and operations separate from the mainstream. While all recognize this is the most expensive, least efficient and worst-case scenario for creating common data value, protection, usage and management, it is more often than not the solution IT gets 'jammed' with by the business unit.

2. IT attempts to support the demands of the business but requires the new information access application to adhere to existing IT standards operationally, and preferably with better-utilized, shared infrastructure assets and people. This will always take longer, require greater planning, testing and implementation, and require downstream regression testing of the cause and effect the new application will have on existing processes, people and infrastructure.

3. The business bypasses IT altogether and either sets up the application as an external service offering or, worse, brings the product in house with no IT involvement at all and says, 'Surprise!'

According to ESG Research's November 2007 study, E-Discovery Requirements Escalate, in the archive/e-discovery/litigation support market only 7 percent of the time does IT make the decision to use funds to build out infrastructure, tools, applications and processes to support e-discovery mandates, whereas 37 percent of the time the legal department makes those decisions — with no upfront involvement from IT at all. Examples such as this are becoming more common

as unknown business unit requirements continue to appear — causing an increased rift in an already tenuous relationship between the core business and internal IT. The results aren't good. They are bad for business, but to facilitate change, we must acknowledge and understand the realities within the cycle:

1. The business unit has a requirement.
2. The business makes a decision on e-discovery tools and policies.
3. IT is handed a mandate from the business to implement and support the decision.
4. Even if the implementation is flawless, a new stovepipe has been created.
a. That application only looks for data that it ingests, requiring decisions to be made as to what that data is and how to get it into the system.
b. Applications such as this may crawl existing data sources to ingest, but must be directed as to the specific data types.
c. Applications such as this normally only support one or two different data types — e-mail, for example — but not
database/transaction records, or unstructured data living outside the core data center.
5. A discovery request from the new application 'archive' is only successful if the request contains all of the relevant data, which rarely (if ever) exists entirely within that archive.
6. IT tends to fold the new application stack into existing processes for backup/recovery, disaster recovery and so on, stressing existing systems and processes. It is easier for already taxed IT personnel to add to existing operating processes than to create new ones, regardless of the applicability.

The reality is that IT is already operating above capacity. The business unit views IT's inflexibility and service-delivery delays as unacceptable, and so begins to make decisions independently of IT. The business may gain accelerated deployment of the new application, but it has little awareness that it may be causing more damage than good overall. I worry about the inevitable long-term effects of this cycle, but recognize that most will not have the time or luxury to concern themselves with such things. In the short term, IT is being further removed from the decision-making process for
business unit information access decisions. The result is that IT ends up having to support the goals of the business unit, but with no control, or only limited control, over the decision processes and the effects of those decisions on IT's overall ability to deliver services.

Fret Not, A Solution Is Here
The fundamental problem of running IT as a service bureau is rigidity; that is because infrastructure is stovepiped, complex, requires hyper-specialization at every element, and has incalculable points of interdependency. The concept of 'fluidity' is abstract at best. In an ideal world, the data center would simply be a collection of infrastructural resources capable of morphing into virtual stovepipes — in turn capable of delivering on the immediate and long-term needs of the business, and malleable in semi-real time to deal with unknown new requirements or unforeseen events. In short, data center virtualization is required, such that the business no longer needs to be concerned with IT and its idiosyncrasies, and IT no longer needs to say no ad nauseam. If the data center were 'liquid', IT could say yes first, bring up the application and pick up the pieces as a background task. Do you remember when RAID first became popular and all the Oracle DBAs demanded that their stuff sit on raw devices? Sooner or later we just said, 'OK,' and then did what was right — gave them a virtual device and told them it was raw. The benefits they derived, the business derived and the poor IT slob derived far outweighed the little white lie we had to tell. This is the same theory on steroids. Server virtualization technologies are the first infrastructure layer that begins to enable this reality. By creating a server infrastructure that provides for virtual machines, server fluidity is enabled. Virtual machines can move between physical machines at will, and even automatically in the event of failure, new performance criteria, or any other new event or issue. Server virtualization means that, at least from the perspective of 'always having a machine ready for the unknown', we can appear fluid. Being able to provide a virtual server to a business unit on a moment's notice is nice but limited. It doesn't address all the other issues downstream.
It is a good start to begin to alter the perception of IT and to close the gap by providing a can-do answer upfront, but it will only slow the back-
end problems. What is really required is to stop focusing exclusively on infrastructure and begin to focus on what really matters beneath the business unit application — the data. The business application doesn't care about infrastructure; it assumes infrastructure can support its requirements. The business unit cares about the data associated with that application, while the overall corporation needs to care about the data from a holistic perspective. Nobody outside of IT cares about infrastructure. IT needs to focus on how the data can be best managed, since storing, manipulating, finding and protecting data is the baseline reason for IT's being. Data virtualization is the next thing. Applications connect to information via infrastructure. Infrastructure change interrupts that connection. By creating a virtual connection between the application and data, we can solve most of today's primary IT problems and re-establish a tighter bond between IT and the business. Since the business owns the application, it, and not IT, should decide which requirements it needs to perform its stated objective. IT should own the data: not information, but data. The individual applications create and manipulate that data, which becomes information when utilized. When the business unit executes on its own without IT, IT ends up controlling nothing and reacting constantly in a no-win situation. As long as IT can say, 'Yes, we can provide you a way to execute your application and provide you access to your data based on your requirements,' the business will gladly change its perception and hand off infrastructure and data control to IT. Here's how I see it working in the real world. In the previous example, the legal department chose an e-discovery application (a glorified search engine) and created corporate governance policies that got shoved into IT. Everything in the solution ended up in stovepipes, which means it is invariably riddled with holes.
In the new world, saying yes by applying data virtualization along with infrastructure virtualization starts with one simple rule from IT to the business: your application must house its data 'here'. 'Here' is a virtual data abstraction interface that accepts any and all types of data — from any and all types of applications — in one common virtual place. Want to have your e-discovery tool query against our e-mail data? Point it here. Want to search across e-mail and structured transactional data? Point it to the same place. Want to write new data generated by a new application, or an old Word file? Then click 'save', and here is where it will be. If there is only one virtual place to put all data, then there is only one virtual place to find all data. Behind
that data abstraction, IT still has to do all the hard things it's always done, such as decide what data is going to reside where, for how long, and how to protect it. But if it can be done 'fluidly', then change suddenly isn't paramount. If you can react to changing infrastructural requirements without the business unit calling, then, like the proverbial tree that fell in the woods, did it even happen? I suggest that if the phone isn't ringing, things are good. Server virtualization enables fluidity of virtual machines executing application stacks, so that if a failure occurs or new, more powerful machine technologies come out, they can be integrated dynamically; based on priorities, we might move a virtual machine to a whole new environment without the business unit knowing or caring. Server migration, high availability, disaster recovery, performance optimization and asset utilization/optimization are all functions within change states that normally cause disruption — or at the very least cause the phone to ring. Virtualization enables the automation and fluidity beneath that abstraction layer to be invisible.
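The 'one virtual place for all data' idea described above reduces to a routing layer between applications and storage: the application writes to a single logical portal, and a policy decides which physical tier actually holds the object. A toy sketch follows; all class names, tier names and the policy itself are invented for illustration, not any vendor's API.

```python
# Toy sketch of a virtual data access layer: every application writes to
# one logical "place", and a policy decides which backend actually holds
# the data. All names and the fixed/dynamic policy are illustrative.

class DataPortal:
    def __init__(self):
        self.backends = {}   # tier name -> dict standing in for a storage backend
        self.catalog = {}    # object key -> tier where it physically lives

    def add_backend(self, tier, store):
        self.backends[tier] = store

    def put(self, key, data, fixed=False):
        # Policy: 'fixed' (non-changing) objects go to the archive tier,
        # dynamic ones to primary. The application never sees this choice.
        tier = "archive" if fixed else "primary"
        self.backends[tier][key] = data
        self.catalog[key] = tier

    def get(self, key):
        # One virtual place to find all data, wherever it physically lives.
        return self.backends[self.catalog[key]][key]

portal = DataPortal()
portal.add_backend("primary", {})
portal.add_backend("archive", {})
portal.put("mail/2008/q1", b"...", fixed=True)
portal.put("erp/orders", b"...")
print(portal.catalog)  # {'mail/2008/q1': 'archive', 'erp/orders': 'primary'}
```

The point of the sketch is the catalog: because every object is registered in one place, an e-discovery tool can be pointed at the portal and see everything, while IT remains free to move data between tiers behind it.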

Data Virtualization is Not Storage Virtualization
Storage sits at the bottom of the data layer and, like the rest of the infrastructure, should also be virtualized. By creating basic data abstractions, logically all data can exist in one place, making it easier to perform any application or data operation. Data layer services such as database management, logical provisioning, file system management, performance optimization, protection and so on are functions that can be more easily addressed simply because all data exists in one virtual location. IT managers would continue to have to operate and optimize the physical storage layer beneath, but by creating a fluid data abstraction layer, they are able to mitigate the physical effects of change, which results in less negative visibility and fewer phone calls. One of the reasons storage virtualization has been slow to move upstream is that specialized skills and knowledge about devices and functions within this layer are lost when the abstraction moves above those devices. For example, if your storage administrators are gurus at managing and operating EMC Clariion arrays, enabling them to see those Clariions as generic disk storage would not offer enough benefit to outweigh losing the ability to utilize the administrators' specific skills and tools. It is a losing proposition for the industry to take high-end, proprietary equipment and say 'now you can treat these expensive devices as disposable and forget all the skills and tools you know and love.' By creating a data virtualization approach, you don't have to throw the baby out with the bathwater — you can simply buy time to do it the right way.

What's Required?
In simple terms, what you need to enable this infrastructure is a global virtual data access layer that encapsulates and centralizes data management functionality in one place. Ideally, this virtual data 'portal' will present itself as whatever the application wants it to be, regardless of the type of data the application spits out. It would ingest the data and route it to whatever underlying infrastructure meets the business unit requirements. As a central data management engine, it would be able to apply universal and object-specific policies — such as retention, protection, security, categorization, and performance/life-cycle management — based on a menu of options the business unit chooses, each with a known cost. Consider data as either 'dynamic' or 'fixed' — a non-changing digital asset. I'd suggest that every single data object lives in this layer once it becomes fixed. It may be physically relegated to offsite, offline media, but to the application or the business unit, it is available — until a policy states that it must be destroyed. In that way, when legal wants to bring a new e-discovery search tool online in the future, it can point at all of the living corporate data, not just portions. When the marketing department wants to mine data for business
intelligence and value, it points its gun at the one place where everything lives. Imagine how much easier this could make it to garner new value from old data, and destroy the chasm between the business and IT at the same time. This approach allows IT to re-evaluate and completely overhaul faulty processes, and enables consistency and speed in delivering services to the business. From media management to regulatory compliance, all the tactical and difficult IT functions that cause us to say no so often could now be centrally managed and controlled, enabling IT to say yes first and dynamically make the necessary changes happen without being a drag on the progress of the business. And that will be one happy day for a lot of people. It could happen — heck, it should happen, and when it does, somebody is going to get very, very rich. CIO

Steve Duplessie founded Enterprise Strategy Group and is a regularly featured speaker at shows like Storage Networking World. Send feedback on this column to


Fat, bloated and heavy. The storage problem is symptomatic of the excesses of an indulgent world. But the clean-up is on — the green push is one example. New storage technologies and new drivers like compliance make this a great time to be proactive. Now, it's your move.

22 | E-discovery | Finding Data in an Emergency
28 | ILM | Flying in Formation
30 | Master Data Management | One Company, One Vision, One Truth
36 | Network Storage | Fibre Channel's Funeral March
40 | Storage Management | Seven Storage Truths
42 | Back Up | How to Back Up Without Crashing




Kindred Healthcare CIO Rick Chapman faces a torrent of data — 400 terabytes strong and growing. Storage, he says, has become a hidden cost in healthcare. But he's employing a new cost-cutting weapon: virtual tape library technology. By Thomas Wailgum

As Kindred Healthcare CIO Rick Chapman begins to discuss storage strategies, he asks a revealing rhetorical question. "The question you should ask me is: why would I be talking to you about this, as opposed to the VP of data center operations?" The simple answer, he explains, is that Kindred's infrastructure and storage costs sit high atop his agenda — and his executive committee's — right now. As wave after wave of new data flows in — electronic patient records, e-mails, insurance and billing files, and government-mandated documentation — Chapman feels the squeeze on the overall cost and performance of his IT operations, he says.


Reader ROI:

Why storage is becoming a boardroom conversation
The problems tape backup poses in e-discovery
How a virtual tape library can help



Storage is suddenly in the C-suite spotlight. "All of a sudden, what used to be a mundane technology component and what the executive committee or the board wouldn't have cared much about and would have trusted to me and my subordinates, now we have to be more transparent, and be able to defend the purchases and the cost structure of the infrastructure," Chapman says. Things used to be different, he says. "But now I have to make sure we're as cost effective and still able to provide the business service reliability for the company."

Hospital I.T.
Chapman is no small-time CIO, nor is he one of those CIOs more comfortable in the back room than in the boardroom. Kindred Healthcare is the largest for-profit long-term healthcare provider in the United States, with Rs 16,000 crore in revenues, 52,000 employees and 600 facilities in 39 states. Chapman is not only the CIO but also the chief administration officer, and he sits on the executive committee. He has worked in top IT spots at healthcare giants Columbia HCA and Humana. So Chapman knows the complexities of healthcare from both the business and IT sides, and what he sees on the IT side of late leaves him in awe. "We're growing [data volumes] by 40 percent a year, and now we have over 400 terabytes — just unimaginable volumes of data from what we had just a few years ago," Chapman says. "We've seen exponential growth, and it keeps pressure on the storage platforms." As a result, he's vowed to eliminate traditional tape backup systems entirely in the next few years.
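Compounding makes Chapman's numbers stark: at the quoted 40 percent annual growth, today's 400 terabytes passes two petabytes within five years. A quick projection, using only the figures quoted above:

```python
# Project Kindred's stated 400 TB growing at 40 percent a year.
tb, rate = 400.0, 0.40
for year in range(1, 6):
    tb *= 1 + rate
    print(f"year {year}: {tb:,.0f} TB")
# year 5 works out to roughly 2,151 TB, more than five times today's volume
```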

The 'Save Everything' Mentality
Chapman and other CIOs in healthcare face a vicious cycle of mandatory document retention that feeds on itself. Government regulations, such as the Health Insurance Portability and Accountability Act (HIPAA), force every organization and every person to save every electronic medical record and e-mail for longer and longer periods of time. The 2008 HIMSS (Healthcare Information and Management Systems Society) survey reflects CIOs' concern about the topic. In looking at 2007 and 2008 survey data, 'government regulation and compliance issues' showed the greatest year-over-year increase among all categories when CIOs and


51% of in-house corporate lawyers in the US used outside e-discovery vendors this year compared to 37 percent last year. In the UK, that number jumped from 8 percent last year to 71 percent this year.

Source: Fourth Annual Litigation Trends Survey

IT leaders were asked to choose the business issue that would most impact healthcare during the next two years. In addition, healthcare institutions now face pressure from their legal departments to be able to produce all documents that relate to malpractice, and Medicare fraud and abuse cases. "This has caused us to have to keep all e-mail forever, and other documents that are critical, because they're subpoenable and discoverable in legal cases," says Chapman. "Saying that we deleted them is not a good reason anymore." The sum total of all this storage could be overwhelming for many healthcare organizations. A Frost & Sullivan Healthcare Storage Report predicts that by 2010, medical organizations will have to hold nearly 1 billion terabytes of data, which is roughly the equivalent of 2 trillion file cabinets' worth of data. Of course, in this ‘save everything’ environment, storage gets expensive and resource-intensive. "It's mind-boggling and completely non-productive," Chapman notes. "It's that hidden cost of health care that you don't see at times."

Tales of the Tape
One critical piece of this unwieldy storage puzzle is that Kindred needs fast and easy access to backup, archiving and disaster recovery systems. For example, Kindred Healthcare often receives subpoenas to provide "select e-mails from executives who were having conversations about a certain topic during a certain time period," Chapman says. "All of this is kind of recent, and all this has ratcheted up in the last five years as more [legal and government] attention has come to this industry." Like many companies, Kindred has historically relied upon tape backup systems and massive, centralized data libraries for archiving and recovery. Tape, however, has its limits as storage volume grows. In turn, nightly batch processes, or once-a-day backups, are becoming nearly impossible. "It takes more time to back up and recover than we have time available in the data center overnight," Chapman says. Concerns about tape's limitations are widespread among healthcare CIOs. John Halamka, the CIO of Beth Israel Deaconess Medical Center (BIDMC) and Harvard Medical School, writes on his blog that storage backup and data recovery are at the top of the list of things that will keep him awake in 2008.
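Chapman's overnight-window complaint is easy to check with back-of-the-envelope arithmetic. The throughput figure below is an assumption for illustration, not Kindred's actual number, but the conclusion holds across any realistic value:

```python
# How long a full backup of 400 TB takes at an assumed aggregate
# streaming throughput of 120 MB/s.
data_tb = 400
throughput_mb_s = 120                      # assumed, for illustration
seconds = data_tb * 1024 * 1024 / throughput_mb_s
print(f"full backup: {seconds / 3600:,.0f} hours")
# roughly 971 hours, about 40 days: no overnight window comes close
```

Even with many tape drives running in parallel, full backups at this scale cannot fit a nightly window, which is why incremental approaches and disk-based alternatives become unavoidable.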



"Tape backup, which has been in use at BIDMC for decades, suffers from a variety of problems," Halamka writes. "Tape backups are time-consuming. Tapes are fragile and require physical security when transported. The time required to retrieve and recover from tape stresses our service availability objectives." In a recent survey of storage professionals, nearly one-third of respondents rated their inability to meet storage-restore objectives as the greatest vulnerability of their data protection strategy. Both Halamka and Chapman are moving to new storage technologies to help solve their tape problems. "It's a matter of survival," Chapman says.

The Antidote
Kindred is implementing virtual tape library, or VTL, technology: an appliance that uses storage virtualization to present disk-based storage to backup software as if it were tape. A key facet of VTL is data 'de-duplication', an approach that eliminates much of the redundant data that would otherwise hog disk space, while providing quicker access to data than tape. For example, each time a document is backed up, only the changes that a user makes to that document are saved — not the entire document or copies of the document. VTL technology, Chapman notes, is proving to be more cost-effective for Kindred than storage area network (SAN) services from big providers such as EMC. "[VTL] is low-profit stuff," he says, "and they want you to stay on their high-profit-margin storage devices." Chapman is using VTL technology from vendor Sepaton, and Halamka is rolling out similar technology from Data Domain, which he describes in his blog. Both plan to end their organizations' reliance on tape as soon as possible. Halamka wants to eliminate the need for tape in his data center within two years. And Chapman says: "In five years we don't want any tape."
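The incremental idea described above can be sketched in a few lines of code. This is a minimal illustration of content-hash de-duplication with fixed-size blocks, not how Sepaton's or Data Domain's appliances actually work (real products typically use variable-size chunking and far more machinery); the class and block size are invented for the example.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks, purely for illustration

class DedupStore:
    """Stores each unique block exactly once, keyed by its SHA-256 digest."""
    def __init__(self):
        self.blocks = {}  # digest -> block bytes

    def write(self, data: bytes) -> list:
        """Split data into blocks, store only unseen blocks, and return
        the list of digests needed to reassemble the data later."""
        recipe = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # skip if already stored
            recipe.append(digest)
        return recipe

    def read(self, recipe: list) -> bytes:
        """Reassemble data from its digest list."""
        return b"".join(self.blocks[d] for d in recipe)

store = DedupStore()
doc_v1 = b"A" * 8192                    # two identical blocks
r1 = store.write(doc_v1)
doc_v2 = b"A" * 4096 + b"B" * 4096      # only the second block changed
r2 = store.write(doc_v2)
# Backing up v2 stored just the one changed block, not another full copy.
```

Note how backing up the edited document adds a single new block to the store: that is the space saving Chapman is describing, applied at block rather than document granularity.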

The Cost of Compliance
Enterprise compliance costs can be broken down into two broad areas: staff costs, and tools and infrastructure costs. The typical enterprise spends between 2 percent and 3 percent of its entire IT budget on staffing for compliance. (The median IT budget for benchmark participants in this Nemertes Research survey was Rs 80 crore.)
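Putting the two Nemertes figures together gives a rough sense of scale. This is a back-of-the-envelope calculation, not a number reported by the survey itself:

```python
# 2-3 percent of the Rs 80 crore median IT budget, spent on compliance staffing
budget_crore = 80
low = 0.02 * budget_crore
high = 0.03 * budget_crore
print(f"Rs {low:.1f} to {high:.1f} crore per year on compliance staff")
```

In other words, the median benchmark participant would be spending on the order of Rs 1.6 to 2.4 crore a year on compliance staffing alone, before any tools or infrastructure.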

How Long Do You Retain Data?
27% Forever
18% 10 years
9% As required by law
9% Law, plus up to five years
9% Seven years
27% Other
Source: Nemertes Research

Repurposing Your Storage Funds
'Doing more with less' has become a familiar refrain for almost every business today, but this is especially true for healthcare. In the 2008 HIMSS survey, respondents said that the most significant barrier to successful implementation of IT was (for the eighth year in a row) a lack of financial support or budget.

A P R I L 1 5 , 2 0 0 8 | REAL CIO WORLD

At Kindred Healthcare, Chapman says he's on a mission to "fund more business initiatives out of reducing the base IT infrastructure costs." "We can find money out of our existing spend to fund more work on frontline business initiatives, like application development," he says. This newfound agility, as he calls it, will give IT more credibility and help grow the business.

Walking the Talk
One example: Chapman is taking the wealth of historical data that Kindred maintains (such as patient, billing and financial information) and providing it to his users for business analysis. Powerful data warehouse analytic tools "allow people to scan this massive data and convert it into actionable information to deal with a business issue," Chapman says. "That's been how we literally pay for what we have to spend on storage, because there's big value in that." Last year, for instance, the Medicare Payment Advisory Commission (MedPAC) proposed a "very onerous" rate restructuring and reimbursement plan for long-term care hospitals, Chapman says. Kindred Healthcare and other similar organizations had 60 to 90 days to react to the new regulations and give their input on why this was or was not a good plan. (It was not a good plan for Kindred, he says.) "In a couple of days, working with our reimbursement departments, we could model that new reimbursement [plan] and apply it to the entire previous year's activity in the hospital division, and come out with what the impact would be," Chapman says. That information, mined from Kindred's systems, was then given to Kindred's government relations representative, who ended up giving testimony before the House Ways and Means Committee. In the end, the proposal was altered, saving Kindred "50 percent of what was going to be a giant cut in reimbursement," Chapman says. "Not every example is as dramatic as that one was, but it's that kind of direct 'connect the dots' to an outcome that affects the bottom line of the company." CIO

Send feedback on this feature to


Information Lifecycle Management

The storage at this airline company doesn’t work harder – just smarter. By Joanne Cummings

Illustration by Unnikrishnan AV

When you tell him that, on average, storage admins manage between 30TB and 60TB each, Samuel Turner smiles. As United Air Lines' manager of storage utility services, he has good reason: his staffers each manage triple that amount. "They manage nearly 200TB per person," Turner says, noting that although his staff is pushed, such amounts are doable, at least for the time being. Turner says he can get away with those numbers because over the past few years United has worked hard to simplify and virtualize its storage environment. "Now, we're a utility service. And much like a utility, I work to optimize the storage capacity and resources within the organization so we can better manage our dollars associated with storage," he says. "We look at the needs not only of the high-end users but also the low-end services and applications with an eye toward trying to centralize, optimize and drive higher efficiencies through reuse of structured services. And that lets us reduce our overall cost per unit of storage for the whole environment." But getting to this point hasn't been easy. Like many large organizations, United had a hodgepodge of storage arrays and disks in place as it took advantage of vendors that were undercutting each other on price. "The rub is


Reader ROI:
- How simplifying storage can reduce cost
- The importance of knowing your data's lifecycle
- What to do when business can't help with ILM


we needed to have resources trained on the nuances of each particular brand we used, because they all operate just a bit differently," Turner says.


The Flight Plan
Faced with such inefficiencies, United made a strategic decision to narrow its pool of storage players to one or two high-level brands. The idea was to reduce costs by simplifying. Turner says his team gravitated toward EMC and Hitachi Data Systems. Eventually, it decided to go with Hitachi as its main vendor, primarily due to the high-level features inherent in Hitachi's TagmaStore Universal Storage Platform and its new Tiered Storage Manager (TSM) software. The TagmaStore is a large array that can handle 332TB of internal storage and as much as 32 petabytes of total storage capacity. The unit not only offers virtualized internal storage suitable for Tier 1 high-performance storage services, it can also virtualize attached storage from a variety of vendors on the back end, enabling United to build virtualized Tier 2 and Tier 3 storage services. "The Hitachi became the front line, the gateway and entry for all of our equipment in the storage network, as well as all of the hosts and servers out there," Turner says. "It lets us shop and somewhat commoditize the disk storage below the Tagmas, but it also lets us integrate it so that we can still maintain that single pane of glass. We can manage our environment holistically."

ILM Spells Savings
A key to building this tiered infrastructure at United was understanding the airline's storage requirements and applying smart information lifecycle management strategies. This meant understanding what data is generated by the applications, how much is stored, for how long, how critical it is and how quickly users need access to it. Unfortunately, United's various lines of business had trouble providing the desired level of detail. "Parts of the organization are not yet at a point where they can communicate those requirements or even begin to understand the nature of the data itself enough to be able to share those requirements with us," Turner says. Turner's group decided to tackle the problem another way, using the Hitachi TSM software to gauge the activity of the various storage volumes and build out a tiered infrastructure from there. The software draws patterns and, based on criteria United provides, indicates which data is eligible for tiering. More active volumes fall into the higher tier


Ways to Shield Yourself From the Data Explosion
- Transform your relationships with business. These are the groups that will classify data, set retention policies, and deal with the public if data is lost. Leading companies embed staff in line departments, charge for IT services, and routinely meet with external customers.
- Spearhead the development of policies for data security, retention, data access, and compliance. Extend these policies to business partners.
- Rush in new tools and standards. Storage optimization, unstructured data search, database analytics and virtualization will be needed to make the information infrastructure as flexible, adaptable, and scalable as possible.
Source: IDC

while data that’s not used often may be relegated to a lower-cost, lower-speed tier, or even archived. Turner says he can use the tiering infrastructure to purchase storage smarter. “Instead of spending $50,000 and getting 2TB of high-performance disk, I’m going to spend $50,000 and get 10TB or 15TB of Tier 3 disks," he says. “Or instead of getting 2TB of high performance, I’ll get 5TB of Tier 2 and 10TB of Tier 3, or whatever makes sense."
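The activity-based placement Turner describes can be illustrated with a toy rule set. The thresholds, metrics and tier names below are invented for the example; they are not Hitachi TSM's actual criteria, only a sketch of the general idea of mapping observed volume activity to a storage tier.

```python
def assign_tier(iops_per_day: float, days_since_last_read: int) -> str:
    """Map a volume's observed activity to a storage tier.

    Hypothetical thresholds: hot, frequently read volumes go to
    high-performance disk; cool ones drift to cheaper tiers or archive.
    """
    if iops_per_day > 10_000 and days_since_last_read <= 1:
        return "tier1"   # high-performance disk
    if iops_per_day > 500 or days_since_last_read <= 30:
        return "tier2"   # midrange disk
    if days_since_last_read <= 365:
        return "tier3"   # low-cost, lower-speed disk
    return "archive"     # candidate for removal from disk entirely

# A busy database volume lands on Tier 1; a year-old report volume archives.
print(assign_tier(50_000, 0))   # tier1
print(assign_tier(200, 400))    # archive
```

The economic point of the quote above is that once data is classified this way, the same budget buys several times more capacity for the volumes that never needed Tier 1 performance in the first place.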

SAN Smarts
United also is playing smarter on the storage-area network (SAN) side of the equation by leveraging Onaro's SANscreen tool to track and analyze SAN connections, ensuring optimal performance. "Before we had SANscreen, we were constantly having server admins call us saying they lost the SAN connection, or that something's wrong with the SAN," Turner says. "We'd scramble around to figure out what's going on, only to find that the problems were due to changes taking place at the host level. They would change drives, swap out HBA cards, or sometimes disconnect themselves from the network, and then forget a path was missing." With SANscreen in place, those problems are instantly obvious. "SANscreen sets a baseline of the SAN," he says. United defines rules that say how many connections or paths a given server should have to the SAN, and SANscreen then checks the paths every few minutes to make sure all is well. "If something changes, we get an alert, and the SANscreen engine does an analysis and gives us its best guess about what could be the issue. It's a nice tool." United further stretches its admin dollars by providing read-only, browser-based SANscreen access to server admins. "They can do some self-service and take a look and see the configuration of the SAN and if their server still has its path," he says. "And that also lets me minimize the number of team members I have to support the SANs." Turner says the next step for United and its storage utility strategy is to offer service-level agreements to users based on their specific requirements. "So it's more about what your needs are for response time, availability, redundancy and the like, and then we map that to our infrastructure," he says. "Ideally, we'll come up with several packages of services that are appropriate for our company, and then offer those packages as needed. That's what we're working toward." CIO
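The baseline-and-alert pattern Turner describes is simple at its core: record how many paths each server should have, poll what it actually has, and flag any drift. The sketch below is an illustrative reconstruction, not Onaro SANscreen's real engine; the server names, rules and path counts are made up.

```python
# Baseline rules: how many SAN paths each server is expected to have.
EXPECTED_PATHS = {"web01": 2, "db01": 4}

def check_paths(observed: dict) -> list:
    """Compare each server's observed path count against its baseline,
    returning a human-readable alert for every mismatch."""
    alerts = []
    for server, expected in EXPECTED_PATHS.items():
        actual = observed.get(server, 0)
        if actual != expected:
            alerts.append(f"{server}: expected {expected} paths, found {actual}")
    return alerts

# A host admin swapped an HBA on db01 and forgot to restore one path:
print(check_paths({"web01": 2, "db01": 3}))
```

Running a check like this every few minutes turns the "scramble around to figure out what's going on" calls into a single proactive alert that already names the affected server.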

Joanne Cummings is a freelance writer in North Andover, Massachusetts. Send feedback on this feature to



Master Data Management

As Nationwide Insurance grew, its data became siloed and scattered, making it increasingly difficult for the company to get an accurate picture of its finances. Here's how it brought all that data into focus. By Thomas Wailgum

Between 2000 and 2002, in the span of three short years, Nationwide Insurance got a new CEO, CIO and CFO. Jerry Jurgensen, elected by Nationwide's board in 2000 to replace the retiring CEO, was hired for his financial acumen and his ability to transform a business's culture. Michael Keller was named the company's first enterprisewide CIO the following year. He had 25 years of IT experience managing big infrastructure and systems integration projects. In 2002, Robert Rosholt replaced the retiring CFO and joined the others in Nationwide's Columbus headquarters, bringing along deep experience in all things financial. The three were old buddies who had worked together at financial giant Bank One. Now they held the reins at Nationwide, and their goal was to take its dozens of business units, selling a diverse


Reader ROI:
- The benefits of master data management
- Rules for implementing an MDM solution
- The tools Nationwide used


set of insurance and financial products, to a higher level. In 2001, Nationwide was profitable to the tune of Rs 552 crore and board members had billion-dollar aspirations for that line item. But to get there, Jurgensen needed financial snapshots of how Nationwide was doing at any given moment. And getting them wasn't so easy. In fact, it was almost impossible.

The Fog of Finance
"When you're dealing with 14 general ledger platforms and over 50 applications," Rosholt says, "it was enormous work to get the financials out." The problem lay knotted in a tangle of systems and applications, and some 240 sources of financial data flowing in and around Nationwide's business units. The units had always run independently, and that's how financial reporting was handled. "There was a variety of [financial reporting] languages," Rosholt says, which affected Nationwide's ability to forecast, budget and report. "It was difficult," says Rosholt, to ask, "How are we doing?" Keller's situation was no better. "One of the first questions I was asked when I joined was, How much money do we spend, total, on IT?" Keller recalls. "The answer was, we didn't know. It took weeks to put that answer together." Jurgensen wanted to be able to run Nationwide as if it were one unified enterprise. He wanted, in Rosholt's words, "to do things that are common, and respect the things that are different. And that was a big change." Indeed, the transformation the company embarked upon in early 2004 was daunting — a master data management makeover that would alter how every Nationwide business reported its financials, how accounting personnel did their jobs, how data was governed and by whom, and how the company's information systems would pull all that together. The goal was simple: one platform, one version of the financial truth. Simple goal. But a difficult challenge.

What Is Master Data Management?
Master data management projects come in all shapes and sizes. Most often, MDM addresses customer data management requirements, hence the term customer data integration, or CDI, which is often used interchangeably with MDM, though many contend the concepts differ. But MDM, as it's now used, boils down to a set of processes and technologies that help enterprises better manage their data flow, its integrity and synchronization. At the core is a governance


83% of organizations suffer from bad data for reasons that have nothing to do with technology. Among the causes of poor-quality data are inaccurate reporting, internal disagreements over which data is appropriate and incorrect definitions rendering the data unusable.

Source: The Data Warehousing Institute

mechanism by which data policies and definitions can be enforced on an enterprise scale. The result is much more than just clean data. MDM offers companies a tantalizing vision: a "single version of the truth" gathered from vast databases of internal assets, says James Kobielus, principal analyst of data management at market researcher Current Analysis. Heard it all before? "MDM is a relatively new term for a timeless concern," Kobielus concedes. That hasn't tempered vendor enthusiasm. Vendors of all stripes — BI, data warehousing, data management, performance management, CRM, ERP — are rolling out their disparate products under the MDM banner. Forrester reports that MDM license and service revenue from software vendors and systems integrators will grow from Rs 4,400 crore in 2006 to more than Rs 26,400 crore in 2010. Even with all the vendor buzz, research conducted last year shows that CIOs are struggling with data management: 75 percent of 162 CIOs surveyed by Accenture said they want to develop an overall information management strategy in the next three years in order to 'leverage that data for strategic advantage.' But a Forrester Consulting survey of 407 senior IT decision makers at companies with more than Rs 1,000 crore in annual revenues found that manual efforts remain the dominant approach for integrating data silos. That's because an MDM transformation is as much about mastering change management as it is about data management. As Kobielus says, "In the hypersiloed real world of enterprise networking, master data is scattered all over creation and subjected to a fragmented, inconsistent, rickety set of manual and automated processes." Good master data governance can happen only when the various constituencies that own the data sources agree on a common set of definitions, rules and synchronized procedures, all of which requires a degree of political maneuvering that's not for the faint of heart. 
Nationwide began its finance transformation program, which included its MDM initiative, called Focus, with its eyes wide open. The executive troika of Jurgensen, Rosholt and Keller had pulled off a similar project at Bank One and thought it knew how to avoid the big mistakes. That, in part, is why Rosholt, who had ultimate say on the project, would not budge on its 24-month time line. "The most important aspect was sticking to discipline and not wavering," he recalls. And that's why the technology piece was, from the outset, the last question to be addressed. "It wasn't a technology project," insists Lynda Butler, whose VP of performance management position was created to oversee Focus (which stands for Faster,



Online, Customer-driven, User-friendly, Streamlined). She says that Nationwide approached MDM first and foremost as a business and financial project. Nationwide considers the project, which made its deadline, a success, although everyone interviewed for this article stresses that there's more work to be done. Says Keller: "There's a foundation to build on where there wasn't one before."

Getting Started on MDM
"Fourteen general ledgers, 12 reporting tools, 17 financial data repositories and 300,000 spreadsheets were used in finance," says Butler. "That's not real conducive to 'one version of the truth.'" Early in his tenure as CEO, Jurgensen's concerns about the company's financials weren't limited to the timeliness of the data; he was also worried about its integrity and accuracy. He and other execs knew that faster access to more comprehensive data sets would allow for better trend analysis and forecasting decisions, and strengthen budgeting, reporting and accounting processes. For example, because Nationwide had such a variety of businesses, the company carried a lot of risk — some easily visible, some not. "So, if equity markets went down, we were exposed," notes Butler. "But we didn't realize that until the markets actually went down. We needed some enterprise view of the world." One of Nationwide's subsidiaries, Nationwide Financial Services, is a public company and has the requisite regulatory and compliance responsibilities (such as Sarbanes-Oxley), but the rest of Nationwide Insurance is a mutual company owned by its policyholders, and doesn't have those requirements. Rosholt says the entire company will move to Sarbox-like requirements by 2010. The Focus project provided a kick-start to unifying the rest of the company's financials to accommodate more stringent accounting practices. Executives also knew that common data definitions among all the business units would provide comparable financial data for analysis (which was difficult, if not impossible, without those definitions). "We needed consistent data across the organization," Rosholt says. "We were looking for one book of record."


"When you're dealing with 14 general ledger platforms and over 50 applications, it was enormous work to get the financials out."
Robert Rosholt, CFO, Nationwide Insurance

The Focus project team began envisioning the scope and plan in January 2004. Rosholt handed off day-to-day responsibilities on the finance side to Butler, his 'change champion,' who had worked in corporate headquarters and in a business unit. "I had the dual perspective and could see both the needs of the businesses versus the needs of the enterprise," she says. "I could play devil's advocate with myself." CFO Rosholt went back to his Bank One roots and recruited Vikas Gopal, who had proven his mettle on similar projects, to lead the IT team. All together, Butler and Gopal would have 280 project management, finance and IT folks working on the transformation.

Vast Project, Defined Rules
Rosholt was the business sponsor (with Jurgensen providing high-level support), and he laid down several rules at the start. The first was that he was not going to budge on the 24-month schedule. "When you take longer, you don't get that much more done; you just burn people out, spend more money, and it's more frustrating," Rosholt says. "So you set stretch goals and go after it." With no wiggle room on the time line, the team, with Rosholt's encouragement, followed what it refers to as the '80/20' rule. It knew that it wasn't going to get 100 percent of the desired functionality of the new MDM system, so the team decided that if it could get roughly 80 percent of the project up and running in 24 months, it could fix the remaining 20 percent later. "If we went after perfection," says Rosholt, "we'd still be at it." Keeping in mind that no one would get everything he wanted, the Focus team interviewed key stakeholders in Nationwide's business units to understand where their pain points were. "We went back to basics," says Gopal. "We said, 'Let's talk about your financial systems, how they help your decision making.'" The team then determined where senior management wanted to focus and presented it with a choice of 10 different financial competencies. "Do we want to be the best company that does transaction recording? Or enterprise risk? Or analytics?" Gopal says. In other words, people were introduced to the concept of making trade-offs, which allowed the Focus team to target the system's core functionalities and keep control over the project's scope.

You Say Tomato; I Say Tomahto
After interviewing the key stakeholders and identifying the


core functionalities — business planning, capital optimization, risk management, analysis and interpretation, record and reporting, organizational management, stakeholder management and accounting policy management — the next thing the team did was create a data governance system. The system instituted repeatable processes and specific rules for compiling, analyzing and reporting the financial data on both a business-unit level and an enterprise level. The process would take place on a daily basis and would touch all of the back-end systems (for example, the PeopleSoft ERP system) and the front-end (Hyperion and Microsoft financial applications).

"Pre-Focus, there was no data governance," Butler says. "We had to put in some policies, rules and procedures [for managing the data] at the top of the house, which at times has had a contentious relationship with the business units."

Nationwide formed a data governance group whose members, from finance and IT, would be the 'keepers of the book of record,' the rules of the MDM system, Gopal notes. The group's charge was to figure out how each business unit's financial data definitions would transform into data sets that could be standardized and imported into the MDM financial system. But first, because there were hundreds of sources and classifications of data, it was critical that the various business-unit stakeholders on the data governance team agree on definitions. If there are two different ways to classify one data set — for example, if one unit calls a Nationwide product 'Standard Auto' and another calls the same product 'Std Auto,' or similar differences in defining 'purchase order,' 'invoice' or 'customer' — then the system is worthless. "You simply cannot have both," Gopal says.

Another, more complex example is how business units defined geographic information. Gopal says that different applications had geography rolled up differently (for example, Eastern/Western or Northeast/Midwest). In various applications, Illinois could have been in the Western region, and in others it could have been in the Midwestern region. "Aggregating data from various sources, taking in the 'rolled up' level, made [achieving] an enterprise [view] very difficult," Gopal says. Even with a mature data governance program, he notes, the classification and reclassification process is 'never ending' because there are always "people coming back with creative ideas on how you can improve the workflow [of the MDM system] to work better with the applications."

Tool Time
It was only after the requirements, definitions and parameters were mapped out that Gopal's group began looking at technologies. Gopal had two rules to guide them: first, all financial-related systems had to be subscribers to the central book of record. Second, none of the master data in any of the financial applications could ever be out of sync. So the Focus team's final step was to evaluate technologies that would follow and enforce those rules. The team reasoned that it had neither the time nor the inclination to invent MDM technology at Nationwide. "We wanted to start off on the right footing from a TCO perspective; with only 24 months you don't have a ton of time to build a lot of stuff," Gopal says. His team sought out best-of-breed MDM toolsets from vendors such as Kalido and Teradata that would be able to tie into their existing systems.

Gopal wasn't overly 'worried about [technology] execution' because he had assembled this type of system before and knew that the technology solutions on the market, even in the most vanilla forms, were robust enough for Nationwide's needs. What did worry him was Nationwide's legion of financial employees who didn't relish the idea of changing the way they went about their work.

Inside Nationwide
Nationwide Insurance is a diversified insurance and financial services company.
Headquarters: Columbus, Ohio
Revenue: Rs 6,400 crore in assets; Rs 840 crore in annual revenues
CEO: Jerry Jurgensen
CFO: Robert A. Rosholt
CIO: Michael Keller
Employees: 36,000
IT Employees: 5,500
Focus Project Fast Facts: 280 team members worked 1.2 million man hours (including overtime) over 24 months
Source: Nationwide

The Culture Wars
Though Jurgensen, Rosholt and Keller weren't involved in the day-to-day minutiae that accompanies a massive project such as Focus, it was never far from their minds. A transformation unlike anything the 36,000 Nationwide Insurance employees had ever seen was at hand. Rosholt knew he had to make one of the most important sales of his career. "You have to sell the vision, and the benefits," he says. The most difficult part was getting everyone to take their medicine because it was good for the enterprise. "In some businesses, it wasn't a 'win-win,'" Rosholt says. "In the smaller, more compact businesses, they'd say, 'I've got a very simple system
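The 'Standard Auto' versus 'Std Auto' conflict boils down to maintaining one canonical value per entity and refusing to load anything unmapped. Here is a minimal sketch of that idea; the alias table and function are invented for illustration (Nationwide's actual tooling came from vendors such as Kalido and Teradata, not hand-written code like this).

```python
# Governed alias table: every local spelling maps to one master value.
CANONICAL = {
    "Standard Auto": "Standard Auto",
    "Std Auto": "Standard Auto",  # unit B's spelling resolves to the same record
}

def to_master(raw_name: str) -> str:
    """Resolve a business unit's local product name to the master value,
    rejecting anything the data governance group hasn't classified yet."""
    try:
        return CANONICAL[raw_name]
    except KeyError:
        # Unmapped names go back to the governance group, not into the ledger.
        raise ValueError(f"unclassified product name: {raw_name!r}")

# Two units report the same product under different names; both aggregate
# to one line item instead of two.
assert to_master("Std Auto") == to_master("Standard Auto")
```

The same pattern applies to the geography problem: a single governed roll-up table deciding, once and for all, whether Illinois belongs to the Western or the Midwestern region.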




and we've been working on this for 15 years. Why are you disrupting my life?'" And to a degree, Rosholt felt their pain. At the beginning of the program, Nationwide formed a 'One Finance Family' program that tried to unify all the finance folks around Focus. Executives were also able to identify those employees who were most affected through weekly 'change meetings' and provide support. In addition, executives placed dedicated communications personnel who were responsible for communicating and managing change through the meetings and media channels. The Focus team had to remain resolute. The overarching theme, that there would be no compromise in data quality and integrity, was repeated early and often, and execs made sure that the gravity of the change was communicated before anyone saw any new software. Finally, in March 2005, with three waves of planned deployments ahead of it, the team started rolling out the new Focus system. One of the first businesses to make the transition was one of CIO Keller's divisions, Nationwide Shared Services, which handles document services and sourcing, among other functions. (His IT division also was an early adopter.) "We were a guinea pig," he recalls. "We had pretty good [financial] systems and were able to do what we needed to do, pre-Focus. We wanted parity to do what we did before. It's a harder sell to people who weren't getting the business benefits." But it was clear that Focus was the better — and only — way.

Wednesday Night Pizza
Transformational IT projects, with spirited kickoff parties, awkward executive speeches and quirky gifts for the project team (T-shirts saying 'I Survived Focus'), are infamous for a precipitous drop in enthusiasm shortly after launch. Both Rosholt and Keller have seen their fair share of good and bad projects. "We've seen it all," Rosholt says. Keller's experience not only taught him about the need for high-level sponsorship but also the necessity of creating a forum for ongoing discussions. This was especially critical for Focus because, by nature, MDM transformations require continuous debate and dialog regarding changes to how employees


"One of the first questions I was asked when I joined was, 'How much money do we spend on IT?' The answer was: 'We didn't know.'"
Michael Keller, CIO, Nationwide Insurance

describe, manipulate and share the very data that defines their jobs. Thus was born 'Wednesday Night Pizza', a weekly gathering of the executive steering committee members (CFOs, finance controllers, and IT leaders and project managers). The meetings started at 5 p.m. and occasionally ran until 10 p.m. "Rarely did it last less than two or three hours," Rosholt says. "We would bang through all the issues that changed every day." Team members sat face to face, over pizza, and worked through key change management and data governance challenges, always sticking to the 80/20 rule and always with a silent nod to the immutable 24-month deadline. "Having the clarity of what outcomes you need and having [Rosholt's] decision making weekly was absolutely critical," Keller says. Or as Gopal puts it, "We needed that hammer to remove some of the bottlenecks." "There were so many lessons learned on the issues around change management and communication and how you line people up to do their jobs," says Rosholt. On those front lines, the work was grueling. "There were some days we wondered if there would be a light at the end of the tunnel, and were we ever going to get through this?" recalls Butler. "We had two years. The clock was ticking." In fact, according to Keller, the 280 or so members of the IT team eventually would put in 1.2 million hours, including overtime, on the project. But by fall 2005, there was light at the end of the tunnel. The team could see the new business processes and financial data governance mechanisms actually being used by Nationwide employees. And it was working. "They saw the value they were creating," recalls Butler. "The 'aha' moment came when we finally got a chance to look in the rear-view mirror."

At Last, the ROI

That happy 'aha' moment didn't last long. More than a year after the final wave of Focus was rolled out to the last business unit, the team is still working to finish that remaining 20 percent of functionality (the requests that were set aside during the initial push) and watching as data volume on the system reached more than 150 million financial transactions per month. "Do [the executives] have things they would have loved for us to go after, to

Vol/3 | ISSUE/11


get to the next level of evolution? I think so," says Gopal. "We've come a hundred miles, and we have another hundred to go." The first benefit of the transformation that Rosholt mentions is something that didn't happen. "You go through a project such as this, in a period of extreme regulatory and accounting oversight, and these things can cough up more issues, such as earnings restatements. We've avoided that," he says. "That doesn't mean we're perfect, but that's one thing everyone's amazed at. Our balance sheet was right." Next, Rosholt notes that users of the Focus system are experimenting with its new features. For example, Nationwide's Scottsdale Insurance business unit will soon be able to identify and analyze its most profitable customers. "All of this used to be in an Excel spreadsheet that they always had to reconcile into the financials," Rosholt says. "Now we can understand what value proposition we bring our customers and what value proposition our customers bring us. This is a big change for our industry." Nationwide now also has the flexibility to change the regional structures by which it designates its core lines of business. For example, Rosholt says that when executives decided to install a new geography-based reporting structure in the property and casualty insurance operations, moving a business line from one state or region (Midwest) to another (Northeast), the process would have taken nine months. Now it takes just a couple of days. The end result is that execs and business-unit managers can now get a clearer and more accurate picture of how Nationwide's state and regional lines are doing — and get it much faster. Last, but certainly not least, Rosholt can now give Jurgensen a more immediate, accurate and comprehensive picture of Nationwide's financial health. It took nearly 30 days to close Nationwide's books for 2006.
With the new system in place, the amount of time necessary has declined significantly: 19 days to close Q1 2007; 16 days for Q2; 15 days for Q3. The target is to get that down below 10 days by 2010. As for the goal to grow Nationwide's net income, the company recorded Rs 4,400 crore in 2005 and topped that with Rs 8,400 crore in 2006. For those on the Focus team, that's the kind of financial news that is always a pleasure to deliver to senior executives. "It's nice," says Butler, "when they can see their financials in a timely manner." CIO
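The close-time improvement is easy to quantify. A quick sketch using the figures reported above (the computation itself is purely illustrative):

```python
# Days needed to close Nationwide's books, as reported in the article.
close_days = [
    ("FY2006", 30),   # before the new system was fully in use
    ("Q1-2007", 19),
    ("Q2-2007", 16),
    ("Q3-2007", 15),
]

first, last = close_days[0][1], close_days[-1][1]
reduction_pct = round((first - last) / first * 100)

print(f"Close time cut from {first} to {last} days ({reduction_pct}% faster)")
# The stated target: under 10 days by 2010.
```

Three quarters into the rollout, the close cycle had already been cut in half, with the sub-10-day target still to come.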




Thomas Wailgum is senior writer. Send feedback on this feature to



Storage Infrastructure

Experts predict that ultimately iSCSI over 10G Ethernet will dethrone Fibre Channel. Isn't it time to start planning a migration? By Barbara Darrow

Fibre Channel is the king of enterprise storage-area-network technologies. It's fast, it can handle long distances, and it's got strong vendor support. iSCSI, however, is the heir apparent. When it comes to new SANs, add-ons to existing systems or departmental-level installations at large enterprises that have Fibre Channel, customers increasingly are choosing iSCSI. And when iSCSI over 10 Gigabit Ethernet comes online, the biggest remaining hurdle to adopting iSCSI storage — its perceived slow performance — will fall. At that point, iSCSI will become the storage interconnection transport of choice across the enterprise. How soon until that happens? Analysts expect support for 10G Ethernet will be built into enterprise storage arrays and servers within the next three years. This means IT executives need to start learning about iSCSI


Reader ROI:
- Why iSCSI will dominate
- The importance of changing alliances today
- How cost and the lack of expertise will drive change



now, begin asking their storage vendors about their iSCSI road maps and begin planning for an orderly migration to iSCSI. The ascendance of iSCSI is backed by four reasons:
Cost. An iSCSI storage solution running on familiar Ethernet infrastructure costs a fraction of a high-end Fibre Channel solution in terms of the technology and the expertise needed to run it, IT experts say.
Staffing. Finding good Fibre Channel talent can be a challenge, and the scarcity drives up the cost. "It's hard to hire people with Fibre Channel expertise," says Andrew Reichman, an analyst with Forrester Research.
Compliance mandates. The growing list of industry and government mandates about the handling of data — Sarbanes-Oxley, credit card regulations — is driving companies to think through their storage and archiving policies carefully. The need to digitize documents, from simple forms to X-rays, likewise motivates companies to get their storage houses in order as inexpensively as possible without sacrificing utility and reliability.
Virtualization. "Server virtualization is a big driver," says John Sloane, analyst with Info-Tech Research Group. Many midsize companies that may not have invested in network storage because of cost now look to consolidate more of their Windows and x86 architecture with VMware. "To get the best benefit from VMware [for] disaster recovery, high availability and advanced data protection, you're really driven toward putting the virtual-machine files and data on a SAN," he says.

Cases in Point

When VMware added iSCSI support last year, another hurdle to adoption fell away. That means companies that "may have been on the fence about purchasing network storage or staying with direct-attached storage now have a trigger that helps them see networked storage," Sloane says. The confluence of these trends has led Burton Group analyst Nik Simpson to refer to Fibre Channel as "dead technology walking." Many customers aren't waiting for 10G Ethernet; they're finding plain old Ethernet has more than enough horsepower to get the job done. That's the case for the IT department of Clackamas County, Oregon, which has moved from Fibre Channel to an EqualLogic iSCSI SAN. "Our Fibre Channel stuff is now completely gone except for one Brocade switch, [which] we bought specifically to manage IBM tape drives," says Chris Fricke, senior IT administrator of information services for the county. "Now, everything is on iSCSI


32% of existing NAS customers believe that they will replace their NAS with iSCSI to some extent over the next three years.

47% of existing NAS customers say that they will not replace existing NAS infrastructure but will deploy iSCSI as new buildouts.

Source: Enterprise Strategy Group

SANs: our normal file storage, our document imaging, our Exchange System and our databases," he says. "It's considerably cheaper not to have to deal with special cards to get it to work, and we didn't have to train people on new technologies. Our primary business goal is not baby-sitting our storage infrastructure," Fricke adds. Fricke isn't on 10G Ethernet yet, but he's building his storage network with 10G Ethernet in mind. "We'd made the decision that Fibre Channel wasn't working out and iSCSI was the bomb," he says. "We had to look at the entire market, so we did the evaluation. The biggest driver was cost. With Fibre Channel, what we had was 1GB host bus adapters (HBA), a 1GB backplane. To upgrade all that really is a forklift upgrade to pull it out and bring in four gigs or whatever. For us, that would have been at least a half-million [dollars] and not feasible. So we brought in EqualLogic for $50,000, and we'll grow that as we need it." "Analysts have been talking about iSCSI on 10-Gigabit Ethernet for three or four years. It's taken awhile, but now it really is gaining traction and coming up the ladder," adds Info-Tech's Sloane. Scott Christiansen, IT director for Leo A. Daly, an international architecture and engineering firm in Omaha, is also impressed with iSCSI SANs. The advantage of moving from a complete network-attached storage (NAS) environment to an iSCSI vs. a Fibre Channel SAN is cost "first and foremost," he says. Indeed, Fibre Channel switch-and-HBA combos easily run to Rs 1.6 lakh per unit, whereas standard iSCSI SANs can operate with off-the-shelf cards.
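Fricke's arithmetic generalizes. A back-of-the-envelope comparison of build-out costs (the per-port and base figures below are hypothetical placeholders, not vendor quotes; only the shape of the calculation matters):

```python
def san_cost(servers, per_port_connect, base_array):
    """Rough build-out cost: base array price plus per-server connectivity.

    Fibre Channel needs an HBA and an FC switch port per server;
    iSCSI can ride on the Ethernet NICs and switches a shop already knows.
    """
    return base_array + servers * per_port_connect

servers = 20
# Hypothetical figures for illustration only.
fc = san_cost(servers, per_port_connect=3500, base_array=150000)
iscsi = san_cost(servers, per_port_connect=300, base_array=50000)

print(f"FC ~${fc:,} vs iSCSI ~${iscsi:,} ({fc / iscsi:.1f}x)")
```

The gap widens with every server attached, because the per-port delta is multiplied, which is why the savings show up "at every level, from the cabling up," as Reichman notes below.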

The Daly Show

In Leo A. Daly's case, a Fibre Channel solution would have cost six figures compared with less than half that for an iSCSI solution, Christiansen says. The cost savings of iSCSI over Fibre Channel happen at every level, from the cabling up through the switch ports on the IP switches, Forrester's Reichman says. Best Best & Krieger, a Riverside, California, law firm, is making the transition from HP Fibre Channel technology to iStor Networks storage for e-mail archiving. The move to iSCSI was a no-brainer, especially because the Fibre Channel infrastructure was aging. "For us, going from Fibre Channel to iSCSI was an upgrade," says Tim Haynes, senior manager of IT for the 400-person law firm. "We were pretty well maxed out on the Fibre Channel. To grow that would have been very expensive, and it gets even more complex to have a highly available storage system with Fibre," he says.



Overall, iSCSI brought simplification. The use of standard networking gear for storage traffic is a major benefit. "We're a Cisco shop, and we put in a 1GB Ethernet switch, and that's it. If it goes down, it's easy to move to another port on another switch, whereas to have a Fibre Channel switch sitting on the shelf as a failover, or even having one online for redundancy, is very complicated." The Gem Group, a Lawrence, Massachusetts, business specializing in promotional wares, also moved from Fibre Channel to iSCSI. It was outgrowing its Xiotech Fibre Channel implementation and evaluated Fibre Channel and iSCSI replacements last year. Its requirement for failover quickly ruled out four of the initial eight contenders on cost grounds, says Brian Smith, technology manager for Gem. The four remaining solutions — from EMC, Xiotech, Compellent Technologies and EqualLogic — all appeared to be in the same price range. After Gem Group looked at the cost of management, as well as upfront price, however, only Compellent remained. Going with iSCSI meant that Gem Group, with a small IT staff, could rely on its existing TCP/IP and IP expertise. "Pricing was huge, but also, with just 18 people on the IT staff, we all wear a lot of different hats, and I was the only one who knew Fibre Channel after administering it for four years. It's difficult to manage, and learning it takes time. We're growing our business, putting in a new ERP system, and

we don't want the added expense of Fibre Channel expertise," Smith says.

38% of existing Fibre Channel customers who have deployed iSCSI believe that the latter will replace Fibre Channel.

Source: Enterprise Strategy Group

The Case for Fibre Channel

Of course, just as few companies have ripped out mainframes in favor of PC-based servers, enterprises will not forklift out Fibre Channel for iSCSI. "Fibre Channel will be around for a long time to come," says Tony Asaro, analyst for Enterprise Strategy Group. "There's a ton of investment in Fibre Channel in time, money and resources," he says. Asaro adds that religious and political fiefdoms within companies can prolong a technology's life span. There are storage constituencies within organizations that have bet on Fibre Channel and will defend it to the end. The result doesn't have to be an all-or-nothing proposition. Scott Winslow, founder and CEO of Winslow Technology Group, a Boston storage specialist, estimates 10 percent to 15 percent of his customers are on iSCSI, 30 percent on Fibre Channel and about 60 percent on a combination. A common misperception about iSCSI SANs is that they can run on the same Ethernet backbone as other traffic. That is technically true, but the thought of such commingling is anathema to some experts, who cite security concerns. In reality, the recommended implementation for iSCSI storage is to run it on a separate Ethernet

How Happy Are You With iSCSI SANs?
An Enterprise Strategy Group survey asked customers about their organization's satisfaction level with the following attributes of its iSCSI SANs.

Network Performance: 29% very satisfied, 58% satisfied, 10% neither satisfied nor dissatisfied, 3% dissatisfied
Ease of Installation: 19% very satisfied, 55% satisfied, 18% neither satisfied nor dissatisfied, 8% dissatisfied
[Unlabeled attribute]: 24% very satisfied, 52% satisfied, 17% neither satisfied nor dissatisfied, 7% dissatisfied
Reliability for Mission-Critical Applications: 17% very satisfied, 55% satisfied, 24% neither satisfied nor dissatisfied, 2% dissatisfied, 2% very dissatisfied
[Unlabeled attribute]: 22% very satisfied, 54% satisfied, 16% neither satisfied nor dissatisfied, 4% dissatisfied, 4% very dissatisfied
Capital Cost: 27% very satisfied, 45% satisfied, 26% neither satisfied nor dissatisfied, 2% dissatisfied



network. Even in that instance, the costs are less than with Fibre Channel, because IT staff is dealing with the same set of protocols and management tools across data and storage backbones. The Gem Group maintains separate storage and data networks for security and performance reasons "but the switches are interconnected," Smith says. "We have the data center and one data closet, so we have some physical servers and some virtual servers in the secondary closet along with a secondary SAN that's connected at fiber-optic speeds. That let us do longer distances between data sites without degradation," he says. In addition, although iSCSI on Ethernet runs on standard cards, performance-boosting and pricier HBAs still are often necessary to take advantage of 10G Ethernet. Thus, the real cost savings of moving from Fibre Channel to iSCSI may be less dramatic than some proponents say. Winslow Technology's Winslow estimates in some cases the savings is more like 10 percent to 15 percent than the higher figures some pundits cite. It seems certain that the cost of Fibre Channel components will fall as iSCSI storage gains ground, further narrowing the price gap. Although purists may not be pleased, a hybrid Fibre-Channel-andiSCSI approach is finding acceptance in many sites. Winslow, who sells Compellent products, says several of his customers use Fibre Channel for their main storage repository but plug in additional iSCSI cards for failover storage needs.

But even after giving Fibre Channel its due, the consensus is that iSCSI is the ultimate winner. Analysts and users cite the upfront cost of Fibre Channel components, but stress that specialized expertise continues to be a problem. Winslow agrees that companies adding to an existing storage infrastructure, or moving from direct-attached storage, more likely will opt for iSCSI over Fibre Channel. Another iSCSI plus is that such important features as data replication and snapshots have been à la carte menu items in the Fibre Channel realm but are part and parcel of iSCSI. EqualLogic's storage solution brought Leo A. Daly advanced features, including snapshotting and bit-level replication between devices, for which they would have had to pay extra in the Fibre Channel world, Christiansen says. Snapshotting lets the system roll back to data at a set period in time in case of a failure. Clackamas County's Fricke also stressed this point: "When we bought Fibre Channel, we couldn't even afford snapshotting or replication." Both capabilities are now deployed at no extra cost. IT experts also say that with implementation know-how, iSCSI can rival current Fibre Channel speeds. The Gem Group's Smith opted for HP switches that will support 10G Ethernet if the company needs to go that route, but he also went with enterprise-class connections rather than standard network interface cards to optimize performance. "We do multiple connections, on each server. I go with a two-port QLogic card with two 1GB connections. You can connect that into the SAN in active/active mode. People say Fibre Channel is faster at two or four gigs, but I already have two gigs with iSCSI," Smith says. Storage analysts see the writing on the wall. "We believe that iSCSI will be the dominant SAN interconnect over time," the Enterprise Strategy Group's Asaro says. "Although Fibre Channel is the leading storage-networking interconnect, it is not ubiquitous because ultimately, it is expensive and complex." Companies that have implemented it see the value in terms of performance and reliability. "However, Fibre Channel has not reached universal adoption and therefore requires either complementary or replacement technology. This is where iSCSI plays a vital role," he says. Sloane of Info-Tech agrees. "There is tremendous growth opportunity. For iSCSI there is nowhere to go but up." CIO

Experts Agree: iSCSI It Will Be
Will you move from a Fibre Channel SAN to iSCSI SAN? Responses from companies that have Fibre Channel SANs and iSCSI SANs, and from companies that have Fibre Channel SANs, NAS and iSCSI SANs:
- We will replace FC SANs with iSCSI to a significant extent: 25% / 13%
- We will replace FC SANs with iSCSI to some extent: 33% / 43%
- We will not replace FC SANs with iSCSI; iSCSI SAN buildouts will be added: 42% / 45%
Source: Enterprise Strategy Group


Barbara Darrow is a Boston-area freelance writer. Send feedback on this feature to



Storage Management

You've got to be in the know about storage truths to make use of them. By Beth Schultz

Thanks to virtualization and a host of other technologies, storage has left its silo.

Its performance affects the whole computing shebang. Fortunately, new technologies that cross the boundaries of storage, management and compliance are smoothing over performance issues and easing the pain (and expense). But you've got to be in the know to make use of them. Here are seven storage truths that every IT person should understand.

Illustration by Binesh Sreedharan


1. You might be spending too much money on storage and still not getting performance gains.
Optimizing storage isn't about buying new stuff, says Mark Diamond, CEO at storage-consulting firm Contoural. It's about determining whether the data you've created is stored in the right place. This discussion goes beyond the basic concept of using inexpensive disk to store data and delves into how the disk is configured, especially when it comes to replication and mirroring. "We typically see that 60 percent of the data is overprotected and overspent, while 10 percent of the data is underprotected — and therefore not in compliance with SLAs [service-level agreements]," Diamond says. "Often, we can dramatically change the cost structure of how customers store data and their SLAs, using the same disk but just configuring it differently for each class of data."
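Diamond's point, same disks but a different configuration per class of data, can be sketched as a simple classification pass. The thresholds and policy names below are invented for illustration; real SLA tiers would come from the business:

```python
# Map each dataset's recovery-point objective (hours of tolerable data
# loss) to a protection policy. Thresholds and policies are illustrative.
POLICIES = [
    (1,   "synchronous mirroring"),      # RPO <= 1 hour
    (24,  "nightly replication"),        # RPO <= 24 hours
    (168, "weekly backup, cheap disk"),  # RPO <= 1 week
]

def protection_for(rpo_hours):
    """Return the cheapest policy that still meets the dataset's RPO."""
    for limit, policy in POLICIES:
        if rpo_hours <= limit:
            return policy
    return "archive tier"

# Hypothetical datasets with their tolerable data loss in hours.
datasets = {"orders_db": 0.5, "hr_docs": 24, "old_logs": 2000}
plan = {name: protection_for(rpo) for name, rpo in datasets.items()}
print(plan)
```

Mirroring everything is the "60 percent overprotected" trap; classifying first lets the bulk of the data drop to cheaper policies on the very same disk.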

2. Application-centric monitoring tools can help boost SAN performance.
Users who get great performance out of their storage-area networks have discovered application-centered monitoring for storage performance.

Reader ROI:
- How new technologies can help cut storage costs
- How backup tools ease compliance issues



For instance, the Affinion Group is testing a combination of Onaro's Application Insight and SANscreen Foundation monitoring tool. "We could be alerted in real time of any performance spikes and hopefully be informed of any issues that could cause an outage, before someone calls from the business line," says storage specialist Raul Robledo. "We wouldn't need to get inquiries or notification from individuals. We would get those right from a product that's monitoring our environment." A host of other products have entered the category of storage optimization, too.
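The alert-before-the-phone-rings idea Robledo describes is, at its core, threshold monitoring keyed to the application rather than the array. A minimal sketch (sample data, window size and spike factor are all made up):

```python
from statistics import mean

def spike_alerts(latency_ms, window=5, factor=2.0):
    """Flag samples that exceed `factor` x the trailing-window average."""
    alerts = []
    for i in range(window, len(latency_ms)):
        baseline = mean(latency_ms[i - window:i])
        if latency_ms[i] > factor * baseline:
            alerts.append((i, latency_ms[i], round(baseline, 1)))
    return alerts

# Per-application I/O latency samples in milliseconds (illustrative).
samples = [4, 5, 4, 6, 5, 5, 21, 5, 4]
print(spike_alerts(samples))  # the 21ms sample trips the alert
```

A real product adds correlation back to the SAN path, but the payoff is the same: the tool raises the flag before a business user calls.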


3. Green storage technologies can cut energy bills without sacrificing performance.
Storage isn't the biggest energy hog in the data centre, but new technologies can still help cut back on its power consumption by as much as 20 percent, users say. Even using storage space more efficiently can cut down on wasted capacity, experts say. This means spending less on storage in the long run. At San Diego Supercomputer Center, Don Thorp, manager of operations, looked to Copan Systems, one of a handful of relatively new, smaller green storage vendors. He reports that storage consumption is down by 10 percent to 20 percent since switching to Copan Systems last July. Many more such vendors are entering the market.


4. Advanced backup-management tools ease auditing and compliance.
Over the last several years, numerous vendors have taken backups from boring to remarkable by rolling out fancy backup-management tools. Spun off from the broader storage-resource management market, these tools, of course, monitor and report on backups of products from multiple vendors. But they also give IT administrators an at-a-glance picture from a single console, in real time and historically. They can ease the auditing process, help create chargeback programs and verify internal service-level agreements for backups. Heterogeneous backup-management tools are available from various niche vendors and the mainstream storage biggies.
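The core reporting job of such a tool, rolling heterogeneous backup results into one view and checking them against an internal SLA, looks roughly like this (the record format and the 95 percent target are invented for illustration):

```python
from collections import defaultdict

# Job records as a multi-vendor tool might normalize them (illustrative).
jobs = [
    {"dept": "finance", "vendor": "A", "ok": True},
    {"dept": "finance", "vendor": "B", "ok": False},
    {"dept": "legal",   "vendor": "A", "ok": True},
    {"dept": "legal",   "vendor": "A", "ok": True},
]

def success_rates(jobs):
    """Per-department backup success rate across all vendors."""
    totals, good = defaultdict(int), defaultdict(int)
    for j in jobs:
        totals[j["dept"]] += 1
        good[j["dept"]] += j["ok"]
    return {d: good[d] / totals[d] for d in totals}

SLA = 0.95  # assumed internal target
rates = success_rates(jobs)
breaches = [d for d, r in rates.items() if r < SLA]
print(rates, breaches)
```

The same per-department rollup drives both uses the article names: auditors get the historical success record, and chargeback gets a per-consumer breakdown.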


5. Storage virtualization appliances can give you a single storage system for both backups and live storage.
Just ask the University of Florida College of Veterinary Medicine (UFCVM). Over the last six months, the college has been putting its 7TB storage area network through its paces, using it for nearline backup and primary storage. UFCVM relies on storage virtualization manager (SVM), a


In Fact

60% of the data is overprotected and overspent, while 10 percent is underprotected.

20% is the amount by which new technologies can reduce data center power consumption.

95% of all business communications are created and stored electronically.

virtualization appliance from StoreAge Networking Technologies, now owned by LSI. The SAN setup reduced backup times by half, and the project came in under budget, says Sommer Sharp, systems programmer for the college in Florida. Provisioning is a painless matter of moving volumes to any server that needs it, so live data can be managed as easily as backups.


6. Lawsuits are a fact of life, and sloppy e-discovery can cost you millions.
Recent surveys show that, on average, US companies face 305 lawsuits at any one time. With each lawsuit comes the obligation for discovery — production of evidence for presentation to the other side in a legal dispute. With 95 percent of all business communications created and stored electronically, that puts a heavy burden on IT to perform e-discovery: finding electronically stored information. In the US court system, the onus of e-discovery took on new weight on December 1, 2006, when amendments to the Federal Rules of Civil Procedure (FRCP) took effect. "With the amendments to the FRCP, the courts are saying, 'We know the technology exists to do this stuff. We want to see you take some reasonable steps to put processes and technologies together to do e-discovery. And if you don't, we're really going to hold you accountable for it,'" says Barry Murphy, principal analyst at Forrester Research. He cites the recent case of Morgan Stanley vs Ronald Perelman, in which Morgan Stanley was hit with a Rs 62.8 crore jury verdict that hinged primarily on the company's lax e-discovery procedures.


7. Storage grid standards could put an end to proprietary storage management.
The Open Grid Forum, a standards organization focused on grid computing, is working on a variety of standards for the compute, network and storage infrastructure, all the way from describing jobs to being able to move and manage data, says Mark Linesch, who heads the organization. Work is progressing on defining a grid file system and naming schemes, and on developing a storage resource manager for grids. The group is collaborating with other standards bodies such as the Distributed Management Task Force and the Storage Networking Industry Association. The ultimate goal is to enable proprietary storage vendors to make their gear interoperable. CIO


Send feedback on this feature to



Data Management

As virtual machines proliferate, IT has to watch where they park their backups so that they don't bog down network performance or send licensing costs skyrocketing. By Deni Connor

As more servers

are virtualized, backing up and protecting them becomes more of a problem. It's not enough for IT to back up each virtual server and its data. Protection is needed for the virtual server's image — its OS, configuration and settings — and the metadata on the physical server that identifies the virtual server's relationship to networked storage. The challenge is to choose the right virtual server backup option:
- Traditional agent-based backup software, which installs a software agent on each virtual machine to back it up.
- Serverless backup, which offloads backup processing from virtual machines (VMs) to a separate physical server.
- Snapshotting VMs with software included with the virtualization package to protect data and images.
- Writing scripts and executing them to quiesce (minimize the number of processes running on) the VM, back up its contents and restore the VM.
- A combination of agent-based software and cloning.
Each virtual-machine backup approach has its advantages and disadvantages. Chief among the disadvantages is the effect on network performance and utilization. While virtualization can result in better utilization of server resources, backing up all the newly created VMs concurrently for a physical server can overwhelm the network and take resources from applications running in other VMs. By virtualizing physical machines you increase the number of servers contending for one bus. So Chris Wolf, senior analyst, Burton Group, suggests users only virtualize servers that contain a PCI-Express (PCI-e) bus. "When you have a shared I/O channel for all your PCI devices, traditional PCI devices can severely slow you down when you talk about six to 10 VMs sharing the same bus," Wolf says. "PCI-Express should be the


Reader ROI:
- What to consider when choosing a backup technology
- The options in virtual server backup



bus of choice for all new virtualization deployments, as it offers a transfer rate up to 16Gbps in full duplex, compared to PCI Extended, which has a maximum throughput of 4Gbps." Also to consider is the cost associated with agent-based backup software used in a virtual environment. Since most vendors of backup software require a separate license for each backed-up VM, as well as one for the physical machine hosting the VMs, licensing costs can increase quickly. The advantage of agent-based backup software is that IT administrators are familiar with it, having deployed it for many years to back up the physical machines in their environment.
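The licensing math is simple but stings. A sketch assuming a per-agent license for each VM plus one for each host, versus licensing a single backup proxy (all prices are hypothetical, and real vendor schemes vary):

```python
def agent_licensing(hosts, vms_per_host, vm_license, host_license):
    """Agent-based backup: one license per VM plus one per physical host."""
    return hosts * (vms_per_host * vm_license + host_license)

def proxy_licensing(proxy_license):
    """Serverless/consolidated backup: license the proxy, not each VM."""
    return proxy_license  # illustrative; actual terms differ by vendor

agents = agent_licensing(hosts=4, vms_per_host=8, vm_license=500,
                         host_license=800)
proxy = proxy_licensing(proxy_license=3000)
print(agents, proxy)
```

Note how the agent-based total scales with consolidation density: every VM added to a host adds a license, which is exactly the cost trap the article warns about.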

What Users Say

Increasingly, IT users are opting for a combination of methods to back up virtual servers. A common approach is to use agent-based and serverless backup for protecting data on VMs, combined with cloning or snapshot technology for protecting and recovering server images if hardware fails. One user who has adopted this combined approach is Jim Klein, director of information services and technology for the Saugus Union School District. Klein uses the Open Source Xen virtualization hypervisor to virtualize the blade servers in his environment. "We treat VMs like any other server by using backup agents from Bacula, an Open Source backup solution," says Klein. While Klein uses agent-based backup to protect the data on his VMs, he uses cloning technology to deal with server failures. "For the base virtual machine images, we store them on the host computers and replicate them to the failed server or store them on a network attached storage, or NAS device." The metadata for Xen, which describes how servers attach to storage resources, is stored in a database called XenStore, which is included with the Xen hypervisor, and can be backed up easily by simply copying files to a backup device. Art Beane, IT enterprise architect at IFCO in Houston, also has found that a combination of backup technologies works best for him. Beane uses NetApp's SnapManager software to snapshot the data on his NetApp SAN, and cloning to back up the servers attached to them. He has six physical machines virtualized with VMware Infrastructure 3 into 23 VMs. "Our backup plan is common to virtual and physical servers," Beane says. "No persistent data is allowed on a server, only on the SAN. The SAN gets a snapshot backup every two hours and a full backup daily."


How conversant are you with these technologies?
(Implementation Begun / Discussing / Other)
Consolidation play
Storage virtualization
26% / 47%
Data lifecycle management


The system drives — both physical and virtual — in Beane's servers get imaged weekly using Acronis' TrueImage. In the event of a catastrophic server loss, the Acronis image can be restored either to a physical box or as a VM. This multilayered approach is the configuration Wolf most often recommends. "For smaller dedicated application servers, running the agent inside the VM is certainly ideal," Wolf says. "That needs to be combined with a policy and change-control process for the creation of storage and snapshots as well." In this way, when it comes time to recover files, all an IT administrator needs to do is bring online the backup copy of the VM snapshot and then restore the latest data files from the agent-based backup.



[Source: Symantec State of the Data Center 2007]
Another method for backing up VMs is to use serverless or consolidated backup technology. In consolidated backups, backup processing is offloaded from the VM and physical server to a separate backup server called a proxy, thus helping to avoid any performance impairments. Consolidated backup is commonly deployed using a combination of VMware Consolidated Backup (VCB) and agent-based backup. VCB consists of a set of drivers and scripts that enable LAN-free backup of VMs.

In a consolidated backup, a job is created for each VM and that job is executed on the proxy server. The pre-backup script takes a VM snapshot and mounts the snapshot to the proxy server directly from the SAN. The pre-backup script also quiesces the Windows NT file system inside the VM. The backup client then backs up the contents of the VM as a virtual disk image. Finally, the post-backup script tears down the mount and takes the virtual disk out of snapshot mode.

Taking snapshots and cloning VM images has many advantages. As with agent-based backups, most IT administrators are familiar with them. Snapshot and cloning capability also is included in many virtualization packages such as VMware and XenSource, and with many traditional backup tools.

"There's not a one-size-fits-all solution," Wolf says. "When you are dealing with large amounts of data such as in databases ... I prefer to configure VMs to use a raw LUN (logical unit) so the VM is not using a virtual hard disk, but actually mapping to actual storage resources on the SAN. That opens up the flexibility to use serverless backups, some of your snapshotting agents and all of the capabilities of your backup software that exist in the physical world." CIO

Send feedback on this feature to
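For readers who want the sequence spelled out, the consolidated-backup job described above (quiesce, snapshot, mount to the proxy, back up the disk image, tear down) can be sketched as follows. The function is an illustrative stand-in, not an actual VCB script or VMware API; it simply records the order of operations.

```python
# Illustrative sketch of a consolidated ("serverless") backup job as described
# above. The operations are hypothetical stand-ins for VCB-style pre- and
# post-backup scripts; they only record the order in which the steps run.

def run_consolidated_backup(vm_name, log):
    """Run one proxy-side backup job for a single VM."""
    # Pre-backup script: quiesce the guest file system, take a snapshot,
    # and mount that snapshot on the proxy server directly from the SAN.
    log.append(f"quiesce {vm_name}")
    log.append(f"snapshot {vm_name}")
    log.append(f"mount {vm_name} on proxy")

    # Backup client: back up the contents of the VM as a virtual disk image.
    log.append(f"backup {vm_name} disk image")

    # Post-backup script: tear down the mount and take the virtual disk
    # out of snapshot mode.
    log.append(f"unmount {vm_name}")
    log.append(f"release snapshot {vm_name}")

def run_all(vms):
    """One job is created per VM; each is executed on the proxy server."""
    log = []
    for vm in vms:
        run_consolidated_backup(vm, log)
    return log
```

The point of the shape is that every step runs on the proxy, so the production VM only pays the cost of the quiesce and snapshot, not the backup itself.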

REAL CIO WORLD | April 15, 2008


Ajay Kaul, CEO, Domino’s Pizza, says that in a company where a minute too late means money lost, IT is the base on which speed is built.

Hustling IT Along

By Kanika Goswami

In the early ’90s, Domino’s outlets in the US called off their 30-minutes-or-free program after a woman sued the company when a harried delivery person hit her. Media pressure to safeguard the lives of pizza delivery people soon came to India. Suddenly, pizza chains eased away from their 30-minute guarantees. Some companies moved to a 39-minute promise while others decided to forgo the concept entirely.

Domino’s in India, however, stuck to its guns. It decided to make its guarantee count while simultaneously taking the heat off delivery people. By using IT to tweak internal processes, the company took its focus on speed out of the street and into its stores. It made time — in addition to its pizzas — a differentiator that the competition can’t beat. Ajay Kaul talks about this journey and how, with the help of IT, Domino’s introduced speed and accountability at every level.

CIO: How important is speed to Domino’s USP?

View from the top is a series of interviews with CEOs and other C-level executives about the role of IT in their companies and what they expect from their CIOs.



Ajay Kaul: Speed is a critical component of Domino’s USP. But, we relate more to ‘hustle’ than to ‘speed’. At Domino’s, we have a saying: ‘Hustle inside the store, not on the road’. We employ this motto to ensure that we deliver pizzas that are hot and fresh — in 30 minutes — without compromising the safety of the employee who is making the delivery. We operate with ‘smart hustle’ and positive energy inside all our stores.

Where does technology fit in the hustle? Technology has an important role. It ensures that once an order is taken, it gets executed within 30 minutes. Through


technology, we track load time, dispatch time and delivery time. IT helps us find patterns of systematic delays in specific areas or at specific times of the day, etcetera. These can then be looked into and corrective action taken to improve our service to the customer.

To do all this, we use our POS (point of sale) software. The software is tailor-made for the pizza delivery business. It has been designed externally and is meant primarily for ‘quick-service’ restaurants with a specific focus on pizzas — 80 percent of their customers are pizza companies. Fortunately, the software dovetails into the system which we use in the US, the user-friendly one that uses a touch screen. Our aim is to scale up from our POS to the US system over the next one or two years.
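The delay-pattern analysis Kaul describes amounts to bucketing delivery times by area and hour and flagging buckets that breach the 30-minute promise. A sketch with an invented schema, not Domino’s actual POS data:

```python
from collections import defaultdict

# Each order records where and when it was delivered and the total minutes
# from order to doorstep. (Hypothetical schema for illustration only.)
orders = [
    {"area": "Koramangala", "hour": 13, "minutes": 24},
    {"area": "Koramangala", "hour": 20, "minutes": 38},
    {"area": "Koramangala", "hour": 20, "minutes": 41},
    {"area": "Andheri",     "hour": 20, "minutes": 26},
]

def systematic_delays(orders, promise=30):
    """Average delivery time per (area, hour) bucket; flag averages over the promise."""
    buckets = defaultdict(list)
    for o in orders:
        buckets[(o["area"], o["hour"])].append(o["minutes"])
    averages = {k: sum(v) / len(v) for k, v in buckets.items()}
    return {k: avg for k, avg in averages.items() if avg > promise}
```

On the sample data this flags only the 8 PM orders in one area (average 39.5 minutes), which is exactly the "specific area at a specific time of day" pattern the interview describes.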

Photo by Shivay Bhandari

What benefits has the POS software given Domino’s? It has made the order-taking process simpler and faster, and has also reduced the time it takes to train new employees. Initially, we tried to use our old infrastructure with the software, but we realized that the new software required better infrastructure if it was to deliver results. Therefore, we made a significant investment in improving IT infrastructure at our stores. Training employees to use the POS effectively and to use all its features remains a challenge.


Ajay Kaul expects IT to:
Support the company’s need for speed
Reduce waste in the supply chain
Help manage quality


How does it help you make better business decisions? The POS software helps us understand consumer behavior. It tells us consumers’ preferences in crust, toppings, etcetera. It also helps us customize offers for consumers. Marketing has already started using the POS database for CRM-related activities. The database also helps us identify consumers who do not come back to us after eating once. It helps us understand them better and work out a plan to bring them back.

Do you use BI to determine your toppings? How else do you use this data? We normally study which pizzas on our menu sell better. This aids us in removing pizza combinations that are out of fashion and keeping our menu simple. We also try to keep customer information updated. And all these processes reside on our POS system. We have a pull-push mechanism through which this data goes onto a server that facilitates high-end analytics. Then, we re-feed all this data back to the store. There are some programs that run internationally and some that run locally. This data tells us what we should offer to customers based on their past preferences and how frequently they visit us, since we now have all these parameters. It’s fairly scientific. We have millions of customers and keep track of the toppings each uses and their visit-frequency, so that we know what to offer. To my mind, we have a best-in-class system. I doubt anybody in any food service or retail food business does this as well as we do.

“We track the toppings and the visits of millions of customers, so that we know what to offer. I doubt anybody does this as well as we do.” — Ajay Kaul

Managing inventory and the supply chain is another challenge in the fast-food industry. How do you stay on top of that? We deal in perishable items, so it is important that we procure the right quantity at the right time, or we end up with wastage or contamination. With IT, we can optimize procurement and dispatch of inventory to outlets spread across 35 cities. To do this, we use our ERP combined with co-ordination between operations and procurement. Inventory is centralized region-wise, where the commissary (equivalent to a factory or warehouse) of a particular region procures and supplies all items needed by outlets according to pre-determined dispatch plans. Outlets forward the items they need to the commissary, based on which the items are procured or produced and supplied to the outlets. The inventory position is constantly monitored using ERP, and the purchase and manufacture of items are based on that.

Domino’s intends to ramp up to almost 500 outlets over the next three years. What challenges do you foresee? The key challenges in the current retail boom environment are manpower and rentals. At present, we have over 5,000 employees for over 182 outlets. That indicates that over the next three years we will need to triple our manpower — at a time when retail has just started to take wing. Attracting, recruiting and training manpower is the first key challenge. The way rentals have moved up also makes the search for good locations at reasonable prices a difficult task. We have laid down internal benchmarks in terms of ROI on all new stores, and the increase in rentals will put considerable pressure on that.


You spent two years in Indonesia with the delivery company TNT Express. What did you learn that you use today? Domino’s is a 5,000-strong company with over 180 stores in 33 cities. My stint at TNT gave me experience of working in a diverse culture. It taught me flexibility and resilience, which is useful in dealing with the diversity of cultures, religions and work practices in India. Additionally, I picked up good QSR (quick-service restaurant) practices that we utilize here. Indonesia is a few years ahead in terms of the evolution of food services, thanks to the presence of QSRs.

So, what kind of quality control measures does Domino’s have in place? We are in an industry with some of the most stringent food preservation norms, so we require a ‘cold chain’ right from the time material leaves the vendor. The material is stored under temperature-controlled conditions until it reaches us, and even when it moves to our stores in 182 destinations. We have stringent internal control norms that look at temperature, so we have data loggers on our trucks. It’s fairly intense. In our setup, the points where we need quality control continue along the chain, because temperature has to be maintained between 1 and 4 degrees Celsius. We have to meet international quality norms, and about 400

managers play the quality controller role. IT helps us get our quality reports to a central place. We are also planning to give handheld devices to all quality auditors, so that they can be on the move and still keep updated to allow them to analyze problems and find solutions. We are also planning to backend this to our vendors so they get online access to our quality reports.
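As a toy illustration of the data-logger checks described above, flagging readings outside the 1 to 4 degree Celsius cold-chain norm is a one-liner (the truck names and reading format are invented):

```python
# Toy check of truck data-logger readings against the 1-4 degrees Celsius
# cold-chain norm described above. Truck IDs and readings are invented.

COLD_CHAIN_RANGE = (1.0, 4.0)  # degrees Celsius

def out_of_range(readings, lo=COLD_CHAIN_RANGE[0], hi=COLD_CHAIN_RANGE[1]):
    """Return the (truck, temperature) readings that breach the norm."""
    return [(truck, t) for truck, t in readings if not lo <= t <= hi]

readings = [("truck-7", 2.5), ("truck-7", 3.9), ("truck-12", 5.2)]
# truck-12's 5.2 degree reading is the kind of breach a quality manager
# would be asked to investigate.
```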

Domino’s
Sales: 10 lakh pizzas a month
Stores: 180 across 33 cities
Employees: 2,000 people
Head of IT: Ravi Gupta

In an organization like Domino’s, what’s the role of an IT head? An IT head in our company should and does think like a business manager. With the kind of IT advancements that are taking place, it is integral to the job of the IT head to study how IT can enable the execution of business. In the future, IT is not only going to assist the business, but also be central to customer acquisition, retention, and loyalty.

What technologies do you plan for in the near future? Domino’s in the US has launched a pizza tracker service. We make a ‘30-minutes or free’ offer and now if customers want to know which stage within these 30 minutes their pizza is at, we can tell them. Ten minutes after placing their order, customers will be able to know whether their pizzas are still in the oven or on the road.

In India, that service will be launched soon. All we require for that is 100 percent seamless online connectivity. Within the next few months, our main server will have this. Then customers will be able to order online or via SMS, and we will be able to track their pizzas.

Do you see further growth of ready-to-eat foods in India and do you plan to join it? Due to an increase in double-income families, the growing number of TV channels enticing people to stay glued to their TVs, and the growth of nuclear families, people have less time to cook. Therefore, I expect the market for ready-to-eat foods to grow rapidly, like it has in the US. We often say that our real competition is not what anybody perceives as competition but home food. At the moment, however, Domino’s is not planning any diversification into the ready-to-eat business. CIO

Kanika Goswami is a special correspondent. Send feedback on this interview to



technology

Illustration by MM Shanith

From Inception to Implementation — IT That Matters

Using advanced backup management tools, IT execs can get a whole lot savvier about auditing, capacity planning, SLA compliance and more.



New Backup Tools for You

By Beth Schultz

IT management | Monitoring backups has always been one of those unglamorous IT chores. Over the last several years, however, numerous vendors have taken backups from boring to remarkable as they roll out heterogeneous backup-management tools that perform a host of new functions.

Spun off from the broader storage-resource management market, these tools monitor and report on backups across multiple vendors' backup products. In doing so, they can ease the auditing process. They create a way to implement chargeback programs for backups. They let network executives offer and verify service-level agreements for backups, and more.

Heterogeneous backup-management tools are available from various niche vendors including Aptare, Bocada, CommVault and WysDM Software, as well as such infrastructure vendors as EMC and Symantec. An enterprise might be running EMC's Legato NetWorker, IBM's Tivoli Storage Manager and Symantec's Veritas Backup Exec, but with backup-management software, an IT administrator can get an at-a-glance, big-picture look at what's happened with all those operations from a single console, in real time and historically.

Some vendors take a traditional client/server approach to backup management. An example is EMC's Backup Advisor, in which agents sit on production servers, and backup hosts feed system information into the backup-management server residing on the network. More typical is the agentless approach, favored by Bocada, WysDM and others, in which backup-management software gathers statistics through scheduled polling.

These tools are getting ever more sophisticated. Recently, start-up Illuminator released Virtual Recovery Engine (VRE), which coordinates reporting of backup applications and other data-protection technologies. In addition, the software associates that information with the application data, so IT executives get an easy view of the backups connected to every data set, says Yishay Yovel, a vice president with the vendor. The initial release provides interfaces to storage arrays and point-in-time copying, replication and backup applications from EMC and Network Appliance.
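Under the hood, these tools differ mainly in how they collect data; the agentless approach amounts to polling each backup product on a schedule and mapping its job records into one common shape. A minimal sketch of that idea, with invented adapters and field names rather than any vendor's real API:

```python
# Minimal sketch of agentless, heterogeneous backup reporting: each adapter
# polls one vendor's backup product and maps its job records into a common
# schema. The poll functions and field names are invented for illustration.

def poll_networker():
    # Pretend result of polling one backup product's job catalog.
    return [{"client": "db01", "status": "succeeded", "bytes": 7 * 2**30}]

def poll_tsm():
    # A second product with a different native record shape.
    return [{"node": "web01", "rc": 0, "size": 2 * 2**30}]

ADAPTERS = {
    "Legato NetWorker": lambda: [
        {"host": j["client"], "ok": j["status"] == "succeeded", "bytes": j["bytes"]}
        for j in poll_networker()
    ],
    "Tivoli Storage Manager": lambda: [
        {"host": j["node"], "ok": j["rc"] == 0, "bytes": j["size"]}
        for j in poll_tsm()
    ],
}

def big_picture():
    """One at-a-glance view across every product: (product, host, ok, bytes)."""
    return [
        (product, j["host"], j["ok"], j["bytes"])
        for product, poll in ADAPTERS.items()
        for j in poll()
    ]
```

A real product would poll on a schedule, keep history for trending, and cover far more backup products, but the adapter-plus-common-schema shape is the core of the single-console view.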

Better Service for SLA Management

Given the rise of the dynamic, open New Data Center, products that provide a centralized, heterogeneous view of the data-protection infrastructure are a huge boon. Suddenly, monitoring SLAs is a lot easier.

WysDM for Backups software, for example, uses a predictive-analysis engine to spot potential SLA problems. The engine learns the normal behavior patterns of the data-protection infrastructure, then flags discrepancies, says Jim McDonald, CTO and co-founder of WysDM Software, one of the pioneers in heterogeneous backup management. For example, if the engine notices the backup of financial data is taking five minutes longer each night, WysDM for Backups could notify IT that if it doesn't


address the situation, it will fall out of SLA compliance in X amount of time. Another example: the engine might notice the absence of a nightly 3GB file-system backup. Because that doesn't fit normal behavior — and could put IT out of SLA compliance — the software would issue an alert, he says. "This is the difference between just having technical output of backup information and providing business protection."

Steve Frewin, storage administrator for TD Banknorth, a banking and financial-services company in Portland, Maine, and a wholly owned subsidiary of the TD Bank Financial Group, says he is using backup-management functions within Symantec's broader Command Central Service software for a daily health check. That differs from what he had to do in the old days when a server didn't complete its backup within SLA windows: gather the backup's parameters manually.

Having the historical perspective also helps him provide better advice when IT administrators ask how additional data volume would affect the backups. "I can now say, well, we're just barely meeting that SLA now; if you add [100 gigabytes], that's going to be a problem," he says. "Before, all I could say was, 'Well, I think we're out of room' — that doesn't go far with management."

Backup management is easing many pain points. Chargeback is an example. Since the tools give an enterprise view, it's easier for IT to figure out whose data is taking up how much backup space.

With these tools, auditing and compliance become painless, some users say. That seems to be the case at Catholic Healthcare Partners (CHP), which uses Bocada's Bocada Enterprise to manage its backup operations and ease audits. "We use Bocada to print reports and say, 'Here you go — it's done.' We can customize reports to auditors' needs, and that's so much easier than writing long SQL statements to gather the information," says Brian Witsken, the lead storage-management systems engineer for the 30-hospital, non-profit healthcare system.

Bocada, one of the first vendors to offer a heterogeneous backup reporting tool, has more than 400 enterprises using its software. According to Nancy Hurley, a vice president with the company, 75 percent of those users say Bocada reports are essential in helping them pass audits.

Illuminator also pitches itself as an antidote for auditing headaches. With VRE, users can recover application data on request and show exactly what assets are in place and how the data is protected. It also shows the processes in place to fix any problems that occur in the data-protection environment. And if a company can show how well organized it is about its compliance processes, "who knows, then maybe next time the auditor won't even ask [about the processes] and instead just agree to look at the reports," Illuminator's Yovel says.
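The predictive check McDonald describes, a backup creeping five minutes longer each night until it overruns its window, reduces to simple trend extrapolation. A sketch with invented history and window size:

```python
# Sketch of the predictive SLA check described above: fit the nightly trend
# in backup duration and estimate how many nights remain before the job
# overruns its SLA window. History and window figures are invented.

def nights_until_breach(durations_min, window_min):
    """Estimate nights until the backup exceeds its window, given a linear trend."""
    n = len(durations_min)
    # Least-squares slope of duration versus night number.
    xbar = (n - 1) / 2
    ybar = sum(durations_min) / n
    slope = (
        sum((i - xbar) * (d - ybar) for i, d in enumerate(durations_min))
        / sum((i - xbar) ** 2 for i in range(n))
    )
    if slope <= 0:
        return None  # no upward trend, so no predicted breach
    latest = durations_min[-1]
    return max(0, (window_min - latest) / slope)

# A backup growing 5 minutes a night, now at 220 minutes against a
# 240-minute window, is predicted to breach in about 4 nights.
history = [200, 205, 210, 215, 220]
```

Real engines learn richer behavior patterns (including absences, as in the missing nightly backup example), but the "flag it before it breaches" logic is the same.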

Getting A Charge Out of Backups

While that remains to be seen, heterogeneous backup-management software is turning into a salve for many other enterprise pain points. Take chargeback and billing, for example. Because the tools provide an enterprise view and offer customizable reporting, it's much easier for IT executives to figure out whose data is taking up how much backup space.

Advanced Backup-Management Tools
Spun off from the broader storage-resource management market, these tools monitor and report on multiple vendors' backup products:
Aptare StorageConsole
Bocada's Bocada Enterprise
CommVault's CommNet Service Manager
EMC's Backup Advisor
Illuminator's Virtual Recovery Engine
Symantec's Veritas Backup Reporter
WysDM Software's WysDM for Backups
— B.S.

At CHP, Witsken intends to use a new reporting capability in the latest version of Bocada's backup-management software to institute a chargeback program for backup storage, he says. Because Bocada Enterprise 5.0, which began shipping recently, reports on server occupancy, Witsken will be able to use it for an at-a-glance view of how much data each business unit is occupying on the backup storage system, he says. "We are responsible for a certain amount of data. So, if we set total occupancy on our server at [1TB], anything over that we will be able to charge back to the customer," Witsken says, noting that CHP hopes to have a chargeback program implemented within six months.

At TD Banknorth, Frewin says centralized backup reporting has slashed the time it takes him to run a monthly virtual chargeback report from three days to a half-hour. While he doesn't use the reports for billing, he says they are critical in helping him understand


the data center demographics. "The reports help me figure out on an annual basis what it costs to maintain the data-protection infrastructure and to assess how the business units are using those resources," he says. Those assessments are quite useful for planning, Frewin says. "The reports highlight who my big users are, and so if I have a change that's coming up or some other data-center activity, I know to look at the resources for those users first because they're the hardest to move. This doesn't necessarily mean they're any more or less important than others, but if you've got a job that runs longer, it's harder to move," he adds.
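An occupancy-based chargeback of the kind Witsken describes, a fixed allowance with overage billed back to the business unit, can be sketched as follows; the allowance, rate and occupancy figures are invented for illustration:

```python
# Sketch of occupancy-based chargeback as described above: each business
# unit gets a backup-storage allowance, and anything over it is billed back.
# The allowance, per-GB rate and occupancy numbers are invented.

TB = 2**40  # bytes

def chargeback(occupancy_bytes, allowance=1 * TB, rate_per_gb=0.50):
    """Bill each business unit for backup storage beyond its allowance."""
    bills = {}
    for unit, used in occupancy_bytes.items():
        overage_gb = max(0, used - allowance) / 2**30
        bills[unit] = round(overage_gb * rate_per_gb, 2)
    return bills

occupancy = {
    "cardiology": int(1.5 * TB),  # 0.5TB over the allowance: billed
    "radiology": int(0.8 * TB),   # under the allowance: no charge
}
```

The enterprise-wide occupancy report is what makes this possible at all; without it, per-unit usage would have to be reassembled by hand from each backup product.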

New backup tools are making auditing and compliance less painful.

But perhaps the biggest advantage of centralized, heterogeneous backup management is the peace of mind that it brings. With the trend, use and volume reports he gets from his Command Central software, Frewin says he acts proactively rather than reactively. He also is beta-testing a stand-alone backup manager: Symantec's recently introduced Veritas Backup Reporter. The specialty software should give him more advanced reporting capabilities than the broader SRM package and help him grapple better with the company's capacity planning needs, Frewin says.

TD Banknorth is expanding its business rapidly, increasing its territory, customer base and types of services offered. As the business has grown, Frewin says he has witnessed 130 percent year-over-year growth in data backups. "With a growth curve like that, I have to be able to plan when I'm going to need more equipment, know what backup windows are available, and readily put a finger on problems in the data protection and backup environment. I couldn't just write scripts manually like I used to do," he says.

Better troubleshooting also was a selling point of backup-management software for Peter Amstutz, chief of network design for the Defense Contract Management Agency (DCMA), in Fairfax County, Virginia. "We wanted to take as much of the human element out of backups as possible," he says. For that, he selected backup-management software from CommVault, CommNet Service Manager (until recently named QNet Service Manager).

Sophisticated backup management was a must if the agency was to benefit fully from its move from tape backups to disk, Amstutz says. That 100 percent migration — unusual as a full-out tape replacement — occurred over four months beginning in October 2005. Today, two 64TB Network Appliance NearStore R200 Advanced Technology Attachment disk arrays house backup data at the agency's two main US data centers, which support 11,000 users. Data is replicated for off-site backups using NetApp's SnapMirror software, and backed up locally on disk using CommVault Galaxy. As of this spring, Mimosa Systems' NearPoint continuous data-protection software handles backups for the agency's Microsoft Exchange environment, Amstutz says.

82% of respondents to a backup tools survey say that they keep a hard copy of important documents they've also saved electronically. Source: IDC

"We set QNet up so that if any jobs fail, it notifies a local administrator. If a job fails more than three times, then it notifies a larger group, including supervisors. That's been quite effective," Amstutz says. A year into the new backup scenario, "we have just one person, on a very part-time basis, monitoring and controlling the entire backup infrastructure from one location," he adds. Previously, DCMA spent eight hours a week on average just handling tapes, he says.
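The escalation rule Amstutz describes, notify a local administrator on any failure and widen to supervisors once the same job has failed more than three times, is straightforward to sketch (the recipient lists are invented):

```python
from collections import Counter

# Sketch of the failure-escalation rule described above: every failed job
# notifies a local administrator; more than three failures of the same job
# escalate to a larger group including supervisors. Recipients are invented.

LOCAL_ADMIN = ["admin@local"]
SUPERVISORS = ["admin@local", "supervisors@hq"]

failure_counts = Counter()

def on_job_failed(job):
    """Return who to notify for this failure, escalating on repeats."""
    failure_counts[job] += 1
    if failure_counts[job] > 3:
        return SUPERVISORS
    return LOCAL_ADMIN
```

The counter is per job, so one chronically failing backup escalates without drowning supervisors in alerts for unrelated, one-off failures.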

Peace of Mind

Perhaps the biggest benefit, however, is the peace of mind centralized, heterogeneous backup management brings. As Amstutz says: "I can certainly say that our backups are a lot more reliable now. Before we overhauled the backup infrastructure, we were very uncertain as to whether things were getting backed up at all — and in some cases, we found that they actually weren't. Now we know for sure." CIO

Send feedback on this feature to



essential technology

Make Interoperability the Goal

What do you do when even standards bodies have their own agendas?

By Mario Apicella

Getting storage vendors to play together nicely is no easy task. When they do, it is an event worthy of pause — even if the gathering proves more about self-service than boosting the interoperability of their wares.

The Storage Bridge Bay Working Group (SBB), a non-profit created by storage vendors, recently announced an update to its set of specs for storage devices and components. The specs — SBB 2.0 — attempt to standardize a wide array of storage parts, including the physical dimensions of controller canisters, bay constraints, and connectors, as well as controller power and cooling parameters and enclosure management functionality.

Unless you design storage arrays, the specs aren't lively reading. But for vendors, they are fast proving to be essential, if growth in SBB's popularity is any guide. In the 15 months since its first document, the group has grown its membership by 39 percent, says Mark Hall, chairman of the SBB marketing subgroup, with Sun Microsystems the most recent addition to the fold. Although some big names are missing from SBB's ranks, the growth is all the more striking given that non-members are free to use the spec as well.

They may be wise to do that, considering the specs are proving fruitful to those who employ them. "At Xiotech, we have seen development time cut down by more than half," says Hall, who, in addition to his duties at SBB, holds a day job at Xiotech. A fast injection of new technology also brings advantages to end-users, creating products that are more affordable, more modular, and easier to repair.

Part of the power of the specs is the flexibility they bring to final products. For a good example, consider the Dell MD3000 line, which offers different host connectivity on what is essentially the same chassis. But it is a bit too restrictive that voting members are able to directly influence the specs, with an eye toward easing integration with their own products. Interoperability should instead be the goal of the group. Unfortunately, the specs clearly reject this objective: "The SBB specification is not intended to provide a guideline for interoperability between SBB compliant controllers from different vendors," the SBB 2.0 document states.

Alas, interoperability remains, for the most part, incidental and still dependent on backroom deals. "If you buy an SBB chassis from Dell and one from Xiotech, this doesn't necessarily mean that the modules are going to be interchangeable," Hall explains. That's only possible "if the two vendors have agreed to do that," he says.

Ironically, SBB 2.0 does include new specs for canisters that promote interoperability across chassis and vendors. Could this be a tentative first step in the right direction? Perhaps, but the day we'll be able to swap components among storage arrays as easily as we plug a radio into different car models is still far off — if it ever comes.

What customers really need from groups such as the SBB is a movement toward component compatibility. Although these specs won't make future devices vendor-agnostic, compared with the original set they will favor injecting new technologies, creating incentive to move away from dated, less efficient products. CIO

Send feedback to this column on

