Head in the Clouds


THE INDEPENDENT RESOURCE FOR IT EXECUTIVES | Q3 2010

Head in the clouds: the focus on cloud computing

Also in this issue: search, security, and social technology

Paul Burns, Neovise

Laura DiDio, ITIC

Martin Kuppinger, Kuppinger Cole

Matthew Lees, Patricia Seybold Group



ETM ■ CONTENTS PAGE

Contents and contributors page

11 Editor
13 Industry snapshot
15 Professional profile
122 Events and features

Events and features

16 GRC—Step by step

In the last few years, GRC has emerged as the meeting point of efforts to ensure not only compliance with regulatory requirements, but a responsible approach to the management of the enterprise risk that underlies regulatory demands. SCOTT CRAWFORD (ENTERPRISE MANAGEMENT ASSOCIATES) talks with LUCIO DE RISI (MEGA INTERNATIONAL) and LUC BRANDTS (BWISE), two experts directly involved in helping organizations shape their GRC strategy.

26 The back office means business

RANDY BRASCHE (GENESYS) and ETM'S ALI KLAVER explore how companies are applying best practices from the contact center to the back office to improve efficiencies and meet service levels, all while reducing cost.

28 Pure logic

How can businesses better operate by identifying and capturing defined logic and repeatable rules? DANA GARDNER (INTERARBOR SOLUTIONS) discusses business rules management (BRM), and explores the value of businesses being agile in a safe way. He is joined by DON GRIEST (FICO), RIK CHOMKO (INRULE TECHNOLOGY) and BRETT STINEMAN (IBM).

38 The key to success

As cloud computing takes a stronger hold on the IT industry, organizations must realize that there is another factor that will ensure its success. MARTIN KUPPINGER (KUPPINGER COLE) writes about the importance of service management.


40 Smooth transition

Public versus private cloud computing is one of the hottest topics in IT at the moment, but there’s still a lot of confusion around its adoption— which has been higher than anyone expected.

STEVE BRASEN (ENTERPRISE MANAGEMENT ASSOCIATES) talks with DENIS MARTIN and BROOKS BORCHERDING (NAVISITE) about the scope and transition of cloud computing.

46 Transforming business

ETM’S ALI KLAVER interviews DEEPAK JAIN (WIPRO) about enabling business growth through IT infrastructure transformation.

50 Actionable Intelligence

DR. ANTON A. CHUVAKIN talks about the usability and integration of security information and event management (SIEM) and touches on log management with the added benefit of three industry experts: MICHAEL LELAND (NITROSECURITY), A.N. ANANTH (PRISM MICROSYSTEMS) and DEBBIE UMBACH (RSA, THE SECURITY DIVISION OF EMC).

62 Create your own app factory

ETM'S ALI KLAVER chats to EDDY PAUWELS (SERENA SOFTWARE) about business process-driven application lifecycle management, how the two link to each other, and how businesses can benefit from such a strategy.


CONTENTS PAGE ■ ETM

Contents

68 The pros and cons of cloud
LAURA DIDIO (ITIC) describes what to watch for if you're considering making the jump to cloud computing. She suggests it's definitely a benefit, as long as you've done your research and it's done right.

72 A marriage of sorts
MIKE ATWOOD (HORSES FOR SOURCES) moderates a panel discussion on IT outsourcing touching on transformation, the cloud and some fantastic case studies with the help of CHUCK VERMILLION (ONENECK IT SERVICES) and KARINE BRUNET (STERIA).

84 SIEM satisfaction
One of the most ignored benefits of security information and event management technology is using SIEM to improve overall IT operations. A. N. ANANTH and STEVE LAFFERTY (PRISM MICROSYSTEMS) talk to ETM'S ALI KLAVER about how improved operations is seldom given much attention but might well provide the most tangible cost justification.

88 A question of semantics
ETM'S ALI KLAVER talks with DR. KATHY DAHLGREN (COGNITION TECHNOLOGIES) about an innovative approach to meaning-based text processing technology. Amid a discussion on market trends and the bottom line, it's clear that Cognition is at the forefront of search—now and into the future.

92 Seven reasons
PAUL BURNS (NEOVISE) shares with ETM why every CIO must embrace Service Level Management as a way to transform the IT organization.

94 Moving with the times
ETM'S ALI KLAVER talks to PRASHANTH SHETTY (METRICSTREAM) about managing enterprise GRC programs in global organizations, and how to realize the benefits that can stem from successful implementation.

100 Business and IT—side by side
DANA GARDNER (INTERARBOR SOLUTIONS) moderates a discussion on the productivity benefits and future of business process management with the help of MARK TABER (ACTIVE ENDPOINTS), DR. ANGEL DIAZ (IBM) and SAMIR GULATI (APPIAN CORPORATION). This expert panel examines BPM and explores what it delivers to enterprises in terms of productivity and agility.

110 Archiving on demand
MARTIN KUPPINGER (KUPPINGER COLE) talks to Astaro's ERIC BEGOC about mail archiving, how it's changing, and what to expect in the future.

112 Open innovation
With the rise and embrace of social networking and software, you could be forgiven for thinking that we're at the height of innovation. MATTHEW LEES (PATRICIA SEYBOLD GROUP) tells us how social technology can make "innovation" more than just a word.

116 One at a time
Enterprise search and its subsequent workflow have long been confused with compliance and policies for archiving. Information governance strategies reset thinking in this area by establishing clear use cases for accessing information versus those that enable retention. SIMON TAYLOR (COMMVAULT) takes ETM'S ALI KLAVER through the different faces of enterprise search.


EDITOR’S PAGE ■ ETM CONTRIBUTORS

Founder/Publisher: Amir Nikaein

Head in the clouds

It appears the IT world is slowly but surely opening up a realm of possibilities for businesses globally. While we in the IT sphere have known this for a while, it's only now coming to the attention of businesses looking at their SIEM solutions, their business process management, and especially ROI. An IT department stuck in the bowels of the building is quickly becoming a thing of the past as business decision-makers realize that creative collaboration is the way to go. Now, IT and management are rubbing shoulders—literally—and working across the gamut of business applications to find ways to grow, collaborate and innovate.

Our Q3 issue is a breath of fresh air, packed full of answers to your questions. Why not start with our business rules management and business process management podcasts (pages 28 and 100 respectively), as one of our favourite moderators, Dana Gardner, guides you through the essentials with the help of industry leaders. Matthew Lees from the Patricia Seybold Group tells us how social technology can make "innovation" more than just a word (page 112), while I have an interesting chat with Dr. Kathy Dahlgren from Cognition Technologies about semantic search—certainly one to get the brain working (see page 88).

There is plenty more to read in this issue of ETM, and hopefully in these pages you'll find the perfect solution to your IT problems. And don't forget, there is a wide range of podcasts to enjoy at www.globaletm.com

Thank you for reading, and if you would like to contribute to any future issues of ETM, please feel free to contact me via email at editor@enterpriseimi.com

Managing Editor: Ali Klaver
Creative Director: Ariel Liu
Intern Designer: Jeff Ceriello
Web Developer: Vincenzo Gambino
Podcast/Sound Editor: Mark Kendrick
Associate Editors: Mary Wright, Helena Stawicki, Lee Lian Long
Account Executives: Joe Miranda, Sandino Suresh, Aicha Gultekin
North American Account Executives: Farrah Tuttle, Yessit Arocho
Marketing Executive: Alexandros Themistos

Contributors
Paul Burns, President and Founder, Neovise
Laura DiDio, Principal Analyst, Information Technology Intelligence Corp (ITIC)
Martin Kuppinger, Senior Partner and Founder, Kuppinger Cole
Matthew Lees, Senior Contributing Editor, Patricia Seybold Group

Ali Klaver, Managing Editor

Enterprise Technology Management is published by Informed Market Intelligence

How to contact the editor

We welcome your letters, questions, comments, complaints and compliments. Please send them to Informed Market Intelligence, marked to the Editor, Farringdon House, 105-107 Farringdon Road, London, EC1R 3BU, United Kingdom or email editor@enterpriseimi.com

PR submissions

All submissions for editorial consideration should be emailed to editor@enterpriseimi.com

Reprints

For reprints of articles published in ETM magazine, contact sales@enterpriseimi.com. All material copyright Informed Market Intelligence. This publication may not be reproduced or transmitted in any form, in whole or in part, without the express written consent of the publisher.

Headquarters
Informed Market Intelligence (IMI) Ltd
Farringdon House, 105-107 Farringdon Road
London, EC1R 3BU, United Kingdom
+44 207 148 4444

New York
68 Jay Street, Suite #201, Brooklyn, NY 11201, USA
+1 718 710 4876



INDUSTRY NEWS ■ ETM

Industry snapshot

CALLING KENYA

Kenya is registering all mobile phone numbers in a bid to cut crime. Kidnappers often use unregistered mobile numbers to text ransom demands and it's expected most people will support the move to make life more difficult for criminals. Users must supply ID and proof of address before they get a number, and any numbers still unregistered as of the end of July will be disconnected. Tanzania is also involved in a similar process.

JAPAN EMBRACES GOOGLE

Google has trumped Bing in a deal to provide search results and related advertising to Yahoo Japan. The deal will see Google provide search capabilities to 90% of the PC market and roughly half of the mobile web market. This is a further boost for Google’s Japanese operation. On another note, the China/Google row continues...

GROWTH IN RUSSIA

Groteck Business Media reports that the information security market in Russia continued to develop through the economic crisis. Almost all segments grew in 2008, kept growing in 2009, and growth is anticipated in the 2010-2011 period. The main drivers were requirements for protecting personal data and information in public authorities. In the next two years they expect the implementation of electronic document management and the protection of mobile devices to be big growth areas. www.groteck.com

IBM PROBED
The European Commission has launched two separate competition inquiries to discover whether IBM has abused its position in the mainframe market, following complaints by two software makers. The inquiries will examine whether IBM prevented competitors from operating freely and will look at its relations with maintenance suppliers. IBM says the inquiries have no merit.

PENTAGON HUNTS WIKILEAKS
The Pentagon is still on the hunt for the source who leaked more than 90,000 classified US military documents. Bradley Manning, the 22-year-old Army intelligence analyst currently under arrest for leaking a variety of classified documents, databases and videos to Wikileaks, has not been ruled out as a suspect. Although the leaks reveal past actions, the details are considered damaging nonetheless.

NEW LICENSING OPTIONS

It looks like the Microsoft Office license per device concept has had a rethink. With the increasingly mobile workforce, and as virtualization comes to the fore, Microsoft has had to clarify its SA policies. Since July, users of a PC with an Office license have rights to Office 2010 Web Apps from PCs or external devices, but companies will need to host the Office Web Apps on either SharePoint Server 2010 or SharePoint Foundation Server 2010. www.microsoft.com

VIRUS TARGETS INDUSTRY

Siemens is tackling a virus that specifically targets computers used to manage large-scale industrial control systems used by manufacturing and utility companies. Although this could be one of the biggest malicious software threats in recent years in the form of industrial espionage, Siemens is actively looking to counteract it. They’ve already discovered that it’s best to leave current passwords unchanged and to refrain from using USB keys.



EXECUTIVE PANEL ■ GOVERNANCE, RISK AND COMPLIANCE

GRC—Step by step

In the last few years, GRC has emerged as the meeting point of efforts to ensure not only compliance with regulatory requirements, but a responsible approach to the management of the enterprise risk that underlies regulatory demands. SCOTT CRAWFORD (ENTERPRISE MANAGEMENT ASSOCIATES) talks with LUCIO DE RISI (MEGA INTERNATIONAL) and LUC BRANDTS (BWISE), two experts directly involved in helping organizations shape their GRC strategy.




SC: THERE HAS BEEN A LOT OF TALK ABOUT GRC OVER THE PAST FEW YEARS, AND TO MANY IT SOUNDS QUITE BROAD AND UNDERSTANDABLY SO, CONSIDERING THE BREADTH OF WHAT IS INCLUDED TYPICALLY IN A GRC PROGRAM. LUC, HOW DO YOU AND BWISE DEFINE GRC?

LB:

We feel that there is a need to integrate all the different "levels" of defence in the organization. So with the first level we want to help businesses to responsibly implement policies and procedures, report incidents, implement controls and so on. This is supported by a second level of defence, whether it's the risk management, compliance or quality management departments, and that's where a big convergence effort is taking place—trying to integrate all the different risk languages into one. We see GRC as often being mistaken for just that second level of defence. There's also more to it: a third level of defence, where internal audit does its independent review of that information, trying to leverage the data that's out there as much as possible. Finally, there's a fourth level of defence, the external auditors and regulators, who hopefully also take that information into account. What we're trying to establish, and what organizations are trying to get across, is that all these different entities, departments and views within an organization need to be speaking a single risk language. There's a strong element of risk and a very important component of compliance that is being governed by the processes that help these four levels of defence to cooperate.

LD:

Based on customer experience, I think that GRC doesn't mean the same thing for everybody—the initiatives today cover a large spectrum of requirements. An analyst's definition would call GRC a set of policies, processes, methodologies and tools that a company implements to face the increasing pressure of internal and external regulation, and to guarantee that all conditions are met to achieve business goals. In the beginning, GRC was formed basically to put together siloed initiatives within one global approach, and then the acronym GRC was born. Today, that model of GRC is more about implementing a holistic approach that targets business performance first, and then fixes the technical problems within each department.

SC: THAT LINES UP VERY WELL WITH WHAT WE'VE SEEN AT EMA. THERE'S DEFINITELY A VERY LARGE ELEMENT OF RISK, AND LUCIO, THE ISSUE YOU BRING UP DOES REVOLVE TO A CERTAIN EXTENT AROUND GOVERNANCE. ORGANIZATIONS THAT WE WOULD QUALIFY AS HIGH PERFORMERS HAVE PLACED A GREAT DEAL OF EMPHASIS ON SETTING THE TONE AT THE TOP, IF YOU WILL, AND THE SUPPORT OF SENIOR MANAGEMENT IN TERMS OF DEFINING PRIORITIES, STRATEGY AND A TRULY RESPONSIBLE APPROACH TO ENTERPRISE GOVERNANCE. ONE OF THE THINGS EVIDENT FROM OUR RESEARCH HAS BEEN NOT JUST SETTING THE TONE AT THE TOP BUT SENIOR MANAGEMENT SUPPORT FOR GRC EFFORTS. IT'S CRITICAL, NOT JUST FOR LEADING GOVERNANCE INITIATIVES, BUT FOR IMPLEMENTING RISK MANAGEMENT TACTICS—AND THAT INCLUDES MEASURES SUCH AS ENFORCEMENT, FOR EXAMPLE. LUCIO, WHAT ARE SOME OF THE THINGS THAT YOU THINK HELP ORGANIZATIONS SUCCEED IN DEFINING EFFECTIVE GOVERNANCE?

LD:

First, I'd like to highlight a kind of dichotomy in how to approach GRC initiatives, and then position the sponsorship from top managers with respect to these two different approaches. These two approaches are not opposed, but complementary. One is what you would call the approach based on controls, and the second is based on business improvement. The role of senior management is different in these two cases. Let me clarify what I mean. In some cases, companies need to apply what I call the control-based approach. For example, you need to respect US law about not exporting to some countries, or you have to guarantee that you provide the appropriate data to the stock exchange. If you're in Europe you know that you cannot provide any advertising, or you must be sure that you're not referring to smoking, for instance. In all these cases, which are basically matters of respecting the law and other strict rules, I think that the control-based approach is the one that companies must apply, even if it's not the only one. The second one that I mentioned, which I call the business improvement-based approach,



HEAD TO HEAD ■ BUSINESS PROCESS MANAGEMENT

The back office means business

RANDY BRASCHE (GENESYS) and ETM'S ALI KLAVER explore how companies are applying best practices from the contact center to the back office to improve efficiencies and meet service levels, all while reducing cost.

AK: RANDY, WHY DOES THE BACK OFFICE WANT TO EMULATE THE CONTACT CENTER OR FRONT OFFICE?

RB:

Well, if you think of the contact centre, they've had years to become efficient. The calls that go into a contact centre get prioritized, routed, sent to the right agent, and so on. When you look at the back office, they sometimes have six times the number of workers, three to five times the costs, and a lot of the time they're very inefficient. Perhaps you had to submit an insurance claim: normally you might be very happy with your experience of the contact centre submitting that request, but when it comes to actually being processed it gets delayed. And in terms of cost, especially since it's three to five times the cost in the back office, that's a lot of money companies are spending, mostly on individuals in the back office cherry-picking work, not taking the right items, or the work not being aligned to the right person.

AK: WHAT ARE YOU FINDING IS THE SOURCE OF THESE INEFFICIENCIES?

RB:

There are usually a couple of steps that are taken in the back office when a work item is processed.


Usually a piece of work goes into a queue, just like a telephone call would go into a queue, and then the back-office person has to pull that piece of work out of the queue and validate it before actually doing the work. Then they have to go one step further and execute the work. What we've found is that the process of finding the piece of work and pulling it out of the queue usually takes about three minutes or so, and that's tremendously inefficient when you're talking about thousands and thousands of work items in the back office every year. Imagine, for example in the same contact centre, if someone were to call in, and an agent looked at all the calls in the queue and saw that a person wanted to change their account information. They could then pull it out and take care of that customer very quickly and easily. Imagine how inefficient and frustrating it would be for the customer on the telephone to wait for an extended period of time, or get put through to the wrong department, when all they wanted was to change their account information—these are the exact same things that are happening in the back office today.

AK: THAT’S A GREAT EXAMPLE—CAN YOU PERHAPS TAKE US THROUGH A FEW MORE?

RB:

I have a few other great examples. Typically, when you think of the back office, there are usually some penalties or costs associated with it. If you think about provisioning a service like your telephone, or doing a credit card dispute, there's usually a service level associated with that. Take, for example, one of our customers, a large telecommunications company. When they don't provision a telephone service from the back office in time, they have to pay fines to the front office, the contact center and the customer. Imagine if all this work is sitting there in two piles—one might be something as simple as just changing account information, and the other pile is for provisioning a new service—obviously you want to go in and prioritize provisioning your services to ensure that you don't have to pay a fine. Similarly, one of our customers tries to move ahead quickly and set up new credit card accounts that they receive from the web, but the requests get stuck in the back office and they don't get prioritized. These prospects actually end up getting credit cards from other companies because they weren't processed appropriately or quickly enough in the back office. So that's a case of lost revenue. Both cases are from a service level agreement perspective, and at times you may have to pay fines for not meeting them. From the other credit card example, this is lost revenue and lost customers that you could have had because you were very inefficient in the back office.

AK: WHAT ARE SOME OF THE EXISTING TECHNOLOGIES IN THE BACK OFFICE THAT COULD BENEFIT FROM THESE NEW EFFICIENCIES?

RB:

There are a lot of new technologies that, if you think about it, any process- or work-based item can take advantage of. The simplest might be a fax server, and I'll give you an example in a second. Another one might be a service request system, such as a Remedy trouble-ticketing system, a Siebel system or an SAP system, and all the way up to more complex systems such as a business process management system. The simplest example I can give is one most people can relate to: a fax service. I was using an airline several years ago and they missed crediting me for 5,000 miles. They said to fax my request in and that it would be rectified. So I did that and nothing happened, even when I faxed it again. I ended up getting so frustrated that I switched airlines. Had they actually known my status as a gold customer, looked at the fax, attached a high priority tag to it and sent it to the right agent, along with an SLA saying that I needed to be responded to within 24 hours, then I might not have ended up leaving and going to another airline. I'm now a happy customer at the airline I switched to, so that's a great example of a technology that could actually benefit from this type of process.

AK: THE BOTTOM LINE IS STILL SUCH AN IMPORTANT ASPECT OF DAILY BUSINESS LIFE RANDY, SO CAN YOU TELL ME WHAT THE TYPICAL ROI AND SAVINGS ARE?

RB:

It's pretty dramatic, when you think about it, if you prioritize the work items based upon value and send them to the right person in the back office, just like you would with a telephone call in the front office and the contact center. We've seen a dramatic improvement of about 15-25% in back-office efficiency.

I'll use another real-time example. We did a pilot with a customer in Australia. They typically had a problem with their nine-to-five work day and the fact that, at five o'clock when work was done, they still had a lot of unfinished items that were high priority but hadn't been processed. Once we worked with them, implementing this process, their nine-to-five work day went from nine o'clock until two o'clock, so they had three extra hours to re-provision these workers to do other items, which could actually benefit the next day's worth of work.

"...back office processes are being relied on by the front office and the contact centre..."
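To put rough numbers on the figures Randy quotes, here is a minimal back-of-envelope sketch in Python. The front-office headcount, the per-agent cost and the 4x back-office cost multiplier are hypothetical assumptions chosen purely for illustration; only the "six times the workers, three to five times the cost" rule of thumb and the 15-25% efficiency range come from the interview.

```python
# Illustrative back-of-envelope model of the back-office savings described above.
# All inputs below are hypothetical assumptions, not figures from Genesys.

front_office_agents = 100               # assumed contact-center headcount
front_office_cost_per_agent = 50_000    # assumed fully loaded annual cost (USD)

# Rule of thumb from the interview: ~6x the workers at ~3-5x the total cost.
back_office_workers = 6 * front_office_agents
back_office_total_cost = 4 * (front_office_agents * front_office_cost_per_agent)

# Applying the quoted 15-25% efficiency improvement to the back-office cost base.
low_saving = 0.15 * back_office_total_cost
high_saving = 0.25 * back_office_total_cost

print(f"Back-office workers: {back_office_workers}")
print(f"Back-office annual cost: ${back_office_total_cost:,.0f}")
print(f"Estimated annual savings: ${low_saving:,.0f} - ${high_saving:,.0f}")
```

On these assumptions, a 15-25% gain on a $20 million back-office cost base would be worth roughly $3-5 million a year, which is the kind of arithmetic sitting behind the ROI claims above.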

AK: THAT’S ANOTHER FANTASTIC EXAMPLE RANDY. WHAT TYPES OF COMPANIES WOULD BENEFIT?

RB:

The logical ones are obviously the most paper process-intensive companies such as insurance companies that have to deal with claims, financial services and so forth. But there are a lot of other companies that you might not even think about. These are companies that have to process leads that they get from their websites, or other companies that have some sort of government regulation and service levels associated with them—to use the credit card example, when you have to go ahead and issue your credit card dispute, then that has to be fulfilled within a certain period of time. So if you think about any company across the board and any vertical industry, there's always some sort of business process associated with the back office, so any company could really benefit from this.

AK: YOU'RE TALKING ABOUT CATERING ACROSS A WIDE RANGE OF INDUSTRIES HERE, AND ESPECIALLY IN THIS ECONOMIC CLIMATE IT'S ESSENTIAL TO REALLY JUMP ON THESE BACK OFFICE BUSINESS PROCESSES TO ENSURE THAT YOU'RE GETTING THE BEST POSSIBLE RESULT YOU CAN. NOW FOR OUR LAST QUESTION, WHAT CAN THOSE COMPANIES LISTENING TODAY DO TO GET STARTED?

RB:

It really requires self assessment and some internal thinking in terms of: What are my processes? How are they being done? How are my workers being utilized? What are their skill sets? How do I determine which items are the higher business priority? The list can be endless. A lot of companies are doing this back office transformation today and I think it really becomes an issue of self assessment. You need to take a close look at what you do in the back office, re-prioritize your resources and your processes, and also implement new technologies to ensure that you're becoming more efficient. At the end of the day, when you're talking about six times the number of back office workers at three to five times the cost, that translates into a lot of money. Also, these back office processes are being relied on by the front office and the contact centre, and you're also talking about customer satisfaction issues, which can result in customer defection. But if you can keep those customers happy, then these happy customers are going to spend more money. So it really requires self assessment and looking internally to see where you can improve.

Randy Brasche |

DIRECTOR OF PRODUCT MARKETING, GENESYS. Randy is responsible for driving adoption of Genesys' market-leading customer service and sales solutions. Prior to Genesys, he was a founding member and director of product marketing at Active Reasoning, and held marketing and product strategy positions at Cable and Wireless, Exodus, Oracle, Informix and Liberate Technologies. Randy is the author of the popular IT Compliance for Dummies and Dynamic Contact Center for Dummies books.



EXECUTIVE PANEL ■ BUSINESS RULES MANAGEMENT

Pure logic




How can businesses better operate by identifying and capturing defined logic and repeatable rules? DANA GARDNER (INTERARBOR SOLUTIONS) discusses business rules management (BRM) and explores the value of businesses being agile in a safe way. He is joined by DON GRIEST (FICO), RIK CHOMKO (INRULE TECHNOLOGY) and BRETT STINEMAN (IBM).

DGARDNER: DON, TELL ME A LITTLE BIT ABOUT YOUR HISTORY. HOW IS IT THAT BUSINESS RULES MANAGEMENT PLAYS AN IMPORTANT ROLE AT YOUR COMPANY?

DGriest:

FICO is probably best known for the FICO score, which is used in credit decisions when you get a credit card or apply for a loan for a house. It's been around 50 years now and started with a couple of statisticians, Bill Fair and Earl Isaac. They came up with a way of using data analytics to improve decisions. They quickly found out that giving people a good credit score wasn't enough and that they needed to apply those decisions in making offers on products and making decisions about the credit risk. So they started building applications for banking and then eventually insurance, retail, healthcare and other industries to help them make decisions informed both by the best practices that were in the policies and by analytics, including predictive analytics, predictive modelling, simulation and optimization. Today, we sell both applications and the tools underneath that help us build those applications—business rules management being a critical one of those.

DGriest:

I agree that there have been a number of changes in the market. Obviously, the recent economic changes have put a lot of pressure on efficiency and doing more with less. This means you need to make faster, cheaper decisions, and you need to be able to make changes to those decisions faster. At the same time, we've got more regulatory pressures coming in, not just in banking but in healthcare as well. That is increasing the need for decision-making with greater transparency, and also being able to minimize the impact to the overall return of the company. If you look at retail, it's exploding in terms of what the web has done, in terms of consumer expectations about how many combinations of different products are available, and in terms of greater competitive pressure to get the right price point and the right offer to the right customer—and actually still make money doing it.

DGARDNER: I'M GOING TO GUESS THAT AT FICO YOU OFFER SERVICES THAT AMOUNT TO BUSINESS RULES MANAGEMENT, BUT I BET YOU ALSO EMPLOY IT WITHIN YOUR ORGANIZATION. SO YOU'RE A BUSINESS RULES CONSUMER AS WELL AS A PROVIDER?

DGriest:

Definitely. FICO is known for its scores, and that uses rules management to implement those scores. They create formulas that take information from the credit bureaus and apply scoring techniques to create a score. We then have applications that our customers use in origination. So when you fill out the loan application, it actually helps to walk that through the process—a combination of normal business process flow with business rules at the centre of it. So, yes, it's used throughout the company.

DGARDNER: RIK AT INRULE TECHNOLOGY, TELL US A LITTLE BIT ABOUT WHAT YOU DO, WHAT YOU PROVIDE, AND HOW YOU SEE THE LANDSCAPE FOR BUSINESS RULES MANAGEMENT SHIFTING OR ADVANCING?

RC:

InRule was started about eight years ago and we decided, at a time when the .NET framework was almost brand new, to focus on rule technology for that particular framework and platform. We've been doing that ever since, really pushing to be a solution for the .NET platform that provides the authoring, management, storage and execution of the rules applied on that platform. What I've seen over the last few years is that things have been changing a little bit from what was there before. There have always been the top three industries that you would apply business rules to—insurance, financial services and healthcare—and while those are still going strong today, there seems to be an uptake in a lot of other industry sectors that might be looking to use rules, outside those top three. For example, take the entertainment industry. One of our clients is actually using rules to manage their project plans to enforce consistency and promote realistic planning for a large-scale video production. So it's kind of interesting where we're seeing this use of rules grow out from what people would traditionally apply rules to, and branch into other industries.

DGARDNER: BRETT STINEMAN AT IBM, HOW DO YOU SEE YOUR BRM MARKET SHIFTING OR PERHAPS GROWING IN THE NEXT FEW YEARS?

BS:

I’m sure most people have a fairly good idea of who IBM is in terms of the various software, hardware and services that we provide. In terms of business rules management, our offering came from an acquisition of a company called ILOG that occurred in 2009. ILOG has a long history, going back 20 years, in a variety of different types of decision technologies—both from a business rules standpoint as well as optimization and visualization technologies, all of which were used to help organizations make better  decisions for various parts of their businesses.



ANALYST FEATURE ■ SERVICE MANAGEMENT IN THE CLOUD

The key to success

As cloud computing takes a stronger hold on the IT industry, organizations must realize that there is another factor that will ensure its success.

MARTIN KUPPINGER (KUPPINGER COLE) writes about the importance of service management.

Cloud computing is the hype topic in IT. And without a doubt, cloud computing is about a fundamental paradigm shift in IT. However, it is not so much about procuring external services. It is about the way IT services are produced, procured and managed—internally as well as externally. That is where service management comes into play. Service management, from the Kuppinger Cole perspective, is the key success factor for cloud computing. Cloud computing is something different from "the cloud". First of all, there are several clouds, in the sense of environments which deliver IT services. These might be internal or external; they might be private or public; but all of them provide IT services at different levels of granularity. These levels range from granular web services to coarse-grained services like complete application environments and many SaaS (Software as a Service) approaches. Cloud computing, on the other hand, is about selecting, purchasing/requesting, orchestrating and managing these services. The management spans the entire range from technical aspects to auditing and accounting. While the services might be delivered by many clouds, there has to be one consistent management approach. This approach has to cover internal and external IT services. It is about one view of IT, regardless of the service provider (or cloud, to use that term).

IT'S NOT ABOUT IAM FOR THE CLOUD
Given that, we have to redefine our IAM strategies. We have to think about how to manage everything consistently. That excludes approaches that run externally and only manage the external services. IAM in the cloud only for the cloud is contradictory to the target of managing everything consistently. Thus, we need to expand what we have (or should have) in IAM and access governance to support our future IT infrastructure.


Approaches that aren't focused on supporting a hybrid cloud environment can only be tactical approaches, if at all. On the other hand, internally focused tools have to expand their reach to external services.

ONE IT—ONE MANAGEMENT
It obviously doesn't make any sense to manage external services differently from internal services. That would make management inconsistent, redundant and error-prone; plus, it would inhibit the flexible change from internal to external services and back. Consuming external services is one element of the overall IT service provisioning. And it is, by the way, an element which is in place in virtually any organization. Think about web hosting, web conferencing and many other applications which are frequently provided by external service providers. The quintessence of cloud computing is that it standardizes the service management across all types of services and thus allows you the flexibility to choose services from different providers. Another effect is that internal IT services have to become standardized and (from a cost perspective) produced efficiently—internal IT service production should significantly benefit from that approach by becoming more "industrialized" and automated.

"While the services might be delivered by many clouds, there has to be one consistent management approach."

FOCUS BEYOND SERVICE FUNCTIONALITY
To fulfill the security and governance requirements, service management has to focus not only on the functional aspects (and, as is frequently seen today when looking at the cloud, costs). For each service there have to be "governance" requirements, including aspects like encryption of data, requirements for privileged access management, allowed locations for processing and storing the data, and many more. When describing services and defining the service requirements, there has to be a standardized set of such requirements to ensure that these aspects are considered when selecting the (internal or external) service provider. The provider with the most advanced functionality or the lowest cost isn't the most appropriate one if it doesn't meet the governance requirements.

BUSINESS SERVICE MANAGEMENT
When talking about service management in the context of cloud computing, it becomes obvious that there are multiple layers of services. Within IT services there is a range from single web services at the application level to complex SaaS applications. However, that can be managed with a consistent approach because the fundamental principles of service management apply to any level of service. Beyond the IT perspective, there has to be business service management as well. Business service management is, in contrast to today's vendor marketing, about having descriptions from the business perspective of the business's requirements to IT. It is not about availability management for business processes or something similar; it is about mapping the required IT services to a business requirement—for example, mapping storage and archiving services, information rights management and other services to the requirement that contracts are handled in a defined way. One element within such a business service management is providing the input for the governance requirements of IT services. These requirements are typically derived from business requirements, including regulatory compliance.

ERP FOR IT
An interesting opportunity within this approach of consistently using service management paradigms at all levels is that the ability for accounting will significantly increase. Once everything is understood as a well-defined service, it is relatively easy to put a price tag on these services. That, in consequence, will allow you to do much better resource planning and to predict the costs of new business services (e.g. new requirements to IT) much more reliably than before. In other words: service management is the foundation for an "ERP for IT", the ERP application which is still missing today. However, today's service management applications aren't an ERP for IT, even though some vendors are starting to tell their customers that they are.

FOCUS ON SERVICE MANAGEMENT—AND SUCCEED WITH CLOUD COMPUTING
Looking at cloud computing and service management, it is obvious that these two things can't be separated. Service management is the foundation for successful cloud computing. And cloud computing will drive the service management initiatives in organizations and will require the internal IT to standardize their services. The most important reason for this is that otherwise the internal IT can't prove that they are providing the most appropriate services, because they can't directly compare with cloud services. Only with complete service requirements, including the governance aspect, can internal IT validate that their service procurement suits the needs of the business better than the (sometimes) cheaper external service. It will absolutely change the way the internal IT is working—but it is the only opportunity the internal IT has: standardize services, optimize the service production, and be better in meeting all the service requirements. From an overall IT perspective, service management is key to success in cloud computing as well, because it is the prerequisite for being able to flexibly switch between internal and external service providers and back.
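As a rough illustration of the "ERP for IT" idea, the sketch below prices a hypothetical new business service from a small catalog of well-defined IT services. The service names, unit prices and quantities are invented for illustration; the point is simply that once each service carries a price tag, predicting the cost of a new business requirement becomes a straightforward roll-up.

```python
# A minimal sketch of the "price tag per service" idea described above.
# Service names, prices and quantities are hypothetical.

service_catalog = {
    "storage_gb_month": 0.08,            # assumed price per GB per month (USD)
    "vm_hour": 0.12,                      # assumed price per virtual machine hour
    "mail_archiving_user_month": 1.50,    # assumed price per archived mailbox
}

# A hypothetical new business requirement expressed as service consumption.
new_business_service = {
    "storage_gb_month": 5_000,
    "vm_hour": 2_000,
    "mail_archiving_user_month": 300,
}

# Predicting the cost is a simple roll-up over the catalog prices.
monthly_cost = sum(
    service_catalog[name] * quantity
    for name, quantity in new_business_service.items()
)
print(f"Predicted monthly cost: ${monthly_cost:,.2f}")
```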

Martin Kuppinger |

FOUNDER AND SENIOR PARTNER KUPPINGER COLE

Martin established Kuppinger Cole, an independent analyst company, in 2004. As founder and senior partner he provides thought leadership on topics such as identity and access management, cloud computing and IT service management. Martin is the author of more than 50 IT-related books, as well as being a widely-read columnist and author of technical articles and reviews in some of the most prestigious IT magazines in Germany, Austria and Switzerland. He is also a well-known speaker and moderator at seminars and congresses.



HEAD TO HEAD ■ CLOUD COMPUTING

Smooth transition

Public versus private cloud computing is one of the hottest topics in IT at the moment, but there’s still a lot of confusion around its adoption—which has been higher than anyone expected.

STEVE BRASEN (ENTERPRISE MANAGEMENT ASSOCIATES) talks with DENIS MARTIN and BROOKS BORCHERDING (NAVISITE) about the scope of cloud computing and its transition.




SB: TODAY WE'LL BE TALKING ABOUT PUBLIC VERSUS PRIVATE CLOUD COMPUTING, CERTAINLY A HOT TOPIC AND ONE AROUND WHICH THERE IS A LOT OF CONFUSION. I DON'T SEEM TO BE ABLE TO PICK UP A TRADE MAGAZINE THESE DAYS WITHOUT SEEING REPEATED REFERENCES TO CLOUD. ACCORDING TO EMA PRIMARY RESEARCH, IN FACT, 11% OF ALL BUSINESSES HAVE ALREADY ADOPTED SOME FORM OF CLOUD SERVICES IN ORDER TO ACHIEVE THEIR BUSINESS OBJECTIVES. THIS IS VERY FAST ADOPTION WE'RE SEEING, FOR WHAT IS REASONABLY CONSIDERED A FAIRLY NEW TECHNOLOGY. WE'VE PROJECTED THIS TO BE SOMEWHERE BETWEEN A $40 AND $50 BILLION INDUSTRY BY THE END OF 2011, GROWING TO ROUGHLY $160 BILLION BY 2015. YET WITH ALL THIS PROMISE AND HYPE, IT'S SURPRISING THAT THERE DOESN'T SEEM TO BE MUCH CONSENSUS ON EXACTLY WHAT CLOUD IS AND WHERE ITS SCOPE ENDS, WHICH LEADS ME TO MY FIRST QUESTION—HOW WOULD YOU DEFINE THE SCOPE OF CLOUD COMPUTING, AND HOW DO YOU DIFFERENTIATE PUBLIC AND PRIVATE CLOUD COMPUTING?

DM:

You're right Steve, there is a lot of confusion in the market today around cloud. Today we'll talk about it primarily from an infrastructure perspective, but even then there's a lot of confusion about what a public cloud is compared to a private cloud. In that spectrum from private to public, the consensus defining factor that's emerging is how many customers are on the cloud. If there is more than one, it's typically defined or put in the bucket of being a public cloud. If it's only one, then it's eligible to be qualified as a private cloud. Even then there are a number of options or gradients along that spectrum where it's not so black and white. For example, NaviSite offers private hardware where we can put one customer on a piece of hardware, yet they're sharing other components of the underlying infrastructure. Is that private, or is that public? We think of it as quasi-private and we think we provide all the benefits of a private cloud without any of the downsides. It's not so cut and dried; there isn't a single differentiator between public and private.

BB:

I'd like to add to what Denis said and to your initial points, Steve, around rapid adoption. If you take a step back and consider why cloud has become so interesting and gathered so much hype, it's because it is a true revolution and a true transformation of the way that companies are consuming IT resources. So be it public, private or a hybrid anywhere in between, I think that, at least in our generation, we've very rarely seen something of this magnitude that is changing the fundamental consumption paradigm of IT. We've certainly experienced that where we're based, in the enterprise space, but you can see this very rapid consideration of alternatives to the way companies consume IT resources right across the spectrum, from consumers to the enterprise.

SB: WHAT ARE THE PRIMARY BENEFITS AND CHALLENGES OF PUBLIC AND PRIVATE CLOUD IMPLEMENTATIONS?

DM:

From the public cloud perspective, the biggest benefit is the ease of access and the ease of on-boarding.

Typically you can get on a public cloud, whether it's for compute, storage or a combination of them, usually within minutes, by simply providing a credit card number and a commitment to pay, and then your access is immediately available. So it's a very simple on-boarding process. The challenge with the public cloud, because it is geared for larger groups of users and for general usage cases rather than specific ones, is that it's typically geared for non-production services with very low service level agreement guarantees, and in some cases almost best effort-type performance. You really don't rely on them for services other than things like development or some cloud bursting that you might need, but for today at least, they're not geared for the rigors required for running and supporting production applications. Private clouds, on the other hand, inherently have all of the features required for not only delivering high-level SLAs, but also providing complete infrastructure lifecycle management. So in the case of the public cloud, you might only have the ability to manage your CPU and the amount of memory, but you don't have control over the firewalls, load balancers and other components that are required to make use of the machine you created—the firewall rules that you apply to it, the load balancing and so on. On the private side, since it is fully controlled, you do have the ability to create and work on the machines, apply firewall rules to them, apply load balancing as needed, and then expose them either on the back-end network or on the public network space automatically. It's a much more robust environment for providing a range of services, from simple development and testing all the way to providing production services with four nines or five nines availability.

"...a quasi-private cloud, as an extension of your IT environment, is really a win-win on both sides."

BB:

I think a couple of the general benefits, be it public or private, come from the fact that cloud services do have a promise of mitigating the complexity of IT. So, from a consumer perspective, it is easier to look at this as an alternative way to acquire these resources, be they compute resources or application-on-demand, SaaS-type capabilities. So from both, there is this promise that we can make it much simpler to consume IT. And on both, I think there's also a promise, be it public or private, consumer or enterprise, that there will be a reduction in capital expenditure and an increase in operational efficiency across the spectrum. Where the differentiation comes between public and private, as Denis mentioned, is that private clouds are built to deliver enterprise-class services—at least, that's the approach that we've taken at NaviSite. So it's taking everything from the quality of the underlying infrastructure and technology through to everything else that would be required, such as the security standards, the wraparound services that you would expect, and then the ongoing support.



ASK THE EXPERT ■ IT INFRASTRUCTURE TRANSFORMATION

Transforming business

ETM'S ALI KLAVER interviews DEEPAK JAIN (WIPRO) about enabling your business growth through IT infrastructure transformation.

AK: CAN YOU GIVE US A BRIEF SUMMARY OF WIPRO'S INFRASTRUCTURE SERVICES BACKGROUND AND HOW YOU HELP BUSINESSES SUCCEED?

DJ:

Wipro has been offering IT infrastructure services to its customers for over 25 years. Our customers are spread across North America, Europe, Asia Pacific and the Middle East. Wipro's IT infrastructure business is US$926 million and contributes 21% of our IT services revenues globally. We're seeing robust growth in this business, which has been ahead of company growth at 40% CAGR for the last three years. We have seven datacenters and 17 global command centers, including 14 security operations centers, globally. Over 16,000 associates work for this division. Our growth strategy is to keep expanding our portfolio and, today, in our infrastructure services business, we offer end-to-end IT outsourcing solutions from design and implementation services, managed services and DC outsourcing, including technology transformation, to audits that continually improve the performance of IT systems. We also have comprehensive offerings that cater to complete IT security management services (consult-deploy-manage-audit), and a strong offering on core telecom networks.

AK: HOW DO YOU HELP BUSINESSES SUCCEED IN THE CURRENT MARKET?

DJ:

Our approach is to partner with our customers and look at long term relationships. Our domain knowledge helps us understand the clients’ business better, and then propose solutions more aligned to business.


Unlike many Indian IT companies focused on remote infrastructure management and cost arbitrage, we also focus on improving business KPIs—for example, how IT can help deliver faster inventory turns, reduce the cycle time for order to cash, or reduce the cost to serve for our clients' end customers.

AK: LET'S TACKLE THE SUBJECT OF TODAY—IT INFRASTRUCTURE TRANSFORMATION. HOW DO YOU DO IT? HOW DOES IT TRANSLATE INTO COST SAVINGS?

DJ:

IT infrastructure optimization and transformation has to be looked at through four broad areas:

CONSOLIDATION: Our experience of working with customers across industries proves that consolidation is a big lever for cost savings. As IT assets increase, so does IT infrastructure complexity, creating significant management problems. In addition, data center energy consumption is skyrocketing, not to mention rising energy prices. Therefore, the consolidation approach should focus on:
• Consolidation of IT operations, such as a central monitoring and command centre, IT service desk and knowledge database
• Consolidation of IT infrastructure, such as a reduced number of datacenters and computer rooms
• Consolidation of IT procurement. We believe that consolidated procurement gives global organizations the scale to negotiate and manage hardware and software spend better
• Consolidation of services, through a shared model for service desks and a factory model for application packaging.



STANDARDIZATION: Simplification is again an important lever for controlling costs. Over time, many customers have added complexity into their IT environments, which could be in the form of disparate applications, operating systems, technologies, processes and tools being used. We recommend our customers make one-time investments to standardize the IT estate and look at ROI over five to seven years.

RATIONALIZATION: Application and infrastructure rationalization is another key driver for cost reduction.

VIRTUALIZATION: Both on the data center side and on the end user side. The data center side is where one consolidates the servers and storage across the enterprise. This leads to lower support costs and easier administration without sacrificing compute power and storage capacity. On the desktop side—virtualization of the desktop, or VDI—this leads to better control over the desktops, lower costs, and increased ease of desktop management and refresh.

AK: WHY IS IT IMPORTANT FOR A BUSINESS TO DO THIS NOW, PARTICULARLY IN THE CURRENT ECONOMIC CLIMATE? WHAT ARE THE MAIN BENEFITS?

DJ:

CIO/CTO organizations have realized that, after a phase of overabundance, there is a need to get to a state of equilibrium. Also, future trends and technologies, including disruptive trends like cloud computing, will require organizations to seriously look at consolidation, standardization and rationalization. Therefore, it's important that organizations are ready to take best advantage and avoid high transition/migration costs as these trends mature. The cost take-out initiative that companies started in 2008-09 also continues because it was a multi-year program. The first tranche of savings came from vendor discounts. However, to get a sustainable cost advantage, transformation is essential; without it, further cost take-out will be extremely difficult. We also believe that organizations will consolidate their business portfolios and that the center of economic activity will shift to newer geographies—time to market will be the key parameter to gain market share. End users will not be bothered about the technology used but will be more focused on the functionality it delivers. The economic environment will move customers from being buyers of technology and building out capacity, to looking at capacity on demand and utility models. Companies that achieve the above will be quick to make the best use of on-demand computing and to flex infrastructure costs based on business cycles.

AK: WHAT IS YOUR FAVORITE CASE STUDY THAT REALLY HIGHLIGHTS WHAT WIPRO CAN DO IN THIS SPHERE?

DJ:

We have multiple case studies to talk about, and there are three important ones in particular. We are currently engaged with a customer in the energy and utility space to transform their retail business, and one of the key KPIs is to bring down their cost to serve per subscriber by 10%. Secondly, we're working with a client in the retail space to help manage their IT cost as a function of retail floor space, per square foot. And thirdly, we manage IT cost as a function of the number of subscribers for a client in the telecom space. All of the above are large deals, not in the context of technology transformation alone, but more importantly in terms of business transformation. We believe that IT cost as a function of customer revenues is important, but linking it to business parameters like the ones above ensures that IT operations, technology adoption and spend on IT transformation are optimal at all times and can be a function of business growth.

AK: WHAT ARE THE LEVERS FOR SUSTAINABLE COMPETITIVE ADVANTAGE THROUGH INFRA-TRANSFORMATION?

DJ:

IT transformation positions organizations to:

BETTER MANAGE COST AND RISK: Our experience of delivering these services for close to three decades helps us to do the job with the least possible risk, and at the most optimized cost, through our Global Delivery Model.

SUPPORT BUSINESS INNOVATION: Our understanding of the domain and its trends helps us to understand our customers' current and future needs. We drive innovation in our solutions to bring about business innovation for our customers.

SCALE MORE EFFECTIVELY: We are not into product reselling on a stand-alone basis, and we're not incentivized to dump more products. We analyze the customer's current needs and the impact of their ever-changing business environment. That helps us to provide the most optimized solution for today's challenges while keeping scalability in mind for future growth.

REDUCE ARCHITECTURAL COMPLEXITY: Through application rationalization and infrastructure consolidation, we try to simplify IT for our customers so that it becomes easily manageable and more meaningful to their business.

INCREASE THE VALUE OF REVENUE-GENERATING SERVICES: In many cases today we help our customers to increase their touch points with their own customers, reduce their time to market, deliver more efficient services to their customers, and become more agile and competitive in the market place.

ENHANCE THE END USER EXPERIENCE: While traditional methods of monitoring IT services have been quite discrete in their approach, today we measure our performance through the eyes of the business user who experiences our services. It is no longer enough if the servers are up 99.99% and the network is up 99.99%; what really matters is that the end user who is trying to complete a transaction through an application is able to do it in the specified time. By monitoring the performance across each layer of the chain we are able to control the performance and deliver an enhanced end user experience.

AK: LET'S LOOK TO THE FUTURE—WHERE DO YOU SEE THE FUTURE OF IT INFRASTRUCTURE TRANSFORMATION IN THE NEXT TWO TO FIVE YEARS, AND WIPRO'S PART IN IT?

DJ:

Wipro is very actively promoting a concept called the "21st Century Virtual Corporation". Essentially it means organizations globally should have:
• A detailed look at core and non-core processes
• Lean process optimization to drive sustainable productivity improvement
• Optimization of technology to enable innovation
• Extended execution, leveraging "partners" versus contractors in a whole new way.

By virtue of three decades of experience in IT services and solutions, Wipro is increasingly being chosen as a partner of choice by customers for IT transformation. Customers are increasingly looking to achieve business and IT alignment, more meaningful reporting, better coordination between various departments, a reduction of the overheads associated with managing multiple vendors, and so on.



EXECUTIVE PANEL ■ SECURITY INFORMATION AND EVENT MANAGEMENT

Actionable Intelligence




DR. ANTON A. CHUVAKIN talks about the usability and integration of security information and event management (SIEM) and touches on log management with the added insight of three industry experts: MICHAEL LELAND (NITROSECURITY), A.N. ANANTH (PRISM MICROSYSTEMS) and DEBBIE UMBACH (RSA, THE SECURITY DIVISION OF EMC).

AC: WHAT ARE YOUR CRITICAL SUCCESS TIPS FOR USERS? HOW DO YOU INCREASE THE CHANCE OF A SIEM DEPLOYMENT BEING SUCCESSFUL EARLY ON AND THEN GET TO ONGOING OPERATIONAL SUCCESS?

ML:

The most important thing is to manage expectations and align the necessary resources. Ensure that you have agreement from every department that expects to benefit from a SIEM, and make sure you understand the technical resources they can apply to the planning and implementation phases, as well as what metrics they're going to use to measure success. When working with larger organizations and enterprises with de-centralized networking, it's likely that the SIEM vendor was selected after a proof-of-concept deployment. The learning from these test runs can be critical to planning and successfully implementing production SIEMs. Identify where the obstacles throughout the organization might be, as well as the individuals who support the effort and can help champion the cause, and try to find some way that each area of the enterprise will benefit from the tool. Also, don't assume that a SIEM that performs well in a proof of concept with just two weeks' worth of production data will perform equally well with 12 months' worth. If you can't generate data in the kind of volume your SIEM will be faced with in the real world, make sure the vendor can supply sample data or provide access to a host with commensurate volumes of information to demonstrate what you'll be faced with a year down the road.
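As a rough illustration of that last point, the sketch below replays synthetic syslog traffic at a configurable rate so that a proof-of-concept SIEM can be observed under production-like volume. It is a minimal example under stated assumptions: the collector address, the message mix and the target rate are placeholders, and most SIEM vendors offer their own, more realistic load-generation tooling.

# Minimal sketch: replay synthetic syslog events over UDP at a target
# events-per-second rate so a proof-of-concept SIEM can be watched under
# production-like load. Collector address and message mix are placeholders.
import random
import socket
import time
from datetime import datetime

COLLECTOR = ("siem-poc.example.com", 514)   # hypothetical syslog collector
EVENTS_PER_SECOND = 2000                    # target sustained ingest rate
DURATION_SECONDS = 600                      # run long enough to observe indexing lag

SAMPLE_MESSAGES = [
    "sshd[{pid}]: Failed password for invalid user admin from 10.1.{a}.{b} port 4822 ssh2",
    "sshd[{pid}]: Accepted password for jsmith from 10.1.{a}.{b} port 50122 ssh2",
    "kernel: firewall DROP IN=eth0 SRC=192.0.2.{b} DST=10.1.{a}.5 PROTO=TCP DPT=445",
]

def make_event() -> bytes:
    msg = random.choice(SAMPLE_MESSAGES).format(
        pid=random.randint(1000, 65000), a=random.randint(0, 255), b=random.randint(1, 254))
    # <134> marks facility local0, severity informational
    return f"<134>{datetime.now():%b %d %H:%M:%S} testhost {msg}".encode()

def main() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    deadline = time.time() + DURATION_SECONDS
    while time.time() < deadline:
        start = time.time()
        for _ in range(EVENTS_PER_SECOND):
            sock.sendto(make_event(), COLLECTOR)
        # sleep away whatever remains of this one-second window
        time.sleep(max(0.0, 1.0 - (time.time() - start)))

if __name__ == "__main__":
    main()

Watching indexing lag, query response times and storage growth while something like this runs for a sustained period gives a rough sense of how a deployment will behave with twelve months of data rather than two weeks.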

DU:

I agree with you in terms of the POC and production environments. A lot of folks don’t anticipate what their volume will be and assume that it’s going to be able to scale. Vendors can provide tools to inject that data and there are other options to help you do that.

Setting expectations and getting executive sponsorship and support is key. The customers that we've seen have the most success with implementations start from the top, assigning resources and making sure that folks are on board. There are some process changes that go along with it—the technology is not going to work like magic. It's a tool and it needs to be adopted by individuals and users. I would also suggest you start with simple use cases based on policies you've already defined and for which you may or may not already have processes in place; at least you're starting small and you can get them in place. That helps you to validate and gain confidence in the system. One example would be server monitoring. You want to try to identify whether someone is compromising a server, and knowing this information can prevent insider abuse before significant damage is done. So getting server user monitoring processes in place is an early use case to start with. Another common use case is firewall monitoring, to meet compliance needs as well as to make sure that you can identify activity patterns for forensics purposes, for example. The next thing would be to really iterate over these use cases. So you start with the simple use cases and then plan for the next phase of use cases, which may be a little more advanced. This way you're taking baby steps as you go, validating, making accomplishments, and then preparing yourself for future success.
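To make the "start with simple use cases" advice concrete, here is a minimal sketch of the kind of server-monitoring check described above: it scans an SSH authentication log and flags accounts where a burst of failed logins is followed by a success from the same source address. The log format, the regular expressions and the threshold are assumptions for illustration; in a real deployment the equivalent correlation rule would normally be configured inside the SIEM itself rather than scripted externally.

# Minimal sketch: flag accounts where several failed logins from one source
# are followed by a successful login from that same source (possible brute force).
import re
import sys
from collections import defaultdict

FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")
ACCEPTED = re.compile(r"Accepted \w+ for (\S+) from (\S+)")
THRESHOLD = 5   # failures from one source before a subsequent success is suspicious

def scan(lines):
    failures = defaultdict(int)          # (user, source ip) -> failure count
    alerts = []
    for line in lines:
        if m := FAILED.search(line):
            failures[(m.group(1), m.group(2))] += 1
        elif m := ACCEPTED.search(line):
            key = (m.group(1), m.group(2))
            if failures.get(key, 0) >= THRESHOLD:
                alerts.append(f"possible brute force: {key[0]} from {key[1]} "
                              f"succeeded after {failures[key]} failures")
            failures.pop(key, None)      # reset the counter once a success is seen
    return alerts

if __name__ == "__main__":
    for alert in scan(sys.stdin):
        print(alert)

Run, for example, as: python brute_force_check.py < /var/log/auth.log. The same logic, expressed as a SIEM correlation rule, is the sort of small, testable use case that builds confidence before more advanced phases.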

AA:

You should think of it as a classic IT project. Plan, install, tune and train. When I say plan, it’s about involving the stakeholders who might use this and then, better yet, the vendor that you’ve selected, because quite often they can have useful advice for you. Think about volume, usage, what you’re going to audit and so on.



HEAD TO HEAD ■ BUSINESS PROCESS-DRIVEN ALM

Create your own app factory

ETM’S ALI KLAVER chats to EDDY PAUWELS (SERENA SOFTWARE) about business process-driven application lifecycle management and how they link to each other, as well as how businesses can benefit from such a strategy.




AK: EDDY, CAN YOU GIVE US A QUICK INTRODUCTION OF YOURSELF AND SERENA SOFTWARE?

EP:

I am based out of Belgium, head up the Product Marketing Department for ALM within Serena, and have been with the company for about five years. I have a Masters in Computer Science from the University of Brussels and I have been working in ALM for over 20 years now. During those years I have witnessed the evolution in methods as well as in tooling to support the development process, and have seen applications morph from a single monolithic source file to the thousands of tiny files that need to be combined today. I have also seen development teams spread more and more globally, the internet changing the capabilities we have in the marketplace, and a growing number of platforms and operating systems introduced over the years. As a result, building applications with confidence has only become harder, and this is actually one of the main reasons that brought me to Serena. We have a mission to help enterprises develop and deliver applications with confidence, given the increase in complexity and pressure from the business.

So who is Serena? Serena is an enterprise-scale company with over 29 offices worldwide. We have over 700 employees, of which more than 300 are in engineering, which is significant for a company in IT. We produce over $200 million in revenue each year. We have a size that enterprises feel comfortable working with, and that sets us apart from the smaller players; on the other hand we also have a very dedicated focus on ALM and app factories, which sets us apart from the mega-vendors because they can't bring that to the party. Serena is owned by Silver Lake, a world-leading technology investor.

As a company we think that, just as in a modern supply-chain-driven factory where you can order highly customized products and get them in a predictable period of time, in high quality and at competitive cost, it should be the same when you're creating applications. They should come in the same predictable way, so that you can develop them much like those supply chains within factories. This is what we call "create your own app factory", and at Serena we have developed a solution set so that you can create your own app factory yourself. The purpose of a factory, if you look at the factory itself, is to produce goods that are in demand in the market so that they can be sold and profit can be made. Similarly, you want apps that can be used within a business because the business demands it, and they can add value to the business process. This is why I love to speak about business process-driven ALM.

AK: WHAT DO YOU MEAN BY BUSINESS PROCESS-DRIVEN ALM?

EP:

As I mentioned before, more and more apps are getting to the core of the business process. In today's world, apps are critical to creating competitive advantage regardless of industry. I see it in all business areas—banking, automotive, aerospace and defence. Organizations want killer apps—those that bring in revenue more effectively than the competition, remove cost more effectively, provide the necessary transparency more effectively, and deliver better service to customers as well. This is what killer apps are, and if you can't deliver them on time then your competitor or successor will. This is what organizations are looking for—applications within the business that can differentiate them and set them apart. Understanding the business process is vital to being successful. Another vital aspect is timing, because the window you have in order to create something new that is leading edge is very small in today's economy, with everything moving so fast. So whenever possible, you should aim to configure apps or assemble them together so that you can create new processes and automate them. You have to be able to do this with a limited set of resources, because IT is very constrained with resources. In the early days the only way to create a new application was by coding or engineering, and it sometimes took months or years to do it. Now we have learned over time that we can deliver some of them by configuring and assembling building blocks. Configuring apps is especially suited to automating business processes that are evolving rapidly, because business itself can evolve very rapidly; sometimes you even need overnight agility to respond to those requirements, and this is where configuring apps is the way to go. Sometimes, if you don't have the building blocks or it's too complex, you need to use the limited IT resources, but in order to be successful they need to have a very close alignment with, and understanding of, the business and the processes themselves. So the way I look at it, business process-driven ALM means putting the business processes and constraints at the forefront of the development process so that you can create those killer apps. Ideally you do that by configuring, assembling or coding them, but always put the business process up front.

AK: WHAT DO YOU MEAN BY CONFIGURING APPS? I HEARD YOU TALK ABOUT PROCESS AUTOMATION, SO HOW DOES THAT LINK TO BPM AS WE KNOW IT TODAY?

EP:

I have been asked that a couple of times, and I think that business process management and automation does indeed have some alignment with what I call configuring of apps. But the difference lies in the details. Traditional BPM tools are extremely labour intensive; I have heard some of our customers talk about creating those applications in months. What we do with Serena technology is provide an ecosystem that allows the more technically savvy business people to create those applications in collaboration with IT, without necessarily consuming all the IT resources to do it. This way you're not only reducing the amount of IT resources that you need, you're also doing it in a tenth of the time and cost compared to the traditional BPM tools in the market. This gives us a unique position in the space. We do that by providing a tool that is extremely flexible and simple. It is highly graphical as well, so that you can configure those modules in a graphical way with limited or no coding at all, which gives you the possibility to come to market extremely quickly. We're giving you almost overnight agility to respond to the changes that you might have in your business environment.



ANALYST FEATURE ■ CLOUD COMPUTING

The pros and cons of cloud

LAURA DIDIO (ITIC)

Every year or so the high technology industry gets a new buzzword or experiences a paradigm shift which is hyped as "the next big thing". For the last 12 months or so, cloud computing has had that distinction. Anyone reading all the vendor-generated cloud computing press releases and associated news articles and blogs would conclude that corporations are building and deploying both private and public clouds in record-breaking numbers. The reality is much more sobering. While there is a great deal of interest in the cloud infrastructure model, the majority of midsized and enterprise organizations are not rushing to deploy private or public clouds in 2010. An independent ITIC web-based survey, which polled IT managers and C-level professionals at 700 organizations worldwide in January 2010, found that spending on cloud adoption was not a priority for the majority of survey participants during calendar 2010 (see Figure 1). However, that is not to say that organizations—especially mid-sized and large enterprises—are not considering cloud implementations. ITIC research indicates that many businesses are more focused on performing much needed upgrades to such essentials as disaster recovery, desktop and server hardware, operating systems, applications, bandwidth and storage before turning their attention to new technologies like cloud computing.

Figure 1. What are the organization's top IT spending priorities for 2010 (select all that apply*)?
Disaster recovery: 47%
Upgrade server hardware: 45%
Deploy new apps to support the business: 44%
Server virtualization software: 41%
Replace older versions of server OS: 37%
Security: 36%
Upgrade desktop OS: 36%
Upgrade desktop hardware: 35%
Storage: 31%
Upgrade legacy server-based apps/DBs: 30%
Improve revenue and profitability: 27%
Increase bandwidth: 24%
Skills training for existing IT staffers: 17%
Desktop virtualization (VDI): 15%
Upgrade the WAN infrastructure: 13%
Add remote access and mobility: 11%
Application virtualization: 11%
Green datacenter initiatives: 10%
Add IT staff: 9%
Build a private cloud infrastructure: 6%
Implement a public cloud infrastructure: 2%
*Total may exceed 100%
Copyright © 2009 ITIC. All Rights Reserved.

Despite the many articles written about public and private cloud infrastructures over the past 18 months, many businesses remain confused about cloud specifics such as characteristics, costs, operational requirements, integration and interoperability with their existing environment, or how to even get started.

DE-MYSTIFYING THE CLOUD
What is cloud computing? Definitions vary. The simplest and most straightforward definition is that a cloud is a grid or utility-style, pay-as-you-go computing model that uses the web to deliver applications and services in real time.

Organizations can choose to deploy a private cloud infrastructure where they host their services on-premise from behind the safety of the corporate firewall. The advantage here is that the IT department always knows what’s going on with all aspects of the corporate data from bandwidth and CPU utilization to all-important security issues. Alternatively, organizations can opt for a public cloud deployment in which a third party like Amazon Web Services (a division of Amazon.com) hosts the services at a remote location. This latter scenario saves businesses money and manpower hours by utilizing the



host provider's equipment and management. All that's needed is a web browser and a high-speed internet connection to connect to the host to access applications, services and data. However, the public cloud infrastructure is also a shared model in which corporate customers share bandwidth and space on the host's servers. Organizations that are extremely concerned about security and privacy issues, and those that desire more control over their data, can opt for a private cloud infrastructure in which the hosted services are delivered to the corporation's end users from behind the safe confines of an internal corporate firewall. However, a private cloud is more than just a hosted services model that exists behind the confines of a firewall. Any discussion of private and/or public cloud infrastructure must also include virtualization. While most virtualized desktop, server, storage and network environments are not yet part of a cloud infrastructure, just about every private and public cloud will feature a virtualized environment. Organizations contemplating a private cloud also need to ensure that it features very high (near fault-tolerant) availability with at least "five nines" (99.999%) uptime or better. The private cloud should also be able to scale dynamically to accommodate the needs and demands of the users. And unlike most existing, traditional datacenters, the private cloud model should also incorporate a high degree of user-based resource provisioning. Ideally, the IT department should also be able to track resource usage in the private cloud by user, department or groups of users working on specific projects for chargeback purposes. Private clouds will also make extensive use of business intelligence and business process automation to guarantee that resources are available to the users on demand. Given the spartan economic conditions of the last two years, all but the most cash-rich organizations (and there are very few of those) will almost certainly have to upgrade their network infrastructure in advance of migrating to a private cloud environment. Organizations considering outsourcing any of their datacenter needs to a public cloud will also have to perform due diligence to determine the bona fides of their potential cloud service providers. There are three basic types of cloud computing, although the first two are the most prevalent. They are:
• Software as a Service (SaaS), which uses the web to deliver software applications

to the customer. Examples of this are Salesforce.com, which offers one of the most popular, most widely deployed and earliest cloud-based CRM applications; and Google Apps, which is experiencing solid growth. Google Apps comes in three editions—Standard, Education and Premier (the first two are free). It provides consumers and corporations with customizable versions of the company's applications like Google Mail, Google Docs and Calendar.
• Platform as a Service (PaaS) offerings; examples of this include the above-mentioned Amazon Web Services and Microsoft's nascent Windows Azure Platform. The Microsoft Azure cloud platform offering contains all the elements of a traditional application stack, from the operating system up to the applications and the development framework. It includes the Windows Azure Platform AppFabric (formerly .NET Services for Azure) as well as the SQL Azure Database service. Customers that build applications for Azure will host them in the cloud. However, it is not a multi-tenant architecture meant to host your entire infrastructure. With Azure, businesses rent resources that reside in Microsoft datacenters. The costs are based on a per-usage model, which gives customers the flexibility to rent fewer or more resources depending on their business needs.
• Infrastructure as a Service (IaaS) is exactly what its name implies: the entire infrastructure becomes a multi-tiered hosted cloud model and delivery mechanism.
Both public and private clouds should be flexible and agile. The resources should be available on demand and should be able to scale up or scale back as business needs dictate.

CLOUD COMPUTING—PROS AND CONS
Cloud computing, like any emerging new technology, has both advantages and disadvantages. Before beginning any infrastructure upgrade or migration, organizations are well advised to first perform a thorough inventory and review of their existing legacy infrastructure and make the necessary upgrades, revisions and modifications. Next, the organization should determine its business goals for the next three to five years to determine when, if, and what type of cloud infrastructure to adopt. It should also construct an operational and capital expenditure budget and a timeframe that includes research, planning, testing, evaluation and final rollout.

PUBLIC CLOUDS—ADVANTAGES AND DISADVANTAGES
The biggest allure of a public cloud infrastructure over traditional premises-based network infrastructures is the ability to offload the tedious and time-consuming management chores to a third party. This in turn can help businesses:
• Shave precious capital expenditure monies, because they avoid the expensive investment in new equipment, including hardware, software and applications, as well as the attendant configuration planning and provisioning that accompanies any new technology rollout.
• Accelerate the deployment timetable. Having an experienced third party cloud services provider do all the work also accelerates the deployment timetable and most likely means less time spent on trial and error.
• Construct a flexible, scalable cloud infrastructure that is tailored to their business needs. A company that has performed its due diligence and is working with an experienced cloud provider can architect a cloud infrastructure that will scale up or down according to the organization's business and technical needs and budget.
The potential downside of a public cloud is that the business is essentially renting common space with other customers. As such, depending on the resources of the particular cloud model, there exists the potential for performance, latency and security issues, as well as issues with response times and with service and support from the cloud provider. Risk is another potential pitfall associated with outsourcing any of your firm's resources and services to a third party. To mitigate risk and lower it to an acceptable level, it's essential that organizations choose a reputable, experienced third party cloud services provider very carefully. Ask for customer references and check their financial viability. Don't sign up with a service provider whose finances are tenuous and who might not be in business two or three years from now. The cloud services provider must work closely and transparently with the corporation to build a cloud infrastructure that best suits the business' budget, technology and business goals. To ensure that the expectations of both parties are met, organizations should create a checklist of items and issues that are of crucial importance to their business and incorporate



EXECUTIVE PANEL ■ IT OUTSOURCING

A marriage of sorts

MIKE ATWOOD (HORSES FOR SOURCES) moderates a panel discussion on IT outsourcing touching on transformation, the cloud and some fantastic case studies with the help of CHUCK VERMILLION (ONENECK IT SERVICES) and KARINE BRUNET (STERIA).




MA: AS YOU EXPAND YOUR BUSINESS, WHAT IS YOUR IDEAL CLIENT AND WHAT LETS YOU RECOGNIZE THAT?

KB:

A typical client is one that wants to experience a big transformation in the coming months or years. It can be a transformation of its IT or of its entire organization, perhaps through mergers and acquisitions. I think this is where an outsourcer can bring the best value. Looking especially at the profile of Steria, we work very closely with our clients to assist them in their transformation. So for us, a very good client is one with a transformation agenda.

MA: AND IN TERMS OF IT RESOURCES, IS WHAT YOU'RE PROVIDING HOSTING, AND HOW DOES THAT HELP WITH A TRANSFORMATION?

KB:

I think we're providing not resources but services, and that's what we mean by outsourcing. Steria is a mid-sized player in the market and one of the top ten players in the European market. One of our differentiators is the fact that we have very good proximity and flexibility with our clients. Prior to migrating a client, one of our key capabilities is to be flexible enough in the transformation phase to adapt our service solution to whatever the client's challenges are. Another demonstration of that flexibility may lie in the fact that one part of our service incorporates an offshore alternative, but we may still need people on site for certain services because the client is not mature enough to have everything fully outsourced. We are able to accommodate that, and then industrialize the service delivery and the service mechanism to achieve an efficient delivery.

MA: DO YOU PROVIDE ANY TRANSFORMATIONAL CONSULTING SERVICES?

KB:

We provide two types of transformation consulting services; one on the technical side, which is about transforming your IT infrastructure—meaning virtualization and platform as a service or cloud—and one which is much more on the organizational level. Restructuring the way you operate your IT services can be tricky, but as a mid-sized player, we see more and more clients deciding on selective sourcing. This requires that they organize the governance, the processes and how they're going to manage multiple suppliers, as well as the potential to mix with internal teams. We also provide organizational consulting to assess the maturity of the organization and its processes, and advise on that.

MA: CHUCK, WHAT IS YOUR IDEAL SCENARIO?

CV:

For us it starts with pain—we don't like that our customers are in pain—but we like to talk to companies that are experiencing pain because, generally speaking, they're trying to avoid the experience of managing their own IT systems and have just had enough. They want someone else to do it for them. Or, they recognize the pain when they're implementing a new system and have decided they're not up to the task of effectively managing that new environment. The second thing we look for is a complicated environment. I think we really differentiate ourselves when we find that it's not just a single application our customers are looking to host or outsource; rather, they're looking to outsource multiple applications on several different types of infrastructure. This is where we're best able to differentiate our capabilities.

MA: AND DO YOU PROVIDE ANY SORT OF TRANSFORMATIONAL SERVICES?

CV:

IT outsourcing generally creates transformation within a company, because you're significantly changing the way people view and access their IT services. With regards to calling it transformational services—no, we don't provide any services described as such. We know who we are as an outsource provider of IT services. We pride ourselves on the fact that customers look at us not as a vendor, but as one of their own employees.

MA: ONE OF THE ISSUES THAT NORMALLY ARISES WHEN SOMEONE IS THINKING ABOUT OUTSOURCING IS: WHAT EXACTLY DO I OUTSOURCE? WHAT IS THE SCOPE OF THE OUTSOURCING PROJECT, AND WHAT AM I GOING TO SEND OUT? CHUCK, WHAT WOULD YOU ADVISE A CLIENT THAT THEY SHOULD OUTSOURCE?

CV:

We can provide a broad array of services and do everything for our customers; from as low as the base infrastructure providing the data centre services, all the way through

the application level that includes not only enterprise or ERP application management, but also functional consulting on top of that. Our support centre is one where someone can not only call in if they're having a technical issue, but also if they're having functional issues, such as getting a batch release in the ERP system, or how to better use the manufacturing functionality to accomplish a particular task. Also, with regards to EDI, or electronic data interchange, we will host and manage customers' EDI translators. We'll establish the trading relationships with the trading partners, test those transaction sets, and then ensure that they go forward into production. When there are errors flagged by the translator, we'll research the nature of the issue and then forward that information back to someone in the company to help fix whatever data issue is causing the problem. It's almost business process outsourcing in some of these cases. We also do outsourcing of desktop administration. In some cases it's done with on-site desktop resources, and in others it's done with the desktop resources in our location in a depot fashion. We have a very broad capability.

MA: AND WHEN YOU DO THE OUTSOURCING, WHAT ASSETS DO YOU OWN AND WHAT ASSETS DO YOU ADVISE THE CLIENT THAT THEY OUGHT TO KEEP OWNERSHIP OF?

CV:

We’re pretty flexible with regards to asset ownership. Generally speaking, we encourage customers to keep the assets they started with. We move the assets from their facilities to our facilities and provide the services. However, as the relationship starts to mature and the assets come to the end of their useful life, our customers can decide what they’re going to do from a new equipment acquisition perspective. Oftentimes they look to OneNeck’s cloud services as an alternative to owning their own assets. However, from a desktop perspective, they normally continue to own their own assets. MA: IS THERE SOME SCOPE THAT WOULD BE TOO SMALL OR THAT YOU WOULDN’T TAKE ON?

CV:

No, for us it's all about making sure our customers are satisfied and that we're solving a business problem.



ASK THE EXPERT ■ SECURITY INFORMATION AND EVENT MANAGEMENT

SIEM satisfaction

One of the most ignored benefits of security information and event management technology is using SIEM technology to improve overall IT operations. A. N. ANANTH and STEVE LAFFERTY (PRISM MICROSYSTEMS) talk to ETM'S ALI KLAVER about how improved operations is seldom given much attention but might well provide the most tangible cost justification.

AK: ANANTH, COULD YOU GIVE US A FEW EXAMPLES OF SIEM IN OPERATIONAL USE CASES?

AA:

One example that's clear in these recessionary times is staff turnover, particularly in North America, in the form of so-called pink slips. A problem that IT people


have everywhere is to verify that the accounts for these departing employees have been properly terminated. There is a use case which is called the pink slip null which is essentially when HR tells you that a person has been let go or has chosen to resign, which to IT means that their access and accounts must be removed from the

directory. You can put that username into your list and start looking for any activity from those usernames for the next few weeks. This is a failsafe to make absolutely sure that there is no further activity. If you’re doing Active Directory you’ve probably removed that username or disabled them, but they could have configured a service with their name, as we



found out with one of our customer locations some time back. So it's a good idea to do this because it's very inexpensive to run and should come up empty if you've done your job right. It's a fantastic way of assuring yourself that things have gone according to plan. If you look at security vulnerabilities, a lot of them happen because of default accounts that have been dormant for a long time and that no one ever turned off after their use was over. So the pink slip null is a really good operational use case. Another example is antivirus update failure. One of our customers is a government contractor that uses McAfee, and their concern is that the antivirus has not updated on a nightly basis. There is no fun in looking for successes, because most of the time that's what you get; instead they look for update failures, the desktops or servers that, for whatever reason, couldn't update. A very small percentage of machines will have this issue, so it's a small report, but it's also very useful because this is an operational problem. If any of your machines are not properly updated they could potentially infect the whole network.
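As a rough sketch of the pink slip null check described above, the script below takes a list of usernames that HR reports as departed and prints any lines in the supplied log files that still mention them. The file paths, the plain-text log format and the simple substring match are assumptions for the example; in practice this would usually be a scheduled SIEM report that is expected to come back empty.

# Minimal sketch of a "pink slip null" report: list any log activity that still
# mentions accounts belonging to departed employees. Expected result: nothing.
import sys

def load_terminated(path):
    # one username per line, e.g. exported from the HR system
    with open(path) as handle:
        return {line.strip().lower() for line in handle if line.strip()}

def find_activity(log_path, terminated):
    hits = []
    with open(log_path, errors="replace") as handle:
        for lineno, line in enumerate(handle, start=1):
            lowered = line.lower()
            for user in terminated:
                if user in lowered:
                    hits.append(f"{log_path}:{lineno}: activity mentioning terminated account '{user}'")
                    break
    return hits

if __name__ == "__main__":
    # usage: python pink_slip_null.py terminated_users.txt auth.log app.log ...
    users = load_terminated(sys.argv[1])
    findings = [hit for log in sys.argv[2:] for hit in find_activity(log, users)]
    for finding in findings:
        print(finding)
    print(f"{len(findings)} suspicious lines found (expected: 0)")

An empty report is the desired outcome; any hit is a prompt to look for orphaned accounts, services or scheduled tasks still running under the departed user's name.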

SL:

Operations is tasked with keeping the infrastructure running. Typically, when something goes wrong in the infrastructure, the only way they hear about it is when a user calls and says their service or application isn't running. With SIEM collecting all of the information from all the systems, applications, users and everything else, it has unique knowledge of things that are about to go wrong before they go wrong. So operations can go in there and proactively prevent things from happening before the user is actually impacted. It can also monitor things, as Ananth mentioned before, like process and service monitoring and virtual infrastructure monitoring. All these infrastructure components are producing logs that the SIEM product is monitoring—collecting, alerting on and everything else—and that greatly helps operations keep the infrastructure running well and available to the end users.

AK: IT SEEMS THAT SIEM IS ALMOST A SAFEGUARD IN ANTICIPATING WHAT WILL HAPPEN RATHER THAN DEALING JUST WITH THE CLEANUP, SO THE BENEFITS ARE FAIRLY TANGIBLE AND OBVIOUS.

AA:

Yes—after all, the old adage about an ounce of prevention being worth a pound of cure is right in this sense. The ROI with this technology in the operations use case is quite compelling. We're well aware that IT budgets everywhere are under pressure and are shrinking. For example, the typical ratio of system administrators to servers, which used to be 1:25 a year or so ago, has jumped up to 1:40. That means one system administrator is now responsible for 40 machines, and this is true even after you have virtualized. So the pressure on the system administrator to manage these machines and their uptime is a big deal for all these organizations. These machines all have some business function, otherwise they wouldn't exist. To be able to automate this so that expensive expert admins can focus on the real issues, and on those that are going to become issues, as opposed to having to chase your tail and react all the time, is a compelling proposition. Ops people will generally find that this technology can be their friend. They're the ones who suffer all the time, having to stay back on long weekends for upgrades and being called in the middle of the night. The more that you can head that off at the pass, as it were, the better SLA you'll have in your organization.

SL:

There is also a lot of functionality that SIEM will provide to help with capacity planning and performance of your critical systems. Looking at CPU trends, memory trends and disk space trends allows you to anticipate long term issues and plan well in advance to get more disk space, put more memory in, or perhaps deploy your critical applications on bigger servers. AK: SO SIEM ACTUALLY TIES INTO THE OVERALL BUSINESS STRATEGY AS WELL?

AA:

Traditionally, SIEM has been something that only a subset of the IT department has tended to use, for limited purposes such as security or compliance. It has taken only shallow root, meaning the technology is capable of doing a lot more within the enterprise, but that potential is not realized. We know that data owners, usually middle management, are the people that understand the data best, but IT is the custodian of the system on which the data resides, and this technology can do a lot for middle management. So if you present this information to the end users of the data, it will be of much greater value to the entire business process. As the technology is currently used and named (SIEM) it is often limited to one or two tricks; however, it's capable of a lot more and of offering much greater value.

SL:

Certainly, the business owner can provide context to the data that IT operations simply can’t because they don’t understand the usage patterns and the typical things that people would do in their operation or in their organization—so that’s a great point.

AA:

To give you a specific example of this: a sales manager focuses entirely on quotes and he knows about underperforming sales people. For example, say a member of the team isn't doing well and the sales manager is unhappy with him. The fact that that salesman has sensed it and has begun copying all of his quotes off onto a USB drive would be a major red flag for the sales manager, but might mean nothing at all to IT, because that salesman is authorized and within his rights to do this. Showing that kind of trend to the data owner versus the custodian can make a huge difference and can be very valuable inside an enterprise.

AK: WHY DO YOU THINK OPERATIONS GETS SHORT SHRIFT WHEN COMPARED TO SECURITY AND COMPLIANCE?

AA:

I do think that the security guys thump their chests and get a lot of attention. Quite frankly, in 2010, security is everyone's problem and has loomed large in all of our sensibilities. Our observation is that security folks will typically leverage the compliance requirements in order to get whatever they need, and operations comes third in the sense that, once the evaluation is done, operations gets to actually use it. So their use cases aren't often considered on par. It is also a fault of the analyst community, which has chosen to position this technology as SIEM—this is limiting in and of itself. The log management space has a number of applications, security being perhaps one of the most important, but it's certainly not the only one. If a user chooses to limit their use of this technology to security, that's fine, but it doesn't mean the technology itself is limited in some fashion.



HEAD TO HEAD ■ SEARCH TECHNOLOGY

A question of semantics

ETM'S ALI KLAVER talks with DR KATHY DAHLGREN (COGNITION TECHNOLOGIES) about an innovative approach to meaning-based text processing technology. Among a discussion on market trends and the bottom line, it's clear that Cognition is at the forefront of search—now and into the future.

AK: CAN YOU GIVE US A BRIEF OVERVIEW OF COGNITION, WHAT YOU DO, AND PERHAPS TOUCH ON THE BACKGROUND OF THE TEAM YOU WORK WITH?

KD:

Cognition Technologies, based in Culver City, California, is made up of a team of PhDs that have been with me for 20 years. We have developed, with over 450 man-years of effort, a search engine that is years ahead of its time. While everyone else has been developing statistics-based search, we have been using linguistics to really understand the complexity of language so that the computer will understand you. It is the next evolution in search technology. We patented a semantic search architecture, known as CognitionSearch, that employs a unique mix of linguistics and mathematical algorithms. We've developed a special program that actually understands English to some extent. And I really stress "understand". The Cognition team has built the largest computational semantic map of English in the world today.


Almost all search engines use statistics or popularity rankings to bring up search results based on your search terms or phrases. Google, for example, uses a combination of statistical methods and popularity rankings. But the problem with statistics or popularity rankings is that you only get the most “likely” results for your search. It is just not possible for it to be precise. Cognition is precise, while at the same time being complete and exhaustive in what it finds. This difference between statistical methods and popularity rankings and what we do is huge. We understand the search terms and phrases. I started at IBM working on this in the 80s, and now with a team of PhD computational linguists we have gone against the popular opinion of our time. I have really invested in the belief that what we call semantic search is critical to being able to do effective search. So to clarify what we do, because it is a bit complicated, we not only understand the search terms or phrases, but we also get all the synonyms within a context. This issue of context is super important in search. Just think about any time you say something—it’s not just about the words, it’s also



everything around it. For example, if you want to look up websites on heart disease, you’re not interested in documents about “the heart of the city” or “the heart of the matter”. Cognition understands the context to find only the desired information. At the same time, Cognition finds synonyms like “cardiac disorder” and more specific concepts like “ischemia” and “fibrillation”. We understand what you’re looking for so we can get 90% precision and 90% recall linked not only to the search terms but to the “meaning” of what you’re looking for. CognitionSearch really helps organizations that need to find very specific information in terabytes of data. We have developed solutions for law firms and scientists developing drugs to find very specific information in their unstructured data. For example, if you’re looking in a huge database of scanned articles and you want to find one relevant piece of information among literally tens of thousands of documents and emails, it takes half the amount of time with Cognition as with other methods. And Cognition finds the smoking gun that can be missed altogether because of variations in wording. We deliver savings of 50% for our customers just by eliminating the sheer man hours that are needed to go through all that unstructured data. I also want to stress how easy we are to integrate into your technology stack—we’re certainly not a headache at all. We work on Windows and Unix and are a plug and play kind of technology. CognitionSearch easily fits on top of any unstructured data environment. AK: WHY ARE YOU FOCUSING ON THE SEMANTIC WEB NOW? WHAT HAS CAUSED THE RISE IN SEMANTIC SEARCH?

KD:

There’s a lot of buzz relating to upcoming “Web 3.0” and the semantic web. Over the last 15 years, finding simple terms through popular queries has been the norm. Even though in total count 80% of today’s queries are popular, in terms of distinct queries, 80% of them are unique. This “long tail” presents a daily problem, especially for the professional searcher with something very particular in mind. Attempting more specific or complex queries has made all of us aware of the drawbacks of today’s search technologies. More and more, people want to think of the web as a giant database from which they can get answers to their questions. To sort through all this information requires context, disambiguation of the queries and the use of more natural or conversational language. Simply said, when a computer understands meaning it can do intelligent search, reasoning and combining. This demands the kind of semantic capabilities that Cognition has now. One of the interpretations of the term “semantic web” has involved hand coding of documents with semantic tags and relationships among them. Semantic tagging has had a following but ultimately it won’t be practical. As content continues to grow at an exponential pace, who is going to do all the tagging and how do all the definitions stay synchronized? For others, the semantic web means automatically interpreting the documents for meaning and automatically inferring relationships among them.

This will become the mainstream approach and this is what the CognitionSearch tools enable. AK: TAKE US THROUGH WHAT YOU CAN OFFER YOUR CUSTOMERS. HOW WILL WORKING WITH YOU IMPROVE THINGS SUCH AS APPLICATION PERFORMANCE, USER SATISFACTION AND SO ON?

KD:

Cognition is helping Microsoft Bing bring intelligence to its search function in collaboration with Microsoft division Powerset.

Cognition Text Analytics analyzes a document base for words, concepts, frequent word sequences and unknown words. Measures of frequency and salience are reported. These analytics highlight unusual or unexpected terminology, which may be suspicious for a legal case or very useful in a market research case. They also provide input to a customization process where the "company speak" or local slang of the client or customer base can be added to the Cognition semantic map.

Cognition More-Like-This interprets the meaning of documents and can find those with similar content. The basis document can either be a document in a Cognition index or any document identified by URL. Unlike other document-similarity systems, Cognition More-Like-This is based on conceptual similarity rather than string similarity.

Auto-categorization is where Cognition reads and understands documents and places them in categories provided by the client. Auto-categorization compares the salient concepts with given category meanings and ranks the match. Various statistical methods are deployed for the comparison, along with the Cognition ontology.

Cognition Document Foldering enables the user to automatically place documents in pre-chosen topic areas, for example for a legal case. Foldering queries are often complex concept Booleans that express the content of the topic areas, and the results are segregated into the appropriate folders. Each part of the Boolean is interpreted linguistically with semantic concepts.

The semantic search engine, CognitionSearch, is available for use in, for example, Wikipedia.cognition.com and medline.cognition.com. Users experience finding all and only the desired information and don't waste time browsing irrelevant documents. Cognition finds the particular content you're looking for.

The Cognition Ad Matcher interprets the meaning of queries, documents, phrases and ad copy. Using meaning as the medium, this ad placement technology maximizes the number of ad phrases matched and minimizes the poor matches.

AK: HOW DOES COGNITION DIFFER FROM OTHER SOLUTION PROVIDERS IN THE MARKET AT THE MOMENT?

KD:

Other solutions in the space use pattern-matching or statistical methods to compute relevance, guess meaning and match documents and queries. Such methods make many errors, typically performing with 20% precision and recall. Statistical methods are popular as they have the promise of relatively low development cost while yielding improvements that are hoped to be

“Many companies need highly precise and thorough search capabilities of their unstructured data—we can deliver that and we are the only ones who can!”


ANALYST FEATURE ■ SERVICE LEVEL MANAGEMENT

Seven Reasons

PAUL BURNS (NEOVISE) shares with ETM why every CIO must embrace Service Level Management as a way to transform the IT organization.

Service Level Management (SLM) is not the hottest hot topic in IT. In fact, it is not even a new topic. SLM has been in use for many years, improving IT service quality, performance and availability. It has also been used to increase customer focus and grow the maturity of the IT organization. Still, many CIOs have yet to implement SLM to any significant degree. Some remain focused on infrastructure and technology

components rather than the more abstract notion of IT services. Others have implemented ad hoc processes which address—at least superficially—a couple of aspects of SLM such as service level monitoring or reporting. This is not to suggest that ad hoc implementations are not worthwhile or that “to do SLM properly” requires strict adherence to every possible element of SLM. The idea is that SLM must be firmly embraced in order to achieve the most benefit. Of course, there are

always a large number of projects and processes competing for the attention of IT, and not all of them can be managed to peak levels of benefit. So why should an investment in SLM receive priority over other initiatives? While seven specific reasons will be offered here, it is also helpful to look at two conceptual benefits of “SLM done right”. First, SLM can be foundational. Rather than being treated as an independent process, SLM should provide support to other critical IT processes.

SLM should not just be about monitoring and reporting on service levels, it should also be about ensuring the right resources are applied to the right services in order to optimally support user needs. To do this, the SLM process needs to link with other IT processes from financial management and service definition to supplier management and continuous improvement. The second conceptual benefit of SLM is that it can be transformational. SLM should be used to drive a greater understanding of customer needs and their relative business priorities. This clarity should result in far better alignment between the IT organization and the business it serves. The actions of IT should then begin to provide more direct value through consumable services rather than indirect value through the technology infrastructure that supports those services. SLM as a transformational process not only helps the IT organization see the world

differently—it helps IT take more of a leading role within that world. With the conceptual benefits of SLM in mind, as well as some SLM philosophy, every CIO should be ready to learn why they must embrace SLM. If your IT organization is already using SLM heavily, the following seven points should still provide some fresh thoughts on what you could be getting from your SLM process. If SLM is still an ad hoc process or only used on a limited basis within your organization, these seven points may help you identify some important gaps. Finally, if you are still just considering SLM, it is hoped that these seven reasons will provide that last bit of evidence or motivation required to move your SLM project forward.

REASON ONE: SLM IS FOCUSED ON SERVICES
Let's face it; most IT consumers only care about the services they receive and not at all about the underlying technology. While that shouldn't be news to any CIO today, the idea of managing services rather than technology has not yet fully permeated all IT organizations. The concept of a service is supposed to provide a level of abstraction above the IT infrastructure and even many applications. This way, IT consumers are not bothered with the details such as servers, operating systems, VLANs or SANs. However, this abstraction is not just for the IT consumer—it must also be used by the IT organization. Only by adding the end-to-end service perspective can the IT organization operate in a way that both allows them to drill in to deep technical issues, and maintain an appropriate interface for the consumer. Adopting SLM helps ensure the IT organization has an appropriate level of service-orientation.

REASON TWO: SLM ENABLES QUALITY
Quality is an interesting and involved topic of its own. For example, depending on the particular situation, there can be several



competing dimensions of quality. Do users view quality as higher when trouble tickets are resolved quickly or when they don't have to be re-opened again for a slightly different use case? Users may be willing to trade off one for the other to some degree, and IT organizations often find themselves trading off quality dimensions such as speed and reliability. There can also be economic or cost differences when trading off dimensions of quality. It turns out that the process of creating a service level agreement (SLA) can go a long way toward arriving at the proper quality mix for IT services. SLM is just what is needed to drive this process with inputs from users, business leadership and the IT organization.

REASON THREE: SLM ENABLES PRIORITIES
As a CIO, you don't just want things done right; you also want the right things done. As discussed, the notion of quality helps get things done right. However, it is the separate notion of prioritization that helps the right things get done. So what is the best approach to take for prioritizing capital investments, hiring, improvement projects, or anything else within IT that has inherent tradeoffs? There are many approaches that can help, and perhaps no single approach will work in all cases. Yet there is a lot to be said for leveraging the priorities that are established for various IT services as part of the SLM process. Through SLM, the IT organization should build an understanding of which services provide the most value. The needs of those services should be addressed before the needs of lower priority services.

REASON FOUR: SLM IS PROACTIVE
One easy recipe for creating an underperforming IT organization is to be reactive—there are so many ways for IT to become reactive. How many dashboards and reports and statistics are available to analyze? While this type of information is critical, the risk is that IT may begin to drive too many decisions from lists of issues, problems and challenges. What is needed instead is for IT to be proactive. That means deciding on desired outcomes up-front, and then making them happen. Reviewing a report that says email is down for one to two hours every month, and then pulling together a team to resolve it, is reactive. Having an agreed amount of email availability documented in an SLA, and having made the appropriate investment to ensure it—as well as detect issues early, before they impact the SLA—is proactive.

REASON FIVE: SLM SETS EXPECTATIONS
SLAs need to be visible. Users should be able to see at a glance the committed service levels for a given service. Otherwise they are far more likely to arrive at a set of expectations that has nothing to do with what the IT team intends to deliver. When a user knows that a particular application is not typically available on the weekend, frustrations can be greatly reduced even if the user still wants the application available every day. Of course, setting expectations should not simply mean keeping expectations low. Some service levels may be improving over time, and that represents some positive publicity which should not be lost. Further, IT should indeed be "on the hook" for delivering to the challenging service levels to which it has committed. The business that IT supports has commitments to its customers and owners and must know it can rely on IT to meet its needs.

REASON SIX: SLM FACILITATES NEGOTIATION
Since IT organizations do not have the luxury of an "all you can spend" budget, they must demonstrate the value they deliver through services. However, the related budget process is not as simple as it may sound. The business will always want and need more service than it is receiving, and the IT organization will always want and need more budget than it is granted. So once again, the SLM process is involved with tradeoffs—in this case in the role of negotiator. The list of services and associated service levels can become central to the budget discussion. For example: "This is the value we are delivering today, under the current budget. Alternatively, with the incremental budget proposal, we can improve the performance and reliability of the CRM system."

SLM helps ensure that budget discussions are based on facts and tradeoffs, rather than dreams and politics.

REASON SEVEN: SLM IMPROVES IT MANAGEMENT
The level of complexity in IT is growing every day. IT managers, from supervisors to the CIO, must rely on technical specialists, third parties and others more than ever to design, deploy and operate IT systems. SLM helps management transfer responsibility to others by clarifying service attributes and required service levels. For different departments within the same IT organization, operating level agreements (OLAs) may be used to formalize expectations around technical services in support of SLAs. For third parties, SLAs and underpinning contracts help transfer responsibility for how things get done while maintaining management control over the outcomes and related service levels. This indirect approach to management allows IT to scale without creating a loss of control. Every CIO that makes use of critical third party services—whether through desktop support services, software as a service (SaaS), or others—must have a proven SLM process in place.

SLM is neither the newest nor the hottest topic in the IT industry today; however, it does provide a solid foundation for IT. SLM is not a process that stands alone or simply a software package that can be purchased and installed. It is a philosophy that touches on many other IT processes and provides the most value when woven into the broader IT management fabric. SLM also provides a way for the IT organization to transform itself. Rather than focusing on technical infrastructure, SLM allows IT to address end-to-end services in support of user and business needs. If you are a CIO, you now have at least seven good reasons to embrace SLM.

Paul Burns | PRESIDENT AND FOUNDER, NEOVISE
Neovise is an IT industry analyst firm that uniquely adds business perspective to technology. Paul has nearly 25 years' experience in the software industry, driving strategy for enterprise software solutions through product management, competitive analysis and business planning. He has held a series of leadership positions in marketing and R&D, and spent two years as Research Director/Senior Analyst at another firm immediately prior to founding Neovise. Paul earned both a B.S. in Computer Science and an M.B.A. from Colorado State University.



ASK THE EXPERT ■ GOVERNANCE, RISK AND COMPLIANCE

Moving with the times


ETM’S ALI KLAVER talks to PRASHANTH SHETTY (METRICSTREAM) about managing enterprise GRC programs in global organizations, and how to realize the benefits that can stem from successful implementation.



AK: TELL US ABOUT THE SOLUTIONS YOU OFFER. WHAT CAN YOU DO FOR CUSTOMERS OUT THERE?

PS:

MetricStream is the market leader in enterprise-wide governance, risk, compliance and quality solutions for global corporations. MetricStream solutions are used by leading corporations such as NASDAQ, The United Bank of Switzerland, TVA, Amedisys, Cummins, and many other customers in diverse industries including energy, financial services, pharmaceuticals and manufacturing. We manage their risk processes, internal audit, and regulatory and industry-mandated compliance. We also own a web property called Compliance Online, which is used by over one million compliance professionals worldwide for gathering compliance intelligence, training and discussion groups. MetricStream has also been ranked as the world leader in enterprise GRC and IT GRC solutions by leading independent industry analysts. In summary, we are a privately-held, profitable company with investments from some of the best known names in venture capital, including Kleiner Perkins Caufield & Byers, as well as Advanced Equities.

AK: LET'S LOOK AT MARKET TRENDS—WHAT ARE THE KEY CHALLENGES THAT ORGANIZATIONS ARE FACING TODAY FROM A GRC PERSPECTIVE?

PS:

We've seen a few different challenges; I would classify them primarily as organizational, operational and technology challenges. On the organizational side, this really has to do with a lack of sufficient buy-in on the importance of GRC from the upper echelons of management. This happens either because there are other pressing priorities, or because some of them just don't want to change the status quo. In fact, I was recently in a meeting with executives at a leading pharmaceutical company and one of them said: "We've been in business for over 100 years and we didn't have to do anything special on the GRC front—so why change now?" I think this reflects some of the organizational mentality that is contributing to the challenge of GRC adoption. As a result, GRC sometimes gets adopted very sporadically or inconsistently, which makes it quite ineffective. Secondly, there's the operational challenge: even when the organizational buy-in does exist, companies still face the execution challenge. How do I really operationalize this? Where do I get started? In what sequence do I proceed? This is the classic case of there being too many things on the table and not being sure where to begin. Finally, the challenge on the technology side is that, ever since GRC became a mainstream term a few years ago, a myriad of technologies have been positioned as GRC-centric; business process modelling (BPM), business intelligence (BI) and event monitoring are just some examples. Companies are faced with deciding which technology solution to deploy and which is more effective than the others. They also have to consider how much to rely on technology versus human judgement. After all, functions such as compliance, controls and audits have been done manually for a long time—and for good reason—because a lot of these are based on human judgement. So you really need to strike the right balance between automation and judgement. These are just some of the challenges that we've seen. In spite of them, I would say some organizations are starting to figure out how to work through this by starting with the basics, keeping their focus on business objectives and performance metrics, and using that as a starting point to move forward.

AK: WHAT STEPS ARE GLOBAL ORGANIZATIONS TAKING IN ORDER TO CONSOLIDATE THEIR GRC PROCESS? CAN YOU TALK ABOUT SOME OF THE PRACTICAL CHALLENGES OF TRYING TO REALIZE THE INTEGRATION OF GRC INITIATIVES IN THESE GLOBAL ORGANIZATIONS?

PS:

One thing organizations are beginning to realize is that the historically disparate functions of audit, risk management, compliance, policy governance and so on are not truly siloed initiatives; rather, they are interconnected with one another. Companies are now looking to exploit the benefits of this interconnectedness. So when we talk about consolidation, it can be realized in many different ways. One of the common ways is using, for example, a risk-based approach to audit and compliance: organizations begin by identifying the major material risks facing the company and then use those to drive compliance testing, audit strategies, or both. In fact, MetricStream's solutions are designed to support exactly this risk-based methodology. Another approach to consolidation that we've seen is addressing the so-called multiple compliance problem by using a standardized set of business process or IT controls to address, in a rationalized manner, several different regulatory and policy requirements. So again, our platform and our solutions support this risk-based approach as well as this notion of shared or rationalized controls, and this level of consolidation is starting to enable customers to derive the maximum benefit from their solutions. As straightforward as this sounds, consolidation is not always easy, because some companies don't even have a discipline of formal risk management, so they don't know what their major material risks are or how to measure them. Another issue is that the data often lies in disparate, separate systems, and companies have to take on the overhead of technical data migration to get the integration to work. That is not always easy.

AK: I KNOW GRC IS A RELATIVELY NEW TERM, BUT WHAT IS YOUR PERSPECTIVE ON THE EMERGING CONVERGENCE OF ENTERPRISE GRC AND IT GRC?

PS:

At MetricStream, we believe that the current distinction between enterprise GRC and IT GRC is based on a dated understanding of the market, and is bound to change fairly quickly. The separation was largely a matter of convenience, artificially dividing the market into two spaces, but from what we've seen there is a good deal of overlap and a natural convergence of EGRC and IT GRC. We believe that IT GRC, for instance, is not driven by technical objectives alone but by business objectives such as corporate performance indicators, which in turn are tied to the integrity of IT processes and systems. These are exactly the same drivers as enterprise GRC, so the two share much in common.
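As a purely illustrative aside on the "shared or rationalized controls" approach Shetty describes above, one shared control can be mapped to each of the regulatory requirements it helps satisfy, and coverage can then be reported per regulation. This is a minimal hypothetical sketch, not MetricStream's implementation; the control and regulation names are invented.

```python
# Hypothetical mapping of shared controls to the regulations each one addresses.
controls = {
    "access-review-quarterly":  ["SOX", "PCI DSS"],
    "change-approval-workflow": ["SOX"],
    "encryption-at-rest":       ["PCI DSS", "HIPAA"],
}

def coverage_by_regulation(controls):
    """Invert the mapping: regulation -> controls that help satisfy it."""
    out = {}
    for control, regulations in controls.items():
        for reg in regulations:
            out.setdefault(reg, []).append(control)
    return out

for regulation, shared in sorted(coverage_by_regulation(controls).items()):
    print(f"{regulation}: covered by {len(shared)} shared control(s): {', '.join(shared)}")
```

The design point is the one made in the interview: maintaining one rationalized control set, rather than a separate set per regulation, is what lets a single piece of testing evidence serve several compliance obligations at once.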



EXECUTIVE PANEL ■ BUSINESS PROCESS MANAGEMENT

Business and IT—side by side

DANA GARDNER (INTERARBOR SOLUTIONS) moderates a discussion on the productivity benefits and future of business process management with the help of MARK TABER (ACTIVE ENDPOINTS), ANGEL DIAZ (IBM) and SAMIR GULATI (APPIAN CORPORATION). This expert panel examines BPM and explores what it delivers to enterprises in terms of productivity and agility.



DG: WHAT IS IT FROM YOUR VANTAGE POINT ABOUT BPM THAT IS MAKING IT SO POPULAR NOW? WHAT ARE THE BUSINESS AND TECHNOLOGY DRIVERS THAT ARE MAKING THIS THE RIGHT TIME FOR MANY FOLKS TO BE GETTING INTO BPM IN A BIG WAY?

MT:

The big trend that we see is a power shift from central IT (mega-projects) to the line of business. This has huge implications because smaller departments are much more expertise-bound and require real-time ROI. What is most important to the project manager is gaining efficiency while at the same time reducing risk. The ROI must be believable for the current year, and the proof demanded is much more rigorous; this is where BPM comes into play.

AD:

I think all of us who have been in the BPM business know that it's been around for quite some time: everything from the basic assembly line to giving power to the people, allowing business folks to be first-class participants in the design and understanding of the key business outcomes, and in the rapid implementation of those. It's important when you look at where our customers are headed in the next year or so. It's very clear that yesterday's best in class is no longer good enough. The workforce needs to be more productive, and we expect that across the board—a 26% increase in productivity. Supplier lead times need to be 62% faster in two years; stock to stock, 50% faster in two years. Key organizations need to be 25% more efficient in two years. How are we going to get there? By very prescriptive approaches, as Mark was saying, by focusing on business problems and involving all the stakeholders: business leaders, line-of-business executives, IT developers and business analysts. These people are needed to deliver innovative, quick solutions to the market that address those specific customer needs. It is a great inflection point in both business and technology. The confluence of many technology buzzwords and acronyms to do with social and textual capabilities, dynamic networks and IT convergence—that type of technology converging with business needs is bringing us to an exciting point in the industry.

SG:

For us, business has always led the charge for BPM initiatives. There has to be a compelling business need to deploy BPM and, when there is, it catches fire very quickly. Typically, IT comes along for the ride and then realizes that it's an extremely powerful technology that can be applied in a number of different areas, so the business imperative is key. What is happening now is that business is getting more and more educated about the benefits BPM drives: all the ways it can reduce cost, increase productivity and lead to things like improved response time or agility. What we're finding with our customers, because they're maturing now in terms of their usage of BPM, is that they're seeing things like increased quality, where you have more process consistency and better rigour in the way you're doing business. It leads to better executive management visibility of end-to-end processes and improved customer service. BPM is finding its way into a lot of customer-facing processes, and that is impacting customer retention and service, with improved risk management as a by-product. So I think business is now appreciating all the other benefits of using BPM technology beyond the traditional benefits of reduced cost and improved response time. At a time when people are trying to come out of this recession and introduce new products, it's critical to have that end-to-end visibility. Business is leading this comeback, but technology trends are driving adoption. It really has to do with increased collaboration. If you look at the workforce today, everyone is adopting social media and wants to work more effectively, and BPM is the perfect platform to drive that. And with the increasingly mobile workforce, BPM technology can adapt to a workforce that is both collaborative and mobile. This will really drive adoption from a technology perspective.

DG: IT'S INTERESTING THAT WE'VE TALKED ABOUT COLLABORATION IN A SOCIAL CONTEXT AND THE FACT THAT THE OLD STATIC APPLICATION FOR COMMUNICATIONS MIGHT NOT BE THE BEST WAY TO COMMUNICATE ACROSS PROCESS BOUNDARIES AND DOMAINS. FROM YOUR PERSPECTIVE AT ACTIVE ENDPOINTS, MARK, DO THE BPM AND COLLABORATION TREND LINES EITHER MEET OR DIVERGE—WHAT IS THE RELATIONSHIP?

MT:

I am going to take maybe a



HEAD TO HEAD ■ MAIL ARCHIVING

Archiving on demand

MARTIN KUPPINGER (KUPPINGER COLE) talks to Astaro's ERIC BÉGOC about mail archiving, how it's changing, and what to expect in the future.

MK: ERIC, MANY PEOPLE STILL DO NOT KNOW THAT MOST COUNTRIES ENFORCE STRICT BUSINESS REGULATIONS GOVERNING EMAIL ARCHIVING. HOW CAN THIS BE?

EB:

Well Martin, the problem is that everybody is in a kind of standby state. If you don't really know what standards should be enforced, because the requirements and the consequences of not being compliant are unclear, you just wait. Another problem is that most email archiving solutions are far too expensive and require a lot of administration. So I guess, at the current time, a lot of companies would rather just wait and hope that being caught doing nothing is cheaper than implementing what should be a business standard.

MK: WHY SHOULD EMAILS BE ARCHIVED IN THE BUSINESS ENVIRONMENT, OR WHY DO THEY HAVE TO BE ARCHIVED THERE?

EB:

First and foremost, as you've already stated, archiving emails is the law. So I'd like to point out why that is. Email today is seen to be on a par with printed business letters, and no one has ever doubted that you have to archive documents related to your contracts, which includes business communication in written form. It was a logical consequence to make the archiving of emails obligatory. Also, the burden of proof lies directly with you when any disputed matter is at hand; if you can't prove your position, you will have to pay a penalty. And yet compliance is not even the best thing about a good email archiving solution.

MK: THAT IS THE SORT OF MESSAGE WE HEAR ALL THE TIME FROM THE INDUSTRY—WHAT ELSE IS THERE?

EB:

That's right. Take productivity, for example. In terms of personal organization, when an employee searches for a specific email he may waste a good part of an hour. Some time ago, I had many folders in my email inbox and I was scared of deleting any email that I was not sure I would ever need again. But the real charm of an intelligent archiving solution, in addition to providing compliance, is that I may delete any email without a second thought, because every time I delete a message I know it is securely stored in the archive and that I will find it again in seconds whenever I need it.

MK: SO THERE'S MUCH MORE VALUE THAN JUST FULFILLING COMPLIANCE NEEDS, WHICH I THINK IS VERY IMPORTANT, BECAUSE THERE'S A REAL BUSINESS VALUE IN MAKING PEOPLE MORE EFFICIENT. SO WHAT SOLUTIONS FOR EMAIL ARCHIVING ARE OUT THERE, AND HOW DO THEY DIFFER FROM EACH OTHER?

EB:

There are different kinds. First, there's the traditional mail archiving software that installs directly on an email server. It may look like a cheap and easy way to archive those emails, but you should keep in mind that these solutions are limited by the hardware and are not very scalable. Another approach is the email archiving appliance, which takes the load off the main servers. This is a good thing, but as those vendors are traditionally more active in the storage area, those solutions are complex and expensive. Plus, you will still hit scalability issues at some point. And let's not forget that software- and hardware-based solutions will both require additional expenditure for maintenance and upgrades, for example. So if you need unlimited capacity and scalability, why not make use of the cloud?

MK: WHAT ADVANTAGES DO YOU SEE IN ARCHIVING SOLUTIONS THAT MAKE USE OF THE CLOUD?

EB:

Well, with the traditional solutions we just discussed, you needed to go through several steps. You had to contract your IT partner, order the product, wait for delivery—and of course wait for your partner to install it, which is a very lengthy and complex process. On top of that, there are all the troubles I mentioned before.



Therefore, we set about creating a solution that solves all those problems, and we decided to go for a cloud-based approach. Astaro Mail Archiving makes archiving really easy. There's an automated set-up process providing step-by-step guidance which will get you up and running within 15 minutes of starting. You won't spend any time on recurring maintenance tasks such as daily monitoring, for example. Also, updates are supplied automatically, so you are always using the latest software release. Other big advantages include unlimited storage capacity, so scalability and hardware upgrades are no longer an issue—even if you want to archive emails for decades. The Google-like search helps you find messages in seconds and, together with seamless Outlook integration, people work in the way they are comfortable with, so additional training is not an issue.

MK: SO, ARCHIVING BUSINESS EMAILS FROM THE CLOUD—I THINK A LOT OF PEOPLE WON'T FEEL REALLY COMFORTABLE WITH THIS BECAUSE OF PRIVACY AND COMPLIANCE CONCERNS. HOW DO YOU SEE THIS TOPIC?

EB:

Well, true, but why is that? It's because people assume that cloud providers are not as safe as the service provider next door. And that, of course, is just because the guy next door gives them a feeling of trust, which is based on emotion as much as on facts. So how can people ensure that their provider won't breach privacy laws or, for example, forward company communication to other parties? It's a difficult issue based largely on trust. The repeated statement, "We need to make the cloud secure", is something that is true for every solution and every provider without exception.

MK: IF I WANTED TO ARCHIVE MY EMAILS IN THE CLOUD AND WAS LOOKING FOR A SOLUTION, AND IN FACT FOR A PROVIDER, WHAT SHOULD I PAY ATTENTION TO?

EB:

There are many cloud solutions out there. When thinking about selection criteria, the first thing that springs to mind may be compliance and local regulations, but there's more that should be considered. Whether a solution matches those local regulations can be found out very quickly. One important point is to make sure that a solution comes without hidden costs. For example, the storage space included in a subscription is often limited in other offerings, and if you need more later on, the cost will rise. Also, find out what happens to archives for mailboxes that are no longer needed—for example, when an employee leaves the company. If continuing to archive them costs you money, factor that into your budget for email archiving. Another example of what to pay attention to is usability—for your IT administrators as well as your employees. Check whether a solution offers quick and easy access to archived email, and whether it contains features that simplify the auditing tasks you will require.

Eric Bégoc |

PRODUCT MANAGER, ASTARO

Eric started with Astaro at the beginning of 2008. He has amassed extensive experience in the email security and archiving technology areas through his days at the security provider GROUP Technologies. Today, with more than 15 years of project and product experience in the IT business, Eric is responsible for the Astaro Mail Archiving product.

MK: SO YOU TALKED ABOUT SEVERAL REASONS FOR MOVING TO THE CLOUD, AND BEFORE THAT YOU ALSO TALKED ABOUT THE PROBLEM OF TRUST IN THE CLOUD AND THE PROBLEM OF TRUSTING THE PROVIDER. ARE THERE ANY SPECIFIC POINTS WE SHOULD LOOK AT TO ENSURE THAT IT'S NOT ONLY ABOUT TRUST, BUT THAT WE ALSO HAVE SOME ASSURANCE THAT WE REALLY CAN TRUST A SPECIFIC PROVIDER TO FULFIL COMPLIANCE REQUIREMENTS WHEN ARCHIVING MAIL FROM THE CLOUD?

EB:

Well, if you're looking at those providers, I think it's always a good idea to look at their processes and, of course, to trust the provider that is transparent and makes all of that information available, so that you can assure yourself of how their processes are set up and how they are followed, and build trust in those solutions. If you look at those providers, you will often find that transparency is really key to their service.

MK: SO PROVIDERS WHO ARE OPEN ABOUT WHAT THEY'RE DOING ARE MORE TRUSTWORTHY. I PERSONALLY THINK THAT THE CLOUD IS A VERY INTERESTING PLACE TO DO EMAIL ARCHIVING, AND I ALSO FULLY AGREE THAT IT'S ABOUT THE REAL BUSINESS VALUE. IT'S NOT ONLY COMPLIANCE; IT'S ABOUT HOW YOU CAN RETRIEVE ALL OF THIS INFORMATION AND KNOWLEDGE WITHIN EMAILS IN AN EFFICIENT WAY.

Martin Kuppinger - MODERATOR |

FOUNDER AND SENIOR PARTNER, KUPPINGER COLE + PARTNER

Martin established Kuppinger Cole, an independent analyst company, in 2004. As founder and senior partner he provides thought leadership on topics such as Identity and Access Management, Cloud Computing and IT Service Management. Martin is the author of more than 50 IT-related books, as well as being a widely-read columnist and author of technical articles and reviews in some of the most prestigious IT magazines in Germany, Austria and Switzerland. He is also a well-known speaker and moderator at seminars and congresses.


ANALYST FEATURE ■ SOCIAL TECHNOLOGY

Open innovation

With the rise and embrace of social networking and software, you could be forgiven for thinking that we're at the height of innovation. MATTHEW LEES (PATRICIA SEYBOLD GROUP) tells us how social technology can make "innovation" more than just a word.




"Concentrate!" my high school tennis coach would yell from behind the fence. Repeatedly. Every time I placed a bad shot or lost a point: "You're not concentrating!" So I'd concentrate harder, or at least try to. But on what? Mostly I'd just say the word "concentrate" in my head, over and over like a special mantra, as if saying it was the same as performing the act, hoping that would do the trick. I don't think it did. In reality, many factors, both physical and mental, were at play: my own (admittedly limited) athletic prowess, the (often superior) skills of my opponent and our relative levels of determination, in addition to our own abilities to concentrate. It seems a wasteful and distracting thing, then, for my coach to zero in on my supposed lack of concentration—better perhaps to have provided more specific guidance (such as: "Hit it to his backhand!"). I am reminded of this these days as the interest in—and hype around—innovation increases. While I'm a strong proponent of business innovation, the word is too often treated as if it has a magical property. "Here's what we'll do," an exec will say. "We'll create a cross-organizational team to innovate. We'll give the group a fashionable name, we'll give them a charter, and we'll send an email to tell everyone else what they're doing. Then we'll let them go and… well… they'll innovate." When I hear things like this, my mind conjures up a picture of my tennis coach in an office cubicle, yelling "Innovate!" to everyone who passes by, as if saying the word itself will make it so.

A NEW LANDSCAPE

The desire and need for innovation have been with us as long as human beings have walked the planet. And people have always found ways of innovating, using whatever skills, resources and determination they have at their disposal. (Many of us today may find it hard to imagine a world before computers, but as far as I know, the wheel was invented without a dedicated design team, CAD software or a wiki.) But desire or need alone isn't enough to spark new ideas and generate real change. Although innovation is an inherent part of who we are and, therefore, how we do business, we haven't always been good at it. With the need to stay relevant and ahead of the competition only increasing, organizations can't depend on innovation arising from serendipity or the brainpower of a small group. Rather, companies need to lay the groundwork so that innovation can occur on a regular, repeatable basis, involving as many minds as possible. That's the underlying principle of crowdsourcing—in the big picture, if enough ideas and perspectives are shared, the end result will be more innovative, more transformative and more feasible than anything an individual or small team could come up with. So, while the desire and need for innovation aren't new, what is new is the landscape that grew out of the internet boom of the past 10 to 15 years. In particular, there are three components that make innovation easier and more accessible than ever before.

1—CONNECTIVITY

People can now connect with each other (and with information of interest and importance to them) via computers and mobile devices 24 hours a day, from just about anywhere in the world.

2—SOFTWARE APPLICATIONS

Today's software systems bring an impressive array of features and capabilities that provide not only channels for communication and collaboration between and among employees, but also workflows and administrative and analytic tools that make these systems both productive and cost effective.

3—AN OPEN AND COLLABORATIVE MINDSET

Driven by both the business and consumer worlds, people are not only more easily able to share their opinions and ideas, they are more comfortable than ever before in doing so. (In fact, they increasingly expect to be able to do so.)

WHAT SOCIAL TECHNOLOGY BRINGS TO THE TABLE

How do connectivity, software and collaborative mindsets play out in the business world (and beyond)? There are five primary considerations:

1—SCALE

You need a crowd to crowdsource. Social technologies can connect people across the globe and support innovation efforts among dozens, hundreds, thousands or millions of people. Steve Adler, Director of Information Governance Solutions at IBM, sees all this as a game changer. He says: "Social media is probably one of the most important inventions in mankind's history. It is the first type of technology—and there will be others—that allows people to communicate with content directly, and without knowing each other.

"If you think about other technologies—fax, email, the World Wide Web—you have to know something about the other people (such as their fax number or email address) or you are broadcasting to everyone, hoping that some small portion is interested in what you're saying.

"But social networking provides people who don't know each other very well the opportunity to interact around things they're interested in, that they care about, and that they need to make decisions around. In that way it's an innovation accelerator. People can share ideas regardless of race, age, gender, ethnicity or geographical border. And that's really profound." And IBM should know, because for many years it has run programs to foster innovation among its nearly 400,000 employees. Adler goes on to say: "Within the enterprise, it's the same principle. At IBM, we're doing business in 170-plus countries around the world. We've been using social technologies for many years. We innovated 10 years ago with the World Jam, a crowdsourcing program that empowered innovation through technology."

2—GETTING THE WORD OUT

Greg Matthews is a social media director with global communications firm WCG. He recently completed a six-year stint with insurance giant Humana, most recently as Director of Consumer Innovation. Recognizing the importance of innovation as a core value, he says: "The big step for us at Humana was forming the Innovation Center, a department dedicated specifically to fostering innovation. But it wasn't supposed to be the only place for innovation. We wanted innovation to be a part of our DNA as a company." Many organizations have taken similar routes, creating special groups to spearhead innovation efforts. However, these groups may or may not look to leverage social technology. Humana did, particularly because, as Matthews adds: "The greatest ideas in the world are of no use if nobody hears about them. It was always harder when you couldn't communicate easily and effectively on what was going on within the Innovation Center. So number one for us



ASK THE EXPERT ■ ENTERPRISE SEARCH

One at a time

Enterprise search has long been confused with compliance and policies for archiving. Information governance strategies reset thinking in this area by establishing clear use cases for accessing information versus those that enable retention.

SIMON TAYLOR (COMMVAULT) takes ETM'S ALI KLAVER through the different faces of enterprise search.



AK: THE TOPICS OF COMPLIANCE AND E-DISCOVERY OR E-DISCLOSURE BOTH SPEAK TO EFFECTIVE ENTERPRISE SEARCH, BUT IT’S A MUCH BROADER SUBJECT. WHAT DO YOU SEE AS THE TYPICAL REQUIREMENTS FOR ENTERPRISE SEARCH, AND WHAT TYPE OF DATA IS MORE COMMONLY ACCESSED?

ST:

Firstly, let's break the question down into some of the reasons why we hear about enterprise search, and then we can go into the use cases, which probably make more sense. In the context of issues like compliance, we often hear about search as a means of auditing records that in turn have to be kept, which is of course the predominant driver around compliance: the forced retention of records and so on. For e-discovery, or e-disclosure as it's referred to in Europe, it's very much about finding evidence and unstructured information that needs to follow a particular process or court action. The general exercise of event-driven search is actually very similar in both of these situations in terms of effort, even though the process use cases are slightly different. So when we look across the enterprise, what sort of data is the focus of enterprise search? Well, it's really everything today. Historically, it has mostly been about more structured records. Very commonly now, particularly in e-discovery scenarios, we see searches of unstructured content including Word documents, Office documents, emails and assets not only on the file system but also records in repositories, including document management systems, covering all types of assets—for example, sound bites and images that have been stored. Medical records, including images, are a good example of this as they have to be kept for HIPAA/healthcare compliance, including audit search requirements. In general though, types of search do vary quite a bit and, although historically driven by siloed use cases, what we're seeing now is a lot more interest in developing enterprise search that covers the broader enterprise. This means all types of data, including structured, semi-structured and unstructured sources, and finding a way of bringing that together in a single approach. Good examples of use cases that fit into modern enterprise search include FOI requests, or subject access requests as they're often called in Europe, where people query organizations, particularly public bodies, asking for information in an electronic format. Then we have the idea of the HR department running an internal investigation and trying to find information relating to, perhaps, a dispute between two employees. That obviously requires some form of enterprise search. We've already talked about the example in the legal environment, but then of course there's corporate productivity and the idea that, at an end-user level, individuals need to find things more quickly and don't want to spend lots of time looking for an email, for example. Also, in a more general sense, corporates may be producing something like a corporate social responsibility report, which can often take up to nine months to complete. They'll often have to search local information stores across lots of different sites and geographies to check for things like green credentials or social reporting considerations per country, and bring all that together into a report they release with their financial results. So that's another driver. There are more specific things like looking for data privacy breaches, or the use of things like credit card information or social security or national insurance numbers in content where they shouldn't be, and exposing those as potential risks. In the financial world, compliance use cases (including conforming to legislation like Basel III, financial services Acts per country and, in the US, SEC Rule 17a-4 from the Securities and Exchange Commission) force the retention of broker-dealer-related transaction records and, more importantly, the audit of those records. In this context enterprise search is specifically about the audit and supervision requirements.

AK: SO YOU'RE SAYING THAT ENTERPRISE SEARCHES ARE A LOT BROADER THAN JUST DELVING INTO RECORDS. AND SOME ORGANIZATIONS OBVIOUSLY HAVE MORE REQUIREMENTS THAN OTHERS TO PERFORM ACTUAL SEARCHES. CAN YOU HIGHLIGHT THE MORE PROMINENT INDUSTRIES AND EXPLAIN WHAT THEY NEED AND WHY?

ST:

I think enterprise search does occur in some specific industries more than others. If you look at things like corporate social responsibility and the more internal types of enterprise search, the retail industry is a very good example of organizations that produce a lot of corporate social responsibility reporting and therefore have a lot of internal search needs, typically starting on a yearly basis. But then, of course, the institutions that tend to perform search on a regular basis are more often government organizations (US federal, state and local and, outside the US, European governments), all of which have some form of FOI or Data Protection Act requirement to search and disclose information based on subject access requests made by the public. Government institutions in particular also have the pressure of having to perform multiple searches for particular requests within a very limited amount of time. Deadlines can vary between 20 and 40 days, and when you're getting multiple requests per week it's quite a lot of effort to fulfil these requirements. There are also very specific industries like pharmaceuticals and healthcare that face mandated retention of medical records and compliance legislation, but at the same time have to audit activity requests against them and perform very specific searches for individual medical records. In the pharmaceutical industry, companies might find themselves in repeated litigation and therefore have to perform frequent searches against defined custodian-based criteria in order to manage legal holds or preservations of expected electronic evidence. The telecommunications industry is another good example in relation to legislation, including the Data Retention Directive in Europe, where operators are forced to retain electronic records for two years and then, on request, discover records for criminal investigations and the like.

AK: AT ETM WE'VE BEEN HEARING A LOT ABOUT THE MINING OF UNSTRUCTURED INFORMATION IN CONNECTION WITH SEARCH. HOW DOES THIS DIFFER FROM TRADITIONAL BUSINESS INTELLIGENCE, AND WHAT DO YOU SEE AS THE MAIN DRIVERS?

ST:

This is actually a very important question because a lot of people get quite confused about business intelligence and how it relates to unstructured information. Business intelligence started from the premise of bringing together structured data in a particular repository or a data warehouse for a specific reporting need, organized in a particular hierarchy so that you can easily report and mine into it. This is how business intelligence tools evolved.
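Taylor's contrast can be illustrated with a small, hypothetical sketch (invented data, not CommVault's technology): a BI-style query aggregates structured records along known fields, while enterprise search has to scan unstructured text for matches.

```python
# Hypothetical contrast between BI-style reporting over structured records
# and keyword search over unstructured content. All data is invented.
from collections import defaultdict

# Structured records, as a BI tool might find them in a data warehouse.
orders = [
    {"region": "EMEA", "amount": 1200},
    {"region": "EMEA", "amount": 800},
    {"region": "APAC", "amount": 500},
]

totals = defaultdict(int)
for order in orders:                      # BI: aggregate along a known hierarchy
    totals[order["region"]] += order["amount"]
print(dict(totals))                       # {'EMEA': 2000, 'APAC': 500}

# Unstructured content, as enterprise search must handle it.
documents = {
    "memo-014.txt": "Please retain all broker-dealer records per the audit request.",
    "note-201.txt": "Lunch menu for the quarterly offsite.",
}

query = "broker-dealer"
hits = [name for name, text in documents.items() if query.lower() in text.lower()]
print(hits)                               # ['memo-014.txt']
```

The point of the sketch is the difference in assumptions: the BI aggregation relies on records already organized into known fields, whereas the search has no schema to lean on and must examine the content itself.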



EVENTS AND FEATURES ■ 2010

Events and features 2010

ETM is focusing on:

BI, Security and Virtualization

ICCCN 2010 DATES: 2 – 5 August 2010 LOCATION: Zurich, Switzerland URL: http://icccn.org/icccn10

INTEROP MUMBAI DATES: 28 – 30 September 2010 LOCATION: Mumbai, India URL: www.interop.in/index/htm

TDWI WORLD CONFERENCE DATES: 15 – 20 August 2010 LOCATION: San Diego, CA URL: www.tdwi.org/Education/Conferences/index.aspx

GIL 2010: MIDDLE EAST DATES: 5 – 6 October 2010 LOCATION: Abu Dhabi, UAE URL: www.gil-global.com/middleeast

GIL 2010: INDIA DATE: 1 September 2010 LOCATION: Bangalore, India URL: www.gil-global.com/india

RSA CONFERENCE EUROPE DATES: 12 – 14 October 2010 LOCATION: London, UK URL: www.rsaconference.com/2010/europe/index.htm

GARTNER OUTSOURCING AND VENDOR MANAGEMENT SUMMIT DATES: 14 – 16 September 2010 LOCATION: Orlando, FL URL: www.gartner.com/technology/summits/na/outsourcing/index.jsp

GARTNER SYMPOSIUM/ITXPO ORLANDO 2010 DATES: 17 – 21 October 2010 LOCATION: Orlando, FL URL: www.gartner.com/technology/symposium/orlando/index.jsp

GARTNER SYMPOSIUM/ITXPO BRAZIL 2010 DATES: 14 – 16 September 2010 LOCATION: Sao Paulo, Brazil URL: www.gartner.com/technology/symposium/brazil/index.jsp

INTEROP NEW YORK DATES: 18 – 22 October 2010 LOCATION: New York, NY URL: www.interop.com/newyork

GARTNER SECURITY AND RISK MANAGEMENT SUMMIT DATES: 22 – 23 September 2010 LOCATION: London, UK URL: www.gartner.com/technology/summits/emea/security/index.jsp

BUSINESS ANALYSIS CONFERENCE EUROPE 2010 DATES: 27 – 29 September 2010 LOCATION: London, UK URL: www.irmuk.co.uk/ba2010

GIL 2010: ASIA PACIFIC DATE: 19 October 2010 LOCATION: Singapore URL: www.gil-global.com/asia

VIRTUALIZATION WORLD, DATACENTRE TECHNOLOGIES 2010 + SNW EUROPE DATES: 26 – 27 October 2010 LOCATION: Frankfurt, Germany URL: www.virtualizationworld.net

ENERGY DATA STORAGE 2010 DATES: 3 – 4 November 2010 LOCATION: London, UK URL: www.smi-online.co.uk/events/overview.asp?is=5&ref=3543

DATA MANAGEMENT AND INFORMATION QUALITY CONFERENCE + DATA WAREHOUSING AND BUSINESS INTELLIGENCE CONFERENCE EUROPE 2010 DATES: 3 – 5 November 2010 LOCATION: London, UK URL: www.irmuk.co.uk/dm2010

TDWI WORLD CONFERENCE – FALL 2010 DATES: 7 – 12 November 2010 LOCATION: Orlando, FL URL: http://tdwi.org/pages/education/conferences.aspx

GIL 2010: CHINA DATE: 9 November 2010 LOCATION: Shanghai, China URL: www.gil-global.com/china

GARTNER 23RD ANNUAL APPLICATION ARCHITECTURE, DEVELOPMENT AND INTEGRATION SUMMIT DATES: 15 – 17 November 2010 LOCATION: Los Angeles, CA URL: www.gartner.com/technology/summits/na/applications/index.jsp

INFOSECURITY RUSSIA 2010 DATES: 17 – 19 November 2010 LOCATION: Moscow, Russia URL: www.infosecurityrussia.ru/#googtrans/ru/en-EN

Interested in contributing? If you're an analyst, consultant or an independent and would like to contribute a vendor-neutral piece to future issues of ETM, please contact the managing editor, Ali Klaver: aklaver@enterpriseimi.com.


ETM Q3 ISSUE ■ 2010

To read the full version of the Q3 issue, please go to www.globalETM.com

