
25 years of lessons (and counting) on relevant data

on the human and technical side of quantitative decision making

Roberto Lofaro


Copyright © 2015 Roberto Lofaro. All rights reserved.
ISBN: 1496073592
ISBN-13: 978-1496073594


CONTENTS

A Caveat
1 Yesterday and Today
1.1. Business intelligence and the real world
1.2. Ends and means
1.3. "Data vs. knowledge" ownership
1.4. A systemic perspective
1.5. Managing (i.e. "governance" of) Business Intelligence needs
2 Tomorrow
2.1 Coping with the future
2.2 Externalization and its side-effects
2.3 Business intelligence "on demand"
2.4 Why the "Internet of Things" approach is relevant
2.5 Evolving your business intelligence infrastructure


A CAVEAT

This page should be accessible for free on Amazon: if you are looking for a book on the "technical" side of Business Intelligence or "Big Data", this is not for you.

If, instead, you are looking for something that, based on over 25 years of experience in Decision Support, Business Intelligence, and assorted paraphernalia, delivers an overview of past, present, and future issues and impacts of using data in your management choices, there are a few ideas that I would like to share, and they might be useful in your own activities.

Some of my consulting and marketing colleagues (LinkedIn classified my profile as in the top 10% of "sales professionals in management consulting" - a funny definition, but it makes sense, as in the late 1980s I was trained in the UK and coached in Italy on selling solutions to senior management) might say that "Business Intelligence" is obsolete, and that now we say "Analytics" or "X" (replace with a newer buzzword); but, seen from the business end, it is just yet another renaming of something that was evolving past its original design.

This book is within the "connecting-the-dots" series, and follows the structure of a book on the business side of BYOD and IoT that I published in 2014 (including the table of contents), as a "brainstorming case study" that will be used in future activities.

That book, along with others on organizational change and business social networking that I have published since 2013 on Amazon and Kindle, can be read online for free on Slideshare.net (Issuu.com for the online components, e.g. a business case about a fictional program on compliance)1.

Your comments and suggestions are welcome2.

1 http://www.robertolofaro.com/books
2 http://www.linkedin.com/in/robertolofaro



Your personal time is the scarcest resource, and in any case the mini-books within this series have been designed to be used as "business cases". Therefore, each online announcement also contained a "tag cloud" summarizing the content of the book. If what you see between quotation marks sounds Greek to you, look it up on Wikipedia.

To make access to the information easier, from now on each book will include, after the "caveat", a "tag cloud" produced using Wordle3.

Of course, you can claim that you read the book by just looking at the tag cloud, as Woody Allen reportedly joked about "speed-reading" "War and Peace" ("it is about Russia")…

3 http://www.wordle.net (it is a free service) - see an article describing how you can structure data to drive the design of the resulting tag cloud: http://www.slideshare.net/robertolofaro/cv-summary-20140215draft/
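The note above mentions structuring data to drive the design of the tag cloud. As a minimal sketch of that general idea (the terms and weights below are invented, and this is a generic technique, not the exact procedure used for this book's clouds): most frequency-based tag cloud tools size a term by how often it occurs, so repeating each term in proportion to a weight is a crude way to steer the result.

```python
# Toy sketch: turn a weighted list of terms into input for a frequency-based
# tag cloud tool. Terms and weights are invented for illustration only.
weights = {
    "business intelligence": 10,
    "relevant data": 8,
    "governance": 5,
    "Big Data": 5,
    "competence center": 3,
}

cloud_input = " ".join(
    # Glue multi-word terms with a placeholder character (check the convention
    # of the specific tool) so they are rendered as a single tag.
    term.replace(" ", "~")
    for term, weight in weights.items()
    for _ in range(weight)      # repeat the term "weight" times
)
print(cloud_input)
```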



1 YESTERDAY AND TODAY

1.1. Business intelligence and the real world

Why now? Because this is the right time within my publishing schedule, considering what I already published and what I am about to publish - and because "Big Data" and "mobile data" are on track to become the next big "number crunching hype".

To avoid quoting myself, I decided to start this chapter with a couple of quotes to share and discuss with you, adopting a non-technical approach. I apologize if here and there my technical experience in Business Intelligence will surface, but I will keep it to a minimum.

There are many definitions of Big Data, Business Intelligence, and Data Science. The focus of this book is a business-oriented discussion of the past, present, and future of what can be called "relevant data".

Most of the "future" part will be covered in the second half of the book, but in reality the "technical" side of those innovations is discussed in the first chapter; therefore, the second chapter will discuss not just the evolution in tools, but also alternative organizational structures and business models.



A short book (84 pages) from the O'Reilly Radar Team4, published in 2012, delivered a basic introduction that is still worth reading to gain a general understanding of the difference between "Big Data" and what your organization got used to since the 1990s through Business Intelligence.

So, what is Big Data? "Big data is data that exceeds the processing capacity of conventional database systems. The data is too big, moves too fast, or doesn't fit the strictures of your database architectures. To gain value from this data, you must choose an alternative way to process it."5.

I am skeptical of the idea of applying traditional Business Intelligence concepts (e.g. year-on-year comparisons) directly to "Big Data", as I remember how that was a mere illusion already over a dozen years ago, in an environment where individual employees were routinely reassigned. Imagine doing that on data sources that, being based on devices (e.g. home appliances, mobile phones) that receive a software update once every few months, could dramatically change their data content.

There will therefore probably be a need to define "licensing contracts" that associate "Big Data" sources with specific timeframes of:
• stability, i.e. for how long that data content is ensured to be consistent and comparable with previous releases
• expansion, i.e. scheduling the introduction of additional data
• contraction, i.e. the removal of data elements that are not needed anymore.

If you work in ICT, you probably recognized that this is what is usually done not with data, but with operating systems: Windows 10 promises to remove that need (it should update itself continuously), but it is still to be seen what this will imply - if that "automatic update" feature means that some services provided by Windows are removed, applications might cease to work, or assumptions made by applications (e.g. which information is contained within system logs) might change.

4 "Planning for Big Data", O'Reilly Radar Team, 2012, ISBN: 978-1-449-32967-9
5 Ibidem, page 9
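To make the "licensing contract" idea above more concrete, here is a minimal sketch; the field names, dates, and the single consistency check are my own illustration, not an existing standard and not something taken from the report quoted above.

```python
# Illustrative sketch of a "data contract" tying a Big Data source to explicit
# timeframes of stability, expansion, and contraction.
from dataclasses import dataclass
from datetime import date

@dataclass
class DataSourceContract:
    source: str
    stable_until: date        # stability: content guaranteed consistent and comparable until this date
    planned_additions: dict   # expansion: new data elements and when they appear
    planned_removals: dict    # contraction: elements scheduled to disappear

    def is_comparable(self, as_of: date) -> bool:
        """Can data collected on `as_of` still be compared with earlier releases?"""
        return as_of <= self.stable_until

# Hypothetical example: an appliance telemetry feed whose next firmware update changes the content.
contract = DataSourceContract(
    source="smart-appliance telemetry",
    stable_until=date(2016, 6, 30),
    planned_additions={"energy_profile": date(2016, 7, 1)},
    planned_removals={"legacy_error_code": date(2016, 7, 1)},
)
print(contract.is_comparable(date(2016, 3, 1)))   # True: within the agreed stability window
print(contract.is_comparable(date(2016, 9, 1)))   # False: a software update may have changed the content
```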


Therefore, what is needed is a cultural shift, with impacts also on recruitment, as will be discussed in a later section.

"The phenomenon of big data is closely tied to the emergence of data science, a discipline that combines math, programming and scientific instinct. Benefiting from big data means investing in teams with this skillset, and surrounding them with an organizational willingness to understand and use data for advantage. In his report, "Building Data Science Teams," D.J. Patil characterizes data scientists as having the following qualities:
• Technical expertise: the best data scientists typically have deep expertise in some scientific discipline.
• Curiosity: a desire to go beneath the surface and discover and distill a problem down into a very clear set of hypotheses that can be tested.
• Storytelling: the ability to use data to tell a story and to be able to communicate it effectively.
• Cleverness: the ability to look at a problem in different, creative ways.
The far-reaching nature of big data analytics projects can have uncomfortable aspects: data must be broken out of silos in order to be mined, and the organization must learn how to communicate and interpret the results of analysis. Those skills of storytelling and cleverness are the gateway factors that ultimately dictate whether the benefits of analytical labors are absorbed by an organization. The art and practice of visualizing data is becoming ever more important in bridging the human-computer gap to mediate analytical insight in a meaningful way."6

Since my first official IT project in 1986 (back then the "C" within ICT was considered as managed by another "sect" of technology experts), I got used to a basic concept: you need to learn the "current lingo". I had the chance of working directly with the business side after a few months into my first official job (I had prior experiences in data management and presentations, e.g. in politics and the Army, including for technical and non-technical training).

6 Ibidem, page 15


Therefore, I learned the hard way that business needs do not necessarily require the latest, trendiest technology: if you sell solutions, the means should play second fiddle to the ends.

In the 1980s, the ancestor of Business Intelligence was a series of technologies, ranging from primitive spreadsheets, to number-crunching and structuring systems called Decision Support Systems (DSS henceforth), to their presentation sibling called EIS (Executive Information Systems), and a variety of reporting tools.

What I observed since 1988 was what my French colleagues would later call "une usine à gaz" (literally, a "gas factory": an over-complicated contraption) - technology was for technologists, and if business needed to extract intelligence from data, it often had to wait for a solution to be assembled by using a seemingly endless list of tools and tricks.

The tools I worked on (mainly DSS, but also EIS) had an advantage: once you had defined your needs (a model of reality, a set of reports), the system would keep you up to date on the evolution of data fitting those models, and business users could add further analyses without IT support - it was just a matter of knowing basic algebra and a few concepts, as what was really needed was their understanding of their own business.

By the early 1990s, with the expanded use of technologies collecting and structuring data (e.g. relational databases), the lowering of storage costs, and increased ways to access data, those 1980s assumptions started spawning new "shortcuts" on the business side.

So, why wait for IT to deliver something to support your decision-making activities, when you can use a spreadsheet (e.g. Lotus 1-2-3 was popular with my Finance, Control, and Marketing customers)?

The trouble was that, after a while, nobody understood what that spreadsheet was doing with the data - and nobody could ensure that the data were actually correct and refreshed whenever new data became available; moreover, spreadsheets were often used by business users as "shortcuts to the raw data", and a mistake made in a first spreadsheet might influence decisions made by others, as spreadsheets were often reused to "seed" new spreadsheets: call it the epidemiology of number crunching.


With the usual pendulum that you see in anything involving humans and decisions, the "free-for-all" spreadsheet was constrained within new tools that promised to ensure that everybody would use the same data.

Technology-wise, this implied adding infrastructure, software licenses, and more people working on converting operational data (and external information) into something that business users could actually work on.

It was an add-on to what your systems delivered, and both in the late 1980s and late 1990s, with different tools, the problem was always the same: data was neither what was needed, nor available when needed. Anyway, there was an improvement, as gradually data were restructured to support different business uses - a "data warehouse" here, one or more "datamarts" there (for smaller business audiences or specific processes).

While in the 1970s and early 1980s reporting was something done by experts, and in the late 1980s and early 1990s tools often involved having an assistant doing the "number crunching" (hiding the cost of changes), Business Intelligence tools that "empowered the end user" implied a significant investment by all those involved.

Personally, I remember how both presentations at industry events and training classes for Business Intelligence tools were packed with business users and middle-level managers, often stating that their real aim was to get rid of their dependency on IT: DSS tools were more powerful but too complex, and users wanted something close to a "corporate Excel".

By the early 2000s, the real game across the publishers of Business Intelligence software tools was "consolidation", with a few larger players buying up smaller players (sometimes absorbing them, sometimes just putting them into their own "product portfolio" by replicating their features). Already in 2010, according to "EnterpriseAppsToday", Gartner reported that just six vendors (IBM, Microsoft, MicroStrategy, Oracle, SAP, SAS) represented 72% of the enterprise Business Intelligence market7.

7 "The Buyer's Guide to Business Intelligence Applications", 2011, http://www.enterpriseappstoday.com


1.2. Ends and means

By choice, I stopped working hands-on with business intelligence tools in the mid-2000s (thereafter I worked in that domain only as PM, BA, or on "sell & control activities", including auditing projects and services or managing budgets from the supplier side), but I kept following the industry.

Obviously, there are new elements, e.g. using the Internet and mobile phones instead of desktop computers, or the forthcoming promise of the Internet of Things to have everything eventually "software defined", but the real change was elsewhere.

There is a curious element in business choices: the larger the investment you need to get to a certain point, the less the incentive to admit that it isn't exactly what you needed.

In the 1980s and 1990s, the main idea was to have temporary experts involved in building what was needed to enable business users to do their own analysis (I worked both on the customers' and the suppliers' side).

Over the years, tools added many more "bells and whistles", and those delivering projects were often asked to prepare something that matched what the business users had seen in presentations delivered by software publishers (or consultancies) - even from other products, and not necessarily because those features were business-critical: they were "trendy".

I called it "featuritis": the more, the merrier, usefulness notwithstanding - akin to buying a Ferrari and asking to add a hook so that you can use it to tow your caravan (trailer, for my American readers).

While helping to improve sales and qualification processes for a Business Intelligence publisher, I was told by new customers that we had won a major deal not because we were the largest, or the ones delivering the tool with the most impressive list of features, but because we understood their business better, and were able to deliver what they needed.

Way too often Business Intelligence software vendors forget that their success is measured by the business users' willingness to use their tools.


Over the last decade, suppliers' business practices that were unusual but smart one or two decades ago became commonplace, such as giving free licenses to universities and consultants or influencers, or developing a free "reader" to allow the general public to use it, as done by some newspapers and banks.

If new staff members have been "brainwashed" at the university, when they had more time to attend presentations and play with software toys, and less responsibility to deliver something right now, you can expect that they had time to spend on investigating "bells and whistles". When you are aiming for "TV quality presentations", you would need people who actually do that day after day (or have access to those people).

Since the late 1980s, I have been on each one of the sides involved - customers, software suppliers, system integrators, management consultants. Therefore, I can safely say that this development has been a "joint effort", as creating a distorted market doesn't imply just somebody "pushing", but also somebody "pulling".

What is the key risk? The "wizardry" required to produce appealing presentations usually also requires time that isn't available to those who possess the business knowledge of your data.

Since the late 1980s I routinely saw projects turning into services, i.e. there was never a complete delivery of a "product" (meaning: data, tool, and business-specific use of both, as a platform that the customer can independently adapt to business changes), but a constant work-in-progress. This was actually the original purpose of Business Intelligence tools, as noted above, but assuming that, once something has been delivered, business users take control, with support from a competence center (I helped create and train a few in various business cultures across Europe).

Constant updates to PC technologies imply continuous re-training and re-investment to… keep doing what you have been doing: let's see if Windows 10 will change that, by adapting to any new computing device without requiring a re-training of all those involved - something that was more or less what IBM delivered in the 1980s on mainframes.



Since the late 1990s, I often found that the "business cycle" adopted to fulfill Business Intelligence needs is the one shown in the following image.

So, you have somebody with a need (a business user, not necessarily a manager), who usually has to rely on another business user (or an assistant) who has better skills on the specific tool - or, at least, is able to "convert" those needs into something that can be understood by technical experts, e.g. your own internal competence center or IT staff.

Then, your competence center will see what is needed to turn that into reality, probably by involving (mainly external) experts, who will have to balance needs and budgets.

As shown within the picture, the risk is that business users (in order to reduce costs and bypass authorizations and delays) will build their own informal competence center, staffed either with consultants or with newly hired assistants (who followed product training elsewhere), i.e. an informal competence center staffed with people who are not necessarily kept informed of changes within your corporate objectives or information architecture8.

Most often, somebody who has just relative expertise but no business-domain experience will build something, until the business user needs to do something different, and experts (internal or external) will be called in.

8 On the use of experts and competence centers, see also http://www.robertolofaro.com/SYNSPEC


I actually delivered training on business intelligence tools, or meetings and presentations, on half a dozen tools, in three languages, in a few countries: and I saw that the issue isn't just a matter of business culture, technology, or company characteristics - it is a matter of missed opportunities, as a misguided or mismanaged first experience can increase the resistance to change (a.k.a. skepticism) toward future initiatives.

What really changed since the late 1980s? As company data are nowadays stored in various forms of structured repositories, the time needed to alter and deliver a new version of a "Business Intelligence product" (as described above) is shorter, as you can skip the "fishing for data" step that I was used to in the late 1980s to early 1990s, before "relational databases" became common - your information architecture is already documented.

Along with the technical components (databases), management approaches also evolved, including on requirements management and design, converging on a few shared "guidelines" or "frameworks" that range from the identification of the business and technological architecture (e.g. TOGAF 9 and FEAC), to project and program management (e.g. PRINCE2, PMI/PMBOK, MSP), to service strategy and management (e.g. ITIL), with their associated approach to creating "virtual assets" that are then managed across a lifecycle.

It is a primitive form of Lego™ bricks that is getting increasingly more structured: you already have the components in place; you just have to alter the configuration to obtain new "knowledge products".

Reducing time does not imply reducing costs (the other potential benefit of the initial proposals of Business Intelligence), if each activity involves more people, more resources, more infrastructure - only used faster.

But a more significant shift over the last couple of decades has been the identification of "data vs. knowledge ownership", as in the past usually the former belonged to IT, the latter to business users.


1.3. "Data vs. knowledge" ownership

"Business Intelligence" isn't just about "business" and "intelligence" - it is also about a distribution of roles and responsibilities. The only way to understand where you are, evolve, and minimize the involvement of external resources is by having those who understand your own environment and are committed to your organization act as "collectors and disseminators of experience".

Business Intelligence has three elements:
• business-domain knowledge
• organizational development history knowledge
• technological knowledge.

The first element requires day-by-day experience within the business domain: when selling methodologies or working on organizational development and change, I always discouraged creating organizational structures (e.g. a competence center) staffed exclusively with experts picked from a business domain, and then using them as "the" business reference, assuming that they will magically retain their business domain expertise.

The second element is trickier: who is the provider of your corporate environment development history knowledge? Frankly, in most organizations, including larger ones, usually there is nobody tasked with that role, i.e. no "institutional memory" - you discover who took over that informal role only when they retire, or when you have a round of "rightsizing" removing older hands to lower costs. There might be a "knowledge management" function, or a "business architecture repository", but usually at best they deliver a current picture of what is available, not knowledge about the decision-making processes that produced those results.

Business Intelligence assumes that you have data that can be used to support management processes - but, again, having a "database manager" doesn't imply that (s)he has current, relevant knowledge of what is represented by the data, as that would require access to current "business domain knowledge", as defined above.



If you have a large organization, probably there will be enough demand for specialized skills on a specific tool that you may find it tempting to build your own A-to-Z "in house" team of product experts. It might make more sense to do a small amount of "triaging", both on resources and on their training.

The concept is relatively simple: the closer the skill is to your daily business needs, the more often it will be used. It is up to you to identify the level of skills that will be routinely used "in house", and therefore become almost "second nature" (a "Pavlovian reflex") - if your expert is unable to roughly understand which technical area of the product is involved (and use documentation just to confirm details), and has to search within manuals, probably those skills aren't used often enough; the same applies to experts who claim to be able to avoid using any documentation at all, as they are inclined to just replicate past solutions.

If you have somebody who is an expert in adding the appropriate background color to your charts, probably (s)he will be asked for help on a daily basis - but choosing the wrong color will have less impact than, say, structuring data in a way that supports just one business aim.

If you temporarily add external experts to your team to design the architecture of your solution, those skills are to be used in a way that is relevant to your environment - both on the business and the data side. You need to ensure that your own people know who should be contacted and through which channels. Moreover, you will have to allocate budget to keep those skills relevant - otherwise, they will just keep recycling what they did to fulfill past needs, instead of doing what is needed to satisfy current (and future) ones.

For the purposes of this discussion, "data" are to be considered raw elements that are provided by somebody who isn't the final user of the "Business Intelligence product", while knowledge is the interaction between data and the specific business needs, after the business recipient has converted data into information (i.e. structured it according to needs).


What really changed with the widespread use of the Internet is the ability to interconnect your own data with data from third parties. In the 1980s, the investment required to achieve that goal delivered less flexible solutions (i.e. data exchange service providers defined the rules).

Over the last two decades, we moved from repurposing existing data (e.g. accounting or financial control data used to monitor sales and marketing), to expanding the portfolio of data and restructuring processes. By using standard databases, and standard ways to share data via the Web, already in the late 1990s to early 2000s some of my customers in the UK were able to deliver to banks services (e.g. risk management, ALM) that would have simply been impossible a decade before for a company of their size (a handful of banking experts and a few developers offshore).

This decade saw an explosion in the quantity of data, but technological improvements and innovations were delivered faster than our organizational and cultural ability to absorb those changes. We expanded the quantity of data, but often failed to convert it into information, and usually altered the data that we used before it could be converted into knowledge: processing larger quantities of data faster expands your payroll and infrastructure, but doesn't necessarily increase the quality of your choices.

A couple of decades ago I was asked by a multinational customer to pick up a study on logistics improvement that a then-Big 8 consultancy had delivered, and convert it into a Decision Support model to enable better choices on where to locate warehouses in order to optimize costs. It took a while, and mixed technology and business experience (from other projects I had worked on) from myself and my team.

We then presented the results, and did a simulation on a question, then cross-checking with a more senior manager who had had a lifetime of experience in the field: we gave him the parameters that our model used, and he guessed the impact on logistics costs - faster than our model, and only marginally less precise.


1.4. A systemic perspective

It isn't just more data or more advanced tools that matter in decision making: what is needed is a systemic approach (who, what, when, where, not just how).

If you already read my previous book on BYOD and IoT9, this section will be familiar to you - both in title and in concepts. But we are now moving from a "technical" issue and its impacts (devices that produce and transfer data), to a "business" choice: what do you do with data? Which data should be considered? Etc.

The title of this book refers to "relevant data", and the subject was discussed across previous sections, but from an historical perspective: Business Intelligence started as an "add-on" repurposing existing data that had been assembled for different (usually operational - e.g. accounting, production scheduling, material resource planning) purposes.

Bridging from the past toward the future also implies understanding where you are now, as what is to be considered "systemic" should be a business choice, not a mere cross-check against a "standard": I like reading business cookbooks, both in ICT and in business at large, and history books or biographies - but repeating the past (or what worked elsewhere) isn't a recipe for future success.

Anyway, there are a few elements that are part of the business lingo of the early XXI century, and it is worth sharing a few ideas here.

The first concept is that the real world isn't an engineering lab: what is systemic cannot be static, and under the best conditions whatever system you design should have the ability to cope with uncertainty. You have to evolve, your business has to evolve, and the tools that you use in your business (not just software tools) have to evolve accordingly.

9 Read online at http://www.robertolofaro.com/BYODIOT


If you followed training on Lean Six Sigma, World Class Manufacturing, or other "continuous improvement" approaches, you know that the definition of "systemic" changed since the Internet made continuous and seamless data exchanges affordable to everyone. The Internet, with its ability to constantly (and cheaply) reconfigure your data sources, is, as discussed before, an enabling factor.

While preparing this book, I considered it useful to expand my previous scribblings on "systemic thinking", notably for those working outside engineering domains - but I then found a book published in 2014 that can be used as a useful introduction also for general business readers10. While I suggest that you keep an approach to stakeholder "management" closer to that suggested by OGC's MSP than that advocated by the book, it is a book worth adding to your business library. Also, have a look at the movie "Pentagon Wars", on the case history of a product development that turned into a massive case of "scope creep".

In the future, there will be other contributing factors (e.g. the devices that I discussed above, or further integration of your ICT and operations with State authorities for monitoring purposes - call it a "SOX 3.0"), but the key elements will be the same: everything will have to adjust to a different mix of resources, sources of information, and funding (as your Business Intelligence analysis and the associated technical paraphernalia might move from being a cost, to becoming an additional revenue stream).

Within the BYOD book I proposed the following list of components to consider within your "systemic approach":
• devices
• users
• services
• channels
• information.

10 Hester and Adams, "Systemic Thinking", Springer, 2014, ISBN 978-3-319-07629-4


Most discussions on how "Big Data" will affect business consider just coping with data from multiple sources, and making the most out of it, as if any business were to be just a consumer of "Big Data". In reality, the "Big Data" concept implies that any actor, any "stakeholder", will potentially be both a consumer and a producer of information: and this is the area where adopting "systemic thinking" might improve your chances of success.

Why? Because being "systemic" also implies being "selective" (you have to define your "boundaries", a.k.a. "scope"), and, as discussed above, when the Internet lowers the marginal cost of adding another data source almost to zero, it is way too tempting to mistake quantity for quality.

As a business, what you decide not to share is also a business message, and influences both the "Big Data" that others are willing to share with you, and, more importantly, the feedback that will influence your stakeholders' (and your own) business choices.

The OGC framework for program management, MSP, in its latest versions deals at length with the concept of stakeholder management (albeit I prefer "governance", as management often implies a hierarchical relationship): it is probably worth using it while defining your own "systemic approach" to data management. If you are just receiving data, it is relatively easy to map out the "external contributors" to your "Big Data collection", their motivation, and their capabilities (i.e. what they can deliver, when, and through which channels).

"Continuous improvement" assumes a two-way exchange of data, information (structured data), and knowledge (contextualized information). It might well be that in the future most of our "data-intensive" activities will be delegated to robots, but for the time being it will still be humans that will have to understand them and how they affect business initiatives.

Incidentally: as I will discuss in a forthcoming book on robots and humans, I consider both the Star Wars types and software able to independently make limited choices (based on data, information, knowledge) as "robots", getting back to the etymology of the word "robot" (worker).



What does this all imply for your "systemic perspective"? A short digression should make it clearer.

If you go online on social networks, you probably read once in a while of the forthcoming demise of Google's attempt to compete with Facebook, called Google Plus (or G+). G+ is built around the concept of "circles", a derivative of the "circles of influence" in "network analysis".

In English: let's say that you have nine people, named from A to I. Let's say that A and B and C are always talking to each other; ditto D, E, and F; A talks often also with D, E, and F; G talks occasionally with B and F; H and I talk to nobody.

A, B, and C are a "circle"; D, E, and F are another circle; G is a kind of "moving gate" between the two circles (or an overlapping circle with more tenuous links), while A is a kind of "bridge" between the two circles (de facto another circle); for the time being, let's ignore the two "loners".

If B wants to influence the other circle, he might consider "using" A (e.g. gossip), but probably A is unwilling to jeopardize the existing relationships by acting as a "transport vehicle" for gossip. Instead, G might be interested, to have something to remind both circles of his/her existence.

If you were to expand your Business Intelligence to be able to assess what the two circles are doing, but had to limit your costs, would you collect data from all nine of them, or would you collect data just from A, the most obvious choice? Probably, adding G as well could let you get information that, for the sake of the relationship, A would not deliver.

If you are just happy with collecting data from your own systems, the above-mentioned discussion is really irrelevant to you; but if you want to see where things are heading before A decides to share that information, maybe G is a useful source to add, obviously remembering what I wrote about "stakeholders" before.
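The nine-person example can be sketched in a few lines of code; the relationships are the ones described above, while the "who reaches outside their own circle" count is merely a toy heuristic of mine, not how network-analysis tools actually rank people.

```python
# Toy sketch of the A-to-I example: who would be a useful data source?
circle_1 = {"A", "B", "C"}
circle_2 = {"D", "E", "F"}
talks_to = {
    ("A", "B"), ("A", "C"), ("B", "C"),   # first circle
    ("D", "E"), ("D", "F"), ("E", "F"),   # second circle
    ("A", "D"), ("A", "E"), ("A", "F"),   # A talks often with the other circle
    ("G", "B"), ("G", "F"),               # G occasionally talks with both circles
    # H and I talk to nobody
}

def contacts(person):
    """Everyone this person talks to, treating conversations as two-way."""
    return {b if a == person else a for a, b in talks_to if person in (a, b)}

def home_circle(person):
    return circle_1 if person in circle_1 else circle_2 if person in circle_2 else set()

for person in "ABCDEFGHI":
    reach = contacts(person)
    outside = reach - home_circle(person)
    print(f"{person}: {len(reach)} contacts, {len(outside)} outside their own circle")

# A (3 contacts beyond its own circle) and G (both of its contacts are in circles
# it does not belong to) stand out as the two candidate sources discussed above,
# while H and I would add nothing.
```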



A systemic perspective in data management implies working with a moving target, and ICT staff should not be involved just in setting up, but also in evolving the Business Intelligence framework, exactly as you are probably doing with CRM. You have to deal with general principles, broad categories, and a two-way communication channel between your ICT staff and Business Intelligence users, creating a kind of "ideas factory"11.

Each corporate culture will have its own way of managing "continuous communication and training" activities, but a checklist of the basic elements that it should include might be useful:
• a communication channel to inform users of planned system evolutions that could affect their "business intelligence products"
• an alert system to inform users of critical issues (e.g. on data quality)
• a routine "knowledge update" on what could be of interest to users
• a communication channel enabling users either to share information with ICT staff (to be then relayed to other users), or to receive answers.

With "Big Data", you are actually introducing external influences. The higher the volume of data, and the faster it is delivered, the greater the risk that no human will be able to understand how it is influencing your decisions - until it is too late, as shown often in the past.

Leaving technology aside, probably a good book on the mismanagement of "information leads" (i.e. the apparent systematization of data into something that seems to make sense but is really misleading) is "The March of Folly"12.

To move from theory to business practice with a simple example: imagine that you were the CFO of a US company in the mid-2000s, and used information from rating agencies to define your investment choices and optimize your cash-flow.

11 See e.g. for BYOD http://www.cio.com/slideshow/detail/103232#slide1

12 Tuchman, "The March of Folly. From Troy to Vietnam", ISBN 0-394-52777-1


"Inflated ratings" could have negatively affected your choices, but while everybody was affected (i.e. the portfolio composition across your industry was homogeneous), the net result was neutral: you were all misled, and you were all assuming similar levels of risk (which doesn't imply that it was healthy, but implies that you fared as well as your peers). The issue arose if you ventured beyond the "average", e.g. by taking a more aggressive stance and absorbing more risks: you were already off the mark vs. where you assumed you were, and, ignoring that, you assumed unbearable risks.

Some companies may choose to simply keep "Big Data" outside decision making, due to its "uncontrolled" nature: but that would imply missing opportunities that others (probably your competitors) would seize. The point is really to develop your own "systemic" approach by identifying your own "confidence" and "network of trusted influencers".

Imagine that you are a retail company, and that you have a loyalty program: if you add an incentive for your customers to share their mobile data with you while they are on your premises, that is a "reliable" source of data that can help you, by associating what they bought with their movements across the store, to optimize your assortment planning across multiple categories, and to restructure the store accordingly to influence their purchasing patterns (the old "Hidden Persuaders" trick reported by Vance Packard already in the 1950s and then the 1960s, now with XXI century technology). In this context, a mobile phone is akin to applying an RFID tag to each customer who enters your premises.

If you further extend that authorization to sending offers by location (implying: continuous tracking), you can become entitled to see also where they go, or how long they stay in other shops: and that is valuable information for your decision making, as you might consider extending or reducing your range of products and services.

Compare this with another approach to "Big Data": getting all the traffic and location information of mobile users who match a certain demographic profile. You would get huge volumes of data, but you might be misled by, say, a sudden change of travel patterns due to road works, or even just a string of free concerts.
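As a minimal sketch of the association described in the retail example above - joining loyalty-card purchases with the location pings of customers who opted in - here is an illustration in which the zones, customers, and figures are all invented:

```python
# Toy data: location pings (customer, store zone, minutes spent) from opted-in customers...
from collections import defaultdict

pings = [
    ("cust1", "bakery", 6), ("cust1", "wine", 9),
    ("cust2", "wine", 4),   ("cust2", "produce", 7),
    ("cust3", "bakery", 2), ("cust3", "produce", 5),
]
# ...and loyalty-card purchases, mapped to the zone where the purchased item sits.
purchases = [
    ("cust1", "wine"),
    ("cust2", "produce"),
    ("cust3", "produce"),
]

dwell = defaultdict(float)
bought = defaultdict(int)
for _, zone, minutes in pings:
    dwell[zone] += minutes
for _, zone in purchases:
    bought[zone] += 1

for zone in sorted(dwell):
    print(f"{zone}: {dwell[zone]:.0f} minutes of dwell time, {bought[zone]} purchases")
# A zone with long dwell times and few purchases (here: bakery) is a candidate
# for re-assortment, or for a different position within the store layout.
```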



1.5. Managing (i.e. "governance" of) Business Intelligence needs

If you think according to a systemic approach, you start considering data in context, i.e. along with the other sources of information that need to be there for the data to make sense (more about this in the next chapter) - nothing more, and nothing less.

Once you have the "data boundaries" of your Business Intelligence defined not by technology, but by your "systemic approach", what's the next step? Move from the traditional "command and control" to a "governance" mode.

If it makes sense for your business to integrate, within your Business Intelligence efforts, data generated outside your organization, the real benefit comes when you let your business users select and connect to data that they assume to be relevant, if and when needed. How do you move from traditional "in house" data to such an approach without getting misleading information, or wasting budget resources?

The approach that I used since the 1980s to deploy new technologies and processes that involved both business and IT was to help my customers identify a potential "champion" who had credibility within the target audience, to be supported by IT staff and consultants in delivering a first project. This approach can also be used for each new "round" of technological improvement, sometimes by selecting other "champions", or just by identifying a pilot project that fulfills a business purpose, can be delivered in a relatively short timeframe, and whose business users can be involved in communicating progress.

I observed the best results when those "champions" were then involved in supporting the creation of a competence center, becoming a kind of communication channel with the business - but while still being involved in their own business operations. Less successful was instead the conversion of the "champion" team into a competence center, as what made them useful in the first place (being on the business side) was lost when they became yet another support office.



Even less successful was the "fire and forget" approach to pilot projects, i.e. having a "champion" do a project and then be completely ignored by the new competence center built upon their experiences. In the latter case, sometimes business users who had been part of the pilot became an informal and competing competence center.

By starting with a "champion" project, and by keeping on communicating progress throughout that project, you can actually have a first "success story" in the use of "Big Data" within a business domain.

You can manage tools, but integrating into your Business Intelligence access to external data, selected by business users for its relevance to their own decision-making processes, requires something different - "governance". Using "Big Data" in business requires extending the "champion" approach, considering each business-side project that integrates a new source as a new pilot project.

Any new source of "Big Data" will carry along some bias in the data that has been selected, usually associated with the purposes of those generating the data source as a side-effect of other business activities (e.g. selling consumer electronics). In the future, it will probably become more common to plan new products and services considering, while still on the drawing board, also their potential to generate "Big Data" (e.g. by adding an inexpensive chip within the product) as a new revenue stream; some industry experts are already reviewing "Big Data" marketplaces, where you can buy the right to access specific data sources.

I think that it would make sense, in order to enhance the "governance" of your "business intelligence products" while adding "Big Data", to expand your data management teams with people who come from industries used to managing massive amounts of experimental data (e.g. the physical sciences), as they are routine users of more advanced statistical analysis approaches and tools to validate the quality of data.

Within the first section of this chapter, I quoted extensively a report on "who" should be part of your "Data Science" team, helping the business to "filter" and select what e.g. Big Data marketplaces will have to offer.



You can have a look online at what is trendy, in terms of human resources in Data Science, in 201513. Obviously, while only business users can assess what is relevant to them, a "data governance" expansion of your "Business Intelligence and Data Warehousing" might require altering your staff training and analysis roles.

Some "Big Data" sources might be negotiated at the corporate level for long-term use; others might be generated by reliable sources; but most sources will be useful only if used when needed.

Whenever a process is considered at a systemic level, each component activity has to be assessed against at least the following elements:
• degrees of freedom, i.e. your level of flexibility vs. the environment where the process is executed
• actors involved, i.e. who contributes to the execution of the process
• stakeholders impacted, i.e. who could influence or be influenced by the process
• etc.14

This process identifies a set of risks that must be:
• assessed: as you can manage only what you know, you need to know it
• managed: as what you know to be potentially critical has to be managed, to maximize the potential benefits while minimizing the risks
• avoided: as in some cases a risk is unacceptable, and it is better to circumvent it, whatever the cost
• accepted: whenever the cost of avoiding or managing the risk is excessive compared with the potential business impacts.

Obviously, Business Intelligence adds a further set of requirements.

13 http://tinyurl.com/RelevantData-DataScience2015
14 Suggested reading: "Managing Successful Programmes – 2011 edition", TSO, ISBN 978-0-11-331327-3, Part 1 and the introductory and ending sections of each chapter within Part 2


The third set of elements, specific to the expansion of Business Intelligence to external sources (not just "Big Data"), is:
• vendor evaluation, i.e. did you assess not just the data or data source, on both quantitative and qualitative elements, but also the business viability of the supplier, for the timeframe of use that you expect?
• intensity of use, i.e. is the source to be used once, often, or continuously?
• firewalling strategy, i.e. would you still be able to identify which part of which decision is based on which data?
• risk scoring, i.e. as per the risk framework discussed above, did you evaluate the costs and impacts of the associated risks?

There is a further element that has to be considered as shared with BYOD, but it is even more critical when you consider that you are both a consumer and a creator of "Big Data". Whatever your current assessment of the risks, you will have to monitor them as you would any other risk - with an additional twist: you have to include a degree of "delegation" to business users, as they will be the only ones able to understand if, business-wise, something is going off-track within the data.

I wrote in the previous section about the "collateral damage" generated by misleading ratings: but ratings were limited in volume, and delivered with a limited frequency. Now, imagine if you were to restructure your supply chain under the influence of misleading, continuous, massive streams of data provided by sources that are eventually proved to be unreliable (e.g. a data stream on traffic delays on key routes that is later revealed to have been converted from a "continuous stream" into a "statistical forecast of trends").

There are plenty of methodologies around describing how to measure and control or "manage" risk, e.g. due to compliance issues related to SOX, or, in the financial industry, Basel III. Integrating "Big Data" in your business choices might affect anything from marketing to quality.
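As a sketch of how the four elements above could travel together as a single checklist entry for each external source, here is a minimal illustration; the field names, thresholds, and acceptance rule are my own, not part of any of the frameworks mentioned in this chapter.

```python
# Toy checklist entry for evaluating an external ("Big Data") source.
from dataclasses import dataclass

@dataclass
class ExternalSourceAssessment:
    source_name: str
    vendor_viable_for_timeframe: bool   # vendor evaluation: is the supplier viable for the expected period of use?
    intensity_of_use: str               # "once", "often", or "continuous"
    decisions_traceable: bool           # firewalling strategy: can each decision be traced back to this source?
    risk_score: int                     # 1 (negligible) .. 5 (unacceptable), from your own risk framework

    def accept(self) -> bool:
        # Toy acceptance rule: traceability and a viable vendor are mandatory,
        # and continuously used feeds are held to a stricter risk threshold.
        threshold = 2 if self.intensity_of_use == "continuous" else 3
        return (self.vendor_viable_for_timeframe
                and self.decisions_traceable
                and self.risk_score <= threshold)

# Hypothetical example: the continuous traffic-delay feed mentioned in the text.
feed = ExternalSourceAssessment(
    source_name="traffic delays on key routes",
    vendor_viable_for_timeframe=True,
    intensity_of_use="continuous",
    decisions_traceable=False,   # nobody can say which routing decision used which data point
    risk_score=3,
)
print(feed.accept())   # False: fails traceability and the stricter "continuous" threshold
```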


Frankly, I do not remember a single case of "first use of external data" that wasn't then reused internally (also to reduce the number of points where a potential failure might occur). Therefore, the most difficult task of any governance effort will not be "filtering" "Big Data" sources, but managing their propagation.

A practical example: if I use the data from my customers as described in the retail example in the previous section, eventually it becomes tempting for business users to simplify by attaching a "scoring" element. Eventually, some business users will move to other parts of your business: but, unless your monitoring and governance side has "stored" the knowledge underlying that "scoring" element, you risk that it will be treated as a fact, even if the logic of the "Big Data" source were to change.


2 TOMORROW

2.1 Coping with the future

In the first chapter, I hinted at the need for people with expanded quantitative skills, but that is a mere expansion of the current mix of resources. Actually, "quantitative skills" is an understatement: we need a completely different approach across the whole HR lifecycle, as shown by an article published in October 2014 by the Harvard Business Review, forecasting the demand for management roles in 202215.

"Quantitative" implies "quantifiable", while the future of Business Intelligence includes expanding a "discovery" and "experimentation" element that is currently common only in Research and Development (R&D), not in management and control or reporting.

An unintended side-effect of the increase in the number of data sources provided by the Internet and social networking (to say nothing of the Internet of Things, IoT henceforth), in a society that is placing a computer into anything - from shoes to cars - is increased uncertainty.

15 https://hbr.org/2014/10/what-well-be-doing-in-2022


The title of this chapter is "tomorrow" and, in Business Intelligence, talking about what will happen five years from now (2020) is significantly different from talking about what will be available in, say, 2035, when Artificial Intelligence proponents and scaremongers both assume that we will have the first computers with an intelligence (whatever that means) exceeding human intelligence.

Just three small examples of providers that might add some "intelligence" to your Business Intelligence and Big Data: Amazon Artificial Intelligence On-Demand16, IBM Watson17, Wolfram Alpha18. Actually, each one of them is already alive and kicking, and I wasn't the only one to jokingly suggest, after IBM's service scored successes ranging from chess to Jeopardy to cooking, health, and new product development, that eventually we might get toys that, using the ubiquitous availability of Internet connections (e.g. basic Wi-Fi is quickly becoming a "human right" in developed countries), could come with a subscription to Watson.

Imagine having on your desk something looking at you (of course, it will be bidirectional) and giving you advice, while at the same time being able to help you in your work by answering questions that usually would have required setting up a meeting.

But that is the distant future, more appropriate for another book; here I will focus in this chapter on a few ideas about trends, i.e. something based on what you can see around you, past experiences, what is currently being evaluated, and potential paths toward that distant future, using as a "roadmap" the picture on the next page.

A first trend to consider is that computers are embedded in an increasingly large number of everyday appliances (e.g. "intelligent" lights), in some cases with a processing and adaptation power that just 50 years ago would have been considered magic (smartphones loaded with sensors).

16 http://www.bloomberg.com/news/articles/2015-04-09/amazon-cloudintroduces-artificial-intelligence-service
17 http://www.ibm.com/smarterplanet/us/en/ibmwatson/what-is-watson.html
18 http://www.wolframalpha.com/


A second trend is partially linked to the first, but really due to an urbanization drive that accelerated over the last few decades: even the most remote locations are now connected via the global Internet (according to the International Telecommunications Union, the last country was connected a few years ago).

A third trend is a side-effect of the second: as more people got used to accessing information (and to being asked to provide information) electronically, sometimes continuously (e.g. banking, utilities' "smart" meters), governments increasingly had to provide "OpenData".

There are still some "practical" issues to be solved (e.g. compliance, privacy, security, to name but a few), but using online space to store your business data and applications will become increasingly common.

Many still consider "OpenData" to be only information produced by government activities, but it is now expanding upstream, i.e. to the information used by public authorities to make choices.



In some cases, "OpenData" is also becoming a tool to integrate knowledge and expertise available among the public into government activities. A side-effect of the 1990s call for "corporate social responsibility" is that transparency and disclosure of information are taken for granted. Furthermore, in developed countries, more than 50% of the active population has at least limited computer skills, i.e. is used to both receiving and "broadcasting" data (continuously, if they use tablets, smartphones, etc.).

Over the last few years Business Intelligence tools blended with a faster Internet so well that some newspapers delivered the option to "dig into their sources", instead of just sharing charts and statistics (Wikileaks was a source of inspiration on how to publicly share massive files of structured information). So, if in the past "data-based decision making" was something only for businesses, and "digital natives" (as Prensky defined them19) changed that, now Business Intelligence too is on the verge of becoming "democratized".

For the time being, just a tiny slice of the readership actually uses those options (probably, those who are already users of Business Intelligence tools), but you need just one reader able to use those tools to have that spread across thousands of customers.

Now, let's move on to the "human side". There is a curious element in human cultures: the more we add complexity to our society, as needed to deliver more advanced services, the more we have a Luddite reaction. Actually, the current crop of those "building the case against change" (for neo-Luddites to follow) usually has access to and uses the same "cultural assets" used by those initiating the change - just carving their own "niche" as being "against the system/incumbents".

19 See http://www.robertolofaro.com/BSN2013 for a detailed discussion


In our times, this is wrapped within a layer of pseudo-scientific lingo, e.g. by those promoting the "left-brain/right-brain" dichotomy: so, we want all our public services, healthcare, and retail shops working like clockwork, but invoke a mythical "age of intuition". As a society, we need both sides of our brain, and most of those advocating this "split" actually… assume that everybody else will do the "practical" work for them. "Outsourcing your brain" isn't a smart choice, as this deprives you of the ability to evolve20. In a distant future, there might be a temptation to delegate Business Intelligence to machines or "intelligent software agents".

The above-mentioned trends imply something else: in most developed countries, anybody who enters management ranks by, say, 2020 will have limited memories of a world without the Internet, smartphones, e-learning, and permanent access to information, e.g. through "knowledge toys" such as the one I described above, also thanks to the Gigabit network currently on trial by Google in a few selected towns.

Most of the articles about "Big Data" that I read over the last few years seem to be focused on "adding" data sources, mainly by expanding on data that are provided by your own products, services, and operations. Certainly, the first sources of "Big Data" will be your own organization and your own customers, whatever products or services you deliver.

If you use a smartphone, you are used to the concept of an "application store" (or something similar), where you can find new applications to add to your smartphone or tablet (and probably never use), e-books, or other digital media: similar opportunities will be available in the future for "Big Data".

Incidentally, a further side-effect of pervasive computing and the IoT will be that, probably before the end of the next decade (i.e. before Artificial Intelligence exceeds human abilities), for businesses being regulated will be the rule, not the exception.

20 On outsourcing, see also http://www.robertolofaro.com/BFM2013


This will require expanding existing agreements with your distribution network and supply chain, extending the integration of your ICT systems to data exchanges about internal processes that are upstream or downstream of your exchanges of services or products. The concept of “internal” or “confidential” will have to change, at least as much as the concept of “privacy” changed since the advent of the Internet on mobile phones and smartphones. The difference with, say, the 2000s will be that this “integration” (better: “mutual coordination”) will have to be delivered in a way that ensures that you can activate or rescind any integration at short notice.

After this (incomplete) overview of trends that will increasingly impact your Business Intelligence activities, let’s move on to risks and opportunities.


2.2 Externalization and its side-effects

Opportunities and risks

Any opportunity can also represent a risk, notably when opportunities for new organizations imply altering the business of existing ones. In the XIX and first half of the XX century, acquiring information about other businesses implied using a “third party”, such as a rating agency or “data cabinet”. As discussed in the previous section, the second half of the XX century and the first decade of the XXI expanded the quantity, quality, and speed of data that any business (not just those listed on a stock exchange) is supposed to deliver to governments or other public authorities.

More data (“Big” or not, both internal and external) will require rethinking the distribution of roles and activities between staff and external suppliers, and also your business model. Recently, a few articles here and there started discussing a further option, i.e. “data shops” selling “Big Data” that you can integrate in your own Business Intelligence and reporting activities.

It happened already with e-commerce: newer, nimbler competitors were able to reap benefits that incumbents could not see- not for lack of imagination or creativity, but because their corporate culture could not “frame” new opportunities within what they already knew. If, over the next few decades, any business is required to provide to public authorities (and the general public) data that traditionally were available only internally, this will create opportunities for nimble “information consolidators”, able to create new data services with almost no investment.

Yes, the “Uberization” of data and information services: imagine rating agencies able to deliver on-demand an “instantaneous assessment” (e.g. a “SWOT”), by having just some “smart” pieces of data collection software accessing the public side of websites belonging to corporations and watchdogs.
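To make the idea concrete, here is a minimal sketch (in Python) of such an “instantaneous assessment”, assuming the text of a few public pages (corporate site, watchdog reports) has already been fetched; sources, keywords, and scoring are purely illustrative, not a real rating methodology:

# Naive screening of already-fetched public text; keywords are illustrative only
POSITIVE = ("record revenue", "new contract", "expansion", "certification")
NEGATIVE = ("recall", "fine", "investigation", "data breach", "downgrade")

def quick_assessment(pages):
    """Return a rough strengths/weaknesses tally from public text snippets."""
    strengths, weaknesses = [], []
    for source, text in pages.items():
        lowered = text.lower()
        strengths += [(source, kw) for kw in POSITIVE if kw in lowered]
        weaknesses += [(source, kw) for kw in NEGATIVE if kw in lowered]
    return {"strengths": strengths, "weaknesses": weaknesses}

sample = {
    "corporate site": "Record revenue announced, expansion in two new markets.",
    "watchdog report": "The company is under investigation after a data breach.",
}
print(quick_assessment(sample))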


Uber and similar platforms, relying on assets in the physical world, are constrained by the need to limit supply, to keep up the motivation of those working with/for them. Creating new data services based on “OpenData” (private and public) will instead be a matter of building an appropriate “Intellectual Property layer”, i.e. something to both add value (for those using the service) and differentiate oneself from competitors- something that even a smart high school kid could eventually do. Actually, if you followed the news over the last few years, there is already a “niche” within the digital market that had to develop a similar “run for intellectual property”, in that case for faster and faster algorithms: the generation of the Bitcoin virtual currency21.

Again, it is both a risk and an opportunity: a risk, by providing information that your competitors can use in their own decision-making to improve their competitive advantage; an opportunity, to fine-tune your own decision-making by using their data, and data or information from the actual behavior of your customers.

In Business Intelligence, it isn’t just a matter of data: data scientists are and can be useful, but you need the right mix of business and technological abilities, uniquely appropriate to your own business needs. “Big Data” is often still discussed as if it were just another data source that you copy, store in your corporate database, and then use. If in the future Census data were to be continuously updated, would you copy that data on a daily basis? That would be overkill- better to check the Census data remotely, if and when needed, directly where the data are kept (last time I read about it, on Amazon).

Just consider Google (or Amazon, or Facebook): the way you use their systems generates information (for them), and continuously expands the “Big Data” available, adding an increasingly large revenue stream for them.

21 If you are interested in the “mechanics” of Bitcoin, have a look at spectrum.ieee.org, which published interesting articles, both business- and technically-oriented, on the “Bitcoin ecosystem”, its economic model, and its “arms race”, including its fair share of heroes and scoundrels


This enables them to fine-tune resources, or to resell information on how and when data is used by their user community- further “Big Data” streams (on how “Big Data” is used by their customers). If you want your Business Intelligence efforts to achieve more than just reporting ex-post what happened from your own internal perspective, “Big Data” will increasingly contain information critical for your success. Managing information whose creation your staff more or less controls, along with information that is not even remotely “certifiable” (and anything in-between), will require changing more than a few data sources or databases here and there.

In the 1990s and early 2000s, many companies externalized their first line of support to customers- only to discover that outsourcing the first contact with potential and existing customers deprived them of information that could have been useful in budgeting and sales control activities, or in product and service development. The same could happen with the “Big Data” opportunity: if you push your externalization drive too far, you will miss “Big Data” that you could have actually used to fine-tune anything from your supply chain to your forecasting, decision-making, or sales control activities.

Before turning the page, read again from the beginning of this section, and try thinking about what this could imply for your own organization, in terms of risks. Actually, try mapping out the risks that you see associated with both moving your own data outside (“cloud computing”, or using third-party web-based applications and infrastructure), and using data whose creation you do not control.
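As a starting point for that exercise, here is a minimal sketch of a risk map covering both dimensions, data you move outside and data whose creation you do not control; entries and ratings are purely illustrative:

from dataclasses import dataclass

@dataclass
class DataRisk:
    source: str            # what the data is
    location: str          # internal, cloud, third-party, external
    creation_control: str  # full, partial, none
    main_risk: str         # the exposure you worry about most

# Illustrative entries, to be replaced with your own inventory
risk_map = [
    DataRisk("sales pipeline", "cloud", "full", "provider lock-in / disclosure"),
    DataRisk("first-line support logs", "third-party", "partial", "loss of product feedback"),
    DataRisk("open data on competitors", "external", "none", "unverifiable quality"),
]

for item in risk_map:
    print(f"{item.source:26} {item.location:12} control={item.creation_control:8} -> {item.main_risk}")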


Layering risk

If you at least tried the above mentioned risk assessment exercise, you probably identified various levels of uncertainty. It happened at the beginning of the Lean/Six Sigma (and ISO9000) drive, and each new technology simply expands on that. Whether you are talking about your own data (from your own operations, under your direct control), data from your supply chain (relationships defined by contracts and mutual interest), or your distribution network, it isn’t just the content, but also the degrees of freedom that you have in using, accessing, and tracing the whole lifecycle of the data that makes the difference.

Data have a lifecycle: would you use 1980s Census data to identify potential demand for a new consumer electronics product? Hopefully not (unless your target audience is represented by those who were teenagers back then, i.e. a “nostalgia” product). With “Big Data”, besides probably “compressing” the lifecycle (e.g. keeping a decade of mobile phone traffic is something that no private company would consider as “an additional data source”), you have to consider also the context that generated them: it is different to look at web searches generated while Yahoo was almost a monopolist, or at web searches since Google started.

If you “unbundle” the source and destination of data, you add another layer: converting early adopters and creators of “add-on services” into “evangelists” of your products or services. As an example: a “white goods” manufacturer (fridges, washing machines, etc.) might add a wonderful new computer on board its products- but then decide that, instead of trying to create a market (or “ecosystem”) single-handedly, it makes more sense to expand acceptance of the products by letting others create services or products based on those data. It is a form of “lateral thinking” using somebody else’s brain, let’s call it “collateral thinking”: your investment in adding “intelligence” to your products can effectively be used as “collateral” by third parties to initiate new services or add-on products.


It is an “Uberization” step (you make the investment, others benefit from it), but by design, to allow you to expand your own market- a “win-win”. Decades ago, a computer and business application vendor decided to develop an application for the pharma industry, but the first customer did not have the budget needed, and the vendor saw a market opportunity that could not be missed, while it needed the customer’s expertise to develop the application. Solution: “unbundle”, by using their own application development expertise, and their first customer’s business knowledge, to create an application (and service) delivered by a third party (with some further constraints to protect IPR, etc.- but discussing that would be a digression).

This is an approach similar to one that I often found useful when working in the outsourcing industry: a new application or service might make economic sense over its lifetime, but the initial investment required cannot be financed. Therefore, potential business customers are “layered” in a range: from those that will contribute a nominal fee, running costs, and IPR, to those that will cover the development cost but have no IPR to contribute, to those that will join later and usually pay a steeper price for using it.

Assessing those kinds of risk will probably generate more “rating agencies”, and more indirect suppliers layered according to the “service level” that they provide on data. Expect anything from pre-built “Lego Business Intelligence bricks” to just a certification of data sources, with continuous monitoring and auditing before they are admitted to the marketplace (as happens e.g. with Google or iOS)- a kind of quality assurance, more than quality control.

Using external “bricks” within your Business Intelligence, with or without “Big Data”, reminds me of the different approach to data management that I observed a few decades ago, when a company had two business software product lines within Business Intelligence- a business tool for users, and a scripting language to write reports, used by developers. Actually, the two product lines originated from two different corporate cultures- and each one attracted its own “compatible” (by culture) users.


2.3 Business intelligence “on demand”

Yes, software products aren’t “neutral”: they carry the “imprint” of the corporate culture that originated them, notably business applications (both Business Intelligence and Enterprise solutions such as ERP). In the example discussed at the end of the previous section, software developers on Yahoo groups shared everything- from pieces of logic to whole sources of new reports; as for business users… the vaguer their questions (and their answers to somebody else’s questions), the better.

What will be the real risk of the uncontrolled dissemination and unbundling of services, even within your own organization? Disclosure of IPR. Imagine what can happen when the “buying cycle” is much shorter than the one usually associated with selecting, licensing, and deploying Business Intelligence or an ERP in your organization. With future “data products”, you will probably have cases where the “purchase” is one-off, others where it is a recurrent event (or even a subscription), and others where you will have a mix, e.g. an “instantaneous, on demand update” while doing critical analyses, using instead the “basic” (e.g. updated once a month) data in other cases.

Actually, almost two decades ago I worked on a negotiation to integrate something following a similar approach in banking risk management, so it isn’t really new: what will be new is the “democratization” of data processing and data sources, and the creation of marketplaces for Big Data (and analyses). What will change in the future is the issue of “agency”: data products will live on three “durations” (short, medium, long), and each one will carry different requirements in terms of the organizational structure of the “data provider”, to ensure that the product is viable and fit for use within business decision making.

If you accept the concept of a marketplace, you can also accept the concept of a different design for Business Intelligence solutions.
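As a minimal illustration of the mixed purchase model sketched above (a subscription to a monthly “basic” refresh, plus a paid on-demand update reserved for critical analyses), here is a sketch where tiers, fees, and dates are purely illustrative:

from datetime import date, timedelta

BASIC_REFRESH = timedelta(days=30)  # included in the subscription
ON_DEMAND_FEE = 50.0                # hypothetical per-call price

def choose_snapshot(last_refresh, today, critical):
    """Return which snapshot to use and the incremental cost of that choice."""
    age = today - last_refresh
    if critical and age > timedelta(days=1):
        return "on-demand update", ON_DEMAND_FEE
    if age > BASIC_REFRESH:
        return "monthly refresh (due)", 0.0
    return "cached basic snapshot", 0.0

print(choose_snapshot(date(2015, 3, 1), date(2015, 3, 20), critical=True))
# ('on-demand update', 50.0)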


In the early 2000s, it was common to design “vertical solutions” containing a Business Intelligence component, but the new potential approach to Big Data and Business Intelligence may actually generate a different structure in the offer of Business Intelligence packages. What is the point of paying a license for a full set of features, when your needs call for a mix-and-match of components from different providers, embedding data and the tools supporting/managing them? Maybe, in terms of content rather than container, there will be some interesting “connecting the dots” across unrelated data that a new algorithm delivers- and whose provider is a small, new “data think tank”. Obviously, you would take a risk by using that, but if you wait, it might become so common across your industry that it delivers no competitive advantage.

Moreover, eventually having just your financial report on your corporate website will not be enough, as disclosure rules might require you to provide a continuously updated flow of data about how the business is going, along with free tools to access them.

Therefore, it will become feasible to have a dynamic pool of suppliers- again, arranged by time horizon. Consider what happened with utilities: once you separate the infrastructure from the service (or commodity, e.g. gas or water or electricity) delivered, switching suppliers is just a matter of a few exchanges of contracts and a few parameters here and there in a few databases, and maybe of creating a marketplace akin to the “check truncation room” that was (physically) used in banking decades ago. My customer uses X, your customer uses Y, and if they don’t switch bank, we simply exchange the balances, not the whole amount written on each check; with utilities providing a commodity it is the same (electricity is electricity), while it could be marginally trickier for data collected e.g. by the IoT.

In Business Intelligence, it will become increasingly useful to consider that not just the data (the “content”), but also what delivers them (algorithm, software tool, overall package of all the elements) creates potentially conflicting sets of interests.
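The “exchange the balances, not the whole amount” logic of that check truncation room is simply netting; a minimal sketch, with made-up figures, of how two counterparts settle only the difference:

def net_settlement(claims_a_on_b, claims_b_on_a):
    """Return (who pays, amount): only the net balance moves, not every single item."""
    net = claims_a_on_b - claims_b_on_a
    if net > 0:
        return "B pays A", net
    if net < 0:
        return "A pays B", -net
    return "nothing to settle", 0.0

# Checks drawn on each other's customers during the day (illustrative amounts)
print(net_settlement(claims_a_on_b=120000.0, claims_b_on_a=95000.0))
# ('B pays A', 25000.0)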


As hinted above and in previous sections, it will not be just a corporate-to-corporate issue, as the actors (besides the “data think tanks” and software publishers) will include also governments, and various influencers working thanks to the reduced “cost to enter the game” enabled by the Internet and “cloud computing”. What’s more interesting is that whoever uses “on demand” data products within their own Business Intelligence will actually generate further information that re-enters the cycle, probably useful to identify trends (a “Big Data” use that Google tried to show years ago by disclosing data on flu-related searches along with data from e.g. the World Health Organization and the CDC).
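A minimal sketch of that kind of cross-check, correlating a weekly search-volume series with officially reported case counts; all figures are made up for illustration:

from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries needed."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

search_volume = [120, 150, 220, 400, 380, 260, 180]  # illustrative weekly flu-related queries
reported_cases = [80, 95, 160, 310, 330, 220, 140]   # illustrative official counts

print(round(pearson(search_volume, reported_cases), 3))  # close to 1: searches track cases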


2.4 Why the “Internet of Things” approach is relevant

As I discussed within the first book of this series, on BYOD and IoT22, what most people forget is that, if anything becomes “smart”, i.e. gets its own computer on board (including clothing), and these days this implies also “with data broadcasting capabilities”, any environment becomes both a source and a consumer of information. Moreover, the choice of which information should be “broadcasted” to external third parties is not yours anymore: as shown by past scandals involving Google/Android and Apple/iPhone (I will skip Snowden, Wikileaks, the NSA, and other government-sponsored data collection initiatives worldwide), your suppliers make their choices, and every new bit of “smart” technology is a new bit of “data harvesting” technology.

One of the “standard” characteristics of Big Data is that, in most cases, it is a collection of smaller bits generated by countless points of “data harvesting” (e.g. tablets, smartphones, eventually house or office appliances and clothing), points whose quantity varies continuously. Being automated, any new entry within the “pool” will be predictable in the structure of what it delivers, and unpredictable in the location from which it will “broadcast”.

In traditional Business Intelligence, the number of “data harvesting” points is often pre-set, and can be altered only by a joint decision of those managing the Business Intelligence infrastructure and its users. In reality, when you deal with humans, the data provided is often “pigeonholed” within the appropriate slots, but only up to a point. Specifically, it is not uncommon for data to be reported with a certain degree of “interpretation” (e.g. you call that a “prospect”, a qualified lead- I call it a “suspect”, as a sales colleague called it a couple of decades ago, whenever the quality of the lead was dubious).
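Going back to the “predictable structure, unpredictable origin” point above, a minimal sketch of what that looks like in practice: every reading shares the same schema, while the pool of reporting devices (and where they broadcast from) keeps changing; all names and figures are illustrative:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reading:
    device_id: str   # the structure is fixed and known in advance...
    location: str    # ...but the origin can vary with every broadcast
    metric: str
    value: float
    at: datetime

pool = {}  # devices appear (and disappear) without a central decision

def ingest(reading):
    pool.setdefault(reading.device_id, []).append(reading)

ingest(Reading("fridge-0042", "Turin", "door_openings", 14, datetime(2015, 5, 1, 18, 0)))
ingest(Reading("fridge-0042", "Milan", "door_openings", 9, datetime(2015, 5, 2, 18, 0)))
print(len(pool), "device(s) currently in the pool")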

22 http://www.robertolofaro.com/BYODIOT


In the future, most of the data that you consider within your Business Intelligence (e.g. forecasts on production or sales analysis, financial controlling, etc.) will also be collected automatically, as a side-effect not just of your activities (something that most large companies already do), but also of your “behavioral activities”, i.e. the actions that lead to decisions that will eventually generate data- both from those reporting data, and from their sources.

Reducing the latitude in reporting data has always been a thorn in the side of anybody making decisions based upon data provided by others: imagine having to plan production based on the forecasts of your sales reps in the field, only to then discover that some routinely underestimate the sales projections, so that magically they… routinely exceed targets, as shown in old cases from “creative accounting” books23.

Consider each one of your units or managers reporting data as if they were “data providers”: would you accept from providers that they provide what they want, when they want, how they want? At least the “when” and “how” are often solved by traditional Business Intelligence, but the “what” still requires a review later on (unless all of your “data providers” apply the same misleading interpretation, you will still eventually catch them).

A lesson from the IoT is that data have to be collected when available. While “data providers” might add contextual information for their “interpretation”, they should not filter data, including data about how they reached a decision, as it is useful “contextual” information. Example: imagine that producing sales projections requires, across your company, that each sales manager collects some sets of information from internal and external sources, and that some evaluation is carried out on their prior sales projections (to see how close they were to reality, etc.).

23 E.g. see Smith, “Accounting for Growth”, ISBN 0712657649, or Mantle, “For Whom The Bell Tolls: The Scandalous Inside Story of the Lloyd's Crisis”, ISBN 0749314877


Usually, all those activities involve reading some files, talking with some people, exchanges of emails, meetings, etc. A Big Data approach based on the IoT paradigm might imply looking at the match of behavioral patterns between different “data providers”. Initially, maybe just to cross-check after events, i.e. to identify those routinely “deflating” their sales projections, or “inflating” the number of prospects close to the period when bonuses are discussed. After a few “rounds”, this might be expanded to pre-empt, as you would do with IoT devices that are discovered to have a constant bias. They might be perfectly functional, but have just a slight misalignment- better to factor that in than to replace them.

If this seems too “Orwellian” or “mechanistic”, you probably missed a few lessons from the 1980s, 1990s, and 2000s- and maybe you can start by re-reading a couple of books on past “creative accounting” practices. Moreover, you might also want to have a look at what Google and Facebook do with your data: they are already carrying out those activities, using whatever channel you give them access to- it is only a matter of time before that is available to any organization.

I could continue for a few more pages, but I would rather invite you to read the first book of this series online (free on Slideshare)24, and think about how, in reality, any action you take in our computing-intensive business world is a contributor to Big Data (the “pervasive computing” element). It is only to be expected that, after moving from the Ford/Taylor model to a data-intensive Lean Six Sigma, and after introducing also non-quantitative information in decision-making (e.g. through Balanced Scorecards), it will become commonplace to integrate into your Business Intelligence also the Big Data generated by “how” you carry out your own activities. If the purpose of Business Intelligence is to improve the quality of your choices, why should you exclude the most abundant source of information that you have in your organization?
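Before moving on, a minimal sketch (with made-up history) of the bias cross-check described above: estimate each “data provider’s” systematic deviation from past forecasts versus actuals, then factor it in rather than discarding the source:

from statistics import mean

# Past (forecast, actual) pairs per sales manager; figures are illustrative
history = {
    "rep_north": [(100, 130), (90, 120), (110, 140)],  # routinely "deflates" projections
    "rep_south": [(100, 98), (120, 118), (95, 101)],   # roughly unbiased
}

def bias_factor(pairs):
    """Average actual/forecast ratio: above 1 means the provider under-forecasts."""
    return mean(actual / forecast for forecast, actual in pairs)

factors = {rep: bias_factor(pairs) for rep, pairs in history.items()}

def corrected(rep, new_forecast):
    return new_forecast * factors[rep]

print(round(factors["rep_north"], 2))         # about 1.3, i.e. a constant deflation
print(round(corrected("rep_north", 100), 1))  # plan against roughly 130, not 100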

24 http://www.robertolofaro.com/BYODIOT


2.5 Evolving your business intelligence infrastructure

There is a catch: get used to a continuous re-assessment of the ICT that supports your Business Intelligence efforts. A further element to consider is that you will probably have to get used to the idea that also your relationship with the sources of your data and Big Data (internal and external) will have “constraints” that vary with time.

The “building bricks”? What you are used to now, plus the new elements required to make Big Data “digestible” by mere human decision makers, i.e. data sources, collectors/consolidators of data, algorithms processing and “augmenting” data, and obviously the applications to be used by end-users. This will create plenty of opportunities for:
- Consultants: instead of selling man-hours, they will sell “data products”, including the time of their own staff to help adapt and adopt them
- Data scientists: to help structure data products from A to Z, either by starting with raw data, scouting for data product components from suppliers, or creating new ones by combining elements
- Customers: by creating, potentially “on the fly”, new analyses based on elements, internal or external, picked from a “corporate catalogue”.

So, will the users of appliances generating data (IoT) become both consumers and suppliers? Will they be able to do what artists started doing decades ago, i.e. “unbundling” the painting or sculpture from the rights to pictures and reproductions? It would just imply adding more “providers”- or, better, new categories of middlemen “consolidating” data, or third parties carrying out those consolidation activities at “arm’s length” to ensure compliance with privacy and any other constraint deriving from law.

What will be the main drivers for all those changes?


Certainly direct pressure and exposure/coalition building on social networking sites (see the “Tell 3000” case in a previous book, obviously available online also for free25). But besides that direct approach (it usually follows a series of “swarming” activities, i.e. a group response converging on a common objective but without a pre-assigned “leader”), there will be an element of lobbying and gradual evolution of laws, following the growing awareness of the side-effects of Big Data and its uses within future Business Intelligence activities.

Thanks to the “democratization” of end-user Business Intelligence tools (e.g. via web, mobile, new devices and “knowledge toys”), it will be the combination of layers that makes the difference, with a constant redefinition of the set of tools and infrastructure components to be used. Considering the technological elements now (2015) available on the market, and the demand for skilled human resources, it would probably take some trial-and-error to create a multi-layered market like the one discussed in this chapter.

Nonetheless, it makes sense to start by doing an evaluation of the expertise currently available in your organization (including suppliers), and define a “purpose map” stating the desirability of each element (a minimal sketch of such a map follows the list), e.g.:
- identify potential weaknesses in your current Business Intelligence mix
- revise your vendor mix, i.e. chart out which skills it might make sense to bring in-house
- assess which data expansions should be added to your products or services
- evaluate who should be the counterparts for your data (or processed data, i.e. “filtered through algorithms”)
- outline the potential need to build an industry or multi-party coalition to provide those data, data consolidation, algorithms, etc., or even just to define the “rules of the game” (as has been done with the Cloud Security Alliance, focused on reassuring potential customers of the viability of using the Cloud for business-sensitive services and data).
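Such a “purpose map” could start as something as simple as the structure below; entries and ratings are purely illustrative:

# Each element of the current Business Intelligence mix gets a desirability
# rating and an indication of where it should live (illustrative entries)
purpose_map = {
    "sales reporting":        {"desirability": "keep",      "where": "in-house"},
    "data cleansing scripts": {"desirability": "weakness",  "where": "bring in-house"},
    "market benchmark feed":  {"desirability": "expand",    "where": "external supplier"},
    "privacy-sensitive data": {"desirability": "constrain", "where": "arm's-length third party"},
}

for element, assessment in purpose_map.items():
    print(f"{element:24} -> {assessment['desirability']:10} ({assessment['where']})")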

25 http://www.robertolofaro.com/BSN2013


Like any checklist, the one above is just an example: by working jointly within your industry or with your suppliers, you might identify other options. To share again the list of key questions presented for BYOD and IoT, which really applies to any integration within your business of data generated elsewhere or outside your control:
- Motivation: are you choosing to create it, or has it been imposed by external sources?
- Stakeholders: did you research and identify all the stakeholders involved, and the impacts on their activities of introducing the proposed changes?
- Degrees of freedom: have you assessed the tolerable level of control that you can keep on each layer within your data provisioning, e.g. according to privacy?
- Control: can you enforce rules to monitor the use of data?
- Management: do you have the ability to update the policy and, accordingly, modify the corporate resources “on loan”, by involving those relevant remotely and instantaneously?
- Compartmentalization: did you define rules per domain, including how to evolve them within the general policy and within each domain?
- Decision points: have you identified who will be the reference decision maker in each unit?
- Communication: have you defined how to communicate and ensure feedback, e.g. to update the rules or to add new device-specific rules?
