Analytics Innovation, Issue 8


THE LEADING VOICE OF ANALYTICS

ANALYTICS INNOVATION

NOV 2016 | #8

BAD DATA IS COSTING COMPANIES A FORTUNE / 11

How Analytics Prevents Fraud With fraudulent activity a growing problem, companies are looking for any way to fight back, and analytics may provide the answer / 6

Embedded Analytics is the Future of BI Embedded analytics are having a huge impact on the way that companies are dealing with and analyzing their data moving forward, but what exactly are they? / 18


Analytics Festival, November 29 & 30 2016, Chicago

Summits include: Predictive Analytics Innovation, Business Intelligence Innovation, HR & Workforce Analytics Innovation, Social Media & Web Analytics Innovation


ISSUE 8

EDITOR’S LETTER

Welcome to the 8th Edition of the Analytics Innovation Magazine

In the days approaching the election, the Huffington Post mounted what could only be described as a sustained assault on Nate Silver’s 538 website, publishing no fewer than three articles virulently criticizing his predictions for the US Presidential election and the models he used to make them. In his attack on Silver, Ryan Grim wrote: ‘The short version is that Silver is changing the results of polls to fit where he thinks the polls truly are, rather than simply entering the poll numbers into his model and crunching them.’ His article concluded, ‘If you want to put your faith in the numbers, you can relax. She’s got this.’ Given that many will have taken Grim’s advice and put their faith in the numbers, it is understandable that this faith is now shaken, maybe even gone. The numbers failed them. Mike Murphy, a Republican strategist who predicted a Clinton win, summed up the mood towards numbers on MSNBC, saying, ‘My crystal ball has been shattered into atoms. Tonight, data died.’ But what do the flaws in the election modeling and polls actually tell us this time? Is data really dead?

The answer is that no, data is not dead. Indeed, it is arguably more important than ever. The only thing the election has shown - yet again - is that you cannot use data without context. There were a number of reasons the polls were wrong on an individual basis. Pollsters are less likely to question new voters - in this case white working class Americans, who voted in high numbers. It has also been claimed that there were many so-called ‘Shy Trump’ supporters hiding their voting intentions for fear of being labeled racist and sexist, echoing a phenomenon seen in Britain when the polls got it so wrong about Brexit and about the Conservative Party majority in 2015. Frank McCarthy, a Republican consultant with the Keelen Group, a consulting firm in Washington, DC, said: ‘People have been told that they have to be embarrassed to support Donald Trump, even when they’re answering a quick question in a telephone poll.’ He added, ‘What we’ve been hearing from the [Republican National Committee] for months is there’s a distinct difference on people who get polled by a real person versus touch tone push poll.’

Politics is ultimately about more than numbers. It is about people, and people are hard - though not impossible - to predict. As Pradeep Mutalik noted ahead of the election, ‘Aggregating poll results accurately and assigning a probability estimate to the win are completely different problems. Forecasters do the former pretty well, but the science of election modeling still has a long way to go.’ As always, if you have any comments or would like to submit an article, please don’t hesitate to contact me at jovenden@theiegroup.com

James Ovenden, Managing Editor



Business Analytics Innovation Summit, January 25 & 26 2017, Las Vegas



contents

6 | HOW ANALYTICS PREVENTS FRAUD
With fraudulent activity a growing problem, companies are looking for any way to fight back, and analytics may provide the answer

8 | HOW AI CAN SAVE THE LEFT
Right wing politics are on the rise across the West, and it appears the left is doomed. The left needs to do better at keeping on top of technology if it is to remain relevant

11 | BAD DATA IS COSTING COMPANIES A FORTUNE
The cost of poor data quality is estimated by IBM to be roughly $3.1 trillion a year in the US alone. We look at what can be done

14 | DOES DATA EXACERBATE POVERTY?
Much has been made about how data can help the poorest, allocating resources to those most in need. But could it actually be making the problem worse?

16 | MACHINE LEARNING AND PRIVACY: A PROBLEM?
With adoption of machine learning increasing rapidly, it is time to start thinking about the implications of a technology that many still do not fully understand

18 | EMBEDDED ANALYTICS IS THE FUTURE OF BI
Embedded analytics are having a huge impact on the way that companies are dealing with and analyzing their data moving forward, but what exactly are they?

21 | WILL THE CHIEF ANALYTICS OFFICER REPLACE THE CHIEF DATA OFFICER?
The Chief Data Officer has become a prominent member of the boardroom in recent years, but are they already on the way out, to be replaced by Chief Analytics Officers?

WRITE FOR US
Do you want to contribute to our next issue? Contact jovenden@theiegroup.com for details

ADVERTISING
For advertising opportunities, contact pyiakoumi@theiegroup.com for details

managing editor JAMES OVENDEN

| assistant editor charlie sammonds | creative director nathan wood

contributors megan rimmer, david barton, alex lane, olivia timson



How Analytics Prevents Fraud Megan Rimmer, Industry Commentator

Criminals are now working collaboratively as never before, and the approach to using data to stop them must be the same



THE NATURE OF FRAUD HAS CHANGED DRAMATICALLY since the dawn of the digital age. According to a recent Office for National Statistics (ONS) survey, one in 10 people in the UK has fallen victim to cybercrime. Companies are no different, with so-called business email compromise schemes netting billions of dollars for criminal gangs. International money transfer company Xoom, for example, was tricked into sending $30.8m of corporate cash to an overseas account.

At the recent Big Data & Analytics for Banking Summit in Melbourne, Steve York, General Manager of Group Compliance, Security & Business Resilience at Bank of Queensland, discussed how the nature of fraud has changed: criminals know that they have only a limited window in which to take money from the accounts of people whose data has been stolen, and that information is subsequently sold on to other criminals via the dark web.

This is not a new problem, but it would be churlish to blame companies, as the threat is always evolving. In the 2016 Faces of Fraud Survey, sponsored by SAS, just 34% of surveyed security leaders said that they have high confidence in their organization’s ability to detect and prevent fraud before it results in serious business impact, 56% of whom cited the high level of sophistication and rapid evolution of today’s schemes.


One of the other reasons many of the respondents gave was a lack of awareness among customers and/or partners and employees, cited by 56% and 52% respectively. In order to correct this, both companies and banks - anyone with a serious interest in preventing fraud on a larger scale - are turning to data analytics to identify potential schemes and prevent them before they can take place. The speed at which money now moves during transactions is an obstacle that cybercriminals have long overcome, and financial institutions have to prevent fraud at the same speed to protect their customers and assets, or be held liable for any theft. Machine learning algorithms can help banks find anomalies in the data indicative of fraud in real time, without disturbing the flow of legitimate transactions.
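As a rough illustration of how such real-time screening can work, the sketch below trains an unsupervised anomaly detector on historical transaction features and flags unusual payments for review. The features, data, and thresholds are illustrative assumptions, not a description of any bank's actual system.

```python
# Minimal sketch: flagging anomalous transactions with an unsupervised model.
# The features, synthetic data, and contamination rate are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical "legitimate" transactions: [amount, hour_of_day, merchant_distance_km]
history = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=5000),   # typical amounts
    rng.integers(7, 23, size=5000),                   # daytime activity
    rng.exponential(scale=5.0, size=5000),            # close to home
])

# Fit on history; contamination is the assumed share of transactions to flag.
model = IsolationForest(n_estimators=200, contamination=0.02, random_state=0)
model.fit(history)

# Incoming transactions scored as they arrive (here, a small batch).
incoming = np.array([
    [45.0, 14, 2.1],      # ordinary purchase
    [4800.0, 3, 900.0],   # large amount, 3am, far from usual locations
])
flags = model.predict(incoming)  # 1 = looks normal, -1 = flagged for review
print(list(zip(flags.tolist(), incoming.tolist())))
```

In practice a model of this kind scores each payment in milliseconds and routes only the flagged minority to human review, which is how anomaly detection can run without interrupting legitimate transactions.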

Criminals are now working collaboratively as never before, and the approach to using data to stop them must be the same. Fiserv is one company leading the way with its new, advanced predictive scoring model specific to Automated Clearing House (ACH) transactions. Designed to deal with fraudulent electronic payments as they happen, Payment Fraud Manager detected more than 90% of fraud in Fiserv model validation tests, while reviewing only 2% of the transactions. The algorithm leveraged industry-level data from hundreds of financial institutions of all sizes across the US, as is now common practice. In the 2016 Faces of Fraud Survey, 68% of insurance respondents and 64% of financial services respondents said that within-industry data would be very valuable or valuable, while 53% of insurance and 48% of financial services respondents reported that outside-industry data would be very valuable or valuable.

Fraud is a growing problem, despite the efforts to prevent it. Criminals are now extremely skilled at getting hold of people’s data by tricking both individuals and the companies who hold that data. For police, it is often almost impossible to track down those who would steal data, as they often operate out of countries like North Korea and Romania. Fraud needs to be stopped in real time, and companies, banks, and individuals need to work together and share all the data possible to ensure this happens.


How AI Can Save The Left James Ovenden, Managing Editor

IN 2012, UK CONSERVATIVE MPs KWASI KWARTENG, the new international development secretary Priti Patel, Dominic Raab, Chris Skidmore and the new justice secretary Elizabeth Truss authored a paper entitled Britannia Unchained, in which they laid out the qualities of the ideal modern worker. This worker, they wrote, works ‘on a freelance basis. They can net £600 a week in take-home pay. But they have to work for it – up to 60 hours a week.’

Harris overestimates the durability of the so-called ‘sharing economy’ and underestimates the speed at which technologies such as AI will become reality and change the nature of employment again

In a recent article in the UK’s Guardian newspaper, journalist John Harris cited this paper in his dissection of the decline of the left in western politics. Many of the causes he set out in his essay were persuasive, but he fell down in one vital area on which the future of politics and the economy hinges: the nature of employment. Harris argued that the vision set out in Britannia Unchained reflects the realities of the modern age, a sharing economy in which services and consumers are linked directly to one another using platforms like Uber and Airbnb. The left, on the other hand, is stuck with a vision of the world of work that he called either ‘naive or dishonest’. He argued that their failure to recognize that the nature of employment had shifted from that of the industrial revolution - a mass of unionized grunts - to the so-called ’Uber economy’ - individuals working on a freelance basis - had enabled the Conservatives to essentially become the party of the worker. Harris wrote,


’For clued-up Tories, it is time to rebrand as “the workers’ party” – in which the worker is a totem of rugged individualism, not a symbol of solidarity.’ In this world, ’The acceptance of insecurity becomes a matter of heroism, and a new political division arises between the grafters and those – as Britannia Unchained witheringly puts it – “who enjoy public subsidies”. In other words, the “skivers” versus the “strivers”.’ Harris is clearly right to argue that the nature of employment has changed. Historically, jobs destroyed by new technology have been replaced by jobs in other fields, often better and more enjoyable jobs. There is a simple reason that this argument is no longer valid - Moore’s law. Moore’s law dictates that the number of transistors per square inch on integrated circuits doubles roughly every two years.

In the 40 years since Gordon Moore, co-founder of Intel, made that observation, the transistor count of computer processors has risen from 2,300 to more than four billion, and with each doubling comes a leap in the sophistication of the logic the chip can handle. In his book The Second Machine Age, Erik Brynjolfsson argued that ‘The accumulated doubling of Moore’s Law, and the ample doubling still to come, gives us a world where supercomputer power becomes available to toys in just a few years, where ever-cheaper sensors enable inexpensive solutions to previously intractable problems, and where science fiction keeps becoming reality. Sometimes a difference in degree (in other words, more of the same) becomes a difference in kind (in other words, different than anything else). The story of the second half of the chessboard alerts us that we should be aware that enough exponential progress can take us to astonishing places.’

We are already arriving at these ‘astonishing places’ and leaving the sharing economy behind. Uber drivers, for example, will soon be replaced by driverless cars. But we are yet to scratch the surface. The speed at which we are advancing is simply too fast for the jobs being destroyed to be replaced. Intelligent automation uses machine learning and deep learning algorithms in ways that are making computers better than humans at a number of skilled-knowledge tasks, and it enables advances in robotics that are fast making technology better at a wide variety of manual labor roles. Even activities like writing news stories and researching legal precedents are being completed by robots. The ‘strivers’, as the Conservatives called them, simply don’t stand a chance. Bertrand Russell once argued that ‘modern methods of production have given us the possibility of ease and security for all; we have chosen, instead, to have overwork for some and starvation for others. Hitherto we have continued to be as energetic as we were before there were machines; in this we have been foolish, but there is no reason to go on being foolish forever.’ This is a truism that must be realized by the political parties if we are to stand any chance of surviving the coming AI revolution, and it is the left that is best positioned to rule such a world.

It has always been the case that the happiest countries - particularly the Nordic countries - are those with low inequality and greater social insurance. Essentially, people are happier when they have a safety net, as it means they do not need to worry as much about what will happen should some misfortune occur. In the UK, when the Conservative party came to power in 2010 they commissioned a report into what made people happy, and it found exactly this. They buried the report, as it went against their principles. The US and the UK mostly leave the poor to fend for themselves, and they have been successful in instilling the idea that welfare is for slackers, and therefore bad, throughout the population. As it becomes clear that the ‘strivers prosper’ mantra is no longer valid, this idea will soon lose its power. And the left is well positioned, as the public has far greater trust in its ability to manage the welfare state and support those unfortunate enough to be out of work. The left is destined to lose elections for the immediate future regardless of who is in charge of the Labour party because, largely, its policies do not suit the realities of the times. But these realities are changing fast, and the left needs to position itself better to exploit this.


Predictive Analytics Innovation Summit, February 22 & 23, 2017, San Diego



Bad Data Is Costing Companies A Fortune David Barton, Head of Analytics

THE COST OF POOR DATA QUALITY IS TREMENDOUS. Estimated by IBM to be roughly $3.1 trillion a year in the US alone, it costs organizations between 10-30% of their revenue a year. Subsequently, despite the promise of big data, just 25% of businesses are successfully using it to optimize revenue, while the rest are losing out on millions.

The sum of money IBM believes is being thrown away may seem unbelievable, but it makes sense when you consider how often data is used in everyday working practices, and the impact that wrong data could have as a result. The primary cause of bad data is simple - data decay. Data decay is estimated to be as much as 70% in B2B databases.


Using out of date data is like filling a competitive egg eater’s bowl up 70% with rotten eggs - while they might look right, if they don’t stay down then the outcome isn’t going to be pretty for anybody. Data is constantly decaying. Lives change every day - people move houses, they switch jobs, they change numbers, and their contact details change as a result. It can even happen that streets get renamed and area codes change. As a consequence, email addresses change at a rate of about 23% a year, 20% of all postal addresses change every year, and roughly 18% of all telephone numbers change each year. If you’re not on top of these changes, your sales teams will not be calling the right numbers, your marketing teams will not be sending campaigns to the right email addresses, and you will not have anywhere near the understanding of your potential reach needed to reach clients. You’re wasting manpower and money that’s far better allocated elsewhere.
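To put those decay rates in perspective, here is a back-of-the-envelope sketch (assuming, purely for illustration, that the three change rates compound independently) of how quickly an untouched contact list goes stale:

```python
# Rough illustration: share of contact records still fully accurate after N years,
# assuming the annual change rates quoted above compound independently.
email_change, postal_change, phone_change = 0.23, 0.20, 0.18

for years in (1, 2, 3):
    email_ok = (1 - email_change) ** years
    postal_ok = (1 - postal_change) ** years
    phone_ok = (1 - phone_change) ** years
    fully_intact = email_ok * postal_ok * phone_ok
    print(f"After {years} year(s): ~{fully_intact:.0%} of records still correct on all three fields")
# Roughly 51% after one year, 26% after two, and 13% after three.
```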


Another cause of bad data is corruption as it passes through the organization from the initial source to decision makers. In a recent interview with us, Vijay A D’Souza, Director of the Center for Enhanced Analytics at the US Government Accountability Office, explained that, ‘Regardless of the goals, it’s important to understand the quality of the data you have. The quality determines how much you can rely on the data to make good decisions.’ This data is liable to be guided by false assumptions and drawn from sources like ill-considered market research.

One example of this is work carried out by the municipal authority in charge of Boston, Massachusetts, which released a mobile app called Street Bump in 2011 in an attempt to find a more efficient way to discover roads that needed repair by crowdsourcing data. The app used the smartphone’s accelerometer to detect jolts as cars went over potholes and GPS to record where the jolt was felt. However, the system reported a disproportionate number of potholes in wealthier neighborhoods, having oversampled the younger, more affluent citizens with better digital knowledge who were willing to download the app. A landmark paper in 2001 showed that legalizing abortion reduced crime rates - a conclusion with major policy implications. But, in 2005, two economists at the Federal Reserve Bank of Boston showed the correlation was due to a coding error in the model and a sampling mistake. This example pre-dates the era of what we now see as big data. These problems are not new, but they show the dangers of quantitative models of society.

Another consequence of poor data quality is the additional cost it imposes on IT systems. According to estimates by vendor Effectual Systems, content management systems are made up of between 50-75% junk data, and fixing it manually can cost IT $600,000 and 12,000 man-hours per year. The solution is clear, but it is not easily done. Cleaning data is costly; it takes time and a real willingness to do it. Data needs to be assessed frequently for signs of bad data, and the assumptions underlying analysis need to be challenged by decision makers at every stage. If something seems wrong, you need to look back at the data you have rather than simply assuming it’s right. Data usually trumps gut instinct, but bad data can wreak havoc, and it is better to be safe than sorry.
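Much of that frequent assessment can be automated. The sketch below runs a handful of basic quality checks over a hypothetical contacts table with pandas; the column names, thresholds, and staleness cut-off are assumptions for illustration rather than a prescribed standard.

```python
# Minimal sketch of automated data-quality checks on an assumed contacts table.
import pandas as pd

contacts = pd.DataFrame({
    "email": ["a@example.com", None, "not-an-email", "b@example.com"],
    "phone": ["+1-202-555-0100", "+1-202-555-0101", None, "123"],
    "last_verified": pd.to_datetime(["2016-09-01", "2014-01-15", "2016-10-20", "2015-03-02"]),
})

# Treat a record as malformed only if an email is present but has no '@'.
malformed_email = contacts["email"].notna() & ~contacts["email"].str.contains("@", na=False)

report = {
    "missing_email_pct": contacts["email"].isna().mean() * 100,
    "malformed_email_pct": malformed_email.mean() * 100,
    "missing_phone_pct": contacts["phone"].isna().mean() * 100,
    # Records not re-verified within the last year are treated as likely decayed.
    "stale_pct": (contacts["last_verified"] < pd.Timestamp("2015-11-01")).mean() * 100,
}
for check, value in report.items():
    print(f"{check}: {value:.0f}%")
```

Checks like these can run on a schedule, so decaying or corrupted records are surfaced before they feed into downstream analysis.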



Does Data Exacerbate Poverty? James Ovenden, Managing Editor

THERE HAS BEEN MUCH WRITTEN ABOUT HOW BIG DATA can help to eradicate poverty, with significant analyses done to gauge the scale of the problem and determine its causes. One such project, run by Harvard, analyzed 1.4 billion federal tax records on income and life expectancy. It found that the average life expectancy of the lowest-income classes in America is now equal to that of Sudan or Pakistan, while the richest men in the US now outlive the poorest by a fairly appalling 15 years.



While you can thank big data for highlighting this terrible state of affairs, you can, according to some, also hold it partially responsible for creating it. In a new book titled ‘Weapons of Math Destruction’, data scientist and former Wall Street banker Cathy O’Neil has detailed the many ways mathematical models and big data are being used as ideological tools by the powerful in ways that exacerbate oppression and inequality, used to justify their decisions by ’deliberately [wielding] formulas to impress rather than clarify.’ O’Neil argues that big data isn’t always better data and is often biased against women and the poor. Those living in poor neighborhoods are, for example, often targeted with ads for predatory payday lenders because the data suggests that they are the most likely to take them up, perpetuating the cycle of poverty. She also cites examples of credit scores being used by HR teams in recruitment, ignoring potentially talented candidates on the misguided assumption that a poor score correlates with weaker job performance.

O’Neil is not the first to notice the relationship between data and poverty. Michele Gilman, a law professor at the University of Baltimore and a former civil-rights attorney at the Department of Justice, has also noted that standards of privacy around data collection are far lower for the poor. Not only is this a gross indignity, the large volume of data collected often obstructs attempts to escape poverty. Gilman goes as far as to say that ‘data collection is where the poor are most stigmatized and humiliated.’ When we talk about protecting individuals’ right to privacy, poor people on welfare are rarely, if ever, considered. In many states, to qualify for food stamps, applicants even have to undergo fingerprinting and drug testing, and they are constantly checked up on to ensure they are as poor as they say they are. The data that’s gathered also often ends up feeding back into police systems, further perpetuating the cycle of surveillance and limiting their opportunities. This data is also handled with less care. Gilman notes that welfare programs collect massive amounts of data that is often stored in potentially unsecure databases for unknown amounts of time, with unspecified permissions controls or criteria for caseworker access, leaving them extremely vulnerable to rogue actors.

This higher level of surveillance of the poor helps to further engender an atmosphere of distrust between the lower classes and the authorities, as well as reinforcing stereotypes - further serving to marginalize poorer communities. The negative impacts are, perhaps, most profound in the way law enforcement uses data collection to target the poor. Data often reinforces existing prejudices among police officers that the poor are more likely to commit crimes, and helps to justify them. O’Neil’s prime example in her thesis is recidivism models, which are used across the country by judges in sentencing convicts. She notes that, ‘People are being labeled high risk by the models because they live in poor neighborhoods and therefore, they’re being sentenced longer. That exacerbates that cycle. People are like, “Damn, there are some racist practices going on.” What they don’t understand is that that’s never going to change because policemen do not want to examine their own practices. What they want, in fact, is to get the scientific objectivity to cover up any kind of acts for condemning their practices.’

In a National Review article, David French argued that math is not racist and that O’Neil is just a ‘social justice warrior’ with funny ideas about what is ‘fair’. However, her ideas are reinforced by mathematics PhD Jeremy, who identified that machine learning algorithms applied to crime data can have a racial impact. He argues that data mining looks to find patterns in data, so if ‘race is disproportionately (but not explicitly) represented in the data fed to a data-mining algorithm, the algorithm can infer race and use race indirectly to make an ultimate decision.’ The defining factor of crime is poverty, and this is an issue that still disproportionately impacts black people. Therefore, not only is the data helping to keep people in a cycle of crime and thereby perpetuating poverty, it is doing so along racial lines - particularly incendiary at a time when tensions between minorities and authorities are at an all-time high.
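That point about indirect inference is easy to demonstrate on synthetic data: even when the protected attribute is withheld from a model, a correlated proxy such as neighborhood can reintroduce it. The sketch below is purely illustrative and uses made-up data, not any real crime or sentencing dataset.

```python
# Synthetic illustration of proxy leakage: a model never sees the protected
# attribute, yet its predictions still correlate with it through a proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, size=n)                               # protected attribute (withheld from the model)
neighborhood = (rng.random(n) < 0.2 + 0.6 * group).astype(int)   # proxy correlated with group
income = rng.normal(50 - 15 * neighborhood, 10, size=n)          # poverty tracks neighborhood

# Historical labels already skewed against poorer neighborhoods.
label = (rng.random(n) < 0.1 + 0.25 * neighborhood).astype(int)

X = np.column_stack([neighborhood, income])  # note: 'group' is excluded from the features
model = LogisticRegression().fit(X, label)
risk = model.predict_proba(X)[:, 1]

print("mean predicted risk, group 0:", round(risk[group == 0].mean(), 3))
print("mean predicted risk, group 1:", round(risk[group == 1].mean(), 3))
# A gap appears even though the model was never shown the protected attribute.
```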

O’Neil acknowledges that data is not inherently bad, but that individuals and society misuse it to draw so-called ‘natural conclusions.’ She argues that, ’If we hand over our decision-making processes to computers that use historical data, it will just repeat history. And that simply is not okay.’ However, it is not just a case of misuse; it is also the case that we collect too much. The way we both collect data about the poor and use it must be entirely re-examined. The digital age has brought with it many benefits, and has been used to streamline many aspects of the welfare system. Racism and class politics have long been built into the assumptions and prejudices of those in power, and it was hoped that big data would eradicate these. However, it seems it has simply helped build on historical biases. Data should be used as a tool to liberate the disenfranchised. To do this, ethical considerations need to be built into processes around data analysis, and we need to put at least as much onus on the privacy rights of the poor as we put on those of the rich - otherwise the machines that learn from the past are doomed to repeat it.


Machine Learning & Privacy: A Problem? Alex Lane, Research Analyst

SINCE THE DAWN OF BIG DATA, PRIVACY CONCERNS have overshadowed every advancement and every new algorithm. This is the same for machine learning, which learns from big data to essentially think for itself. This presents an entirely new threat to privacy, opening up volumes of data for analysis on a whole new scale. Many standard applications of machine learning and statistics will, by default, compromise the privacy of individuals represented in the data sets. They are also vulnerable to hackers, who could edit the training data, compromising both the data and the final goal of the algorithm.


A recent project that demonstrates how machine learning could be used directly in the invasion of privacy was carried out by researchers at Cornell Tech in New York. Ph.D. candidate Richard McPherson, Professor Vitaly Shmatikov, and Reza Shokri applied basic machine learning algorithms - not even written specifically for the purpose - to identify people in blurred and pixelated images. In tests where humans had almost no chance of identifying the person (a 0.19% success rate), they say the algorithm achieved 71% accuracy. This went up to 83% when the computer was given five attempts.


Blurred and pixelated images have long been used to disguise people and objects. License plate numbers are routinely blurred on television, as are the faces of underage criminals, victims of particularly horrific crimes and tragedies, and those wishing to remain anonymous when interviewed. YouTube even offers its own facial blurring tool, developed to mask protestors and prevent potential retribution. What’s possibly the most shocking part of the research is the ease with which the researchers were able to make it work. The team used mainstream machine learning methods in which the computer is trained with a set of example data rather than being explicitly programmed. McPherson noted that, ‘One was almost a tutorial, the first one you download and play with when you’re learning neural nets.’ They also noted that the training data could be as simple as images on Facebook or a staff directory on a website. For numbers and letters (even handwritten), the training data is publicly available online, opening up the further risk of fraud.
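To give a sense of how little machinery such an attack needs, here is a rough sketch in the same spirit - not the Cornell Tech team's actual code - that aggressively pixelates a standard public face dataset (downloaded on first run) and still learns to identify the individuals from the blocky images.

```python
# Sketch: identifying people from heavily pixelated faces with off-the-shelf tools.
# Illustrative only; the dataset, block size, and classifier are assumptions.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

faces = fetch_olivetti_faces()                 # 400 images, 64x64 pixels, 40 people
images, labels = faces.images, faces.target

# "Pixelate" each face by averaging 8x8 blocks, leaving only an 8x8 mosaic per image.
pixelated = images.reshape(-1, 8, 8, 8, 8).mean(axis=(2, 4)).reshape(len(images), -1)

X_train, X_test, y_train, y_test = train_test_split(
    pixelated, labels, test_size=0.25, stratify=labels, random_state=0)

clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("identification accuracy on pixelated faces:", round(clf.score(X_test, y_test), 2))
```

The attacker only needs labeled example images of the people of interest; the model then matches new obscured images against what it has already seen, which is exactly the dynamic the researchers warn about.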


The stated purpose of the research was to warn privacy and security enthusiasts of the potential for machine learning and AI to be used as tools for identification and data collection. However, the threat may not be as bad as they claim. Machine learning cannot reverse the pixelation and recreate images, so anyone worried about blurred pictures of them can rest vaguely easy, at least for now. It is also only successful when identifying things it has been specifically trained to look for. That said, hackers could quite easily train the system using photos taken from social media. And, in the case of protesters - the very people YouTube’s blurring tool purports to protect - whose faces will likely already be on file, it is cold comfort. As machine learning becomes increasingly powerful, algorithms could conceivably make high confidence predictions without having direct access to your private information. This was already seen to an extent when retailer Target’s predictive algorithm suggested a girl was pregnant and sent her promotional material accordingly, accidentally revealing the secret to her father; in future, it may not even be necessary to have access to any of her personal details to figure it out. In this world, there is no such thing as private information. Given the emphasis that so many put on privacy, the reaction to this is likely to be highly negative.


Embedded Analytics Is The Future Of BI Olivia Timson, Marketing Analytics Expert

DATA IS ONE OF THE DRIVING FORCES FOR MOST COMPANIES in today’s digital business environment. They are, consequently, going to every effort to collect and analyze it for insights that can be used to support their decision-making processes. This data is often not especially hard to collect. However, when it comes time to analyze the data, the nature of many BI tools on the market means that users are often forced to leave their preferred business applications if they want to do so, siloing analytics within dedicated platforms.


The consequence of this is that end users are hindered in their efforts to understand, explore, and visualize customer data at the speed necessary to exploit it to full effect. One solution rapidly being adopted by companies across the globe is embedded analytics. Embedded analytics refers to consumer-facing BI and analytics tools that have been integrated into software applications, operating as a component of the native application itself rather than a separate platform. Conducting analysis in this manner allows end users to work with higher quality data, as standards of governance are higher; it allows them to discover insights more quickly, as time is not wasted requesting reports from external agents; and it allows them to distribute findings throughout the organization to other employees.

One of the primary beneficiaries of embedded analytics is actually the consumer. Analytics can be embedded into customer portals so they can ask questions of the data themselves, giving them easier access to things like invoices, delivery tracking, and billing. This has the added bonus of lowering the burden on customer services, who no longer have to field requests for such information, freeing up time to address more complex customer issues that genuinely require a human touch. The popularity of embedded analytics has grown exponentially over the past several years. Business users are now adopting embedded analytics at twice the rate of traditional Business Intelligence (BI) tools, according to the results of a new study from self-service analytics firm Logi Analytics. The fourth annual State of Embedded Analytics Report also found that the market could potentially grow to $46.19 billion by 2021 - a CAGR of 13.6% - while 43% of application users already leverage embedded analytics regularly and 87% of application providers claimed embedded analytics is important to their users, up from 82% in 2015.

In today’s modern business world, organizations need to be as reactive as possible. Customer service is an excellent example of where embedded analytics can prove useful, but the applications are far more widespread and can benefit every aspect of an organization. The primary value of embedded analytics rests in the ability to embed reports and data into the applications that business users use the most, whether that be the company portal or another site, essentially streamlining the whole process. Companies are already adopting embedded analytics in their droves, and those that haven’t already will likely need to in the near future.
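In practical terms, embedding analytics usually means the host application exposes its own analytics endpoints and renders the results in place, rather than sending users to a separate BI tool. The sketch below is a generic, hypothetical example using Flask; the endpoint, schema, and metrics are assumptions rather than any specific vendor's API.

```python
# Hypothetical sketch: an application serving its own embedded analytics,
# so users see insights inside the app instead of a separate BI platform.
# Endpoint names, database schema, and metrics are illustrative assumptions.
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)
DB_PATH = "app.db"  # assumed application database containing an 'orders' table

@app.route("/api/embedded/summary/<int:customer_id>")
def order_summary(customer_id):
    """Return aggregates that a customer-portal widget could render in place."""
    conn = sqlite3.connect(DB_PATH)
    try:
        row = conn.execute(
            "SELECT COUNT(*), COALESCE(SUM(amount), 0), MAX(order_date) "
            "FROM orders WHERE customer_id = ?",
            (customer_id,),
        ).fetchone()
    finally:
        conn.close()
    return jsonify({
        "customer_id": customer_id,
        "order_count": row[0],
        "lifetime_value": row[1],
        "last_order_date": row[2],
    })

if __name__ == "__main__":
    app.run(debug=True)
```

A front-end widget inside the portal can call this endpoint and draw the chart or table directly, which is the "component of the native application" idea described above.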



Premium online courses delivered by industry experts

academy.theinnovationenterprise.com




Will The Chief Analytics Officer Replace The Chief Data Officer? David Barton, Head of Analytics


THE MAKE-UP OF THE C-SUITE HAS changed dramatically over the last few years, with the growing importance of data and technology creating new roles in the boardroom to give them representation befitting their preeminence in the decision-making process.

Those companies performing best in their data efforts often have a Chief Data Officer (CDO) in place, with a Forrester Research survey of 3,005 global data and analytics decision-makers finding that 45% of respondents had appointed a CDO. Gartner estimates that this number will increase to 90% of large companies by 2019. However, confusing the issue is the rise of the Chief Analytics Officer (CAO), who at many organizations is now joining or replacing the CDO.

In companies that have both titles, the CDO is there to concentrate on the infrastructure - ensuring that data is collected, stored, and managed correctly - while the CAO focuses on making strategic use of the data to create real business value - ensuring that it is analyzed, understood, and actionable. However, many companies now have a mature big data infrastructure in place. This is thanks to the good work done by the CDO, but companies have moved on, and leveraging this infrastructure is now the key. The CAO is subsequently set to become one of the most strategic roles in the organization. In this new paradigm, the CDO is likely to see their importance diminish, and they will have to adapt accordingly.

In a recent interview with us, Dr. Zhongcai Zhang, Chief Analytics Officer at New York Community Bancorp, Inc, noted: ’For data democratization to be done correctly, a delineation of two-part data ownership is important: while IT owns the technical side of data, the CAO should be responsible for the content ownership of data. The former is focused on data flow, storage (including some cleansing) and retrieval processes, and the latter often has to know the intricacies and pitfalls of the data across the enterprise repositories. This is how an analytics project often consumes 70% of the time in data preparation.’

Titles are less important to an organization’s financial future than ensuring that it has the skills it needs, but the skills required to fulfil these different functions are not the same. Those needed by the CDO are more technical, while the CAO role has a strong emphasis on leveraging insights by understanding the business. They must have an evangelism about analytics and what can be done with them in order to sell them to decision makers. Many managers fail to understand the subtleties in these differences. Adam Kornick, CAO at Aviva, argues that if a CAO is not particularly good at evangelizing and selling the capability, you might want to reconsider the appointment. CDOs are not, traditionally, salespeople in the way that a CAO is required to be, and trying to crowbar them into the role so as to simplify things and have one person on the board responsible for data initiatives is unlikely to be successful. There is a clear risk of the C-suite growing too bloated, with suits piling into the boardroom and all of them shouting over each other to be heard. This is particularly dangerous when it comes to data, where there should only be one source of truth - a truth that could become confused with too many voices in play at the top. However, while many of the new roles emerging may seem interchangeable, simply shifting responsibilities wherever it feels like they could be shifted will never work. The attitude that the CTO is technical so they must be able to handle data, or that the CDO understands data so they must be able to understand analytics, is wrong. They are distinct roles, and companies that can afford to have both a CDO and a CAO should, while those that can’t should focus on getting a CAO in place once they have the infrastructure set up. Ultimately, data is now of such importance, and is used in so many facets of everyday operations, that it is reckless to leave it all to one person. In the future, it is not going to be a question of either/or; it will be an imperative to make room for both.



For more great content, go to Innovation Enterprise On Demand

www.ieondemand.com

Over 5000 hours of interactive on-demand video content



Stay on the cutting edge. Innovative, convenient content updated regularly with the latest ideas from the sharpest minds in your industry.

