
Building a Data Product Capability

Analytics has long been touted as the answer for organizations floundering in a sea of data, yet internal data solutions have often fallen short. External offerings, which are built and maintained as products, have a far better track record. That is why thinking of data as an asset, and of data analytics and data delivery as products, can help companies ramp up the value they derive from their internal solutions.

We see significant payback when companies build internal analytics solutions with a product mindset. It’s an approach that will help ensure your analytics answer the most important questions and drive desired outcomes. But doing it once in one corner of the organization only goes so far. How can companies adopt a product mindset consistently, so that it is embedded in the fabric of the enterprise? The key is to develop a data product capability.


Organizations with a data product capability are able to strengthen the connection between the development of internal analytics products and the delivery of ongoing, long-term value. Because they have taken the time to develop a modern data strategy, they have a scalable framework for collecting, structuring, and processing information. And because they treat information delivery as a product, they are continually on the hunt for ways to improve it and harness it for better decision-making.

In our experience, there are several critical steps to developing a data product capability. While these may sound simple, they require companies to make some significant changes to how they view and develop their internal data analytics solutions.

UNDERSTAND AND SIMPLIFY THE DATA LANDSCAPE

One of our large retailer clients was managing its merchandising operations on a legacy data platform that provided no analytics capabilities whatsoever. The company was maintaining multiple data feeds, more than 600 standard reports, 1,200 tables, and some 800 views. As a result, there was considerable confusion. For example, it turned out that the same metrics were being reported differently in different reports, leading to head-scratching mismatches and conflicts over data integrity.

In addition, the company’s internal customer team was juggling countless ad hoc requests for different types of reports, dashboards, and other forms of support. The team spent all of its time reconciling differences between reports and fulfilling these new ad hoc demands rather than developing valuable insights to improve decision-making. The ongoing barrage of requests for data tools also meant continually reinventing the wheel.

It was clear that before attempting to build anything, we would need to get a handle on what data different groups were using, why they were using it, how it was being fed into the system, and a broad range of other questions.

To be sure, such a discovery process can take time, but it is central to a product approach to solution development. Understanding end-user needs, from manufacturing to sales and marketing, is essential to determining what the final solution will look like. Furthermore, asking the question “why are things so complicated?” is what puts you on the path to simplification.

The simplification process includes cleansing the data, defining key metrics, and streamlining the various data sources. Once the data landscape has been simplified, each metric has a single definition for all stakeholders and resides in a single place that multiple parties can access when developing solutions. Having a single source of truth for every metric eliminates the data mismatches that arise when developers build ad hoc extraction mechanisms.
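To make the idea concrete, here is a minimal sketch of what a single source of truth for metric definitions can look like. The metric names and the Python/pandas implementation are illustrative assumptions, not drawn from the client example; the point is simply that each metric is defined exactly once, and every report computes it through the same registry.

```python
# Illustrative sketch of a central metric registry: every metric is defined
# once, and every report or dashboard computes it by calling the registry
# rather than re-implementing the logic in its own extraction code.
# Metric names and column names here are hypothetical.

import pandas as pd

# One canonical definition per metric, keyed by name.
METRIC_REGISTRY = {
    # net_sales: gross sales minus returns, the single agreed-upon definition
    "net_sales": lambda df: (df["gross_sales"] - df["returns"]).sum(),
    # sell_through_rate: units sold as a share of units received
    "sell_through_rate": lambda df: df["units_sold"].sum() / df["units_received"].sum(),
}

def compute_metric(name: str, df: pd.DataFrame) -> float:
    """Every report goes through this one function, so the same metric
    can never be calculated two different ways in two different places."""
    return METRIC_REGISTRY[name](df)

if __name__ == "__main__":
    sample = pd.DataFrame({
        "gross_sales": [1200.0, 950.0],
        "returns": [100.0, 50.0],
        "units_sold": [80, 60],
        "units_received": [100, 100],
    })
    print(compute_metric("net_sales", sample))          # 2000.0
    print(compute_metric("sell_through_rate", sample))  # 0.7
```

The design choice being illustrated is organizational as much as technical: once a metric lives in one shared definition, conflicting numbers in different reports point to a data-quality issue rather than a disagreement over what the metric means.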
