When data science rhymes with common sense

Data science is present at every stage of building a business strategy. Indeed, data analysis is used by every functional leader (CEO, VP Sales, CFO, CMO, CDO, etc.) throughout the development of the company’s business strategy. If we take the market management framework as a framework for building this strategy, we realize that different methods of data analysis are applied at each stage of the process.

In fact, each step requires decisions that data analysis can support: secondary research and clustering methods to define and segment the market to attack, compensatory (trade-off) approaches to identify market drivers and write the value proposition, probabilistic models to measure the performance of a new product, and elasticity models to optimize the price, assortment, and promotional plan.

Let us take the retail industry as an example to illustrate how these data modeling methodologies support decision making at each of these stages.

A PROBABILISTIC APPROACH TO EVALUATING NEW PRODUCT PERFORMANCE

In order to evaluate the performance of a new product that has not yet been introduced to the market, we first show a panel of consumers the new product alongside competing products and record the ranking of the new product versus its competitors.

This ranking gives us an idea of the new product’s market share within its competitive universe. However, when we show the new product this way, we implicitly assume it has 100% awareness, which is rarely the reality when a new product is introduced.

Generally, a new product launch is accompanied by an advertising campaign whose pressure is measured in GRP (Gross Rating Points), i.e., the percentage of the target audience that has seen the ads, cumulated across exposures. For example, 300 GRP corresponds to reaching 100% of your target an average of 3 times.

Using the negative binomial distribution as the distribution of the number of advertising contacts, we obtain the percentage of the population that has had at least n advertising contacts. If we assume that a person knows the product after 5 or more contacts, then this percentage gives us the awareness of the product. We can then integrate this growth in awareness into the market share ranking and thus estimate the growth in market share of the new product.
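As a minimal sketch of this calculation in Python (assuming 300 GRP on the target, a shape parameter r that would in practice be fitted to media panel data, and the 5-contact awareness rule above):

```python
from scipy.stats import nbinom

# Illustrative sketch: distribution of advertising contacts per person.
# Assumption: 300 GRP on the target = an average of 3 contacts per person.
mean_contacts = 3.0   # 300 GRP / 100
r = 1.5               # shape parameter; in practice fitted to panel data

# scipy parameterizes the negative binomial with (r, p),
# where mean = r * (1 - p) / p, hence:
p = r / (r + mean_contacts)

def share_with_at_least(n: int) -> float:
    """Percentage of the target with at least n advertising contacts."""
    return 1.0 - nbinom.cdf(n - 1, r, p)

# If we assume 5 contacts and more are needed to know the product,
# this share is an estimate of the product's awareness.
awareness = share_with_at_least(5)
print(f"Estimated awareness: {awareness:.1%}")
```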

A PROBABILISTIC APPROACH TO AVOID STOCKOUTS

There is another probabilistic approach that addresses a very important use case in retail, for distributors and manufacturers alike: how to avoid product stockouts at the point of sale or in the warehouse. Stockouts can cause significant loss of revenue and lead to customer dissatisfaction. The problem is especially difficult to tackle with a data approach for products that structurally sell very little (such as a 30-year-old bottle of Chivas).

One solution is to calculate the average sales per product, per week, and per point of sale, and to model sales at each point of sale with a Poisson distribution, which fits slow-moving products very well. From this distribution we can calculate the probability that the product goes out of stock and define an action plan to reduce that probability.
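A minimal sketch of this calculation, with hypothetical figures for a slow-moving product:

```python
from scipy.stats import poisson

# Illustrative sketch: a slow-moving product sells on average
# 0.8 units per store per week (hypothetical figure).
weekly_demand = 0.8
shelf_stock = 2  # units available in the store at the start of the week

# With Poisson-distributed weekly demand, the store runs out
# when demand exceeds the available stock.
p_stockout = 1.0 - poisson.cdf(shelf_stock, weekly_demand)
print(f"Probability of a stockout this week: {p_stockout:.1%}")

# An action plan can then target the stock level needed to keep
# this probability under a chosen service threshold, e.g. 5%.
target = 0.05
stock = 0
while 1.0 - poisson.cdf(stock, weekly_demand) > target:
    stock += 1
print(f"Stock needed for a <{target:.0%} stockout risk: {stock} units")
```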

ELASTICITY MODELS TO ASSESS MARKETING MIX IMPACT

Elasticity models are relevant for assessing the impact of the marketing mix (price, promotion, assortment, etc.). To stay in the retail sector, sales figures for consumer products show great variability from one store to another and from one week to the next. This variability in the data, a fundamental ingredient for elasticity models, makes it possible to explain sales by a set of impact factors (price, store size, product listing).

In terms of price, elasticity tells us how much sales would increase if the price fell by, say, 10%. Sometimes this continuous price-sales relationship is not significant, because there are price threshold phenomena beyond which sales drop sharply. It is therefore key to consider both the elasticity and the threshold-effect approaches.
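As a hedged sketch of such a model on synthetic store-week data (the 10.0 threshold price and all coefficients are illustrative assumptions, not estimates from real data):

```python
import numpy as np
import statsmodels.api as sm

# Illustrative sketch with synthetic store-week observations.
rng = np.random.default_rng(0)
price = rng.uniform(8.0, 12.0, size=500)
threshold = 10.0  # hypothetical psychological price threshold
# Simulated model: constant elasticity of -1.5 plus a drop above the threshold.
log_sales = (6.0 - 1.5 * np.log(price) - 0.4 * (price > threshold)
             + rng.normal(0, 0.1, size=500))

# Log-log regression: the log-price coefficient is the elasticity,
# the dummy variable captures the threshold effect.
X = sm.add_constant(np.column_stack([np.log(price), price > threshold]))
fit = sm.OLS(log_sales, X).fit()
elasticity, threshold_effect = fit.params[1], fit.params[2]
print(f"Estimated elasticity: {elasticity:.2f}")
print(f"Estimated threshold effect on log sales: {threshold_effect:.2f}")

# Sales impact of a 10% price cut, under constant elasticity:
print(f"A 10% price cut changes sales by ~{(0.9 ** elasticity - 1):+.1%}")
```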

To return to the notion of data variability: this key ingredient for building elasticity models is much harder to obtain in e-commerce. Indeed, while physical commerce, as seen above, generates disparities between stores and between weeks (different listings, prices, and promotions by store or week), online commerce does not generate the same variability: on a merchant site, all products are available at all times.

More generally, more data is created every day (mostly unstructured), and it is very important to take advantage of the variability of all this data instead of aggregating it and thereby losing variance. I like the analogy with water: a lake is a body of still water, a nice place to swim where nothing happens, while mountain torrents flowing down from high altitude create electricity and generate value.

Data analysis has become essential to the development of business strategies. In recent years, it has also become part of our lives as citizens through everyday services and AI and IoT technologies. It is therefore increasingly important to understand what data can and cannot do, in order to sharpen the experience we can bring as humans to complement the data.
