### Joannes Vermorel

The stock associated with each SKU is an anticipation of the future. From a more technical viewpoint, the reorder point of the SKU can be seen as a quantile forecast. The quantile indicates the smallest amount of inventory that should be kept to avoid stock-outs with a probability equal to the service level.

While this viewpoint is very powerful, it does not actually say anything about **the risk of overstocking**, i.e. the risk of creating dead inventory, as only the stock-out side of the problem is directly statistically addressed. Yet, the overstocking risk matters if **goods are perishable** or if demand for the product can abruptly disappear – as happens in **consumer electronics** when the next-generation replacement enters the market.

Ex: Let’s consider the case of a western retailer selling, among others, snow chains. The lead time to import the chains is 3 months. The region where the retailer is located is not very cold, and only one winter out of five justifies the use of snow chains. For every cold winter, the local demand for snow chains is 1,000 kits. In this context, any quantile forecast with a service level above 80% suggests keeping more than 1,000 kits in stock in order to keep the stock-out probability under 20%. However, if the winter isn’t cold, then the retailer will be stuck with its entire unsold stock of snow chains, 1,000 kits or more, possibly for years. The reorder point calculated the usual way through quantiles focuses on upward situations with peaks of demand, but says nothing about downward situations where demand evaporates.
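The snow-chain arithmetic can be sketched with a plain discrete quantile function. The two-outcome demand distribution (1,000 kits one winter out of five, zero otherwise) comes from the example above; the function itself is a minimal illustration, not a production forecasting model.

```python
def quantile(outcomes, tau):
    """Smallest stock level s such that P(demand <= s) >= tau,
    for a discrete demand distribution given as (demand, probability) pairs."""
    cumulative = 0.0
    for demand, prob in sorted(outcomes):
        cumulative += prob
        if cumulative >= tau:
            return demand
    return max(d for d, _ in outcomes)

# 1 winter out of 5 is cold (demand = 1,000 kits), otherwise demand = 0.
snow_chains = [(0, 0.8), (1000, 0.2)]

quantile(snow_chains, 0.80)  # 0: at exactly 80%, no stock is needed
quantile(snow_chains, 0.85)  # 1000: any service level above 80% jumps to the full 1,000 kits
```

This makes the cliff explicit: the quantile stays at zero up to the 80% service level, then jumps straight to 1,000 kits, with nothing in the formula reflecting the four-in-five chance of being stuck with all of them.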

Yet, the risk of overstock can be managed through quantiles as well; however, it requires a **second quantile calculation** to be performed, leveraging a distinct set of values for tau (τ, which is no longer the service level) and lambda (λ, which is no longer the lead time).

In the usual situation, we have:

R = Q(τ, λ)

With

- R is the reorder point (a number of units)
- Q is the quantile forecasting model
- τ is the service level (a percentage)
- λ is the lead time (a number of days)
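The formula above can be sketched with a simple Monte Carlo estimator: bootstrap λ days of demand from the daily sales history and take the τ-quantile of the totals. The demand history below is hypothetical, and bootstrapping is only one of many ways to build the quantile model Q.

```python
import numpy as np

def quantile_reorder_point(daily_demand, tau, lam, n_scenarios=10_000, seed=42):
    """Estimate R = Q(tau, lam): bootstrap lam days of demand from the
    daily history and take the tau-quantile of the simulated totals."""
    rng = np.random.default_rng(seed)
    scenarios = rng.choice(daily_demand, size=(n_scenarios, lam)).sum(axis=1)
    return float(np.quantile(scenarios, tau))

history = [0, 1, 2, 1, 0, 3, 2, 1, 0, 2]  # hypothetical units sold per day
R = quantile_reorder_point(history, tau=0.90, lam=7)  # 90% service level, 7-day lead time
```

Raising τ or lengthening λ can only push R upward, which is precisely why this one-sided calculation says nothing about the overstock side.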

As illustrated by the example above, such a reorder point calculation can lead to large values that do not take into account the financial risk associated with a drop in demand, where the company ends up stuck with dead inventory.

In order to handle the risk of overstocking, the formula can be revised as:

R = MIN(Q(τ, λ), Q(τx, λx))

With

- τx is the maximal acceptable risk of overstocking (a percentage)
- λx is the applicable timespan to get rid of the inventory (a number of days)

In this case, the usual reorder point gets capped by an alternative quantile calculation.

The parameter τx is used to reflect the **acceptable risk of overstock**; hence, instead of looking at values around 90% as is done for usual service levels, it is typically a low percentage, say 10% or below, that should be considered.

The parameter λx is used to represent the **duration that would put the inventory value at risk** because the goods are perishable or obsolescent.

Ex: Let’s consider the case of a grocery store selling tomatoes with a lead time of 2 days. The retailer estimates that after 5 days on the shelf, the tomatoes will have lost 20% of their market value. Thus, the retailer decides that the stock of tomatoes should remain sufficiently low that the probability of not selling the entire stock of tomatoes within 5 days remains below 10%. The retailer therefore adopts the second formula for the reorder point R with τ=90% and λ=2 days in order to maintain high availability, combined with τx=10% and λx=5 days in order to keep the risk of dead inventory under control.
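The tomato parameters can be plugged into the capped formula directly. The lead time (2 days), sell-through window (5 days) and the two τ values come from the example above; the Poisson daily demand with an assumed mean of 30 crates is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
daily_mean = 30.0  # assumed average daily tomato demand, in crates

def Q(tau, lam, n=100_000):
    # tau-quantile of simulated total demand over lam days (Poisson model).
    demand = rng.poisson(daily_mean * lam, size=n)
    return float(np.quantile(demand, tau))

R_service = Q(0.90, 2)  # availability side: tau = 90%, lead time 2 days
R_risk    = Q(0.10, 5)  # overstock side: tau_x = 10%, sell-through 5 days
R = min(R_service, R_risk)
```

With this assumed demand level, the overstock cap does not bind: tomatoes move fast enough that the 5-day sell-through quantile sits well above the 2-day service quantile. A slower-moving or more erratic product would see R_risk drop below R_service, at which point the cap takes over.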

At present, Salescast does not natively support a double quantile calculation; however, the same effect can be achieved by performing two runs with distinct lead time and service level parameters.