Many metrics are available to assess the performance of a forecast (a short code sketch of each follows the list below):

  • Mean Absolute Error (MAE)
  • Mean Squared Error (MSE)
  • Mean Absolute Percentage Error (MAPE)
  • Pinball Loss Function
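
As a point of reference, here is a minimal Python sketch of the textbook definitions of those four metrics (NumPy is assumed; the function names are ours, not taken from any particular library):

    import numpy as np

    def mae(actual, forecast):
        """Mean Absolute Error: average magnitude of the error."""
        return np.mean(np.abs(actual - forecast))

    def mse(actual, forecast):
        """Mean Squared Error: penalizes large errors quadratically."""
        return np.mean((actual - forecast) ** 2)

    def mape(actual, forecast):
        """Mean Absolute Percentage Error: undefined when actual == 0."""
        return np.mean(np.abs(actual - forecast) / np.abs(actual))

    def pinball(actual, forecast, tau=0.5):
        """Pinball loss at quantile tau; tau=0.5 yields half the MAE."""
        diff = actual - forecast
        return np.mean(np.maximum(tau * diff, (tau - 1) * diff))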

In this post, we will try to address the question of the ‘best’ forecasting metric. It turns out to be simpler than most practitioners would expect.

Among those, the MAE and the MAPE are probably the metrics most widely used by practitioners, both in retail and in manufacturing. Let’s start by having a look at the graphs of those two metrics.

Plot of the Mean Absolute Error. X = actual value (forecast is 1). Y = error.

The behavior of the MAE is reasonably straightforward. The one tricky aspect - from a mathematical viewpoint - is that the function is not differentiable everywhere (not at x=1 in the example above).
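
To make the point concrete, with the forecast fixed at 1 (as in the plot) the absolute error boils down to |x - 1|, whose two one-sided slopes disagree at x = 1:

    \[
    \mathrm{AE}(x) = |x - 1| =
    \begin{cases}
    1 - x, & x < 1 \\
    x - 1, & x \ge 1
    \end{cases}
    \qquad
    \frac{d}{dx}\mathrm{AE}(x) \to -1 \text{ as } x \to 1^-,
    \qquad
    \frac{d}{dx}\mathrm{AE}(x) \to +1 \text{ as } x \to 1^+
    \]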

Plot of the Mean Absolute Percentage Error. X = actual value (forecast is 1). Y = error.

The MAPE, however, is a lot more convoluted. Indeed, the behavior for under-forecasts and over-forecasts is very different: the under-forecast error is capped at 1, whereas the over-forecast error tends to infinity as the actual value approaches zero.
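
With the forecast again fixed at 1 and x denoting the actual value, this asymmetry can be read directly from the formula:

    \[
    \mathrm{APE}(x) = \frac{|x - 1|}{x},
    \qquad
    \lim_{x \to \infty} \mathrm{APE}(x) = 1,
    \qquad
    \lim_{x \to 0^+} \mathrm{APE}(x) = +\infty
    \]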

This latter aspect in particular tends to wreak havoc when combined with out-of-stock (OOS) events. Indeed, OOS events generate very low actual sales values, and hence potentially very high MAPE values.
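
As a toy illustration (all the numbers below are made up), a single near-zero actual caused by a stock-out is enough to dominate the whole MAPE:

    import numpy as np

    # Hypothetical week of sales: the forecast is flat at 100 units, and actual
    # sales track it closely, except for one out-of-stock day where only 2 units
    # were sold before the shelf went empty.
    forecast = np.array([100, 100, 100, 100, 100, 100, 100], dtype=float)
    actual   = np.array([ 95, 102,  98, 105,   2,  99, 101], dtype=float)

    mae  = np.mean(np.abs(actual - forecast))
    mape = np.mean(np.abs(actual - forecast) / actual)

    print(f"MAE:  {mae:.1f} units")  # about 16 units
    print(f"MAPE: {mape:.0%}")       # about 700%, driven almost entirely by the OOS day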

In practice, we suggest thinking twice before opting for the MAPE, as interpreting the results is likely to be a challenge in itself.

The best metric should be expressed in Dollars or Euros

From a mathematical perspective, some metrics (such as the L2 norm) are considered more practical for statistical analysis (because they are differentiable, for example); however, we believe that this viewpoint is moot when facing real business situations.

The one and only unit to be used to assess the performance of a forecast should be money. Forecasts are always wrong, and the only reasonable way to quantify the error consists of assessing how much money the gap between the forecast and reality cost the company.

Modeling business costs

In practice, defining such an ad hoc cost function requires a careful examination of the business, triggering questions such as the following (a rough sketch of such a cost function is given right after the list):

  • How much does inventory cost?
  • How much inventory obsolescence should be expected?
  • How much does stock-out cost?
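
As an illustration only, here is a minimal sketch of such a cost function; every rate below (holding cost, obsolescence, stock-out margin) is a hypothetical placeholder to be replaced by figures drawn from the actual business:

    def forecast_cost(actual_demand, forecast, unit_price,
                      holding_cost_rate=0.20,     # hypothetical: holding cost as a fraction of unit price
                      obsolescence_rate=0.05,     # hypothetical: fraction of excess units written off
                      stockout_margin_rate=0.30): # hypothetical: margin lost per unit of unmet demand
        """Rough dollar cost of the forecast error for one item over one period.

        Over-forecasting creates excess stock (holding plus obsolescence costs);
        under-forecasting creates stock-outs (lost margin). All rates are
        placeholders, not recommendations.
        """
        error = forecast - actual_demand
        if error > 0:   # over-forecast: excess units sit in inventory
            return error * unit_price * (holding_cost_rate + obsolescence_rate)
        else:           # under-forecast: unmet demand turns into lost margin
            return -error * unit_price * stockout_margin_rate

    # Example: 120 units forecast, 100 units of actual demand, $10 per unit
    # => 20 excess units costed at 25% of their price => $50
    print(forecast_cost(actual_demand=100, forecast=120, unit_price=10.0))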

As far as company politics are concerned, modeling the forecast error as, say, a percentage, hence ignoring all those troublesome questions, has the one advantage of being neutral - leaving the rest of the company with the burden of actually translating the forecast into a course of action.

The process of establishing a sensible cost function is not rocket science; however, it forces the entity in charge of the forecasts to write down, explicitly, all those costs. By doing so, choices are made that may not benefit every division of the company, but that clearly benefit the company itself.