In statistics, the accuracy of a forecast is the degree of closeness of the forecasted quantity to that quantity’s actual (true) value. The actual value usually cannot be measured at the time the forecast is made because the statement concerns the future. For most businesses, more accurate forecasts improve their ability to serve demand while lowering overall operational costs.
Use of the accuracy estimates
The accuracy, once computed, provides a quantitative estimate of the expected quality of the forecasts. For inventory optimization, estimating the forecast accuracy can serve several purposes:
 to choose, among several forecasting models used to estimate the lead demand, which model should be favored.
 to compute the safety stock, typically assuming that the forecast errors follow a normal distribution.
 to prioritize the items that need the most dedicated attention because raw statistical forecasts are not reliable enough.
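The safety stock computation mentioned above can be sketched in a few lines. This is a minimal illustration, assuming normally distributed forecast errors over the lead time; the function name and the example figures are hypothetical.

```python
from statistics import NormalDist

def safety_stock(error_std: float, service_level: float) -> float:
    """Safety stock under the normal-error assumption:
    z-score of the target service level times the standard
    deviation of the lead-demand forecast error."""
    z = NormalDist().inv_cdf(service_level)  # e.g. ~1.645 for 95%
    return z * error_std

# e.g. a 95% service level with a forecast error std-dev of 40 units
stock = safety_stock(40.0, 0.95)  # roughly 66 units
```

Note that when forecast errors are fat-tailed, as is common for slow movers, this normal assumption tends to underestimate the required stock.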
In other contexts, such as strategic planning, the accuracy estimates are used to support what-if analysis, considering distinct scenarios and their respective likelihoods.
Impact of aggregation on the accuracy
It is a frequent misconception to interpret the quality of the forecasting model as the primary factor driving the accuracy of the forecasts: it is not.
The most important factor driving the accuracy is the intrinsic volatility of the phenomenon being forecasted. In practice, in commerce or manufacturing, this volatility is highly correlated with the aggregation level:
 larger areas, such as national forecasts vs. local forecasts, yield greater accuracy.
 likewise for longer periods, such as monthly forecasts vs. daily forecasts.
Then, once a level of aggregation is chosen, the quality of the forecasting model does indeed play the primary role in the accuracy that can be achieved. Finally, the accuracy decreases when looking further ahead into the future.
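The effect of temporal aggregation can be checked with a small simulation. The sketch below, using synthetic demand data, compares the relative volatility (coefficient of variation) of a daily series against the same series aggregated monthly; the numbers are illustrative, not drawn from any real dataset.

```python
import random
import statistics

random.seed(42)
# 360 days of noisy synthetic daily demand around a mean of 100 units
daily = [random.gauss(100, 30) for _ in range(360)]
# aggregate into 12 "months" of 30 days each
monthly = [sum(daily[i:i + 30]) for i in range(0, 360, 30)]

def cv(xs):
    """Coefficient of variation: std-dev relative to the mean,
    a scale-free measure of volatility."""
    return statistics.stdev(xs) / statistics.mean(xs)

# the monthly series is far less volatile in relative terms,
# hence easier to forecast accurately
print(cv(daily), cv(monthly))
```

With independent daily fluctuations, aggregating 30 days shrinks the relative volatility by roughly a factor of sqrt(30), which is why monthly forecasts look so much more accurate than daily ones.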
Empirical accuracy vs real accuracy
The term accuracy is most frequently used to refer to the quality of a physical measurement of some kind. Unfortunately, this view is somewhat misleading when it comes to statistical forecasting. Indeed, unlike a physical setup where the measurement can be compared to alternative methods, the real accuracy of a forecast should strictly be measured against data you don’t have.
Indeed, once the data is available, it is always possible to produce perfectly accurate forecasts, as it only requires mimicking the data. This single question kept statisticians puzzled for more than a century, and a deeply satisfying viewpoint was only found at the end of the 20th century with the advent of Vapnik–Chervonenkis theory^{1}.
The accuracy of the forecasts can only be practically measured against available data; however, once the data is available, those forecasts aren’t true forecasts anymore, being statements about the past rather than statements about the future. Thus, those measurements are referred to as the empirical accuracy, as opposed to the real accuracy.
Overfitting problems can lead to large discrepancies between the empirical accuracy and the real accuracy. In practice, a careful use of backtesting can mitigate most overfitting problems when forecasting time series.
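The backtesting idea can be sketched as a walk-forward evaluation: at every step the model only sees strictly past data, so the measured error is not contaminated by the future. The naive mean-of-history model below is a hypothetical stand-in for a real forecasting model.

```python
def backtest(series, forecast_fn, start=0.5):
    """Walk-forward backtest: fit on the past only, then score the
    one-step-ahead forecast, so no future data leaks into the model."""
    n = len(series)
    errors = []
    for t in range(int(n * start), n):
        history = series[:t]                      # strictly past data
        forecast = forecast_fn(history)
        errors.append(abs(series[t] - forecast))  # absolute error
    return sum(errors) / len(errors)              # empirical MAE

# hypothetical naive model: forecast the mean of the history
mae = backtest([12, 15, 11, 14, 13, 16, 12, 15],
               lambda h: sum(h) / len(h))
```

Because each forecast is produced without ever touching the data it is scored against, the resulting error estimate is a much better proxy for the real accuracy than an in-sample fit.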
Popular accuracy metrics
There are many metrics to measure the accuracy of forecasts. The most widely used are:
 MAE (mean absolute error)
 MAPE (mean absolute percentage error)
 MSE (mean square error)
 sMAPE (symmetric mean absolute percentage error)
 Pinball loss (a generalization of the MAE for quantile forecasts)
 CRPS (a generalization of the MAE for probabilistic forecasts)
In practice, a metric should be favored over the others based on how well it reflects the costs incurred by the company due to the inaccuracies of the forecasts.
Lokad’s gotcha
It’s better to be approximately correct than exactly wrong. In our experience dealing with commerce or manufacturing companies, we routinely observe that too little attention is paid to the choice of the accuracy metric.
Indeed, the ideal metric should not return values expressed as percentages, but should return dollars or euros, precisely reflecting the cost of the inefficiencies caused by inaccurate forecasts. In particular, while most popular metrics are symmetric (the pinball loss being a notable exception), the risks of over-forecasting vs. under-forecasting are not symmetric in practice. We suggest adopting a viewpoint where the metric is closer to an economic cost function – carefully modeled to fit the business constraints – than to a raw statistical indicator.
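Such an economic cost function can be sketched as follows. The cost figures and the 4-to-1 stockout-to-holding ratio are purely hypothetical assumptions for illustration; in practice these parameters must be modeled from the actual business constraints.

```python
def inventory_cost(actual, forecast, holding_cost=1.0, stockout_cost=4.0):
    """Asymmetric cost in dollars: over-forecasting incurs holding
    costs, under-forecasting incurs (typically larger) stockout costs.
    The per-unit costs here are hypothetical placeholders."""
    total = 0.0
    for a, f in zip(actual, forecast):
        if f >= a:
            total += holding_cost * (f - a)   # excess stock held
        else:
            total += stockout_cost * (a - f)  # missed sales
    return total

# over-forecasting by 2 units costs $2; under-forecasting by 2 costs $8
cost = inventory_cost([10, 10], [12, 8])  # $10 in total
```

Unlike a percentage-based metric, this score directly ranks forecasting models by the money their errors would cost the business, and it naturally penalizes under-forecasting more than over-forecasting.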
Also, it is important not to perform any planning that implicitly assumes the forecasts are exact. Uncertainty is unavoidable in business and should be accounted for.
Further reading
 Video. Accuracy in sales forecasting, Matthias Steinberg, September 2011
 The best forecast error metric, Joannes Vermorel, November 2012
 Accuracy financial impact on inventory, Joannes Vermorel, February 2012
Notes
1. Wikipedia. Vapnik–Chervonenkis theory ↩︎