Lately a couple of customers have been asking whether Lokad keeps track of its past forecast errors in order to improve its future forecasts.

The answer is simple: yes, we do, but there is more to it than that. In particular, we do not wait for

  • the forecasts to be requested,
  • the course of events to happen,
  • the historical data to be updated,

to finally compare our past forecasts with what really happened. Indeed, such an approach would be way too slow and inefficient.

Instead, we use cross-validation methods adapted to time-series forecasting. The process is simpler than it sounds; let's start with an example.

Let's assume that we have a single time-series worth 1 year of weekly sales data (i.e. 52 points). We want to produce 4-week-ahead sales forecasts - but we also want to estimate the forecasting error. The process goes as follows (a code sketch follows the list):

  • take the first N points (with N = 10 initially).
  • fit a forecasting model on those N points.
  • produce a 4-week-ahead forecast with this model.
  • compare the forecast against the actual values of the series.
  • increment N by 1 point (i.e. 1 week).
  • repeat until fewer than 4 weeks of actual data remain.
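The short Python sketch below illustrates this loop. The moving-average model, the mean absolute error metric, and the dummy sales data are all illustrative assumptions for the sake of the example - they are not Lokad's actual models or metrics.

```python
def moving_average_forecast(history, horizon):
    """Toy model: forecast the mean of the history for every future week."""
    level = sum(history) / len(history)
    return [level] * horizon

def cross_validation_error(model, series, horizon=4, start=10):
    """Mean absolute error of `model`, averaged over all rolling folds."""
    errors = []
    n = start
    while n + horizon <= len(series):            # stop when actuals run out
        forecast = model(series[:n], horizon)    # fit on the first N points
        actual = series[n:n + horizon]           # what really happened
        errors.append(sum(abs(f - a) for f, a in zip(forecast, actual)) / horizon)
        n += 1                                   # increment N by 1 week
    return sum(errors) / len(errors)

# Example: 52 weeks of sales, 4-week-ahead forecasts.
weekly_sales = [100.0 + (week % 4) * 5 for week in range(52)]  # dummy data
print(cross_validation_error(moving_average_forecast, weekly_sales))
```

Note that on each pass the model only ever sees data prior to the forecast origin, which mimics a real forecast made at that point in time.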

With cross-validation, we can accurately estimate the expected forecast error of a forecasting model. In particular, if you have two different models, cross-validation can help you choose the better one (*). Cross-validation can also be used to tune model parameters - in order to find the parameters that best fit the data.
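As a sketch of that model-selection use - still relying on the illustrative cross_validation_error function and dummy data defined above - picking the better of two toy models boils down to comparing their cross-validated errors:

```python
def naive_forecast(history, horizon):
    """Second toy model: repeat the last observed value."""
    return [history[-1]] * horizon

# Keep whichever candidate scores the lower cross-validated error.
candidates = [moving_average_forecast, naive_forecast]
best = min(candidates, key=lambda m: cross_validation_error(m, weekly_sales))
print("selected model:", best.__name__)
```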

The Lokad team continuously monitors the accuracy of delivered forecasts with such cross-validation methods and keeps working on more accurate forecasting models. Thus, we do keep track of our forecast errors - without waiting for them to happen.

(*) If you try too many models, you are likely to end up with overfitting issues, but this problem is beyond the scope of this post.