Lokad's forecasting engine is somewhat of a black box, in the sense that we do not expose the detail of how each forecast is actually produced. Nevertheless, we are not secretive about the big picture and the methodology. Let's take a brief look at what is inside. Please see our technology page and the technology FAQ for more in-depth explanations.
Our technology relies on four important building blocks:
Our library of statistical models contains well over 100 models, and we constantly put new models into production. The portfolio includes well-known classics such as autoregressive models, moving averages, (double and triple) exponential smoothing, Holt-Winters, and Box-Jenkins approaches such as ARMA and ARIMA. We furthermore work with more advanced models, or rather advanced approaches as we prefer to call them, such as Bayesian methods, large-margin methods, mixtures and boosting, and meta-heuristics (genetic algorithms, neural networks, genetic programming and other evolutionary or adaptive approaches).
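To give a flavor of the classical end of this spectrum, here is a minimal sketch of simple exponential smoothing, one of the textbook models listed above. The smoothing factor `alpha` and the toy demand series are illustrative assumptions, not Lokad parameters or data.

```python
def exponential_smoothing(series, alpha=0.3):
    """Return one-step-ahead smoothed values for a demand series."""
    level = series[0]  # initialize the level with the first observation
    smoothed = [level]
    for observation in series[1:]:
        # The new level blends the latest observation with the prior level.
        level = alpha * observation + (1 - alpha) * level
        smoothed.append(level)
    return smoothed

# The last smoothed level serves as the forecast for the next period.
forecast = exponential_smoothing([12, 15, 14, 18, 20])[-1]
```

The double and triple variants extend this recursion with trend and seasonality components, respectively.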
The model selection mechanism ensures that we identify and apply the best model combination for each individual product in your portfolio. We do this by benchmarking a large number of models against each other in an internal competition, so that each product is forecast with the most accurate set of models in our library. In practice, model selection is often harder than designing the models themselves.
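The competition idea can be sketched as a backtest: fit each candidate model on the head of a series, score its forecasts on a held-out tail, and keep the winner. The toy candidate models and the MAE metric here are illustrative assumptions, not Lokad's actual benchmark.

```python
def mean_absolute_error(actual, predicted):
    """Average absolute deviation between actuals and forecasts."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def select_best_model(series, models, holdout=3):
    """Score each model on a held-out tail; return the winner and all scores."""
    train, test = series[:-holdout], series[-holdout:]
    scores = {}
    for name, model in models.items():
        predictions = model(train, len(test))
        scores[name] = mean_absolute_error(test, predictions)
    return min(scores, key=scores.get), scores

# Two toy candidates: a "repeat the last value" forecast and a mean forecast.
models = {
    "naive_last": lambda train, h: [train[-1]] * h,
    "mean": lambda train, h: [sum(train) / len(train)] * h,
}
best, scores = select_best_model([10, 12, 11, 13, 30, 28, 31], models)
```

A production-grade competition would cross-validate over many splits and score combinations of models, but the principle is the same.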
Our multiple time series approach is based on the simple idea that looking at a single product at a time is a rather myopic way to forecast. For example, when forecasting the seasonality of a specific chocolate bar, the seasonality of other chocolate bars is a valuable source of information. For each individual product, we analyze the complete product portfolio for correlations. By doing so, we typically uncover a multitude of overlapping patterns, such as seasonality, cannibalization, trends and network effects, which help to improve the forecast for the product at hand.
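A minimal sketch of this idea: measure how strongly every other product correlates with the product at hand, so that strongly correlated series can lend their seasonal pattern to the forecast. The Pearson correlation is one simple choice of measure, and the product names and series below are made-up toy data.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    std_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (std_x * std_y)

portfolio = {
    "chocolate_bar_A": [5, 8, 12, 9, 5, 7, 13, 10],   # product to forecast
    "chocolate_bar_B": [6, 9, 13, 10, 6, 8, 14, 11],  # similar seasonal pattern
    "sunscreen":       [12, 9, 5, 7, 13, 10, 5, 8],   # unrelated pattern
}

target = portfolio["chocolate_bar_A"]
correlations = {
    name: pearson(target, series)
    for name, series in portfolio.items()
    if name != "chocolate_bar_A"
}
```

Here the second chocolate bar correlates strongly with the first, while the sunscreen does not, which is exactly the signal that lets one product's history inform another's forecast.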
Finally, cloud computing delivers the immense amount of computing power needed to make our model library, our selection mechanism and our multiple time series approach work together. As an example, producing forecasts for a sample of 1,000 products already involves about 1,000,000 potential correlations to be analyzed. This requires roughly a thousand times more processing power than classical forecasting toolkits, which precisely avoid this sort of computation-heavy model. We won the first worldwide Microsoft Azure Partner of the Year Award for this technology.
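The quadratic growth behind that figure is easy to check: with N products, the number of ordered product pairs to examine for correlations is N × (N − 1), roughly N squared. A quick back-of-the-envelope calculation:

```python
def correlation_pairs(n_products):
    """Ordered product pairs, excluding a product paired with itself."""
    return n_products * (n_products - 1)

pairs_for_1000 = correlation_pairs(1000)  # 999,000 pairs — about a million
```

Moving from 1,000 to 10,000 products multiplies the workload by roughly a hundred, which is why this kind of analysis leans on elastic cloud capacity rather than a single workstation.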