Forecasting Engine

Over the last decade, data-related technologies have evolved tremendously. Companies went from numerical recipes that had been known and used since the 19th century to Big Data technology powered by Machine Learning and Deep Learning. Lokad has focused on staying ahead of the curve and bringing the best that science can offer to supply chain optimization.

Historical progression of Lokad's forecasting technology

6 Generations of Forecasting

Take a trip down memory lane and discover the different generations of our forecasting technology.

The Right Mix of Ingredients

A Recipe for Success

Lokad’s technology is not about leveraging one (or even several) magical statistical models. It is a combination of ingredients working together to create the proper alchemy. In our early years, we quickly realized how wide the gap was between pure mathematical modeling and the reality of supply chains.

What worked wonders in theory proved largely ineffective when applied to real businesses: the data was unclean, too shallow or too sparse; the sheer number of references or sales-history entries in some businesses made entire classes of models impractical; and the constraints of the supply chain itself meant that improving the classical accuracy metrics of the forecasts could actually degrade business performance.

Lokad had to come up with the proper technological answers to all of these issues and to drastically change its view on forecasting and supply chain optimization.

Correlations

with Deep Learning

When looking at a single product at a time, there is simply not enough data to produce an accurate statistical forecast. Indeed, in most consumer markets, the lifecycle of a product is less than 4 years, which means that, on average, most products don’t even have 2 years of history available, the minimum depth needed to perform a reliable seasonality analysis on a single time series. We address this problem through statistical correlations: the information obtained on one product helps refine the forecast of another. For example, Lokad autodetects the applicable seasonality for a product even if it has only been sold for 3 months. While no seasonality can be observed with only 3 months of data, if older, longer-lived products are present in the history, then the seasonality can be extracted from them and applied to newer products.
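As a minimal sketch of the idea (hypothetical data and code, not Lokad’s actual algorithm), the Python snippet below extracts a monthly seasonality profile from a long-lived product and applies it to a newer product that has only 3 months of history:

```python
import numpy as np

def monthly_seasonality(monthly_sales: np.ndarray) -> np.ndarray:
    """Average each calendar month across full years, normalized to a mean of 1."""
    years = monthly_sales.reshape(-1, 12)   # assumes whole years, January-first
    profile = years.mean(axis=0)
    return profile / profile.mean()

# Long-lived product: 3 full years of monthly sales (hypothetical figures).
old_product = np.array([10, 12, 15, 20, 30, 45, 50, 48, 30, 20, 14, 11] * 3, dtype=float)
profile = monthly_seasonality(old_product)

# New product: only 3 months of history (March to May), too short to reveal seasonality.
new_product = np.array([8.0, 11.0, 16.0])
base_level = (new_product / profile[2:5]).mean()   # de-seasonalized demand level

# Forecast June by re-applying the seasonal factor borrowed from the older product.
forecast_june = base_level * profile[5]
print(round(forecast_june, 1))
```

In production this sharing happens across thousands of products at once rather than a single pair, but the principle is the same: seasonality learned where the history is long is transferred to where it is short.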

Computing Power

Through Cloud Computing and GPUs

While leveraging correlations within the historical data vastly improves accuracy, it also increases the amount of computation to be performed. For example, correlating 1,000 products across all possible pairs yields a bit less than 1,000,000 combinations; worse, many companies have far more than 1,000 products. By leveraging cloud computing and Graphics Processing Units (GPUs), we allocate machines only when clients push their data to us, and less than 60 minutes later we return the results and deallocate those machines. Since the cloud we use (Microsoft Azure) charges by the minute, we only consume the capacity we really need. As no company needs to forecast more than once per day, this strategy cuts hardware costs by more than 24x compared to traditional approaches.
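To make the quadratic growth concrete, the toy calculation below (illustrative only) counts the ordered product pairs to compare as the catalog grows:

```python
def ordered_pairs(n: int) -> int:
    """Ordered (source, target) product pairs, self-pairs excluded: n * (n - 1)."""
    return n * (n - 1)

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7,} products -> {ordered_pairs(n):>14,} pairwise comparisons")

# 1,000 products already imply nearly a million comparisons; 100,000 products imply
# roughly 10 billion, which is why compute capacity is rented on demand rather than
# provisioned permanently.
```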

Probabilities

To Embrace Business Constraints


The traditional forecast is a median forecast, that is, a value that has a 50% chance of being above or below the future demand. Unfortunately, this classic vision does not address the core concerns of supply chain: avoiding stock-outs and reducing inventory. In 2016, Lokad introduced the notion of probabilistic forecasts for supply chain, where the respective probabilities of every level of future demand are estimated. Instead of predicting one value per product, Lokad predicts the entire probability distribution. Probabilistic forecasts vastly outperform classic forecasts for slow movers, erratic sales and spiky demand. We believe that 10 years from now, all companies serious about inventory optimization will have gone probabilistic, probably leveraging a descendant of this technology.
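As an illustration (hypothetical numbers, not output from Lokad’s engine), the sketch below contrasts what a median forecast says for a slow mover with what the full distribution reveals about stock-out risk:

```python
import numpy as np

# Hypothetical probabilistic forecast for one SKU over its lead time:
# P(demand = k) for k = 0..8 units, a slow mover with a long, spiky tail.
demand = np.arange(9)
probs = np.array([0.30, 0.25, 0.15, 0.10, 0.07, 0.05, 0.04, 0.03, 0.01])

median = demand[np.searchsorted(np.cumsum(probs), 0.5)]        # classic point forecast: 1 unit
stock = 5
p_no_stockout = probs[demand <= stock].sum()                   # chance 5 units fully cover demand
expected_lost = np.sum(np.maximum(demand - stock, 0) * probs)  # expected lost sales in units

print(median, round(p_no_stockout, 2), round(expected_lost, 2))  # 1 0.92 0.13
```

The median alone would suggest stocking a single unit; the distribution quantifies, for any candidate stock level, both the stock-out probability and the expected lost sales, which is what inventory decisions actually trade off.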

From a Mathematical Library to an End-to-End Solution

We have a large library of statistical models. It includes well-known classics such as Box-Jenkins, exponential smoothing, autoregressive models and all their variants. Additionally, since classic models leverage correlations poorly, we have developed better models that take advantage of all the data made available to us. Since the very beginning, we have continuously monitored the quality of the forecasts we deliver and run simulations to carefully assess the remaining weaknesses of our technology. We keep improving our models and adding new ones, and new paradigms, to our library. As a result, our clients benefit from an ever-improving technology.
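For reference, here is the simplest member of that classical family, single exponential smoothing, written as a short sketch; it also illustrates why such per-series models cannot, on their own, exploit correlations across products:

```python
def simple_exponential_smoothing(history, alpha=0.3):
    """Single exponential smoothing: the next-period forecast is a weighted
    blend of the latest observation and the previous forecast."""
    forecast = history[0]
    for observation in history[1:]:
        forecast = alpha * observation + (1 - alpha) * forecast
    return forecast

print(round(simple_exponential_smoothing([12, 15, 11, 14, 16, 13]), 2))  # 13.55
```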

However, we realized long ago that this was not enough and that we needed to dig deeper into the reality of supply chain and into the constraints and specificities of each business. Therefore, not only do we not require any statistical skills from our clients, but we also manage the entire process to provide a fully usable solution, complete with precise purchase order, dispatch or pricing suggestions, and dashboards of key performance indicators to assess their accuracy.

Our Supply Chain Scientists are there to help you include all your business insights in a tailor-made implementation. This is made possible through our supply chain-oriented programming language, Envision. Its flexibility allows us to fine-tune scripts that fully reflect the specificities of your business, offering a perfect complement to our forecasting technology.