Deep Learning for more accurate Supply Chain Forecasts

Deep Learning Forecasting


Deep learning forecasts, or simply deep forecasts, represent a significant improvement over Lokad's previous generation of forecasting technology. Compared to classic forecasting methods, deep learning not only provides unmatched statistical accuracy, but also delivers probabilistic forecasts, which are essential where supply chains and inventories are concerned.


Embracing uncertainty

Companies are frustrated with forecasts that keep failing them. Traditional forecasting approaches are expected to produce correct figures, but they don't. The future is inherently uncertain, and when a given tool or solution fails to deliver the correct figures as expected, the benefits fail to materialize as well. No amount of fine-tuning of the existing forecasting models, and no amount of R&D to develop better models - in the traditional sense - can fix this problem. Methods like safety stock analysis are supposed to handle uncertainty, but in practice, they remain an afterthought.

In supply chain management, costs are driven by extreme events: it's the surprisingly high demand that generates stock-outs and customer frustration, and the surprisingly low demand that generates dead inventory and, consequently, costly inventory write-offs. As all executives know, businesses should hope for the best, but prepare for the worst. When the demand is exactly where it was expected to be, everything goes smoothly. However, the core forecasting challenge is not to do well on the easy cases, where even a crude moving average would perform decently. The core challenge is to handle the tough cases; the ones that disrupt your supply chain, and drive everybody nuts.

Back in 2016, Lokad developed a radically new way of forecasting, namely probabilistic forecasts. More recently, these forecasts received another massive upgrade through deep learning. Simply put, a probabilistic forecast of demand does not merely give an estimate of the demand, but assesses the probability of every single future: the probability of 0 (zero) units of demand is estimated, then the probability of 1 unit of demand, of 2 units of demand, and so on. Every level of demand gets its estimated probability, until the probabilities become so small that they can safely be ignored.
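To make this concrete, here is a minimal, purely illustrative sketch of what a probabilistic demand forecast looks like as data. The numbers are made up and the representation is simplified compared to what the engine actually produces.

```python
# A probabilistic demand forecast: instead of a single number, every demand
# level gets its own probability. All figures below are illustrative.
probabilities = {
    0: 0.35,  # 35% chance of selling nothing over the horizon
    1: 0.30,
    2: 0.18,
    3: 0.10,
    4: 0.05,
    5: 0.02,  # beyond this point, probabilities are small enough to ignore
}

assert abs(sum(probabilities.values()) - 1.0) < 1e-9

# A classic "average" forecast collapses this distribution into one number...
mean_forecast = sum(units * p for units, p in probabilities.items())

# ...while the full distribution also answers risk questions the average cannot,
# e.g. the probability of a stock-out if only 2 units are kept in stock.
stockout_probability = sum(p for units, p in probabilities.items() if units > 2)

print(f"mean demand: {mean_forecast:.2f} units")
print(f"P(demand > 2 units): {stockout_probability:.2f}")
```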

These probabilistic forecasts provide an entirely new way of looking at the future. Instead of being stuck in a wishful thinking perspective, where forecast figures are expected to materialize, probabilistic forecasts remind you that everything is always possible, just not equally probable. Thus, when it comes to preparing for the worst, probabilistic forecasts provide a powerful way of quantitatively balancing the risks (to which traditional forecasts remain blind).

From a practitioner’s perspective

Deep learning and probabilistic forecasts might sound intimidating and technical. Yet, the chances are, if you are a supply chain practitioner, you have been doing "intuitive" probabilistic forecasting for years already: think of all the situations where your basic forecasts had to be revised up or down, because the risks were just too great... This is exactly what probabilistic forecasts are about: properly balancing real-world decisions when facing an uncertain future. While risk analysis tends to be an afterthought in traditional forecasting approaches, Lokad is bringing the case front and center with probabilistic forecasts.

The output of the probabilistic forecasting engine is a set of probability distributions. From a practical perspective, while this information is extremely rich (it is, after all, a glimpse at many possible futures!), it is also fairly impractical to use in its raw form. As a result, Lokad provides an entire platform, with all the necessary tools and team support, to allow your company to turn these probabilities into business decisions, such as reorder quantities.

Lokad’s webapp features Big Data processing capabilities and allows you to create the necessary business logic that turns these forecasts into decisions specifically adapted to your business. These decisions can be adjusted to fit your particular supply chain constraints, such as MOQs (minimum order quantities), your economic drivers, such as the risks associated with shelf-life expiration, and your processes, such as daily purchase orders to be made before 8am every day.
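As an illustration of the principle only - not of Lokad's actual decision logic or scripting environment - the sketch below shows how a probability distribution over demand can be combined with hypothetical economic drivers and an MOQ constraint to pick the reorder quantity with the lowest expected cost.

```python
# A hedged sketch of turning a probability distribution into a reorder quantity.
# All numbers (costs, MOQ, probabilities) are hypothetical.

holding_cost_per_unit = 2.0    # cost of a unit left over at the end of the period
stockout_cost_per_unit = 15.0  # margin + goodwill lost per unit of unmet demand
moq = 5                        # supplier-imposed minimum order quantity

demand_probabilities = {0: 0.35, 1: 0.30, 2: 0.18, 3: 0.10, 4: 0.05, 5: 0.02}

def expected_cost(order_qty: int) -> float:
    """Expected cost of ordering `order_qty`, averaged over all demand scenarios."""
    cost = 0.0
    for demand, p in demand_probabilities.items():
        leftover = max(order_qty - demand, 0)
        shortage = max(demand - order_qty, 0)
        cost += p * (leftover * holding_cost_per_unit + shortage * stockout_cost_per_unit)
    return cost

# Candidate orders: either order nothing, or order at least the MOQ.
candidates = [0] + list(range(moq, moq + 20))
best_order = min(candidates, key=expected_cost)
print(f"best order quantity: {best_order} units "
      f"(expected cost {expected_cost(best_order):.2f})")
```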

Robotization through artificial intelligence (AI)

Supply chain management frequently involves many products moved across many locations. Traditional forecasting solutions tend to rely heavily on manual adjustments whenever more complex statistical patterns, such as new product launches or promotions, are involved.

However, at Lokad, our experience indicates that if a forecasting solution requires fine-tuning, there is just no end to it: no matter how many weeks or months of manpower are dedicated to making the solution work, there is a constant need for more fine-tuning, simply because there are too many products, too many locations, and the business keeps changing.

Therefore, at Lokad, we have decided to opt for a full robotization of the forecasting process:

  • zero statistical knowledge is required to obtain forecasts
  • zero fine-tuning is expected to be provided for adjusting forecasts
  • zero maintenance is required to keep the forecasts aligned with your business

Such robotization requires what could be colloquially referred to as AI capabilities. Lokad delivers these capabilities through its deep learning forecasting technology. Intuitively, when looking at products one by one, the amount of information available per product is typically too limited to carry out an accurate statistical analysis. However, by looking at the correlations across all products ever sold, it becomes possible to auto-tune the forecasting models, and to compute much better forecasts that leverage not only the data of one specific product, but also the data of all the products deemed similar to it from a forecasting perspective.
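As a rough, hypothetical sketch of this idea - not of Lokad's actual architecture - the snippet below shows a single model shared across all products, where each product only contributes a small learned embedding; the cross-product correlations are captured by the shared weights rather than by per-product tuning (assumes PyTorch; all sizes are arbitrary).

```python
import torch
import torch.nn as nn

n_products = 1000      # hypothetical catalogue size
n_features = 8         # e.g. recent sales, price, calendar encoding, ...
embedding_dim = 16

class SharedDemandModel(nn.Module):
    """One model for the whole catalogue; products differ only by their embedding."""
    def __init__(self):
        super().__init__()
        self.product_embedding = nn.Embedding(n_products, embedding_dim)
        self.net = nn.Sequential(
            nn.Linear(embedding_dim + n_features, 64),
            nn.ReLU(),
            nn.Linear(64, 21),  # scores over demand levels 0..20
        )

    def forward(self, product_ids, features):
        emb = self.product_embedding(product_ids)
        logits = self.net(torch.cat([emb, features], dim=-1))
        return torch.softmax(logits, dim=-1)  # one probability distribution per row

model = SharedDemandModel()
# One forward pass over a batch mixing many different products:
product_ids = torch.randint(0, n_products, (32,))
features = torch.randn(32, n_features)
demand_distributions = model(product_ids, features)
print(demand_distributions.shape)  # torch.Size([32, 21])
```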

Contrary to older approaches, our deep forecasting engine learns - on its own - the oddities that can be observed in the data. For example, selling 1 unit of a product today usually hints that there will be more demand (as opposed to zero demand) for this product in the near future. Yet, sometimes, as in maintenance scenarios, it can imply the opposite: 1 unit sold means that, precisely, no such unit will be sold again anytime soon.

For challenging verticals, like fashion, our deep learning engine leverages subtle clues about the products in order to take into account all the factors that contribute to the scale of success of a product launch, for example.

The origin of our deep learning forecasts

Lokad didn't invent either deep learning or probabilistic forecasts. However, we pioneered production-grade implementations of these statistical theories, specifically tailored for supply chain needs. Our deep learning forecasts are the 5th generation of our forecasting technology.

Our forecasting engine 5.0 is now using a grid of GPUs (Graphics Processing Units) rather than solely relying on CPUs (Central Processing Units). GPUs are giving us access to unprecedented amounts of processing power, which we can turn into superior forecasts.

From the experience gained through the previous iterations of this technology, we have amassed a considerable amount of know-how when it comes to designing a forecasting engine suitable for covering a wide range of business situations.

The very idea of estimating probabilities, rather than an average, came from our early years when we were still trying to get the classic approach to work. It took us quite a few failures to realize that the classic approach was intrinsically flawed, and that no amount of R&D could fix a broken statistical framework. The statistical framework itself had to be fixed in the first place, in order to get the forecasting model to work.

In addition, each iteration of our forecasting engine has been a generalization - from a mathematical perspective - of the previous version, with every new generation capable of handling more situations than the previous one. Indeed, it's better to be approximately correct than exactly wrong. The toughest situations are encountered when the forecasting engine is not expressive enough to generate the forecasts that would best fit a given business situation, or when it cannot process the input data that would be truly relevant for gaining statistical insights into that situation.

At Lokad, forecasting is a work in progress. While we are proud of what we have built with our probabilistic forecasting engine, this is not the end of our efforts. Unlike on-premise solutions, where upgrading to a new tool is a challenge of its own, Lokad’s clients benefit from our next generation forecasting engine as soon as it becomes available.


Our forecasting FAQ


Which forecasting models are you using?

Our deep forecasting engine uses a single model built from deep learning principles. Unlike classic statistical models, it features tens of millions of trainable parameters, which is about 1000 times more parameters than our previous, most complex, non-deep machine learning model. Deep learning dramatically outperforms older machine learning approaches (random forests, gradient boosted trees). Yet, it's worth noting that these older machine learning approaches were already outperforming all the time-series classics (Box-Jenkins, ARIMA, Holt-Winters, exponential smoothing, etc.).

Do you learn from your forecasting mistakes?

Yes. The statistical training process - which ultimately generates the deep learning model - leverages all the historical data that is available in Lokad. The historical data is leveraged through a process known as backtesting. Thus, the more historical data is available, the more opportunities the model has to learn from its own mistakes.
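For illustration, here is a minimal sketch of the backtesting idea: truncate the history at several past dates, forecast "the future" from the truncated history, and score the forecasts against what actually happened. The `fit_and_forecast` function is a deliberately crude placeholder, not Lokad's model, and the data is made up.

```python
from datetime import date, timedelta

# Two years of made-up daily demand.
history = {date(2023, 1, 1) + timedelta(days=i): max(0, (i * 7) % 11 - 3)
           for i in range(730)}

def fit_and_forecast(train, horizon_days):
    """Placeholder model: forecast the average of the last 28 days."""
    last_28 = sorted(train)[-28:]
    level = sum(train[d] for d in last_28) / 28
    return [level] * horizon_days

horizon = 28
errors = []
for cutoff_days_ago in (180, 120, 60):            # several past cut-off dates
    cutoff = max(history) - timedelta(days=cutoff_days_ago)
    train = {d: v for d, v in history.items() if d <= cutoff}
    actual = [history[cutoff + timedelta(days=i + 1)] for i in range(horizon)]
    forecast = fit_and_forecast(train, horizon)
    errors.append(sum(abs(f - a) for f, a in zip(forecast, actual)) / horizon)

print("mean absolute error per backtest window:", [round(e, 2) for e in errors])
```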

Does your forecasting engine handle seasonality, trends, days of week?

Yes, the forecasting engine handles all the common cyclicities, and even the quasi-cyclicities, whose importance is frequently underestimated. Under the hood, the deep learning model makes intensive use of a multiple time-series approach to leverage the cyclicities observed in other products, in order to improve the forecasting accuracy of any one given product. Naturally, two products may share the same seasonality, but not the same day-of-week pattern; the model is capable of capturing this distinction. Also, one of the major upsides of deep learning is the capacity to properly capture the variability of the seasonality itself. Indeed, a season may start earlier or later depending on external variables, such as the weather, and those variations are detected and reflected in our forecasts.
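As a hypothetical illustration of how calendar cyclicities can be exposed to a learning model - not a description of Lokad's internal encoding - the snippet below encodes the day of week as a one-hot vector and the yearly season as smooth sine/cosine signals, so that a season starting slightly earlier or later still maps to nearby feature values.

```python
import math
from datetime import date

def calendar_features(d: date):
    # Day-of-week as a one-hot vector (Monday = index 0).
    day_of_week = [1.0 if d.weekday() == i else 0.0 for i in range(7)]
    # Yearly season as a smooth angle, so nearby dates get nearby values.
    year_angle = 2 * math.pi * d.timetuple().tm_yday / 365.25
    return day_of_week + [math.sin(year_angle), math.cos(year_angle)]

print(calendar_features(date(2024, 12, 24)))  # 9 numbers describing "when" we are
```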

What data do you need?

In order to forecast demand, the forecasting engine needs to be provided - at least - with the daily historical demand, and providing a disaggregated order history is even better. As far as the length of the history is concerned, the longer, the better. While no seasonality can be detected with less than 2 years of history, we consider 3 years of history to be good, and 5 years excellent. In order to forecast lead times, the engine typically requires purchase orders containing both the order dates and the delivery dates. Specifying your product or SKU attributes also helps to considerably refine the forecasts. In addition, providing your stock levels is very helpful, as it allows us to deliver a first meaningful stock analysis to you.
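As a purely illustrative example - the column names below are placeholders, not Lokad's required schema - the minimal tabular inputs described above could look like this:

```python
# Disaggregated sales/order history: one row per order line, daily granularity or finer.
sales_history = [
    {"date": "2023-04-02", "sku": "AB-123", "quantity": 2},
    {"date": "2023-04-02", "sku": "CD-456", "quantity": 1},
    {"date": "2023-04-03", "sku": "AB-123", "quantity": 1},
]

# Purchase orders with both order and delivery dates, used to forecast lead times.
purchase_orders = [
    {"ordered": "2023-03-01", "delivered": "2023-03-19", "sku": "AB-123", "quantity": 50},
]

# Product attributes and current stock levels further refine the forecasts.
products = [
    {"sku": "AB-123", "category": "shoes > sneakers", "brand": "Acme", "stock_on_hand": 14},
]
```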

Can you forecast my Excel sheet?

As a rule of thumb, if all of your data fits into one Excel sheet, then we usually cannot do much for you, and to be honest, nobody else can either. Spreadsheet data is likely to be aggregated per week or per month, and most of the historical information ends up being lost through such aggregation. In addition, such a spreadsheet is not going to contain much information about the categories and hierarchies that apply to your products either. Our forecasting engine leverages all the data you have, and running a test on a tiny sample is not going to give satisfying results.

What about stock-outs and promotions?

Both stock-outs and promotions introduce a bias into the historical sales. Since the goal is to forecast the demand, and not the sales, this bias needs to be taken into account. One frequent - but incorrect - way of dealing with these events consists of rewriting the history, filling in the gaps and truncating the peaks. We don't like this approach, because it amounts to feeding forecasts back into the forecasting engine, which can result in major overfitting problems. Instead, our engine natively supports “flags” that indicate where the demand has been censored or inflated.
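As a hypothetical sketch of the idea - field names are placeholders, not Lokad's actual schema - flagging censored or inflated periods keeps the raw history intact while telling the learning process how to interpret each observation:

```python
daily_history = [
    {"date": "2023-11-20", "sku": "AB-123", "sold": 4, "stockout": False, "promotion": False},
    {"date": "2023-11-21", "sku": "AB-123", "sold": 9, "stockout": False, "promotion": True},
    {"date": "2023-11-22", "sku": "AB-123", "sold": 0, "stockout": True,  "promotion": False},
]

for row in daily_history:
    if row["stockout"]:
        # Observed sales are only a lower bound on demand, not the demand itself.
        print(row["date"], "demand >=", row["sold"], "(censored)")
    elif row["promotion"]:
        print(row["date"], "demand =", row["sold"], "(inflated by promotion)")
    else:
        print(row["date"], "demand =", row["sold"])
```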

Do you forecast new products?

Yes, we do. However, in order to forecast new products, the engine requires the launch dates of the other, “older” products, as well as their historical demand at the time of their launch. Specifying some of your product categories and/or a product hierarchy is also advised. The engine forecasts new products by auto-detecting the “older” products that can be considered comparable to the new ones. However, as no demand has yet been observed for the new items, the forecasts fully rely on the attributes associated with them.

Is it possible to adjust the forecasts?

A decade of experience in statistical forecasting has taught us that adjusting forecasts is never a good idea. If forecasts need to be adjusted, then there is probably a bug in the forecasting engine that needs to be fixed. If there is no bug to be fixed, and the forecasts are carried out just as expected from a statistical perspective, then adjusting them is probably the wrong answer to the problem. Usually, the need to adjust forecasts reflects the need to take into account an economic driver of some kind, which impacts the risk analysis “on top” of the forecast, but not the forecast itself.

Do you have experience with my vertical?

We have experience with many verticals: fashion, fresh food, consumer goods, electronics, spare parts, aerospace, light manufacturing, heavy manufacturing, etc. We also handle diverse types of industry players: e-commerce businesses, wholesalers, importers, manufacturers, distributors, retail chains, etc. The easiest way to be sure that we have experience with your vertical is to get in touch with us directly.

Do you use external data to refine the forecasts?

We can use competitive pricing data, typically obtained through 3rd party companies that specialize in web scraping, for example. Web traffic data can also be used, and possibly acquired, to enrich the historical data in order to further boost the statistical accuracy. In practice, the biggest bottleneck in using external data sources isn't the Lokad forecasting engine - which is fairly capable - but setting up and maintaining a high-quality data pipeline attached to those external data sources.