FAQ: Demand Forecasting

Lokad has evolved from its demand forecasting origins in the late 2000s into a leader in predictive optimization for supply chains, focusing on superior assessments of future events while navigating real-world complexities.

Intended audience: supply chain practitioners, demand and supply planners, business analysts.

Last modified: March 7th, 2024

An automaton in a business suit, powered by 18th-century machinery, creates a time-series graph.

Forecasting principles

As Keynes observed, it is better to be approximately correct than exactly wrong. This principle applies to most supply chain situations (and non-supply chain scenarios), but it is especially valid as far as forecasting is concerned. When it comes to forecasting, Lokad is doing better than simply avoiding being exactly wrong; we routinely vastly outperform not only our competitors, but also research teams1— occasionally re-defining the state of the art. However, during the last decade we came to realize that the biggest limiting factor of the traditional forecasting perspective was not accuracy, but rather expressiveness.

Classic forecasts—i.e., point time-series forecasts—just do not tell enough about the future. Yet, time-series forecasts have become so prevalent that many practitioners forget how incomplete—not just inaccurate—they happen to be. Time-series forecasts treat the future of the business like the movement of the planet: a phenomenon where the observer has nothing to do with the observed objects. However, supply chains are not like astronomy, and companies (unlike planets) actively influence the direction of their supply chains. Fundamentally, the future is not preordained; it is what you make of it.

Strangely enough, the entire mainstream supply chain theory is constructed on top of time-series forecasts, leading to all sorts of bizarre turns. Pricing—an obvious way to steer demand—is usually eliminated from the picture, treated as a concern entirely separate from planning. This is manifestly incorrect given how tightly the two are interrelated.

Another dimension entirely absent from the traditional time-series perspective is uncertainty. This uncertainty is something that traditionalists believe can be tackled by pursuing greater accuracy in isolation—often dedicating vast resources to this end. However, supply chains keep proving that the uncertainty associated with future events is irreducible, and that supply chain problems require more than isolated tweaks—i.e., local optimization. Not only is future uncertainty irreducible, global markets appear quite skilled at throwing challenges in both old ways (e.g., wars, tsunamis) and new ways (e.g., lockdowns, inventive regulations).

Probabilistic forecasts

Lokad’s first major departure from the classic time-series forecasting perspective was probabilistic forecasts, initiated back in 2012 through quantile forecasts—which can be seen as an incomplete probabilistic forecast. Probabilistic forecasts consider all possible futures (i.e., demand, lead time, etc.), assigning probabilities to every single outcome. As such, probabilistic forecasts embrace the irreducible uncertainty of future events instead of dismissing it entirely. Since 2012, probabilistic forecasts have proved, over and over, to be a vastly superior approach when it comes to risk management for supply chains. This is true for everything from small local decisions, like picking the right quantity for a SKU, up to the big decisions, like closing a long-term, multi-million-dollar service contract.
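For illustration purposes only (Lokad’s actual models are written in Envision), a one-period probabilistic demand forecast can be sketched in Python as a discrete distribution, with risk questions reduced to simple sums over probabilities:

```python
# Hypothetical one-period demand forecast for a SKU: quantity -> probability.
# The numbers are invented for illustration.
demand_forecast = {0: 0.20, 1: 0.35, 2: 0.25, 3: 0.12, 4: 0.06, 5: 0.02}

# Sanity check: the probabilities must sum to 1
assert abs(sum(demand_forecast.values()) - 1.0) < 1e-9

# With the whole distribution at hand, the probability of a stockout
# given 2 units in stock is just the tail mass beyond 2 units.
stock = 2
p_stockout = sum(p for q, p in demand_forecast.items() if q > stock)
print(f"P(demand > {stock}) = {p_stockout:.2f}")  # prints P(demand > 2) = 0.20
```

A point forecast would have collapsed this distribution into a single number (most likely 1 unit), making the stockout question unanswerable.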

Furthermore, Lokad did not (and still does not) limit itself to probabilistic forecasting of demand. All the other sources of uncertainty are now quantified by the Lokad platform. These uncertainties include varying lead times, varying scrap rates, varying customer returns, etc. More broadly, all the uncertain future events must be forecasted, ideally through probabilistic forecasts. Thus, nowadays, Lokad routinely forecasts more than a dozen distinct sorts of future events. Importantly, these alternative forecasts are not time-series forecasts. We are not trying to express multiple disparate values/units (e.g., demand, lead time, etc.) using a time-series. In fact, most of the time the problem we are forecasting does not even fit the narrow framework imposed by a time-series.

Programmatic forecasts

Lokad’s second significant departure from the classic forecasting perspective was its programmatic shift, first with deep learning back in 2018, then with differentiable programming in 2019. The dominant view was that forecasting ought to be approached as a ‘packaged’ technological product. Lokad, like most of its peers, even referred to its ‘forecasting engine’—a monolithic software component dedicated to this very task. However, this perspective is lacking on two major fronts.

First, the ‘forecasting engine’ perspective assumes that there is some standard way to organize the input data that will be fed to the engine. This is not the case. The very structure of the input data—in the relational sense (e.g., SQL)—very much depends on the specifics of the business systems in place in the company. Forcing the historical data, as found in business systems, into a pre-conceived data model, as required by a forecasting engine, leads to all sorts of issues. While Lokad managed (through ever-increasing sophistication) to engineer a forecasting engine vastly more flexible than what our competitors still offer, we also realized that this approach was a technological dead end. The forecasting engine is never flexible enough, and invariably ends up dismissing critical but nuanced aspects of the business.

Programmatic approaches, however, proved a vastly superior solution. This is where predictive modeling challenges are approached through programmatic paradigms as opposed to a rigid monolithic software. Lokad started in 2018 with deep learning frameworks—as commonly used by the broader community—but ended up revamping the technology entirely in light of the strides made in differentiable programming in 2019. The intent behind this complete technological overhaul was to turn relational data into a first-class citizen, unlike deep learning frameworks that treated—and still treat—it as a second-class one. While relational data dominates in supply chain, this is not the sort of data that captures the interest of the broader machine learning community (where images, natural language, voice, etc., predominate).

Second, the ‘forecasting engine’ perspective leaves no room for the company to shape its own future. No matter the sophistication of the engine, the paradigm implies there is a two-stage process taking place, with the forecasting/planning phase followed by an optimization/execution stage. This paradigm leaves little or no room to go back and forth between planning and execution. In theory, it is possible to repeatedly apply the forecasting engine over scenarios that have been adjusted according to the forecasts obtained through previous iterations. In practice, the process is so tedious that nobody really does that (at least not for long).

Bottom line: programmatic approaches are a game changer. This is because it becomes possible to operate bespoke feedback loops—between planning and execution—reflecting subtle but profitable options that the company would likely otherwise miss. For example, if the client is an aviation MRO company, it becomes possible to consider buying and selling rotables at the same time—the sales of unused parts funding the acquisition of now much-needed parts. Such interactions are not necessarily complex or even challenging, but discovering them requires the fine print of the business to be carefully considered. Non-programmatic approaches invariably fail to capture this fine print, pushing the supply chain practitioners back to their spreadsheets2. Differentiable programming proves to be a game changer on this front as well.

Frequently Asked Questions (FAQ)

1. Forecasting Algorithms and Models

1.1 Can you provide an overview of the forecasting engine(s) you use?

Lokad’s predictive capabilities are built on top of the differentiable programming capabilities of Envision, the DSL (domain-specific programming language) engineered by Lokad for the predictive optimization of supply chain. Thus, instead of having an ‘engine’, Lokad has programmatic building blocks that can be easily assembled to craft state-of-the-art predictive models.

Our predictive models include (but are not limited to) the delivery of state-of-the-art time-series demand forecasts, as demonstrated by Lokad achieving first place (out of roughly 1000 competitors) at the SKU-level in an international forecasting competition based on Walmart datasets. The fine print of the method is given in a public paper. The programmability of Lokad’s platform provides flexible capabilities that cannot be replicated through a traditional “forecasting engine”. In fact, our last “forecasting engine” was phased out in 2018 in favor of a programmatic approach, precisely due to this limitation.

Moreover, we typically refer to ‘predictive modeling’ rather than ‘forecasting’, because it is not just future demand that needs to be quantitatively estimated but all sources of uncertainty. These sources include future lead times, future returns, future scrap rates, future source prices, future competitors’ prices, etc. Through differentiable programming, Lokad delivers forecasts that go well beyond what is traditionally expected from a forecasting engine. These extended forecasts are critical to deliver an end-to-end supply chain optimization, rather than an isolated demand plan.

Finally, Lokad delivers a ‘probabilistic predictive model’. Probabilistic forecasting (or ‘probabilistic modeling’) is critical to deliver risk-adjusted optimized decisions. Without probabilistic forecasts, supply chain decisions are fragile against any variation, generating steady overheads for situations that could have been largely mitigated through slightly more prudent decisions.

See Differentiable Programming in Envision for more on the fine print behind this critical tool, as well as History of the Forecasting Engine of Lokad to review our forecasting progression.

1.2 Can you generate a baseline forecast based on statistical models?

Yes. Lokad can generate a baseline demand forecast based on low-dimensional parametric models, i.e., a statistical model. We do this using Envision, Lokad’s DSL (domain-specific programming language), specifically engineered for the predictive optimization of supply chains. Through Envision’s differentiable programming capabilities, it is also straightforward to learn parameters leveraging historical demand data.

There are two key limitations to the traditional forecasting perspective that have been superseded by the newer technologies offered by Lokad. First, point time-series forecasts (aka “classic forecasts”) do not capture the irreducible uncertainty of the future. In fact, they dismiss uncertainty entirely by expressing future uncertainty as a single value (e.g., demand) instead of a probability distribution of values.

As a result, traditional time-series forecasts do not allow the client to generate risk-adjusted decisions—e.g., ones that reflect the financial impact of ordering X units or X+1 units, or perhaps ordering none at all. This lack of risk awareness (i.e., in a quantitative sense) is invariably very costly for the client, as it leads to poor financial decision-making (e.g., POs, allocations, etc.). Lokad addresses this issue through probabilistic forecasting, as probabilistic forecasts embrace future uncertainty instead of dismissing it.

Second, demand forecasting, while being arguably the most important type of forecast, is not the only type of forecast. Lead times, returns, scrap rates, and all the other areas of future uncertainty need to be forecast as well. Lokad addresses this issue through programmatic predictive modeling.

1.3 What kind of data analysis and algorithms does the solution employ to generate accurate demand forecasts?

Lokad uses differentiable programming while leveraging detailed historical data and—if relevant—selective external data to generate demand forecasts and manage other supply chain complexities (e.g., stockouts and promotions).

Differentiable programming—used to learn parametric models—is the leading technique to generate accurate demand forecasts. As demonstrated in the M5 forecasting competition, based on retail data from Walmart, Lokad utilized this approach and placed number one at the SKU level (competing against about 1000 teams worldwide). This accomplishment qualifies the approach as state-of-the-art.

However, the M5 only scratched the surface when it comes to demand forecasts, as Lokad’s approach lends itself to countless ‘complications’, such as dealing with stockouts, promotions, returns, perishability, etc. Structured predictive modeling for supply chain provides the specifics of how Lokad tackles these complications.

Data-wise, Lokad leverages all the relevant historical sales data, down to individual transactions (if this data happens to be available). We also leverage other historical data that supplement the demand signal, such as historical stock levels, historical prices, historical competing prices, historical display ranks (ecommerce), etc. Lokad’s technology has been engineered to make the most of all the data that happens to be available, as well as to mitigate the effects of the data that is unfortunately not available.

External data might be used if deemed relevant to refine the demand forecasts. However, in our experience, data beyond competitive intelligence rarely brings an accuracy improvement worth the substantial engineering efforts associated with the preparation of those datasets (e.g., social data, weather data, etc.). Leveraging such datasets should be reserved for mature companies that have already exhausted all easier avenues to improve their forecasting accuracy.

1.4 Do you reduce forecast error through machine learning techniques?

Yes. Lokad uses differentiable programming and deep learning to reduce forecast error. We occasionally use alternative techniques, such as random forests or gradient boosted trees. We also use machine learning (ML) techniques to revisit ‘classic’ statistical methods (e.g., autoregressive models), with much improved methods for learning their relevant parameters.

Though Lokad does use ML, it should be noted that ML is not a homogeneous body of work, but rather a shared perspective on how to approach data. Considering that machine learning, as a field of research, has been around for over three decades, the term in fact covers a wide range of techniques: some considered state-of-the-art, and some fairly obsolete.

From our perspective, the most important paradigm shift in ML, particularly for supply chain purposes, is the transition from feature engineering to architecture engineering. Simply put, the machine learning techniques have themselves become programmable. Both deep learning and differentiable programming reflect this newer perspective that favors architecture engineering over feature engineering, and this is why Lokad uses this approach.

For supply chain purposes, architectural engineering is key to reflect, within the predictive model, the very structure of the problem being addressed. Though this might seem like an abstract consideration, it is the difference between a forecast that systematically mismatches the data of the ERP, and a forecast that really embraces the situation.

1.5 How do you identify and predict demand patterns to prevent stockouts and overstocking?

Lokad reduces stockouts and overstocking through probabilistic forecasting, which embraces the uncertainty of future demand by providing probabilities of large demand deviations. This approach allows Lokad to provide risk-adjusted decisions to clients, which enables better choices (e.g., POs) and reduces stockouts and overstocking. This approach contrasts with traditional point time-series forecasts, which ignore financial risks and rely on reducing forecast errors in isolation.

Putting aside other possible causes—such as varying lead times—stockouts and overstocking typically reflect unexpected (future) demand. Lokad directly addresses this problem through probabilistic forecasting. Unlike the mainstream supply chain methods that ignore the irreducible uncertainty of the future, Lokad embraces uncertainty in a strict quantitative sense. Probabilistic forecasts provide the probabilities of observing large demand deviations, something which is essential if one wishes to compute risk-adjusted decisions.

Risk-adjusted decisions not only consider the probability of facing unusual events (e.g., very low or very high demand), but also the financial risks associated with those outcomes. As a rule of thumb, there are highly asymmetrical costs when it comes to having too few or too many units. A risk-adjusted decision minimizes the expected losses by steering the client in the most ‘prudent’ or ‘rewarding’ direction.
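To make the asymmetry concrete, here is a minimal Python sketch of a risk-adjusted ordering decision that minimizes the expected loss. The costs and probabilities are invented, and this is not Lokad’s actual numerical recipe:

```python
# Invented, asymmetric unit costs: running out is costlier than overstocking.
UNDERSTOCK_COST = 12.0  # margin lost per unit of unmet demand
OVERSTOCK_COST = 2.0    # holding/write-off cost per unsold unit

# Hypothetical probabilistic demand forecast: quantity -> probability
demand_probs = {0: 0.10, 1: 0.20, 2: 0.30, 3: 0.20, 4: 0.15, 5: 0.05}

def expected_cost(order_qty: int) -> float:
    """Expected loss of ordering `order_qty` units against the demand distribution."""
    cost = 0.0
    for demand, p in demand_probs.items():
        if demand > order_qty:
            cost += p * UNDERSTOCK_COST * (demand - order_qty)
        else:
            cost += p * OVERSTOCK_COST * (order_qty - demand)
    return cost

# The risk-adjusted order quantity minimizes the expected loss.
best_qty = min(range(6), key=expected_cost)
print(best_qty)  # -> 4
```

Note that the risk-adjusted quantity (4) exceeds the most probable demand (2): with asymmetric costs, the ‘prudent’ decision deliberately overshoots what a point forecast would suggest.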

In contrast, and despite their popularity, periodic point time-series forecasts (aka “classic forecasts”) are completely dismissive of these risks. This perspective aims at lowering the forecast error in isolation, to the degree that the error becomes inconsequential. However, this is wishful thinking, as future uncertainty is irreducible. This is why point forecasts fail at preventing stockouts and overstocking in a satisfying manner.

In short, it does not matter whether a crude or sophisticated model is used when the underlying assumptions/tools (e.g., point time-series forecasts) are fundamentally flawed.

See probabilistic forecasting for the fine print on this concept.

1.6 How do you handle seasonality in demand?

Executive summary: Lokad handles seasonality in demand through differentiable programming, using low-dimensional parametric models that hard-code the structure of various cyclicities, such as yearly, weekly, and event-specific patterns. This automated approach ensures accuracy and stability in demand forecasting by considering all patterns impacting demand simultaneously, without the need for manual intervention.

Seasonality, also referred to as the yearly cyclicity, is one of the many cyclicities that Lokad handles. We can also handle the weekly cyclicity (i.e., day of the week effect), monthly cyclicity (i.e., the paycheck effect), and the quasi-yearly cyclicities (e.g., Easter, Ramadan, Chinese New Year, Black Friday, etc.).

Our go-to technique to deal with cyclicities is differentiable programming. We leverage low-dimensional parametric models that structurally reflect the target cyclicities. In other words, we pick models where the structure of the cyclicity is a given, hard-coded by Lokad’s supply chain scientists. This is designed to help us quantify the magnitude of the fluctuations associated with the target cyclicities—rather than merely identifying/discovering their existence.
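As a rough sketch of the idea (plain Python with hand-derived gradients rather than Envision’s differentiable programming, and invented data), a day-of-week cyclicity can be hard-coded into a low-dimensional multiplicative model whose parameters are then fitted by gradient descent:

```python
import random

random.seed(0)

# Synthetic history: a baseline of 20 units/day modulated by a hidden
# day-of-week profile, plus noise. All the numbers are invented.
true_dow = [0.6, 0.8, 1.0, 1.0, 1.2, 1.6, 0.8]
history = [20.0 * true_dow[t % 7] * random.uniform(0.9, 1.1) for t in range(140)]

# The model structure is a given: demand(t) = level * dow[t % 7].
# Only 8 parameters in total; the cyclicity itself is never "discovered".
level = sum(history) / len(history)
dow = [1.0] * 7

lr = 1e-4
for epoch in range(2000):
    for t, y in enumerate(history):
        err = level * dow[t % 7] - y
        # Gradients of the squared error w.r.t. each parameter
        level -= lr * 2 * err * dow[t % 7]
        dow[t % 7] -= lr * 2 * err * level

print([round(f, 2) for f in dow])  # recovered weekly profile (up to scale)
```

Because the cyclicity is structural, the model only has to quantify the magnitude of each day-of-week factor, which remains learnable even with limited data.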

Once the numerical recipe has been engineered by Lokad’s supply chain scientists, the overall optimization process is entirely automated. In particular, Lokad’s supply chain optimization does not require any kind of manual intervention (i.e., micro-managing the seasonality profile), nor does it rely on exceptions for recent products or for products that have not been launched yet. Lokad’s approach might seem somewhat novel, but it is critically important for supply chain purposes.

First, it delivers more accurate results, as the machine learning process does not try to discover the cyclicity; rather, the cyclicity is taken as a given (and one widely acknowledged by supply chain practitioners already). This is even more critical in situations where the amount of data is limited.

Second, it delivers more stable results, by constraining the shape of the demand function to be learned. This approach goes a long way toward mitigating numeric artifacts where the estimated future demand fluctuates widely while the input data does not.

Finally, differentiable programming, used by Lokad to build (machine learning) models from the client’s data, allows us to jointly address all the cyclicities, as well as all the other patterns that shape the observed demand (e.g., stockouts or promotions). Cyclicities cannot be estimated in isolation, nor in sequence, from the other patterns that impact demand. All these patterns, and their respective parameters, have to be jointly estimated.

See Structured Predictive Modeling for Supply Chain for the fine print on differentiable programming and its place in supply chain optimization.

1.7 Do you have long-term (more than 3 years ahead) forecasting capabilities to predict future demand and make a proposal for adjustments accordingly? What is the maximum horizon of forecast that can be generated?

Yes. Lokad can forecast indefinitely far into the future, hence there is no maximum horizon.

Given the nature of future uncertainty, forecast inaccuracy steadily increases as the forecast horizon grows. While it is technically straightforward to produce a long-term forecast, it does not mean that this forecast can be trusted for supply chain purposes. No matter how fancy the underlying model might be, forecasting is ultimately attempting to guess what the road will look like while looking in the rearview mirror.

Furthermore, the capacity to make manual adjustments to an otherwise automated forecast tends to make the situation worse. Once forecasts have been manually altered by ‘experts’, organizations invariably place excessive trust in them. Numerous benchmarks performed by Lokad indicate that experts rarely outperform crude averaging methods when it comes to long-term forecasts. As such, manually adjusted forecasts typically benefit from an undeserved aura of expertise that makes organizations over-reliant on them. This practice of manual tweaking even survives after the numbers inevitably turn out to have been poor guesses.

As an overall commentary on long-term forecasting, we agree with the perspective of Ingvar Kamprad (IKEA founder), who wrote in The Testament of a Furniture Dealer: “exaggerated planning is the most common cause of corporate death”. Generally speaking, unless the client company is dealing with exceptionally stable market conditions (e.g., public utilities), we do not advise steering one’s supply chain through long-term forecasts. Lokad’s team of supply chain scientists is available to provide guidance on better (and saner) approaches which uniquely reflect the specific requirements of each client company.

1.8 Can you provide a minimum of 28 days rolling item/store forecasts?

Yes, Lokad can forecast indefinitely far into the future, even at the SKU-level for a large retail chain.

For our retail clients, we routinely have forecasting horizons of 200 (or more) days, while operating at the SKU-level. These mid-term horizons are useful to properly assess the risks associated with dead inventory for slow movers. Furthermore, Lokad’s platform is highly scalable, thus dealing with tens of millions of SKUs while processing years of daily historical data is not difficult. In fact, Lokad’s platform can effortlessly scale to accommodate even large retail networks without the need for any prior capacity planning.

See also Forecasting Algorithms and Models 1.7 in this FAQ.

1.9 Can you utilize external data sources and/or indicators to improve the accuracy of the demand forecast?

Yes. For example, Lokad routinely uses competitive intelligence (i.e., published prices of competitors). In certain industries, public indicators can be of great use (e.g., projected aircraft fleet sizes for aviation MROs). Lokad’s programmatic platform is uniquely suitable for leveraging varied data sources—beyond the historical data obtained from the business systems.

On the matter of external data, there are two sources that are counter-intuitively almost never worth the engineering efforts: weather datasets and social network datasets. Weather datasets are very unwieldy (i.e., very large and very complex) and, realistically, they are not really better than seasonal averages beyond two weeks ahead (give or take). Social network datasets are also very unwieldy (i.e., very large, very complex, and heavily populated by garbage data), and also lean heavily on short-term effects—typically spanning a few days.

We do not argue that no value can be extracted from either weather data or social network data, as we have already succeeded in doing so for some clients. However, not all forecasting accuracy improvements are worth the engineering efforts needed to get them. Our clients have to operate with limited resources, and usually those resources are better invested in refining other aspects of the end-to-end supply chain optimization. This is a more prudent approach than seeking the last 1% (typically not even that much) of extra accuracy through external datasets that are 2 or 3 orders of magnitude larger than the client’s own historical datasets.

1.10 How do you cope with different levels of rate of sale, from less than 1 per week to thousands per day?

To manage varying sales rates, Lokad uses probabilistic forecasts for sparse demand, employing specialized data structures like Ranvar for efficiency across all sales volumes, thus simplifying supply chain challenges.

When it comes to varying magnitudes of sales rates, the main challenge lies with the small numbers as opposed to the large ones—large numbers being comparatively much easier to process. In order to cope with sparse demand, Lokad leverages probabilistic forecasts. Probabilistic forecasts assign a probability to every discrete event such as the probability of selling 0 units, 1 unit, 2 units, etc. Probabilities eliminate entire classes of problems associated with fractional demand values as traditionally obtained with mainstream supply chain methods.

Under the hood, probabilities over a short series of discrete possibilities are represented as histograms (or similar data structures). These data structures are very compact and thus entail low compute overheads. However, when dealing with sparse demand, a naïve implementation of such data structures (e.g., keeping 1 bucket per unit of demand) would become dramatically inefficient when presented with non-sparse demand distributions that involve thousands of units of demand per period.

Thus, Lokad has engineered special data structures, like the Ranvar (see below), that guarantee constant-time and constant-memory overheads for the algebraic operations featuring probability distributions. Ranvar gracefully approximates the original probability distribution when numbers become large while keeping the precision loss inconsequential from a supply chain perspective. Data structures like Ranvar largely eliminate the need to isolate and target sparse demand, while preserving all the desirable small-integer patterns when dealing with sparse demand.
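While Lokad’s actual Ranvar implementation is not public, the underlying idea can be sketched as a histogram whose buckets widen geometrically: exact resolution for small demand values, and a bounded relative error (here 1/16) with logarithmic memory for large ones. The scheme below is an illustration, not Lokad’s code:

```python
M = 16  # one bucket per unit below M; M sub-buckets per power of two afterwards

def bucket_index(n: int) -> int:
    """Map a non-negative demand value to its bucket index."""
    if n < M:
        return n                          # small values keep unit precision
    e = n.bit_length() - 1                # n lies in [2^e, 2^(e+1))
    frac = ((n - (1 << e)) * M) >> e      # which of the M sub-buckets of that octave
    base_e = M.bit_length() - 1           # exponent of M itself
    return M + (e - base_e) * M + frac

# Small values are exact; large values share a bucket whose relative
# width is bounded by 1/M, keeping memory logarithmic in the demand range.
print(bucket_index(5), bucket_index(32), bucket_index(33), bucket_index(1000))
# -> 5 32 32 111
```

With this layout, representing demand from 0 up to a million units costs a few hundred buckets, while a naïve one-bucket-per-unit histogram would need a million.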

See our public video lecture Probabilistic Forecasting for Supply Chain and our public documentation Ranvars and Zedfuncs for the fine print on this point.

1.11 Do you forecast in different units (unit, price, case, weight, etc.)?

Yes, Lokad’s platform is programmatic. We can re-express our forecasts in any desirable unit. Furthermore, we can accommodate situations where multiple units are involved. For example, containers are limited both in terms of weight and volume. As such, the projection of future container use may have to factor in both of these constraints to properly assess how many containers are likely going to be needed.
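As a hedged illustration of the multi-unit case (the capacities below are rough figures for a generic 40-ft container, not Lokad’s data), the container count is driven by whichever constraint binds first:

```python
import math

MAX_WEIGHT_KG = 24_000.0  # rough payload limit, illustrative only
MAX_VOLUME_M3 = 67.0      # rough usable volume, illustrative only

def containers_needed(total_weight_kg: float, total_volume_m3: float) -> int:
    """Containers required once both the weight and volume limits are honored."""
    by_weight = math.ceil(total_weight_kg / MAX_WEIGHT_KG)
    by_volume = math.ceil(total_volume_m3 / MAX_VOLUME_M3)
    return max(by_weight, by_volume)

# A light but bulky shipment is volume-bound, not weight-bound:
print(containers_needed(30_000, 200))  # -> max(2, 3) = 3
```

A projection expressed in units alone would miss this: the same unit count can require 2 or 3 containers depending on the weight and volume mix.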

1.12 Do you support multiple forecasting algorithms (e.g., linear regression, exponential smoothing, moving average, ARIMA, etc.)?

Yes. Lokad’s platform is programmatic, so we can support all classic forecasting models (such as those listed in the question).

It is important to note that most of the “classic” forecasting models (e.g., linear regression, exponential smoothing, moving average, ARIMA, etc.) are not considered state-of-the-art anymore, and do not appear as top performers in public forecasting competitions. In particular, most of those models fare poorly when it comes to accommodating the usual complications found in supply chain situations (e.g., stockouts, cannibalizations, quasi-seasonal events like Chinese New Year, etc.).

Usually, Lokad’s supply chain scientists craft a bespoke numerical recipe to cover the forecasting needs of the client company. Our supply chain scientists forecast demand as well as all the other uncertain supply chain factors, such as lead times, returns, scrap rates, competitors’ prices, etc. Moreover, the forecasting algorithm(s) must be adapted to capitalize on the available data while mitigating the data distortions that are inherent to supply chain operations (e.g., the demand frequently bounces back at the end of a stock-out event).

See our public video lecture No1 at the SKU-level in the M5 forecasting competition for details of Lokad’s forecasting bona fides.

1.13 What level of granularity is sent back for the forecast?

Lokad can accommodate any granularity in its forecasts. This means we can forecast at the most disaggregated granularities—down to the SKU, or even demand per client per SKU (if it makes sense)—as well as aggregate all the way up to company-wide forecasts.

As forecasts are numerical artifacts intended to serve the generation of optimized supply chain decisions, Lokad’s supply chain scientists adjust the granularity of the forecasts to exactly match the decisions that the forecasts are intended to support. In particular, if there are multiple supply chain decisions to support, then there are usually multiple forecast granularities as well.

However, Lokad goes beyond merely adapting the granularity of the forecast (i.e., picking a certain level within a given hierarchy). We adjust the whole forecasting perspective to better reflect the task at hand. For example, for a B2B retailer, it might make sense to forecast client churn, as the client’s stock (serving a steady demand for a given SKU) might turn overnight into dead stock. This might happen if all (or most of) the demand came from one large client who has suddenly churned. Lokad is capable of forecasting the probabilities of churn alongside the demand for a given SKU. Subsequently, we can combine the two forecasts as needed to optimize the pertinent inventory decisions.

1.14 Can you generate quantitative forecasts using weekly sales data?

Yes. Our forecasting capabilities are very flexible. We can, for example, accommodate weekly sales data instead of raw transactional data (our preference).

It is worth noting that flattening transactional data into a weekly time-series is a lossy process, which is to say that critically useful information might be lost in the process. Once lost, this information cannot be recovered, no matter how sophisticated the forecasting model might be.

For example, imagine a DIY retailer selling light switches. This retailer observes 1 unit of demand per day (on average) for a given SKU in a store replenished every day of the week. If the bulk of the demand comes from customers purchasing 1 unit at a time, then 4 units in stock is probably going to provide a decent service level. However, if the bulk of the demand comes from customers typically purchasing half a dozen units at once (with 1 customer showing up per week on average), then 4 units in stock equates to a terrible service level.

This demonstrates the problem with arbitrary aggregation. Once sales data has been weekly aggregated, for example, the difference between the two situations described above is lost. This is precisely why Lokad prefers to handle raw transactional data whenever possible.
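The light-switch example above can be checked with a short simulation (invented parameters, illustrative only). Both demand patterns average roughly 1 unit per day, yet the unit fill rate for a stock of 4 differs drastically, and a weekly aggregate of the sales cannot reveal the difference:

```python
import random

random.seed(0)
STOCK = 4        # shelf replenished back to 4 units every day
DAYS = 70_000    # long horizon so the averages settle

def unit_fill_rate(order_size: int, p_arrival_per_day: float) -> float:
    """Fraction of demanded units actually served from the daily stock."""
    served = demanded = 0
    for _ in range(DAYS):
        if random.random() < p_arrival_per_day:
            demanded += order_size
            served += min(order_size, STOCK)  # at most STOCK units available
    return served / demanded

a = unit_fill_rate(1, 1.0)       # customers buy 1 unit at a time, every day
b = unit_fill_rate(6, 1 / 6)     # customers buy 6 units at once, ~every 6 days
print(round(a, 2), round(b, 2))  # -> 1.0 0.67
```

The order-size distribution, visible only in transactional data, is precisely what the weekly aggregation destroys.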

1.15 Do you generate a daily (or intraday) forecast from daily history, or do you apply daily patterns to a weekly statistical forecast?

When daily historical data is available (or, even better, data at the transaction level), we typically jointly learn all the relevant cyclicities—day of the week, week of the month, week of the year—in order to improve forecasting accuracy. Through Lokad’s platform, it is very straightforward to include (or exclude) any given cyclicity or quasi-cyclicity (e.g., Easter, Chinese New Year, Ramadan, etc.).

Lokad may or may not use a hierarchical decomposition that separates the day-of-week cyclicity from the week-of-year cyclicity; our platform supports both options. This concern (to decompose or not to decompose) is not exclusive to cyclicities, and similar choices must be made for all the other patterns.
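
A decomposed estimation of a single cyclicity can be sketched in a few lines (a toy illustration with invented numbers, not Lokad's actual models): a multiplicative day-of-week profile is estimated from the whole history, then recombined with the overall level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily demand: a flat level of 10 units modulated by a
# day-of-week profile (the profile averages to 1 across the week).
true_dow = np.array([0.8, 1.0, 1.1, 1.2, 1.4, 0.9, 0.6])
days = 7 * 52
weekday = np.arange(days) % 7
demand = 10.0 * true_dow[weekday] * rng.uniform(0.9, 1.1, size=days)

# Decomposed estimation: overall level, then a normalized weekday profile.
level = demand.mean()
dow = np.array([demand[weekday == d].mean() for d in range(7)]) / level
forecast = level * dow[weekday]  # recomposed day-level forecast
```

Whether such a decomposition is preferable to learning all cyclicities jointly inside a single model is precisely the kind of judgment call left to the Supply Chain Scientist.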

The choice of the most suitable model is left to Lokad’s Supply Chain Scientists. Their choice is based on a careful examination of the specific patterns observed in the supply chain of interest.

1.16 Do you auto-adjust the forecast during the day (or week) based on actual sales vs the expected sales?

Lokad refreshes its predictive models daily to correct any errors from incorrect data entries, ensuring forecasts are accurate and up-to-date. This approach counters numerical instabilities in older technologies, using stable and precise models to prevent erratic forecast changes and improve supply chain decisions.

As a rule of thumb, Lokad refreshes (re-trains) all its predictive models every time we get a fresh batch of historical data. For the majority of our clients, this happens once every day. The most important reason for this is to make sure that incorrect data entries—that have already been fixed—do not linger due to the persistence of ‘broken’ forecasts that had been generated in the past (based on those incorrect entries). Lokad’s functionality makes the daily refresh of the predictive models a non-issue, even considering very large supply chains.

On the other hand, some outdated forecasting technologies suffer from numerical instabilities. As a result, supply chain practitioners may fear a system that is refreshed too frequently, because, in their experience, it means that the forecasts will act erratically. From Lokad’s perspective, a predictive model that erratically “jumps around” due to the arrival of daily data increments is, in fact, a defective model that needs fixing. Delaying the refreshes to mitigate the problem cannot be considered a reasonable fix, as the forecast accuracy needlessly suffers by not considering the most recent events.

Lokad solves this problem by adopting classes of predictive models that have correct-by-design properties when it comes to numerical stability. Differentiable programming is particularly effective at engineering models that are both very stable and very accurate.

See Refresh Everything Every Day for more on this point.

1.17 How do you establish a confidence level that the actual sales level will continue into the future?

We use probabilistic forecasting and stochastic optimization to assess all potential outcomes and their probabilities, allowing for risk-adjusted supply chain decisions. Each potential outcome is assigned a probability, from which confidence intervals (and thus confidence levels) can be derived.

When probabilistic forecasts are used, as recommended by Lokad, all possible futures get an estimated probability. In turn, confidence intervals are straightforward to obtain from a probabilistic forecast. The confidence intervals can be used to establish a ‘confidence level’ according to a certain degree of risk (e.g., worst 5% scenario vs worst 1% scenario).

However, the implicit assumption behind ‘confidence levels’ is that the supply chain decision depends on the original forecast(s). The probabilistic forecasting perspective entirely changes how we approach the whole question of forecasting (in)accuracy. When probabilistic forecasts are available, the supply chain decisions (e.g., a given purchase order) can suddenly benefit from a risk-adjusted optimization. In other words, each decision can be optimized against all the possible futures and their respective probabilities, and ranked in terms of its financial impact(s).

The technical term for this “optimization under uncertainty” is stochastic optimization. Lokad delivers both probabilistic forecasting and stochastic optimization.
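
In its simplest discrete form, the chain from probabilistic forecast to confidence level to risk-adjusted decision can be sketched as follows (the distribution and the per-unit costs are invented for illustration; this is a newsvendor-style toy, not Lokad's stochastic optimizer):

```python
import numpy as np

# A discrete probabilistic forecast: P(demand = 0..10) over some horizon.
demand = np.arange(11)
probs = np.array([.02, .05, .10, .15, .18, .16, .12, .09, .06, .04, .03])

# Confidence levels are simply quantiles of the distribution.
cdf = probs.cumsum()
q95 = int(demand[np.searchsorted(cdf, 0.95)])  # demand covered 95% of the time

# Risk-adjusted decision: pick the order quantity minimizing expected cost.
holding, stockout = 1.0, 4.0  # illustrative per-unit costs

def expected_cost(q: int) -> float:
    over = np.maximum(q - demand, 0)   # leftover units
    under = np.maximum(demand - q, 0)  # missed units
    return float(probs @ (holding * over + stockout * under))

best_q = int(min(demand, key=expected_cost))
```

With these numbers, the 95% quantile is 9 units while the cost-optimal order is 7 units: the decision is ultimately driven by the economics, not by a fixed confidence level.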

1.18 Can you combine multiple forecasting algorithms?

Yes, although we ceased to recommend this practice about a decade ago. Combining multiple forecasting algorithms (aka “meta-models”) in a production setting typically generates suboptimal supply chain decisions—precisely why we do not recommend this approach.

Combining multiple forecasting models is one of the easiest options to improve synthetic results, typically obtained through backtesting. However, this “meta-model” (the product of combining multiple underlying forecasting models) is usually unstable, in that it keeps “jumping” from one model to the other. As a result, supply chain practitioners are routinely confused by sudden deviations or “changes of mind” by the meta-model. Even worse, meta-models are, by design, fairly opaque because they are a mix of several models. Even when the underlying models are simple, the meta-model that results from their mixing is not.

Thus, any “extra accuracy” gained through the use of meta-models in the benchmark (i.e., “synthetic results”) is invariably lost in production (i.e., real-world scenarios) due to second-order effects, such as the increased instability and opacity of the forecasts.

1.19 Do you auto-select a best-fit model for the forecasts?

Yes, Lokad delivers a singular, effective predictive model for supply chain forecasting. We avoid “meta-models” due to their real-world underperformance and opacity.

Lokad’s Supply Chain Scientists deliver each client a singular predictive model rather than an amalgamation of different algorithms that compete for selection, as per the “meta-model” approach. This meta-model approach is something Lokad ceased to operate about a decade ago.

It is worth noting that, at a technical level, Lokad has no problem in operating an “inner competition” of forecasting models—i.e., a pool of models where the best one is automatically selected as needs dictate. Such an approach is technically straightforward. The reason Lokad avoids this practice is that the benefits associated with meta-models are synthetic (i.e., visible in benchmarks) and do not translate to real-world supply chain scenarios. Our experience indicates that meta-models invariably perform worse than their non-composite counterparts.

Meta-models primarily reflect outdated forecasting technologies where a collection of defective models is assembled: the first model is bad on seasonality; the second model is bad on short time-series; the third model is bad on erratic time-series; etc. Building a meta-model gives the illusion that the constituent defects have been attenuated; however, the defects of each model routinely resurface, because the model-selector logic has limitations of its own. Worse, meta-models typically undermine the trust of supply chain practitioners, as the approach is opaque by design.

This is why Lokad’s approach is to craft a predictive model that is exactly as simple as it can be, but not simpler. When designed with suitable supporting technologies, like differentiable programming, this single model deals with the entire supply chain scope of the client company, without the need to resort to a mixture of models.

See also Forecasting Algorithms and Models 1.18 in this FAQ.

1.20 Can you run forecasting tournaments, automatically selecting the best model with the best parameterization? Do you do that with machine learning?

Yes. Lokad can do this, though we do not recommend this approach. Combining models via machine learning (to create “meta-models”) does not yield benefits in a production setting. We advocate a single-model approach instead.

About a decade ago, we used to leverage meta-models for forecasting. Meta-models are models that represent a combination of other models, and/or a model that is a selection of other models. The mixture and/or selection of the underlying models was also done with machine learning techniques—typically random forests and gradient boosted trees.

However, despite improving synthetic results via benchmarking (typically conducted with backtesting), the meta-model approach invariably degrades the real-world outcome(s) for the client. The automatic selection of the model leads to erratic forecasting “jumps” when the meta-model transitions from one model to another. The use of machine learning techniques for the model selection also tends to aggravate this behavior by making the transitions even more erratic.

Thus, while the Lokad platform does support forecasting tournaments, we do not recommend using such approaches for production purposes. In particular, recent forecasting competitions show that a single unified model outperforms more complex meta-models, as illustrated by Lokad placing first at the SKU-level in a worldwide competition involving a Walmart dataset (see below).

See also Forecasting Algorithms and Models 1.18 in this FAQ.

1.21 How do you ensure more granular information for each item/store is used while avoiding noise and overfitting of the model?

Lokad utilizes differentiable programming to enhance forecasting accuracy, an approach that allows us to tailor models to specific data structures and manage overfitting by controlling model expressiveness. This approach effectively addresses the “law of small numbers” by incorporating minimal (yet crucial) expert guidance to optimize data efficiency.

The problems of noise and overfitting are chief motivators for why Lokad uses differentiable programming in its forecasting. Through differentiable programming, Lokad’s Supply Chain Scientists have full control over the very structure of the model. Differentiable programming lets them craft a model that closely mirrors the input data (including its relational structure). Furthermore, differentiable programming lets them restrict the expressiveness of the model in order to keep overfitting under control.

Differentiable programming has been a breakthrough for Lokad in coping with the ‘law of small numbers’ that governs supply chains—i.e., forecasts must always be done at the level/granularity that reflects the supply chain decisions of interest, such as ‘by SKU by day’. However, at such a granularity, forecasting models face situations where the number of relevant data points is in the single digits.

The breakthrough of differentiable programming is that it lets a Supply Chain Scientist (usually employed by Lokad, but possibly employed by the client company) inject some high-level prior knowledge into the predictive model (e.g., a selection of the relevant cyclicities) in order to make the most of the very few data points that are available. Unlike “expert systems” of the 1980s, differentiable programming requires very limited guidance from a human expert—yet this limited guidance can make all the difference when it comes to data efficiency.
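
The gist can be sketched with plain NumPy gradient descent (a toy stand-in for differentiable programming; all data, names, and learning rates are invented): fifty SKUs with only two weeks of daily history each, where the injected prior knowledge is that all SKUs share one day-of-week profile.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sku, days = 50, 14  # only 14 observations per SKU
true_season = np.array([0.7, 1.0, 1.1, 1.3, 1.5, 0.8, 0.6])
true_level = rng.uniform(2.0, 20.0, size=n_sku)
dow = np.arange(days) % 7
sales = true_level[:, None] * true_season[dow] * rng.uniform(0.9, 1.1, (n_sku, days))

# Model in log-space: log(sales) ~ a[sku] + b[day-of-week], fitted jointly
# by gradient descent on the squared error.
y = np.log(sales)
a = np.zeros(n_sku)  # per-SKU log-level
b = np.zeros(7)      # shared log day-of-week profile
lr = 0.002
for _ in range(1000):
    err = a[:, None] + b[dow] - y                   # forward pass + residual
    a -= lr * 2.0 * err.sum(axis=1)                 # gradient step on levels
    b -= lr * 2.0 * np.array([err[:, dow == d].sum() for d in range(7)])

season = np.exp(b - b.mean())  # profile, identifiable up to a constant shift
target = true_season / np.exp(np.log(true_season).mean())
```

Fourteen points per SKU are far too few to estimate a weekly profile in isolation, yet pooling 50 SKUs behind one shared profile recovers it accurately. Differentiable programming automates exactly this kind of structured, gradient-based fitting at scale.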

2. Forecast Management and Adjustments

2.1 Can users visualize forecasts? Can they aggregate the forecasts at different levels (e.g., warehouse, store, shop)?

Executive summary: Yes, Lokad’s platform offers robust data visualization (in constant-time) for inspecting and aggregating forecasts at any desired level(s).

Lokad’s platform provides extensive data visualization capabilities that can be used to inspect time-series forecasts. In particular, it is straightforward to aggregate forecasts according to any hierarchy (e.g., locations, regions, product categories, etc.) and according to any granularity (e.g., day, week, month, etc.). Moreover, Lokad’s platform ensures constant-time display for these reports, meaning they are rendered in under 500 milliseconds — assuming the end-user has enough bandwidth to load the report in this time frame.

However, this question implicitly assumes that we are talking about point time-series forecasts (aka classic demand forecasts). While Lokad’s platform does support point time-series forecasts, these forecasts are obsolete now on two accounts.

First, point forecasts present one future value as if it was THE future (i.e., exactly what will happen). In this regard, they treat the future as the mirror image of the past. However, the uncertainty of the future is irreducible, and the future, as seen from a supply chain perspective rather than a physicist’s perspective, is not the mirror image of the past. For this reason, probabilistic forecasts should be favored instead — an approach that considers ALL possible future outcomes (e.g., demand values) and ascribes probabilities to each one. In terms of risk management, this provides a much more robust defense against the irreducible uncertainty of the future.

However, while probabilistic forecasts can be expressed at any level (e.g., warehouse, store, product, etc.), they are not additive, at least not in the usual sense. Thus, while Lokad’s platform provides all the relevant data visualization capabilities for our forecasts, these capabilities are typically not the ones that supply chain practitioners would expect (at least those who have no prior experience of probabilistic forecasting).
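
The non-additivity can be illustrated with two tiny discrete distributions (numbers invented): the network-level forecast is the convolution of the per-store distributions (assuming independence), and the quantiles do not simply add up.

```python
import numpy as np

p_store1 = np.array([0.1, 0.3, 0.4, 0.2])  # P(demand = 0..3) at store 1
p_store2 = np.array([0.2, 0.5, 0.3])       # P(demand = 0..2) at store 2

# Aggregating probabilistic forecasts = convolving the distributions
# (under an independence assumption), not adding quantiles.
p_network = np.convolve(p_store1, p_store2)  # P(total demand = 0..5)

def quantile(p: np.ndarray, q: float) -> int:
    return int(np.searchsorted(np.cumsum(p), q))

q90_network = quantile(p_network, 0.9)                         # from convolution
q90_naive = quantile(p_store1, 0.9) + quantile(p_store2, 0.9)  # naive quantile sum
```

Here the naive sum of per-store 90% quantiles (5 units) overstates the true network-level 90% quantile (4 units), because extreme demand rarely hits both stores at once. This is why aggregated views of probabilistic forecasts differ from the familiar additive roll-ups.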

Second, time-series forecasting models are frequently unsuitable because the time-series perspective itself is simplistic and fails to capture the essence of the business. For example, a B2B retailer can have a mix of two types of orders: small orders that customers expect to be readily serviced from the stock of the retailer; and large orders placed months in advance that the customers expect to be serviced on time — precisely because the order was given with so much leeway in the first place. This pattern, however basic, cannot be addressed with a time-series forecast. Furthermore, patterns that do not fit time-series forecasts include shelf-life expirations, cannibalizations, substitutions, competitors’ price changes, etc.

More generally, time-series forecasts are nice for visualization purposes. However, more often than not, at Lokad the underlying forecasting model will not be a time-series one — even if the final data are visualized as a time-series for the sake of convenience.

2.2 What type of forecasting insights should be handled by the experts vs the system/machine?

Experts should focus on the predictive model’s high-level structure (e.g., the relational structure of the input data, the key structural assumptions that can be made about this data, etc.). There is no expectation that experts will have to micro-manage (e.g., manually override) the forecasts themselves.

Given that Lokad leverages modern predictive technology — differentiable programming — our Supply Chain Scientists focus almost exclusively on the ‘high-level structure’ of the predictive model. This is contrary to older technologies (now obsolete) that typically expected the expert using them to micro-manage the forecasts, providing corrective insights for all the edge-cases the models presented. Unfortunately, such dated approaches invariably proved too tedious for experts to maintain over time. As a result, the companies utilizing them usually lost their experts and then had to revert to using spreadsheets.

In contrast, the high-level structure of the predictive model is something that can be expressed concisely, usually in no more than 100 lines of code. This brevity is true even when considering very complex supply chains. The high-level structure represents the core of the human understanding of the predictive challenge. Meanwhile, the process(es) in charge of ‘learning’ the parameters of the model remain entirely automated. This is done by leveraging the input data (typically the historical data) plus some other data sources (e.g., upcoming marketing campaigns).

2.3 Can the forecasts be manually adjusted/overwritten?

Executive summary: Yes. Though Lokad’s platform supports manual adjustments to forecasts, this is unnecessary given the probabilistic forecasts themselves are designed to account for risk and uncertainty — typically the driving principles behind manual override in the first place.

Lokad’s platform offers extensive programmatic capabilities, thus it is straightforward to support editing capabilities for any forecasting process. However, the need for manual adjustment of forecasts primarily reflects the limitations of obsolete forecasting technologies. Lokad’s use of advanced probabilistic forecasting largely eliminates the need for micro-management of the forecasts. In fact, at Lokad the need for such micro-management effectively disappeared a decade ago.

Manual corrections of the forecasts are typically intended as an indirect way to mitigate risks. The supply chain practitioner does not expect the forecast to become more accurate in a statistical sense, rather they expect the decisions resulting from the adjusted forecast to be less risky (i.e., less costly for the company). However, with probabilistic forecasts, the supply chain decisions (generated by Lokad) are already risk-adjusted. Thus, there is no point attempting to steer the probabilistic forecast to de-risk decisions, as the decisions are inherently designed to be risk-adjusted.

Furthermore, manual corrections of the forecasts are frequently intended to mitigate situations of high uncertainty. However, probabilistic forecasts are designed to embrace and quantify uncertainty. Thus, the probabilistic forecasts already reflect the area(s) of high uncertainty, and risk-adjusted decisions are taken accordingly.

Fundamentally, there is no point in trying to manually fix “incorrect” forecasts. If forecasts are provably less accurate than they are expected to be, then the numerical recipe generating the forecasts should be fixed. If the forecasts are modified for reasons that do not pertain to accuracy, then it is the downstream calculations that need to be adjusted. Either way, manually adjusting forecasts is an obsolete practice that has no place in a modern supply chain.

2.4 Can you integrate user-built forecasting algorithms?

Yes. Lokad enables integration of user-built forecasting algorithms via Envision—our domain-specific programming language (DSL). This flexible, customizable, and scalable DSL can support mainstream and advanced forecasting algorithms and techniques, as needed.

Programmability is a first-class citizen of Lokad’s platform, delivered through Envision—the DSL (domain-specific programming language) engineered by Lokad for the predictive optimization of supply chains. Through Envision, all mainstream forecasting algorithms (and their variants) can be re-implemented. In addition, Envision also supports quite a few not-yet-mainstream forecasting algorithms, including competition-winning techniques based on differentiable programming and probabilistic forecasting (see below).

Integrating those user-built algorithms in Lokad should not be confused with a “customization” of the Lokad product. From Lokad’s perspective, relying on bespoke algorithms is the normal way to use our service. The Lokad platform provides a safe, reliable, and scalable execution environment to support such algorithms. The implementation of the algorithms (usually referred to as “numerical recipes”) is normally carried out by Lokad’s Supply Chain Scientists. However, if the client company has in-house data science talent, then those employees can also use Lokad’s platform for this purpose.

Moreover, Lokad’s platform provides an entire IDE (integrated development environment) to craft such user-built algorithms. This capability is critical to make sure that the algorithms are developed within an environment that strictly mirrors the production environment—both in terms of input data and runtime capabilities. With Lokad, once a revised forecasting algorithm is deemed satisfactory (and typically superior to the previous iteration), it can be promoted to production within minutes. On a related note, Lokad’s platform provides extensive ‘by design’ guarantees to entirely eliminate classes of issues when promoting algorithms from prototype to production status.

See No1 at the SKU-level in the M5 forecasting competition for more on Lokad’s forecasting techniques.

2.5 How do you explain what the solution is doing to arrive at a forecast or purchase order so that the user can understand, interrogate, and explain it to other stakeholders in the business?

Lokad’s platform leverages a flexible domain-specific programming language (Envision) that allows us to craft intuitive dashboards to present the key metrics and decisions for the client. These dashboards are built in collaboration with clients, in such a way that they can be quickly and conveniently understood. For more complicated points, Lokad’s Supply Chain Scientists are in charge of both designing and explaining the algorithms (“numerical recipes”—the things that generate the forecasts and supply chain decisions) and their results to clients. These experts are trained to provide relevant business, economics, and data science insights to help clients understand what is happening “behind the scenes”.

The Supply Chain Scientist, employed by Lokad, is the person who writes the numerical recipe (algorithm) supporting the predictive model (and thus its decision-making process). The Supply Chain Scientist is personally responsible for defending and explaining the adequacy of the forecasts and all the decisions that are generated by the numerical recipe.

Thus, while situations vary from one client company to the next, each situation has a human copilot (the Supply Chain Scientist). It is not an impersonal “system” that is responsible for a forecast or a decision; it is a set of numerical recipes that are under the direct control of a named Supply Chain Scientist. This responsibility includes the “white-boxing” of the numerical recipes, i.e., making their results accessible to and understandable by the stakeholders.

To support this process, our Supply Chain Scientists use tools like backtesting to support and demonstrate their analysis. However, and more importantly, they make informed judgment calls about the assumptions that go into their numerical recipes (such as pertinent constraints and drivers). Ultimately, the “adequacy” of a numerical recipe depends on whether it reflects the intent of the business, and this is something that the Supply Chain Scientist establishes through careful inspection of the client’s supply chain situation (as well as consultation with the client).

See our Public Demo Account Video for an overview of how Lokad prepares data and visualizes results for clients.

2.6 Can the forecast be split into set items and BOMs (Bills of Materials)?

Yes, Lokad can deliver forecasts at any level. This is due to the extensive programmatic capabilities of our probabilistic modeling. We can split the forecast between set items and BOMs, as well as cope with situations where items can either be consumed as part of BOMs or sold independently.

Furthermore, when BOMs (Bills of Materials) are present, we not only forecast the demand for the inner items, but we optimize the supply chain decisions to reflect that distinct assemblies internally compete for the same inner parts. That is, situations where respective BOMs overlap. This optimization can lead to refusing to sell a “lone” part if this part would endanger the availability of bigger and more critical BOM(s).
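
The first step, exploding kit-level forecasts into part-level requirements, can be sketched as follows (hypothetical products and quantities; the stock optimization built on top of this is far more involved):

```python
# Hypothetical BOMs: two finished kits share the part "switch-A".
boms = {
    "kit-basic":  {"switch-A": 2, "plate-B": 1},
    "kit-deluxe": {"switch-A": 4, "plate-B": 1, "dimmer-C": 1},
}
kit_forecast = {"kit-basic": 30, "kit-deluxe": 10}  # expected units over horizon

# Explode the kit forecasts into part-level demand, making the
# competition for shared parts explicit.
part_demand = {}
for kit, qty in kit_forecast.items():
    for part, per_unit in boms[kit].items():
        part_demand[part] = part_demand.get(part, 0) + qty * per_unit
```

Here "switch-A" faces 100 units of derived demand (60 from the basic kit, 40 from the deluxe kit); selling it as a lone spare part directly competes with both assemblies.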

2.7 Do you auto-recommend meta-parameters for your forecasting algorithms?

Yes. The standard practice at Lokad is that predictive models must operate fully unattended. Lokad’s Supply Chain Scientists are responsible for setting appropriate meta-parameters. Either the meta-parameters are stable enough to be hard-coded, or the numerical recipe includes a tuning step dedicated to identifying an adequate meta-parameter value. Either way, the algorithm (aka “numerical recipe”) can be run unattended.

Lokad uses far fewer meta-parameters than most competing solutions. This is because differentiable programming, Lokad’s preference in this regard, is a general parameter-fitting paradigm. Thus, when differentiable programming is available, most parameters are learned. The technology is extremely powerful when it comes to learning all sorts of parameters, not just the “traditional” ones (e.g., seasonality coefficients).

As a result, from the Lokad perspective, most values that would be considered a “meta-parameter” by our peers are just “regular parameters” that do not require specific attention. As a rule of thumb, most predictive models operated in production by Lokad have very few meta-parameters (fewer than 10). Our clients are never expected to fine-tune these numbers, however, as that is the responsibility of our Supply Chain Scientists.

2.8 Can the product adjust forecasts through causal variables?

Yes.

This is one of the core strengths of differentiable programming—the technological approach favored by Lokad for predictive modeling. Differentiable programming is a programmatic paradigm, hence including an explanatory variable is a given. Even better, the causality mechanism gets reified in the model; it comes with its own “named” parameters. Thus, not only do the forecasts leverage the causal variable, but it is done in a way that can be audited and investigated by supply chain practitioners.

For example, when the retail price tag is used as a causal variable, the exact demand response to variations of prices can be plotted and investigated. This result can, on its own, be of prime interest for the company. If the company happens to be a retail store network, this can be used to steer liquidation events in the stores that respond most strongly to the discounts. This can minimize the total volume of discounts needed to fully liquidate aging stock.
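
As an illustration (with invented numbers, and a deliberately simple exponential demand model rather than anything Lokad ships), the elasticity below is a named parameter that can be read off and audited after fitting:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic history: demand responds to price as base * exp(-elasticity * price).
price = rng.uniform(5.0, 15.0, size=200)
demand = 50.0 * np.exp(-0.12 * price) * rng.uniform(0.9, 1.1, size=200)

# The causal mechanism is reified as named parameters: fitting the model
# in log-space recovers them, making the demand response auditable.
slope, intercept = np.polyfit(price, np.log(demand), 1)
elasticity, base = -slope, float(np.exp(intercept))
```

Once fitted, `elasticity` directly answers questions such as "how much extra demand does a given discount buy in this store?", which is what makes the causal structure auditable rather than buried inside an opaque model.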

2.9 Is the product capable of forecast experimentation and algorithm development and/or customization?

Yes. Our Supply Chain Scientists routinely experiment with forecasting models, allowing new algorithms to be developed and older algorithms to be further adapted. This is possible because Lokad’s platform is programmatic and features a flexible DSL (domain-specific programming language) named Envision, which was designed explicitly for the predictive optimization of supply chains.

The Lokad perspective is that experimentation and customization of the predictive models are not workarounds to cope with the limitations of the forecasting technology. Rather, they are the intended way to use Lokad’s solution in the first place. This approach delivers not only superior results in terms of forecasting accuracy, but results that also prove to be a lot more “production-grade” than alternative “packaged” approaches.

We do not complain about “bad data”; the data is just what it is. Our Supply Chain Scientists make the most of what happens to be available. They also quantify, in Euros or Dollars (or whatever currency is desired), the benefits of improving the data so that the company can pinpoint the data improvements that yield the greatest returns. Improving data is a means, not an end. Our Supply Chain Scientists provide guidance when the extra investment is simply not worth the expected supply chain benefits.

2.10 Is it possible to iterate and to refine the feature engineering underlying the forecasting?

Yes.

Lokad’s Supply Chain Scientists routinely adjust the features that go into a predictive model. This is possible because Lokad’s platform is programmatic and features a flexible DSL (domain-specific programming language) named Envision, which was designed explicitly for the predictive optimization of supply chains.

It should be noted, however, that during the last decade feature engineering (as a modeling technique) has been on a downward trend. In fact, it is gradually being replaced by model architecture engineering. In short, instead of changing the feature to better fit the model, the model is changed to better fit the feature. Differentiable programming, Lokad’s preferred approach for predictive modeling, supports both feature engineering and architecture engineering. However, the latter is usually more suitable in most situations.

See also Forecast Management and Adjustment 2.9 in this FAQ.

3. Forecast Accuracy and Performance Measurement

3.1 What is your organization’s insight on forecast performance and how should forecast performance be measured?

Forecast accuracy must be measured in Dollars or Euros (or the client’s desired currency) of impact. This refers to the return on investment (ROI) of decisions taken on the basis of the forecast. Measuring percent points of error is simply not sufficient. Forecast accuracy must also encompass all the areas of uncertainty, not just future demand, e.g., lead times, returns, commodity prices, etc. These are all factors that vary and need to be forecast, just like future demand.

Traditional metrics like MAPE (mean absolute percentage error), MAE (mean absolute error), MSE (mean square error), etc., are technical metrics that may be of some interest for a Supply Chain Scientist, but, from a supply chain perspective, they are fundamentally both blind and misleading. The fine-print of this argument can be found in Lokad’s public lecture on Experimental Optimization.

Thus, these metrics should not be communicated to the broader organization, as they will only generate confusion and frustration. In fact, it is usually straightforward to make the forecast more accurate—in a statistical sense—while degrading the quality of service as perceived by clients, and while raising the operating costs for suppliers (who retaliate by raising their prices).

Forecast metrics only matter where they support the generation of better supply chain decisions. As far as Lokad is concerned, generating the most financially sensible reorder quantities, production quantities, shipped quantities, prices, etc. is what is worth focusing on. Everything else, including forecasting error in isolation, is tangential to the core business concern of maximizing return on investment.

See also Lead time forecasting.

3.2 How do you measure the performance of the forecasts vs actual sales?

If the model is forecasting ‘sales’, then measuring the accuracy of the ‘sales forecast’ is straightforward: any of the usual indicators, like the MAE (mean absolute error), will work. The catch, however, is that most companies want to forecast ‘demand’, not sales. Yet the historical sales data is an imperfect proxy of the historical demand: stockouts and promotions (and possibly competitors’ moves) distort historical sales.

Thus, the challenge is recovering the original ‘demand’ when the historical data only reflects the historical sales. For this purpose, Lokad employs a variety of techniques. Indeed, the nature of the distortion between the (observed) sales and the (hidden) demand varies greatly depending on the type of business being considered. Cannibalizations and substitutions complicate the situation further.

Most of Lokad’s techniques abandon time-series models, which cannot, by design, capture the necessary information. In fact, most of the time, the sales data get ‘enriched’ with extra information (such as stockout events) that can be leveraged to build a better model of the hidden demand. However, this extra information rarely fits into the (simplistic) time-series paradigm. The alleged sophistication of time-series models is irrelevant if the required data exists outside their operating paradigm (i.e., cannot be captured or expressed by them).
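
The simplest possible ‘enrichment’ can be sketched as follows (a deliberately crude toy with invented numbers: stockout days are merely discarded, whereas real censoring models are far more careful):

```python
import numpy as np

rng = np.random.default_rng(3)
days = 10_000
demand = rng.poisson(5.0, size=days)    # hidden true demand, mean 5 units/day
in_stock = rng.random(days) > 0.2       # 20% of days suffer a stockout
sales = np.where(in_stock, demand, 0)   # a stockout day records zero sales

naive_estimate = sales.mean()           # biased low: roughly 0.8 * 5 = 4.0
censor_aware = sales[in_stock].mean()   # close to the true mean of 5.0
```

A forecasting model fed only the sales series inherits the roughly 20% downward bias; the same model fed sales plus stockout events does not. No amount of time-series sophistication recovers information that the time-series never contained.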

See Structured Predictive Modeling for Supply Chain for more on this point.

3.3 Do you provide reports on forecast accuracy? Do you provide an outlook on the projected forecast error?

Executive Summary: Yes. For simplicity, Lokad’s platform can express its probabilistic forecasts (and thus error) in intuitive graph format. This takes the form of a traditional time-series graph where the forecast error (“uncertainty”) grows along with the time horizon. This shotgun effect graph helps to visualize how the range of potential values (e.g., demand) expands as one looks further into the future. These reports are available to clients at all times in their Lokad account(s).

Half of the challenge in improving the accuracy of a predictive model is crafting adequate reporting instruments. This task is carried out by Lokad’s Supply Chain Scientists. As Lokad uses probabilistic forecasts, the projected error typically exhibits a “shotgun effect” where the expected forecast error steadily grows with the forecasting horizon. These reports are accessible by the client company within the Lokad platform.
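
The "shotgun effect" can be illustrated with a toy Monte Carlo simulation in Python; the Gaussian random-walk demand model and all parameters below are illustrative assumptions, not Lokad's forecasting model:

```python
import random

random.seed(0)

# Illustrative assumption: future demand as a Gaussian random walk, sampled
# as Monte Carlo paths. The spread of the paths - the "shotgun effect" -
# widens as the horizon grows.
horizon, n_paths = 12, 2000
paths = []
for _ in range(n_paths):
    level, path = 100.0, []
    for _ in range(horizon):
        level += random.gauss(0, 5)  # weekly demand shock (made-up scale)
        path.append(level)
    paths.append(path)

def quantile(xs, q):
    xs = sorted(xs)
    return xs[int(q * (len(xs) - 1))]

# Width of the 5%-95% band at each horizon step.
band = [quantile([p[t] for p in paths], 0.95)
        - quantile([p[t] for p in paths], 0.05)
        for t in range(horizon)]
print([round(b, 1) for b in band])  # the band widens with the horizon
```

Plotting the quantile band against the horizon reproduces the shotgun-shaped graph described above: the further ahead one looks, the wider the range of plausible demand values.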

However, under the probabilistic forecasting rubric, “forecast accuracy” is largely relegated to a second-class technicality. Under this approach, the primary goal is producing risk-adjusted financial decisions that consider the totality of the client’s economic drivers and constraints, as well as reflecting the high uncertainty of future values (such as demand or lead times). For example, if uncertainty is especially high, the corresponding decisions are typically more conservative. As such, it is unwise to measure probabilistic forecast accuracy in isolation; rather, one should review the ROI associated with the risk-adjusted decisions generated using the probabilistic forecasts.

With classical forecasts (also called deterministic forecasts, in opposition to probabilistic forecasts), almost every instance of forecast inaccuracy turns into costly, bad decisions for the client. This is why companies are so adamant about “fixing” their forecasts. Yet, five decades after the inception of modern time-series statistical forecasting techniques, companies are still nowhere near having “accurate” forecasts. At Lokad, we do not believe that a “super-accurate” forecasting technique is around the corner. We believe that the uncertainty of the future is irreducible to a large extent. However, when combining probabilistic forecasts with risk-adjusted decisions, the negative consequences of the high uncertainty are largely mitigated.

As a result, forecasting accuracy ceases to capture the interest of anyone but the technical experts dealing with the predictive model itself. The stakes are simply not high enough anymore for the rest of the organization to care.

3.4 What is the expected percentage of automated and accurate forecasts?

100%, if we define “accurate” as good enough to direct sound supply chain decisions. This does not mean that every forecast is precise. On the contrary, through probabilistic forecasting, Lokad embraces the irreducible uncertainty of the future. Frequently, the uncertainty is large, and consequently the probabilistic forecasts are very dispersed. In turn, the risk-adjusted decisions that are generated based on those forecasts are very prudent.

Unlike many obsolete technological solutions, Lokad treats every single (probabilistic) forecast that cannot be used for production purposes as a software defect that must be fixed. Our Supply Chain Scientists are there to ensure that all those defects are fixed long before going to production. This class of defect is usually resolved by the midpoint of the onboarding phase.

On the other hand, classic forecasts (also referred to as ‘deterministic’ forecasts) invariably wreak havoc when they are inaccurate, because insane downstream supply decisions are made based on those forecasts. In contrast, probabilistic forecasts embed their own quantification of the expected uncertainty. When demand volumes are low and erratic, probabilistic forecasts reflect the high intrinsic uncertainty of the situation. The calculation of Lokad’s risk-adjusted decisions very much depends on the capacity to assess risks in the first place. This is what the probabilistic forecasts are entirely designed for.

3.5 Can you track metrics such as MAPE (Mean Absolute Percentage Error), MPE (Mean Percentage Error), MAE (Mean Absolute Error) over time?

Yes.

Lokad’s platform is programmatic, and it is straightforward to track all the usual metrics such as MAPE, MPE, MAE, etc. We can also track all the slightly less usual metrics, such as customized variants of those metrics preferred by the client company. For example, “weighted” variants, such as weighted MAPE, weighted MAE, etc., where the weighting schemes depend on specific business rules.

Lokad can collect and consolidate relevant/preferred metrics over time as new forecasts are generated. We can also re-generate metrics by “playing back” the historical data (i.e., backtesting), if the client company wants to assess the expected statistical performance of a revised forecasting model.
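
As a sketch of the mechanics, here is how plain and weighted variants of such metrics can be computed in Python (not Envision); the per-unit margins used as weights are hypothetical stand-ins for client-specific business rules:

```python
# Made-up backtest figures; the per-unit margins used as weights stand in
# for client-specific business rules.
forecast = [100, 50, 20, 10]
actual   = [ 90, 60, 20,  5]
margin   = [5.0, 2.0, 1.0, 0.5]  # hypothetical economic weight per unit

mae  = sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)
mape = sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual)
wmae = (sum(w * abs(f - a) for f, a, w in zip(forecast, actual, margin))
        / sum(margin))
print(mae, round(mape, 3), round(wmae, 2))  # 6.25 0.319 8.53
```

Note how the weighted variant shifts the emphasis toward the high-margin items, which is precisely the point of such business-rule-driven weighting schemes.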

However, the metrics mentioned above all relate to classic forecasts (also referred to as deterministic forecasts). Deterministic forecasts should be considered obsolete for supply chain purposes as they are not designed (or able) to tackle the uncertainty associated with future values (such as demand or lead times). They aim to identify a single possible future value, rather than all probable future values and their likelihoods. For this reason, Lokad utilizes probabilistic forecasts, an approach that quantifies the uncertainty time-series forecasts ignore.
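
For reference, probabilistic (quantile) forecasts are typically scored with metrics such as the pinball loss rather than MAPE or MAE. Below is the generic textbook definition in Python, not code taken from Lokad's platform:

```python
# The pinball (quantile) loss scores one quantile of a probabilistic
# forecast; unlike MAPE/MAE, it rewards calibrated uncertainty rather than
# a single point guess. Generic textbook definition, not Lokad-specific.
def pinball(y: float, q_hat: float, tau: float) -> float:
    """Loss of forecasting the tau-quantile q_hat when y materializes."""
    return tau * (y - q_hat) if y >= q_hat else (1 - tau) * (q_hat - y)

# At tau = 0.9 the quantile should rarely be exceeded, so exceeding it
# costs 9x more than the symmetric miss.
print(pinball(12.0, 10.0, 0.9))  # demand above the quantile: 0.9 * 2 = 1.8
print(pinball(8.0, 10.0, 0.9))   # demand below the quantile: ~0.2
```

The asymmetry of the loss is what makes the metric suitable for quantiles: a well-calibrated 90th percentile is one that gets exceeded roughly 10% of the time.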

3.6 Can you compare multiple scenarios using user-defined metrics (e.g., turnover, profit, cost, risk, etc.)?

Yes.

Lokad’s platform is programmatic, thus it can introduce complex metrics guided by many business rules (e.g., user-defined metrics). It can also introduce complex alternative scenarios where the structure and/or capacities of the supply chain network are modified (beyond just inflating/deflating demand and lead times, for example). This helps Lokad to enhance risk management, strategic planning, and decision-making by preparing for diverse potential supply chain situations and outcomes.

It is worth noting that typical “scenario” management capabilities are obsolete—from Lokad’s perspective. As Lokad operates probabilistic predictive models, in a sense, every supply chain decision that we generate is already risk-adjusted. That is to say, already optimized with respect to all possible future values (e.g., demand) considering their respective probabilities.

Thus, “scenarios” in Lokad are not used to assess “future variations” as said variations are already fully integrated into the base operating mode of Lokad. Scenarios are used to deal with drastic changes beyond variations, typically more aligned with what practitioners would refer to as ‘supply chain design’, such as modifying the topology of the network, the capacity of the network, the location of the suppliers, etc.

3.7 Do you track and monitor forecast accuracy and forecast error (and eventually other demand metrics) with different defined lags?

Yes. Lokad tracks predictive errors with many metrics, including along the horizon/lag dimension. Lokad tracks predictive accuracy across all forecasts, including demand, lead time, returns, etc.

The quality of all predictive models is horizon dependent. Usually, the further ahead the forecast, the larger the uncertainty. Lokad’s platform has been designed to make it straightforward to track a wide variety of metrics considering the applicable horizon/lag. This principle is not only applied to demand forecasts, but to all forecasts, including lead time forecasts, return forecasts, etc.

Also, it must be noted that probabilistic forecasts provide a direct quantitative assessment of the uncertainty that grows with the horizon. Thus, the horizon-dependent growing error is not merely measured but also predicted. As the supply chain decisions optimized by Lokad are risk-adjusted, our decisions automatically reflect the extra risk associated with decisions that depend on longer-term forecasts (compared to shorter-term forecasts).
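
A toy illustration of lag-dependent error tracking, with made-up forecast records:

```python
from collections import defaultdict

# Made-up archive of (lag, forecast, actual) records; real archives would
# key forecasts by their origin date and target date.
records = [
    (1, 100, 98), (1, 50, 55), (1, 20, 21),   # 1-week-ahead forecasts
    (4, 100, 80), (4, 50, 70), (4, 20, 35),   # 4-week-ahead forecasts
]
abs_err = defaultdict(list)
for lag, f, a in records:
    abs_err[lag].append(abs(f - a))

mae_by_lag = {lag: sum(e) / len(e) for lag, e in abs_err.items()}
print(mae_by_lag)  # the error typically grows with the lag
```

Grouping the archived errors by lag, as above, is what makes the horizon-dependent degradation measurable rather than merely anecdotal.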

3.8 Can you aggregate data at the product/branch level for validating the statistical forecast?

Yes, Lokad tracks predictive errors and biases at many levels, including the relevant hierarchical levels (e.g., by product, by branch, by category, by region, by brand, etc.) when hierarchies are present. Lokad’s differentiable programming technology even allows us to refine forecasts at a given granularity to minimize an error or bias that happens at another granularity.

More generally, on the validation side, as the Lokad platform is programmatic, the historical forecasts can be re-aggregated in any way deemed fit by the client company. Similarly, the metric used to validate the aggregated forecasts may differ from the metric used to validate the disaggregated forecasts, if using an alternative metric is deemed preferable by the client company.
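
A minimal sketch of such re-aggregation, with a made-up two-branch hierarchy; note how errors can partly cancel out at the aggregate level, which is precisely why the validation granularity matters:

```python
from collections import defaultdict

# Made-up SKU-level forecasts re-aggregated to the branch level.
rows = [
    # (branch, sku, forecast, actual)
    ("paris",  "sku-1", 10, 12),
    ("paris",  "sku-2", 20, 17),
    ("london", "sku-3", 30, 33),
]
agg_f, agg_a = defaultdict(float), defaultdict(float)
for branch, _, f, a in rows:
    agg_f[branch] += f
    agg_a[branch] += a

branch_mae = sum(abs(agg_f[b] - agg_a[b]) for b in agg_f) / len(agg_f)
sku_mae = sum(abs(f - a) for _, _, f, a in rows) / len(rows)
print(branch_mae, sku_mae)  # errors partly cancel out at the branch level
```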

4. Data Management and Cleansing

4.1 Do you auto-identify data errors?

Yes. Lokad’s Supply Chain Scientists meticulously create “data health” dashboards for each client’s project. These data health dashboards are designed to automatically identify any data issue(s). Moreover, these dashboards identify the criticality of the issue(s) and the ownership of the issue(s).

The criticality of the issue determines whether it is acceptable, or not, to generate supply chain decisions based on the data where the issue is present. Sometimes, it means restricting the acceptable decisions to a sub-scope within the client company that is deemed “safe” from the issue/problem. In reality, expecting a 100% problem-free dataset is typically not realistic when it comes to large companies. Thus, the supply chain optimization must be able to operate (to some degree) with imperfect data, as long as the imperfection does not put the sanity of the supply chain decisions in jeopardy.

The ownership of the issue defines who is responsible for the resolution of the issue. Depending on the type of issue, the problem can originate from entirely different places within the client company. For example, truncated historical data is very likely a problem for the IT department, while negative gross margins (i.e., the sell price is below the buy price) belong to either procurement or sales.

Identifying nontrivial data errors is a problem of general intelligence that requires in-depth understanding of the supply chain of interest. Thus, this process cannot be automated (yet); it is currently beyond what software technologies can deliver. However, once a given issue is identified, a Supply Chain Scientist can automate future detections. In practice, our Supply Chain Scientists proactively implement detections for the most frequent sorts of issues as part of the initial draft of the “data health” dashboards.
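
As a sketch of what such an automated detection looks like once implemented, here is a toy "data health" rule in Python; the rule, criticality level, and ownership label are hypothetical examples, not Lokad's actual implementation:

```python
# Hypothetical rule, criticality and ownership labels - illustrative only.
catalog = [
    {"sku": "A", "sell_price": 10.0, "buy_price": 12.0},  # negative margin
    {"sku": "B", "sell_price": 15.0, "buy_price": 9.0},
]

def data_health(rows):
    issues = []
    for row in rows:
        if row["sell_price"] < row["buy_price"]:
            issues.append({"sku": row["sku"],
                           "issue": "negative gross margin",
                           "criticality": "high",
                           "owner": "procurement/sales"})
    return issues

print(data_health(catalog))  # one issue flagged, on SKU "A"
```

Each detection carries its criticality and its ownership, mirroring the dashboard design described above.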

See Data Health in The Data Extraction Pipeline for more on data health.

4.2 Do you automatically cleanse historical data?

Executive Summary: Yes, in the sense that Lokad does not expect our client(s) to manually preprocess the business data before providing it to us. Furthermore, the entire data pipeline (constructed between Lokad and each client) runs unattended with all processes fully automated.

Lokad rarely “cleanses” historical data; at the very least, not in the usual sense. There are quite a few obsolete technologies that require extensive preparation (“cleansing”) of the historical data to operate. For example, old time-series systems typically expected demand drops (stockouts) and demand spikes (promotions) to be corrected to keep the forecasts sane.

This is a reflection of the limitations of the time-series approach. As a result, the historical data must be extensively prepared to make it more amenable (somehow) to a defective system (time-series). Referring to this process as “data cleansing” is misleading because it gives the impression that the problem lies with the historical data, while the root cause is the defective design of the system processing the historical data.

In contrast, Lokad’s predictive modeling technology goes far beyond the time-series approach. Through differentiable programming, we can process any kind of relational data, instead of being stuck with a “time-series”. This means that all the causal factors (e.g., pricing, stocks, events, etc.) that underlie either the demand or the lead time are explicitly factored into the model. Causal integration is far superior to cleansing the data—when applicable—because the cleansed data is unreal (no one will ever know for sure what the demand value would have been if the stockout had not happened).

Occasionally, the business data (historical or not) requires corrections. Lokad attempts to deliver those corrections automatically whenever possible, possibly leveraging machine learning depending on the scenario. For example, the mechanical compatibility matrix between cars and parts can be automatically improved with a semi-supervised learning method (see Pricing Optimization for the Automotive Aftermarket).

4.3 Do you let users manually cleanse historical data?

Yes, if the client desires this functionality, Lokad can provide a workflow for this purpose. However, we typically do not recommend that end-users manually cleanse data.

Other software/solutions impose numerous manual tasks on their end-users. In contrast, Lokad’s Supply Chain Scientists craft end-to-end algorithms (“numerical recipes”) that make do with the data as it exists. For us, manual data cleansing by the client is the exception, not the norm.

See also Data Management and Cleansing 4.2 in this FAQ.

4.4 How will the data be cleansed, managed, and maintained to avoid unnecessary model error?

Lokad’s Supply Chain Scientists are in charge of the setup of the data pipeline. The data has to be prepared, but most importantly predictive models have to be engineered to fit the data as it exists presently. The Supply Chain Scientist introduces the instruments (e.g., dedicated dashboards) to monitor the raw input data and the prepared data to make sure that the supply chain decisions generated by Lokad are sound.

Many alternative solutions only look at the problem through the lens of data preparation, where any incorrect output must be fixed by adjusting the input. Such solutions are not programmatic, thus the core models cannot be modified—only their inputs can be modified. However, Lokad adopts a different technological approach. We support a programmatic predictive technology (via differentiable programming). Thus, when facing improper outputs (i.e., bad supply chain decisions), we can either fix the inputs or the models (or both).

Almost invariably, it is the combination of the two adjustments—better data preparation and better data processing—that leads to satisfying results, and omitting one of the two is a recipe for underwhelming results.

See also Data Management and Cleansing 4.2 in this FAQ.

See also The Data Extraction Pipeline for more information on the automated transfer of data between clients and Lokad.

4.5 Do you manage and maintain master data (supporting the forecasting efforts)?

Yes, if it is requested by the client company.

However, we strongly recommend not using Lokad’s platform for this purpose. In our opinion, analytical tools (like Lokad) should be kept strictly decoupled from data entry tools, such as a master data management system.

As a rule of thumb, to avoid vendor lock-in, we suggest avoiding all-encompassing enterprise software tools. The design requirements for master data management are completely unlike those for predictive analytics. Lokad’s platform might be a decent master data manager, but it will never be a great one (our design leans too heavily on predictive analytics for that), and conversely, most master data managers are absolutely terrible for analytics.

4.6 Can users upload sales and marketing inputs (including future plans/insights)?

Yes.

Lokad’s platform is capable of receiving and processing multiple data sources, in many data formats, including Excel spreadsheets. Our platform is also capable of processing data as it is found in sales and marketing divisions (i.e., at whatever granularity they store it).

Sales and Marketing teams rarely provide data organized at the SKU-level—or even SKU x Location, our preferred level of granularity. Given this limitation, Lokad’s platform is designed to leverage input data (e.g., from Sales and Marketing) that are at different levels of granularity from the intended output forecasts (e.g., SKU x Location).

4.7 Do you archive historical demand and forecasts in order to analyze the waterfall forecast?

Yes, we typically archive all past forecasts, including demand, lead time, returns, etc.

We have engineered advanced compression techniques to limit the data storage overheads associated with large-scale archival strategies. We have also adopted an overall design that ensures the archived data, even in vast quantities, does not interfere with the day-to-day performance of the platform (e.g., calculations and dashboard displays do not slow down due to data being archived).

The engineering of the Lokad platform differs significantly from alternative solutions that are severely penalized, either in costs or in performance (or both) when extensive archival strategies are implemented. While those alternative solutions nominally offer extensive archival capabilities, in practice such archives are severely truncated in order to keep the solution running. This is not the case with Lokad. Even considering large-scale client companies, keeping years’ worth of archives floating around is typically a non-issue.

4.8 Do you archive manual input(s)/override(s) in order to analyze the impact of adjustments on demand metrics?

Yes. Lokad archives all manual inputs, including manual file uploads of Excel spreadsheets. When manual inputs are used to alter predictive models (“overrides”, typically with the intent of refining models/forecasts), we use those archives to quantify the improvement (or degradation) in predictive accuracy that those overrides introduced. This work is normally carried out by Lokad’s Supply Chain Scientists.

Lokad’s platform features complete versioning capabilities for both the data and the code/scripts. This is critical as we need to make sure when backtesting that the “regular” business data (typically the historical data obtained from the business systems) used alongside the manual inputs is exactly the same as it was when the manual inputs were originally provided.

The business data is typically automatically refreshed. However, using the latest version of the business data does not properly reflect the situation as it was at the moment the manual correction or input was provided. Similarly, the predictive code used by Lokad might have evolved since the point of time when the manual input was provided, too. In fact, the manual input might have been provided to cope with a defect in the predictive code that has since been resolved.

Lokad’s platform covers those situations as well, preventing entire classes of incorrect conclusions—for example, manual inputs being later assessed as “incorrect” when, in fact, they were relevant considering the exact conditions at the time the manual inputs were originally provided.

5. Product Classification and Clustering

5.1 Do you identify slow movers and lumpy demand patterns?

Executive Summary: Yes, Lokad’s predictive technology provides a very thorough quantitative characterization of all the SKUs of interest.

In particular, Lokad’s probabilistic forecasting approach is well suited to address intermittent and erratic demand patterns. By assessing the probabilities for rare events, Lokad can identify the “lumpiness” of demand—something that typically reflects individual consumers purchasing many units at once. For example, one customer buys the entire available inventory of (identical) light switches in a hardware store, thus introducing a SKU-level stockout.

Differentiable programming, Lokad’s machine learning paradigm, is ideal for coping with the ‘law of small numbers’ that characterizes most supply chain situations. Slow movers, by design, come with very limited data points. Similarly, spikes found in lumpy demand are also, by design, rare. Thus, the data efficiency of the predictive model is paramount. In this regard, differentiable programming is superior to alternatives in its capacity to reflect high-level insights provided through the very structure of the model itself.

Alternative solutions usually fail in the presence of slow movers and lumpy demand patterns. Classic forecasts (i.e., non-probabilistic forecasts) cannot address slow movers without resorting to fractional demand that is not “real”. This fractional demand (e.g., 0.5 units), though “mathematically” correct, is not a viable way to make sensible supply chain decisions as one, naturally, must order whole numbers of units.

Similarly, classic forecasts cannot mathematically reflect the “lumpiness” of demand.

For example, a probabilistic forecast can reflect that a bookshop sells 1 unit per day (on average), composed of a mix of 1 professor buying 20 books per month on average, plus 1 student buying 1 book every 2 days (on average).

This information will be reflected in the model’s probability distribution of demand. However, for a classic time-series forecast, conveying the nuanced reality of demand, such as sporadic bulk purchases, is not feasible. It would only predict an average demand of 1 book per day, failing to capture the actual pattern of demand and thereby misrepresenting the true nature of sales. This, in turn, greatly limits the extent to which financially-sensible inventory decisions can be made.
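
The bookshop example can be made concrete with a short Monte Carlo simulation in Python (illustrative only, not Lokad's technology); the purchase frequencies are the ones from the example, everything else is an assumption:

```python
import random

random.seed(3)

# The bookshop example: a student buying 1 book every 2 days on average,
# plus a professor buying 20 books in one go roughly once a month.
n_days = 30_000
daily = []
for _ in range(n_days):
    units = 1 if random.random() < 0.5 else 0          # student
    units += 20 if random.random() < 1 / 30 else 0     # professor bulk buy
    daily.append(units)

mean = sum(daily) / n_days
p_spike = sum(1 for u in daily if u >= 20) / n_days
# A point forecast only retains the ~1.2 units/day average; the probability
# of a 20+ unit day (~3%) - the part that matters for stocking - is lost.
print(round(mean, 2), round(p_spike, 3))
```

A point forecast collapses this distribution to its mean; a probabilistic forecast keeps the spike probability, which is what a sensible stocking decision actually hinges on.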

5.2 Do you identify slow-moving or obsolete inventory and provide recommendations for “keep or sell”?

Yes. Lokad identifies slow-moving inventory using probabilistic forecasts, allowing early, risk-adjusted decisions to mitigate overstock and dead stock risks. Recommendations extend beyond “keep or sell,” including discounts, relocations, and adjustments to avoid cannibalization.

The identification of slow-moving or obsolete SKUs (demand-wise) is done with probabilistic demand forecasts. Probabilistic forecasts are excellent for identifying and assessing risks, including risks of inventory overstocks and dead stock inventory. This allows us to produce risk-adjusted decisions when those forecasts are combined with our stochastic optimization capabilities. Thus, inventory risks are quantified for all SKUs at all stages of their lifecycle. This design is critical as it allows us to identify as early as possible (and address) most inventory situations before they become problematic.

Finally, Lokad is not restricted to mere “keep or sell” recommendations. We can provide clients with recommendations reflecting the whole spectrum of available options. For example, Lokad can recommend discounts or promotions to help liquidate the stock. We can also recommend moving the stock elsewhere if other channels happen to exhibit high demand. We can recommend temporarily pausing or de-ranking a product that accidentally cannibalizes the demand for the SKU in question.

In short, Lokad’s Supply Chain Scientists are there to make sure that no stone is left unturned before declaring any stock is “dead”.

See also Product Classification and Clustering 5.1 in this FAQ.

5.3 Do you let users manage hierarchical product data workflows (top-down, middle-out and bottom-up)?

Yes. Given the Lokad platform is programmatic, we can address any reasonably well-specified workflow for our clients. Examples include any workflow operating along the client’s existing product hierarchies.

In our opinion, the client’s ROI (return on investment) for letting its employees navigate such workflows is very unclear. The very need for such workflows reflects profound defects in the supply chain software that need to be fixed from the inside out—leveraging as much automation as possible.

Lokad’s platform provides extensive capabilities to visualize the data along all the relevant dimensions: product hierarchies, regions, time horizons/lags, suppliers, customer types, etc. These capabilities are instrumental in identifying both defects and areas of further improvements. However, leveraging these capabilities for a ‘workflow’ is typically misguided (though straightforward for Lokad). Rather, we recommend directly modifying the underlying numerical recipes (code) operated by Lokad in order to remove the need for supply chain practitioners to manage the workflows at all.

Many alternative solutions do not feature programmatic capabilities. As a result, when a defect is identified, there are usually no options but to either wait for the next version of the software (possibly years in the future) or to go for customization—a path typically spelling trouble, as the client company ends up with an unmaintained software product.

5.4 Can users group items according to custom criteria, including manual inputs?

Yes.

Lokad’s platform provides extensive capabilities that allow users to bring together items (e.g., SKUs, products, clients, suppliers, locations, etc.) according to a whole spectrum of factors – including manual inputs.

Given Lokad’s platform is programmatic, as long as the grouping or proximity criterion can be expressed numerically, it is straightforward to have client items grouped accordingly. This task is carried out by Lokad’s Supply Chain Scientists.

On a related note, Lokad’s platform can also leverage relationships between hierarchically-related items for either predictive or optimization purposes. In particular, Lokad’s platform adopts a relational perspective for all its numerical tooling. The relational perspective goes beyond time-series and graphs by blending both relational and hierarchical data. This relational perspective permeates our tooling, including our machine learning tools. This aspect is critical to leverage available relationships beyond mere display purposes.

5.5 What types of product classification do you offer (e.g., ABC/XYZ) based on historical sales data?

Executive Summary: Lokad can offer flexible ABC and ABC XYZ product classifications, adapting to variations and exclusions, if the client desires. However, we view these classifications (and their contemporaries) as outdated. Lokad’s position is that modern supply chain management should focus on actionable insights that lead to risk-adjusted decisions, rather than relying on simplistic categorization tools.

Lokad’s platform supports all the mainstream classification schemes, including ABC and ABC XYZ Analysis, etc. As Lokad’s platform is programmatic, it is also straightforward to accommodate all the subtle variations that exist while carefully defining such classes (e.g., subtle exclusion rules). However, product classifications (such as those listed above) are a technologically obsolete approach to supply chain problems and optimization.
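
For reference, a plain ABC classification is only a few lines of code, which underlines how little analytical substance it carries; the 80%/95% Pareto cutoffs below are a common convention, not a Lokad recommendation, and the revenue figures are made up:

```python
# Made-up revenue figures; 80% / 95% Pareto cutoffs are a common
# convention, not a Lokad recommendation.
revenue = {"p1": 500.0, "p2": 280.0, "p3": 140.0, "p4": 50.0, "p5": 30.0}

def abc(revenue, a_cut=0.80, b_cut=0.95):
    total = sum(revenue.values())
    classes, running = {}, 0.0
    for sku, rev in sorted(revenue.items(), key=lambda kv: -kv[1]):
        running += rev / total
        classes[sku] = ("A" if running <= a_cut
                        else "B" if running <= b_cut else "C")
    return classes

print(abc(revenue))  # {'p1': 'A', 'p2': 'A', 'p3': 'B', 'p4': 'C', 'p5': 'C'}
```

The "subtle variations" mentioned above amount to tweaking the cutoffs or the exclusion rules; none of it changes the fundamentally crude, volume-driven nature of the classification.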

Some supply chain software vendors, especially those featuring obsolete technologies, proudly feature ABC Analysis or ABC XYZ analysis. Yet, invariably, the classifications these tools provide are leveraged to mitigate the numerous defects of the software solution the client is already using, thus treating the symptoms but not the cause of the problem(s). Those tools are used as crude attention-prioritization mechanisms. This is not a suitable way to approach the issues of interest, such as intermittent or volatile demand.

First, the fundamental defects must be addressed to relieve the supply chain practitioners from such tedious reviews. Second, volume-driven classifications are far too crude to be of any practical value and make very poor use of the time of supply chain practitioners.

This is why Lokad’s Supply Chain Scientists guide clients towards decisions that reflect the financial impact of a potential supply chain decision/call-to-action (typically measured in Dollars or Euros). Unless items and decisions are prioritized with respect to their bottom-line ROI (return on investment), any attempt at “prioritization” or “optimization” is fundamentally moot.

See ABC XYZ in 3 Minutes and ABC Analysis Does Not Work for more on the limitations of these classification tools.

5.6 Do you provide product and/or store clustering/stratification?

Yes.

Lokad’s platform provides clustering/stratification capabilities for any item of interest, such as stores, products, clients, SKUs, suppliers, etc. This is thanks to our platform’s processing capabilities when it comes to relational data. This lets us address complex items that cannot be “flattened” into a fixed series of properties. Also, through differentiable programming, Lokad can learn/tune the similarity metrics used to group items in ways that are particularly useful for a given task, such as forecasting.

See Illustration: Clustering for more on Lokad’s clustering capabilities with only a few lines of Envision code.

5.7 Do you refine the forecast with product/location hierarchies and/or clustering?

Yes.

Lokad takes full advantage of the relational structure of input data. Our differentiable programming approach is particularly adept at processing relational data. This is how Lokad can leverage hierarchies, lists, options, graphs, numerical and categorical attributes for its predictive models. Also, our predictive models forecast all sources of supply chain uncertainty, including demand, lead times, returns, yields, commodity prices, etc.

Clustering can be used to identify a relevant pattern for the forecast of interest. For example, all the typical cyclicities (e.g., day of the week, week of the month, week of the year, etc.) and the quasi-cyclicities (e.g., Easter, Chinese New Year, Ramadan, Black Friday, etc.) can benefit from this sort of technique. Lokad’s platform provides extensive support to instrument clustering for predictive purposes.
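
A toy sketch of extracting a day-of-week cyclicity profile shared by a cluster of products, written in Python rather than Envision; the history is synthetic:

```python
# Synthetic history: 4 weeks of daily sales for one cluster, Monday first;
# weekends sell roughly twice as much as weekdays.
history = [
    10, 11, 9, 10, 12, 20, 22,
    9, 10, 11, 10, 11, 21, 19,
    11, 10, 10, 9, 12, 19, 21,
    10, 9, 11, 10, 11, 20, 20,
]
by_dow = [[] for _ in range(7)]
for i, qty in enumerate(history):
    by_dow[i % 7].append(qty)

overall = sum(history) / len(history)
# Multiplicative day-of-week factors shared by the whole cluster.
profile = [sum(day) / len(day) / overall for day in by_dow]
print([round(f, 2) for f in profile])  # >1 on weekends, <1 on weekdays
```

Sharing such a profile across a cluster of similar products is what makes the pattern learnable even for items with thin individual histories.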

See Illustration: Cluster-based cyclicities for more on this point.

6. Events and Explanatory Variables

6.1 Do you identify exceptional events (e.g., out-of-stock events) and promotions in historical data?

Executive Summary: Yes. Lokad enriches historical data with known exceptional events using predictive programming, improving accuracy over traditional time-series forecasting. This approach handles incomplete data and can reconstruct lost events (as a workaround for when a direct recording of historical events is unavailable).

Historical data comes with numerous events that distort measurements (e.g., demand, lead time, etc.). Lokad operates via predictive programming paradigms, such as differentiable programming, that let us enrich the baseline history with all those events. However, as a rule of thumb, those exceptional events are not “identified”— they are already known. If notable events have been lost, then it is possible for Lokad to operate a predictive model to reconstruct such events.

Old, now-obsolete forecasting technologies used to be incapable of dealing with anything except plain/naked time-series. As a result, every distortion that ever applied to demand had to be corrected beforehand, otherwise the forecasts would be severely degraded/biased. Unfortunately, this approach is defective by design because those time-series forecasts end up being constructed on top of other forecasts, thus piling up inaccuracies.

Lokad’s predictive technology does not suffer from the same problem as it supports extra explanatory variables. Instead of pretending that we know for sure what would have happened without the historical events (such as a stock-out), the predictive model reflects the explanatory variable in its outputs (i.e., its forecasts). This methodology does not require a phased approach to forecasting. Furthermore, it can leverage incomplete data, such as a stockout encountered at the end of a day that nonetheless saw a record number of units sold—information that is still highly relevant, even in its incomplete form.

If notable events (e.g., stockouts) have been lost or simply never recorded, then Lokad is capable of reconstructing said events through an analysis of the historical data. However, no matter how statistically accurate this reconstruction might be, it will always be less accurate than a direct recording of the events as they unfold. This is why Lokad will typically historicize indicators such as stock levels whenever those indicators are not properly archived in the respective business systems.

6.2 Do you identify exceptional events and (moving) holidays?

Yes. Lokad’s predictive models adapt to exceptional events and holidays. Our Supply Chain Scientists assess impacts, providing clients with a transparent model and insights into the effects of a specific event on the client’s supply chain dynamics.

Lokad identifies all the exceptional events and adapts the very structure of its predictive models to reflect them. However, for all the quasi-cyclical patterns (e.g., Easter, Chinese New Year, Ramadan, Black Friday, etc.) the identification is a given—we already know that the event exists and is impactful. The only question left to answer is the quantification of the impact of the event.

By letting Supply Chain Scientists make a high-level assessment about a well-known event’s impact (or lack thereof), we get a predictive model featuring a much greater data efficiency. High data efficiency is critical to keep the predictive model accurate when there is little data available, as is frequently the case in supply chain situations.

Furthermore, when Lokad explicitly identifies and names the patterns, the client’s supply chain staff benefit from a white-box predictive model that comes with semantic factors. For example, the impact of Black Friday (if any) comes with a dedicated factor assessed from the historical data. The supply chain practitioner can then use this factor to gain an understanding of which products are most sensitive to Black Friday specifically, in isolation of all the other patterns that are at play, such as the seasonality (i.e., yearly cyclicity).
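
The mechanics of such a “semantic factor” can be illustrated with a toy model. The Python sketch below is purely illustrative (Lokad’s production models are written in Envision and are far richer): it jointly learns a demand baseline and a multiplicative Black Friday uplift from a synthetic history via plain gradient descent, yielding a named coefficient that a practitioner can inspect directly.

```python
# Toy sketch: jointly learning a demand baseline and a multiplicative
# "Black Friday" uplift factor via plain gradient descent. Illustrative
# only; all data and parameters are hypothetical.
import math

# Synthetic history: 20 days of demand; day 10 is "Black Friday" (3x uplift).
demand = [10.0] * 20
demand[10] = 30.0
is_event = [d == 10 for d in range(20)]

baseline, log_uplift = 1.0, 0.0   # parameters; uplift = exp(log_uplift) > 0
lr = 0.01
for _ in range(5000):
    g_base, g_uplift = 0.0, 0.0
    for y, ev in zip(demand, is_event):
        uplift = math.exp(log_uplift) if ev else 1.0
        pred = baseline * uplift
        err = pred - y                  # d(MSE)/d(pred), up to a factor 2
        g_base += err * uplift
        if ev:
            g_uplift += err * pred      # chain rule through exp()
    baseline -= lr * g_base / len(demand)
    log_uplift -= lr * g_uplift / len(demand)

uplift = math.exp(log_uplift)
# baseline converges near 10, uplift near 3: a named, inspectable factor.
```

Here the learned uplift converges to roughly 3, i.e., “Black Friday triples demand for this product”: exactly the kind of isolated, interpretable factor described above, assessed in isolation of the other patterns at play.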

See also Events and Explanatory Variables 6.1 in this FAQ.

6.3 Do you manage out-of-stock situations as an explanatory variable?

Yes. Lokad incorporates out-of-stock situations directly into its predictive models, addressing both complete and partial stockouts without having to resort to reconstructing “fake” demand in order to fill in gaps in the data. Rather, we directly model what is generally known as the censored demand. Moreover, Lokad is capable of taking partial stockouts into account (when the stockout happens during the working day) and leveraging the corresponding information.

More generally, Lokad is also capable of dealing with all the induced artefacts resulting from stock-outs. Depending on the specifics of the client company, those artefacts can vary greatly. For example, there might be a surge of demand at the end of the stock-out period, if consumers are loyal enough to wait. There can also be backorders, albeit subject to partial attrition, as some consumers may refuse to delay their purchase.

The Supply Chain Scientists employed by Lokad are there to make sure that stockouts are modeled in a manner that genuinely reflects the dynamics of the client company’s business.
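
The “censored demand” idea can be illustrated with a deliberately minimal Python sketch (hypothetical data): fitting a demand level while masking stockout days out of the loss, instead of letting the zero-sales days, which are artifacts of the stockouts, bias the estimate downward.

```python
# Minimal sketch of "loss masking": stockout days are excluded from the
# fitting loss instead of being treated as zero demand. Data is hypothetical.
sales    = [12, 11, 0, 0, 13, 12, 0, 11]
in_stock = [True, True, False, False, True, True, False, True]

# Naive estimate: zeros caused by stockouts drag the demand level down.
naive_level = sum(sales) / len(sales)

# Masked estimate: the loss (here, a plain mean) only sees days where the
# demand was actually observable.
observed = [s for s, ok in zip(sales, in_stock) if ok]
masked_level = sum(observed) / len(observed)

# naive_level == 7.375 understates demand; masked_level == 11.8 does not.
```

The same masking principle carries over to richer models: whatever the parametric structure, the gradient simply never flows through the censored observations.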

See the discussions on “Loss masking” in Structured predictive modeling for Supply Chain and “Incomplete lead-time model” in Lead-time forecasting for more information on how Lokad handles these situations.

6.4 Do you forecast promotions?

Yes. Lokad’s predictive technology can forecast the variation of demand impacted by promotional mechanisms. The promotional mechanism can include variations of price tag(s), changes in display ranks (e-commerce), changes in assortments, visibility changes (e.g., gondolas in retail), etc. In short, Lokad delivers probabilistic forecasts for promotions, just like it does for all potential sources of supply chain uncertainty (e.g., demand, lead time, returns, etc.).

Lokad’s supply chain decisions—such as inventory replenishments—take into account not only the future planned promotional activity, but the potential for such activity. For example, if the client company has the possibility to do promotions, and its customers (typically) respond well to promotions, it means that the client company can be a little bit more aggressive with its stocks. This is because promotions are an effective tool to mitigate overstocks. Conversely, if the client company has a clientele that is largely unresponsive to promotions, then it must be more attentive to overstocks. This is because it lacks this mechanism to mitigate them.

Lokad generates such risk-adjusted (and option-adjusted) decisions by leveraging probabilistic forecasts. These forecasts are essential to assess the risks in the first place. Following that, we use stochastic optimization—in simple terms, a mathematical operation—to craft decisions that maximize the client’s ROI (return on investment) given their multiple sources of uncertainty (e.g., demand, lead time, promotions, returns, etc.).
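
As an illustration of this stochastic optimization step, the Python sketch below (with a hypothetical demand distribution and hypothetical economics) picks the order quantity that maximizes the expected profit under a probabilistic demand forecast; this is essentially a discrete newsvendor calculation.

```python
# Minimal sketch: a risk-adjusted ordering decision derived from a
# probabilistic demand forecast by maximizing expected profit.
# The demand distribution and economics below are hypothetical.
demand_dist = {0: 0.1, 5: 0.3, 10: 0.4, 20: 0.2}  # units -> probability
margin = 3.0        # profit per unit sold
holding_cost = 1.0  # cost per leftover unit

def expected_profit(q):
    # Expectation over all demand scenarios: sales are capped by demand,
    # leftovers are penalized. This is a discrete newsvendor setup.
    return sum(p * (margin * min(q, d) - holding_cost * max(q - d, 0))
               for d, p in demand_dist.items())

best_q = max(range(21), key=expected_profit)
# best_q lands at the demand quantile margin / (margin + holding_cost) = 0.75.
```

Swapping the economic drivers (margin, holding cost) changes the decision without ever touching the forecast itself, which mirrors the point made above: when a decision comes out wrong despite a correct forecast, it is the drivers that need adjusting.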

6.5 Do you identify and forecast new product launches and substitutions?

Executive Summary: Yes, Lokad forecasts demand for all products, including new ones. We do this regardless of the amount of historical data that happens to be available for products—which will likely be zero if the product has not been launched yet.

In order to produce statistical forecasts under these conditions, Lokad usually leverages (a) the whole history of launches within the client company, (b) the attributes of the product to position it in the offering, (c) the alternative products that provide both a baseline and potential for cannibalization, and (d) the marketing operations that support this specific launch.

If a product is positioned in the client’s offering as the explicit replacement for an older product, then the forecasting task is much more straightforward. However, we do not recommend adopting this approach unless the client’s supply chain staff are convinced that the old and new products are truly equivalent for consumers. In practice, a product launch is rarely a one-to-one replacement between new and old products. Thus, Lokad uses superior technology to leverage all historical data, rather than designating one product to provide the pseudo-history of the new product being launched.

Moreover, Lokad generates probabilistic forecasts for product launches. This is particularly important because classic (i.e., non-probabilistic) forecasts entirely dismiss the hit-or-miss patterns that tend to be prevalent when launching new products. Probabilistic forecasts, on the other hand, quantify this uncertainty, thus allowing us to generate risk-adjusted supply chain decisions.

In most business systems, the launch date of the product is properly identified, and thus there is no need for identification per se. However, if the launch date is not recorded, or is incorrectly recorded, then Lokad can proceed with an actual reconstruction of this information. Naturally, the earliest sales records provide a baseline for the launch date.

However, in the case of intermittent demand, it may take a long time for a product to sell its first unit. Lokad’s Supply Chain Scientists have various heuristics at their disposal to accommodate these situations.

See also Events and Explanatory Variables 6.1 in this FAQ.

6.6 How do you forecast new items or new locations with no sales history?

Lokad utilizes previous launches and current sales, emphasizing the importance of attributes (formal and textual), to predict demand for new items/locations.

While an item might be ‘new’, it is typically not the first ‘new’ item to be launched by the client company. Lokad’s predictive technology leverages the previous item launches, as well as the current sales volumes, in order to forecast demand for a new item. In particular, the availability of formal attributes (e.g., color, size, shape, price point, etc.), as well as textual attributes (e.g., label, short description, comments, etc.) are critically important to mathematically place the item in the broader offering of the company.
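
As a simplified illustration of how attributes “mathematically place” a new item in the offering, the Python sketch below (hypothetical attributes, scoring, and figures; the actual models are far more sophisticated) seeds a launch forecast by averaging the launch-period sales of the most similar past launches.

```python
# Minimal sketch: seeding a demand forecast for a brand-new item from its
# attributes, by averaging the launch-period sales of the most similar past
# launches. Attributes, scoring, and figures are all hypothetical.
past_items = [
    # (color, size, price, units sold during the first 4 weeks)
    ("red",   "M", 25.0, 120),
    ("red",   "L", 27.0, 100),
    ("blue",  "M", 24.0,  80),
    ("black", "S", 40.0,  30),
]
new_item = ("red", "M", 26.0)

def similarity(item):
    color, size, price, _ = item
    score = (color == new_item[0]) + (size == new_item[1])  # attribute matches
    score -= abs(price - new_item[2]) / 10.0                # crude price proximity
    return score

# Average the two most similar past launches to seed the forecast.
neighbors = sorted(past_items, key=similarity, reverse=True)[:2]
launch_forecast = sum(sold for *_, sold in neighbors) / len(neighbors)
```

Textual attributes (labels, descriptions) play the same role as the formal attributes above, only through richer similarity measures than exact matching.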

The process with new locations is similar, although the data is typically much more limited. While it is common for companies to launch thousands of new products per year (especially in verticals like fashion), very few companies can claim to launch even a hundred new locations per year. Yet, by leveraging the attributes and the characteristics of the new location, Lokad can produce a forecast even when this particular location has no sales history.

See also Events and Explanatory Variables 6.5 in this FAQ.

6.7 Do you consider predecessor items, possibly flagged or equivalent/similar items?

Yes, if launched items come with ‘predecessor’ or ‘similar’ items, Lokad’s predictive technology is capable of leveraging this information to refine its forecasts.

We can accommodate the whole spectrum of confidence in the provided information, ranging from ‘this new product is a near-perfect equivalent to this other product’ to ‘these two products are vaguely alike’. Multiple predecessors can be provided as well if there is no clear ‘most similar’ item.

While old (now obsolete) forecasting technologies forced the supply chain practitioners to manually pair old and new products, this is not the case with Lokad. Assuming that some basic information is available, our technology is capable of leveraging the historical data—from other products—to forecast a new item. Relevant basic information includes the product label(s) and the price point(s).

As a rule of thumb, we encourage enriching the master data to foster better automated associations. This is, in our opinion, preferable to forcing the client’s supply chain staff into the tedious activity of manual pairing. The ROI (return on investment) for improving the master data is usually vastly superior to pairings, as master data can also directly impact numerous post-launch operations.

See also Events and Explanatory Variables 6.5 in this FAQ.

6.8 Do you detect cannibalization? Do you assess impact on the cannibalizing product and on the cannibalized products?

Yes, Lokad’s predictive technology factors cannibalizations (and substitutions) as part of its demand analysis.

Though situations vary, the model is typically symmetrical, hence the model quantifies both the product that is cannibalizing and the product that is cannibalized. Our approach takes into account the composition of the offering, which can vary from one store to the next, or from one sales channel to the next.

If customers can be identified (note: with anonymous identifiers, as Lokad does not need/use personal data), then Lokad can exploit the bipartite graph that connects customers and products. This temporal graph (connecting products and customers through their transactions) is usually the best source of information for quantifying cannibalizations. If this information is not available, Lokad can still operate albeit with reduced accuracy when it comes to the fine-print of the cannibalizations themselves.
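
A simplified illustration of exploiting this bipartite graph: the Python sketch below (hypothetical data, anonymous customer identifiers) measures how much two products compete for the same clientele via the overlap of their customer bases, a crude but telling proxy for cannibalization.

```python
# Minimal sketch: gauging cannibalization risk from the customer-product
# bipartite graph. Identifiers are anonymous; all data is hypothetical.
transactions = [
    ("c1", "sneaker_A"), ("c1", "sneaker_B"),
    ("c2", "sneaker_A"), ("c2", "sneaker_B"),
    ("c3", "sneaker_A"),
    ("c4", "boot_X"),
]

buyers = {}
for customer, product in transactions:
    buyers.setdefault(product, set()).add(customer)

def overlap(p1, p2):
    """Jaccard similarity of two products' customer bases."""
    a, b = buyers[p1], buyers[p2]
    return len(a & b) / len(a | b)

# sneaker_A and sneaker_B share most of their buyers (overlap 2/3), hinting
# at cannibalization; boot_X shares none (overlap 0).
```

Note that this information lives in the raw transactional data; once the history is flattened into per-product time-series, the customer dimension (and hence the graph) is gone for good.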

Lokad’s predictive techniques depart quite radically from classic time-series models. Time-series models are simply not expressive enough to deal with cannibalizations. In fact, once the historical data has been transformed into time-series data, most of the relevant information for addressing cannibalizations has already been lost. This lost information cannot be recovered later, no matter how sophisticated the time-series models are. In contrast, Lokad uses differentiable programming for its predictive models—an approach that is much more expressive than dated (and obsolete) time-series models.

6.9 Do you allow adding or updating explanatory variables? Can those variables be manually updated?

Yes. Lokad’s platform is programmatic and quite literally as flexible as an Excel spreadsheet when it comes to including and updating explanatory variables. If desired, the explanatory variables can even be conveyed through actual spreadsheets.

Differentiable programming, Lokad’s approach to predictive modeling, makes it straightforward to learn models that embed arbitrary explanatory variables. The explanatory variables do not have to be expressed in “forecasted units” or be otherwise aligned with the forecasting process. Through differentiable programming, it is possible to integrate explanatory variables while leaving many relationships “unquantified”—thus leaving the learning process to Lokad’s platform. Also, the quantification of the relationship(s) is made available to the supply chain practitioner. This way, the supply chain practitioner can gain insights on whether the explanatory variable is really gaining traction within the predictive model.
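
As a minimal illustration of quantifying such a relationship, the Python sketch below estimates the weight of one hypothetical explanatory variable (“campaign intensity”) against demand using ordinary least squares, and exposes that weight so a practitioner can judge whether the variable is gaining traction in the model.

```python
# Minimal sketch: quantifying the weight of one explanatory variable
# (a hypothetical "campaign intensity") against demand via ordinary
# least squares, so the coefficient itself can be inspected.
observed_demand = [10.0, 14.0, 10.0, 18.0, 10.0, 22.0]
intensity       = [ 0.0,  1.0,  0.0,  2.0,  0.0,  3.0]

n = len(observed_demand)
mean_x = sum(intensity) / n
mean_y = sum(observed_demand) / n
# Closed-form OLS slope: cov(x, y) / var(x).
weight = (sum((x - mean_x) * (y - mean_y)
              for x, y in zip(intensity, observed_demand))
          / sum((x - mean_x) ** 2 for x in intensity))
# weight == 4.0: each unit of campaign intensity adds ~4 units of demand.
```

Differentiable programming generalizes this idea: the relationship need not be linear, nor expressed at the forecast’s granularity, yet the learned quantification remains available for inspection.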

Some old (now obsolete) forecasting technologies enforced a direct relationship between the explanatory variables and the desired forecasts. For example, the explanatory variables had to be linearly related to the demand signal; the explanatory variables had to be expressed at the same granularity as the forecasts; and/or the explanatory variables had to be homogeneous with the historical data, etc. Lokad’s technology does not suffer from these limitations.

Moreover, the programmatic capabilities of Lokad’s platform can organize the explanatory variables to make their maintenance as simple as possible for the client’s supply chain staff. For example, it is possible to start with an Excel spreadsheet to reflect the explanatory variables, and later transition to automated data integration. This transition can occur once the extra accuracy (gained through those explanatory variables) is deemed sufficient to automate the data transfer.

See discussion of “Covariable integration” in Structured Predictive Modeling for Supply Chain for more on this point.

6.10 Do you allow manual adjustment(s) of the forecast for future events with no previous historical data?

Yes. Lokad always makes it possible to manually adjust forecasts, whether we are looking at items with or without historical data. We can also track the quality/accuracy of manual adjustments. However, when using modern predictive technology, manual adjustments are usually unnecessary and overall discouraged.

The first reason why supply chain practitioners feel the need to manually adjust forecasts is that they want to alter the resulting supply chain decisions that are derived from the forecasts (e.g., a purchase order). In those cases, most of the time, the supply chain practitioner faces a risk that is not appropriately reflected by the forecasts. It is not that the forecasts should be higher or lower than they are, rather that the resulting decision needs to be steered up or down to reflect the risk. Lokad addresses this problem through probabilistic forecasts and risk-adjusted supply chain decisions. The forecasts already reflect all the possible future values (e.g., demand) and their respective probabilities. Thus, our suggested decisions are already risk-adjusted. If the decisions come out wrong while the forecast is correct, then it is usually the economic drivers associated with the decision that need to be adjusted, not the forecast itself.

The second reason for manually adjusting a forecast is that the forecast is blatantly wrong. However, in those situations, the (underlying) forecasting model itself must be fixed. Not fixing the model simply means supply chain staff must continue treating the symptoms of the problem (inaccurate forecasts) rather than the illness itself (a faulty forecasting model). If one does not fix the model, the forecasts will be refreshed as newer data becomes available and either the bad forecasts will resurface, or the original correction (if it stays) becomes itself a source of forecast inaccuracy.

In short, if the forecasting model lacks sufficient accuracy (typically due to missing information), then the input of the model should be enriched to take relevant missing information into account. Either way, keeping a defective forecasting model in operation is never the appropriate response.

6.11 Do you refine forecasts through marketing and special campaigns?

Yes, Lokad refines its forecasts with this information (if/when it is made available to us).

Differentiable programming—Lokad’s predictive modeling technology—is adept at processing extra data types/sources, even if they do not structurally match the original historical demand data (the kind found in typical client business systems).

Differentiable programming can process extra data sources without any expectation that this supplementary data is exhaustive or even completely correct/accurate. Granted, if the data is very incomplete/inaccurate, this does limit the overall accuracy gained from processing this data in the first place.

More importantly, Lokad’s predictive technology changes the way clients approach their marketing campaigns. The classic forecasting perspective treats future demand like the movement of planets: something that is entirely beyond our control. However, marketing campaigns do not fall from the sky. Rather, they reflect explicit decisions taken by the client company. With Lokad’s insights and technology, client companies can re-adjust their marketing campaigns to match what the supply chain can support.

For example, it is pointless to accelerate demand further (by launching a fresh campaign) if all products are already heading for stockouts. Conversely, if overstocks are on the rise, it might be time to reactivate a few campaigns that were previously paused.

6.12 Do you refine forecasts with price elasticity? Can planned future price changes be proactively factored into the forecast/predictive model?

Yes. Lokad’s predictive modeling capabilities cover pricing, including price elasticity, as well as future planned price changes. Lokad’s differentiable programming approach makes it straightforward to include one (or several) price variable(s), both in the past and in the future. The past instances are used to learn the causality between the variation of demand and the variation of price.

Differentiable programming allows us to jointly learn the impact of varying prices alongside all the other patterns that impact demand, such as the multiple cyclicities (e.g., seasonality). The causality model can then be applied to future prices, which can be raised or lowered to reflect the changing pricing strategy of the client company.
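
As a minimal illustration, the Python sketch below applies a constant-elasticity demand model: once an elasticity has been learned from history (here simply assumed), it can be applied to a planned future price change. All numbers are hypothetical.

```python
# Minimal sketch: a constant-elasticity demand model applied to a planned
# future price change. The elasticity and prices are hypothetical; in
# practice the elasticity is learned jointly with the other demand patterns.
def demand_at(baseline, price, ref_price, elasticity):
    """Constant-elasticity model: demand scales as (price/ref_price)^(-elasticity)."""
    return baseline * (price / ref_price) ** (-elasticity)

current = demand_at(100.0, 20.0, 20.0, 1.5)    # 100 units at today's price
after_cut = demand_at(100.0, 16.0, 20.0, 1.5)  # planned 20% price cut -> ~140 units
```

As noted below, such a smooth model cannot capture threshold effects; it is only the simplest member of the family of price-response models that differentiable programming can express.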

However, price elasticity is frequently a rather crude approach to model the effect of varying prices. For example, threshold effects cannot be modeled with elasticity. This includes scenarios where consumers respond strongly to a variation of price when a product becomes just cheaper than another equivalent-seeming product. In particular, when competitive prices are collected through a competitive intelligence tool, price elasticity proves insufficient to explain variations of demand that would be best explained by a competitor’s pricing moves.

Lokad’s platform has capabilities that go well beyond merely modeling price elasticity. Lokad can, and frequently does, jointly optimize both procurement and pricing. While the mainstream supply chain perspective treats inventory optimization and pricing optimization as two separate concerns, it is obvious that prices impact demand—even when the price ‘elasticity’ proves to be too crude to accurately reflect this impact. Thus, it makes a lot of sense to coordinate both inventory and pricing policies to maximize the profitability of the supply chain.

6.13 Do you refine forecasts with competition activity (i.e., competitive intelligence data)?

Executive Summary: Yes, Lokad’s predictive technology is capable of leveraging competitive intelligence data to refine demand forecasts (and prices, if requested) for clients. This is only done when the competitive intelligence data is made available to us, as Lokad does not collect competitive intelligence data itself. In our opinion, this task is better left to web data scraping specialists.

Exploiting competitive intelligence data is typically a two-step process. First, we need to associate (somehow) the competitive data points with the offering of the client company. If the client company and its competitors happen to sell the exact same products as identified by their GTIN barcodes, then this process is straightforward. However, there are frequently numerous complications.

For example, the companies may not have the same shipping conditions (e.g., fees and delays), or there might be a temporary promotion available only to holders of a loyalty card. Furthermore, competitors do not typically sell the exact same products (at least not in the GTIN sense), yet their offerings, overall, do compete with each other. In these situations, simple one-to-one associations between the respective companies’ products are not relevant anymore. Nevertheless, Lokad’s predictive technology (and Supply Chain Scientists) can address all those complications.

Second, once the associations are established, the predictive model must be adapted to reflect the effect(s) of the competition on demand. Here, the biggest challenge is frequently that the effect comes with a severe lag. In most markets, customers do not monitor the prices of the competitors all the time. Thus, a major price drop by a competitor may remain unnoticed by many customers for a long time. In fact, the dominant effect of being out-competed price-wise is a slow erosion of the client’s market share. Thus, it is a mistake to narrowly assess the impact of the competition “one product at a time”. Company-wide effects must be assessed as well.

Once again, Lokad’s Supply Chain Scientists ensure that the modeling strategy reflects a strategic understanding of the client company (and its place within the market). This strategic understanding includes long-term aspects, such as gaining or losing market share.

See discussions of ‘Solving the alignment’ in Pricing Optimization for the Automotive Aftermarket for more on this point.

See also Events and Explanatory Variables 6.12 in this FAQ.

6.14 Do you refine forecasts with weather forecast data?

Executive Summary: Yes, Lokad is capable of refining its predictive models with weather forecast data. We had our first success in this area back in 2010 when working with a large European electricity producer. Our current predictive technology (differentiable programming) makes the process of integrating weather forecasts easier than it was with previous technologies.

In practice, while it is technically possible to refine forecasts with weather data, very few of our clients effectively use such refinements in production settings. In our opinion, it is usually not worth the effort. There are almost always simpler options that provide a superior ROI (return on investment) for a comparable amount of engineering resources.

Overall, there are two major problems with trying to leverage weather forecast data in this context. The first problem is that those forecasts are short-term. Beyond 2 or 3 weeks ahead, weather forecasts revert to seasonal averages. Hence, once one moves beyond a short horizon, weather forecasts do not provide extra insight beyond usual seasonality. This means that all the supply chain decisions that are not strictly short-term do not benefit from weather forecast data. This severely restricts the applicable scope of this technique.

The second problem is the vast technological complications that the technique entails. Weather is a very local phenomenon, yet when considering large supply chains we are effectively looking at hundreds or thousands (if not tens of thousands) of relevant locations, spread across enormous geographical spaces (possibly multiple continents). As such, each location could have a “weather” of its own (meteorologically speaking).

Furthermore, “weather” is not a single number but a whole collection of them, including temperature, precipitation, wind, etc. Depending on the type of goods being serviced, the temperature may or may not be the dominant factor required to refine a demand forecast.

Fundamentally, trying to refine a demand forecast with weather forecast data allocates resources (time, money, effort, etc.) that could be directed elsewhere (or at least to better refinement efforts). We observe that weather forecasts are almost never a ‘competitive’ option in this regard. Thus, while Lokad is capable of leveraging weather forecasts, we recommend exhausting all other potentially easier avenues of refinement before turning to weather forecast data.

6.15 Do you refine forecasts to reflect a new store opening/old store closing?

Yes.

Lokad’s predictive technology is capable of precisely modeling the impact of a new store opening and/or an old one closing. Our technology can also model transient closures, such as temporary closures for renovation work. Furthermore, Lokad can (and does) take into account variability in opening hours as well (if the data is made available to us). Lokad’s predictive technology (differentiable programming) is particularly effective at dealing with all these demand signal distortions.

Furthermore, when stores happen to be nearby (e.g., within the same city), we can take into account the substitution effect where customers that used to go to one store (now closed) go to a different one. If some transactions benefit from a customer identifier (note: just the raw identifier, as Lokad does not need any personal data), then we can leverage this information to more accurately assess the exact portion of the clientele that follows a given brand despite the stores moving around.

On the other end of the technological spectrum, time-series (forecasting) models cannot even properly represent the relevant input information: namely, the raw transactional data described above, such as that generated by a client’s loyalty card program.

Notes


  1. No. 1 at the SKU level in the M5 forecasting competition, a lecture given by Joannes Vermorel, January 2022 ↩︎

  2. Though Excel spreadsheets are often impressively programmatic, they are simply not suited to the large-scale demands of a real supply chain. For example, Excel is not designed to stably process hundreds of thousands, if not millions, of lines of data, such as those for an extended network of stores, each with its own offering. Nor is it suited for performing computations with random variables—a key ingredient in probabilistic forecasting. See Programming Paradigms as Supply Chain Theory for more on the principles underpinning Lokad’s perspective on probabilistic forecasting and differentiable programming. ↩︎