Supply Chain Lectures

Next LIVE lecture: November 25th, 15h00, CET (Paris) - Watch on YouTube

This ongoing series of lectures presents the foundations of supply chain management: the challenges, the methodology and the technologies. The intent is to allow organizations to achieve superior, ‘real-world’ supply chain performance. The vision presented in these lectures profoundly diverges from the mainstream supply chain theory, and is referred to as the quantitative supply chain. The lectures are presented by Joannes Vermorel, CEO and founder of Lokad. Lokad’s atypical, hands-on approach when it comes to supply chain management has acted as a ‘great filter’ to sort out what looks good from what actually works. The lectures are heavily illustrated with real-world supply chains that Lokad operates on behalf of its clients.

Intended audience: These lectures are intended for all those who have the ambition to improve supply chains, from senior executives to junior analysts and students. The lectures include a series of ‘crash courses’ to keep the prerequisite knowledge to a minimum.

1. Prologue
   1.1 The foundations of supply chain
   1.2 The Quantitative Supply Chain in a nutshell
   1.3 Product-oriented delivery
   1.4 Programming paradigms as supply chain theory
   1.5 21st century trends in supply chain
   1.6 General overview of the lectures
2. Personae
   2.1 Supply chain personae, why and how
   2.2 Paris - a fashion brand with a retail network
   2.3 Santa Clara - an accessory ecommerce
   2.4 Stuttgart - an aftermarket automotive parts distributor
   2.5 Miami - an aviation MRO
   2.6 Geneva - a hard-luxury watch maker
   2.7 Amsterdam - a cheese brand
3. Horizons
   3.1 Quantitative economics
   3.2 Antipatterns
   3.3 Iatrogenics
   3.4 Rationality and science
   3.5 Writing for the web
   3.6 Modern computers
   3.7 Algorithms
   3.8 Numerical optimization
   3.9 Statistical learning
   3.10 Programmable environments and compilers
   3.11 Software engineering
   3.12 Enterprise software
   3.13 Corporate structure and incentives
   3.14 Computer security
   3.15 Information theory
   3.16 Psychology
4. Predictive Modelling
   4.1 Differentiable Programming
      4.1.1 Demand forecasting for Paris (no cannibalization)
      4.1.2 Understanding Differentiable Programming (DP)
      4.1.3 DP as a first-class citizen of Envision
      4.1.4 Embedding Structural prior knowledge with DP
      4.1.5 Client-product affinity for Paris (toward cannibalization)
   4.2 Probabilistic forecasting
      4.2.1 Deterministic vs. probabilistic forecasting
      4.2.2 Accuracy of probabilistic forecasts
      4.2.3 Unscheduled repairs for Miami
      4.2.4 Algebra of random variables
   4.3 Probabilistic programming
      4.3.1 Demand forecasting for Miami
      4.3.2 Generative models
      4.3.3 Innovation state space models
   4.4 Interlude, re-thinking forecasting
      4.4.1 Feedback loops
      4.4.2 Data sources beyond transactional data
      4.4.3 Manual forecasting override (antipattern)
      4.4.4 Naked forecasting (antipattern)
5. Financial Optimization
   5.1 Store inventory allocations for Paris
   5.2 Discrete univariate decisions
      5.2.1 Zedfuncs
      5.2.2 Stock reward function
      5.2.3 Action reward function
      5.2.4 MOQs and other constraints
   5.3 Economic drivers
   5.4 Differentiable Programming (2nd round)
      5.4.1 End of collection discount steering for Paris
      5.4.2 Competitive pricing for Stuttgart
   5.5 Interlude, re-thinking the optimization
      5.5.1 Robotized execution
      5.5.2 Cultivating options
      5.5.3 Cultivating knowledge
6. Software infrastructure
   6.1 The ownership of the numerical recipes
   6.2 Correctness by design
   6.3 Hardware mutualization and hardware miscibility
   6.4 Integrated stack to deliver predictive apps
   6.5 Relational-first to prepare, learn and optimize
   6.6 Deep maintainability
   6.7 Deep security
7. Tactical execution
   7.1 Establishing a scope
   7.2 Participants and their roles
   7.3 Typical timeline of an initiative
   7.4 Data preparation
      7.4.1 Setup of the data pipeline
      7.4.2 Low-level and high-level data health
      7.4.3 Data inspectors
   7.5 Reverse Cartesianism
   7.6 Whiteboxing
   7.7 The joint procedure manual
   7.8 Production rollout
   7.9 The daily routine
   7.10 Assessing the success
8. Strategic execution
   8.1 Hiring a supply chain director
   8.2 Hiring and training a team
   8.3 Cooperation between IT and supply chain
   8.4 Beyond S&OP
9. Negative knowledge
   9.1 Bad execution
      9.1.1 Toxic classics
      9.1.2 Dark UX and dark workflows
      9.1.3 Porcelain software
   9.2 Bad leadership
      9.2.1 From indecision and ignorance to RFP and prototypes
      9.2.2 Inverted Solomon’s Judgment
      9.2.3 Sacrificial offerings to the forecasting gods
   9.3 Pseudo-science
      9.3.1 Simplistic theories suggesting sophistication
      9.3.2 Obscure theories to make you feel initiated


1. Prologue

1.1 The foundations of supply chain

Supply chain is the quantitative yet street-smart mastery of optionality when facing variability and constraints related to the flow of physical goods. It encompasses sourcing, purchasing, production, transport, distribution, promotion, ... - but with a focus on nurturing and picking options, as opposed to the direct management of the underlying operations. We will see how the “quantitative” supply chain perspective, presented in this series, profoundly diverges from what is considered the mainstream supply chain theory.


1.2 The Quantitative Supply Chain in a nutshell

The manifesto of the quantitative supply chain emphasizes a short series of salient points to grasp how this alternative theory, proposed and pioneered by Lokad, diverges from the mainstream supply chain theory. It can be summarized as: every single decision is scored against all the possible futures according to the economic drivers. This perspective gradually emerged at Lokad because the mainstream supply chain theory, and its implementation by (nearly?) all software vendors, remain challenging to put to work.


1.3 Product-oriented delivery

The goal of a quantitative supply chain initiative is either to deliver or to improve a software application that robotizes a scope of routine decisions (e.g. inventory replenishments, price updates). The application is viewed as a product to be engineered. The supply chain theory is there to help us deliver an application that steers the company toward supply chain performance, while being compatible with all the constraints that the production entails.


1.4 Programming paradigms as supply chain theory

While mainstream supply chain theory struggles to prevail in companies at large, one tool, namely Microsoft Excel, has enjoyed considerable operational success. Re-implementing the numerical recipes of the mainstream supply chain theory via spreadsheets is trivial, yet this is not what happened in practice, despite awareness of the theory. We demonstrate that spreadsheets won by adopting programming paradigms that proved superior at delivering supply chain results.


1.5 21st century trends in supply chain

A few major trends have been dominating the evolution of supply chains over the last decades, largely reshaping the mix of challenges faced by companies. Some problems have largely faded away, such as physical hazards and quality issues. Some problems have risen, such as overall complexity and competition intensity. Notably, software is also reshaping supply chains in profound ways. A quick survey of these trends helps us understand what should be the focus of a supply chain theory.

1.6 General overview of the lectures

Supply chain is an incredibly multifaceted discipline. This lecture clarifies how we have logically organized the field of study, and how we suggest ‘consuming’ these lectures depending on the various audiences. The crash courses cover the prerequisites. The supply chain ‘personae’ define the problem statements. The quantitative modeling delivers the numerical answers. The IT infrastructure delivers the production substrate. The study of the organization clarifies how a supply chain initiative should be conducted.

2. Personae

A supply chain “persona” is a fictitious company. Yet, while the company is fictional, this fiction is engineered to outline what deserves attention from a supply chain perspective. However, the persona is not idealized in the sense of simplifying the supply chain challenges. On the contrary, the intent is to magnify the most challenging aspects of the situation, the aspects that will most stubbornly resist any attempt at quantitative modelling and any attempt at piloting an initiative to improve the supply chain.

2.1 Supply chain personae, why and how

In supply chain, case studies - when one or several parties are named - suffer from severe conflicts of interest. Companies, and their supporting vendors (software, consulting), have a vested interest in presenting the outcome under a positive light. Moreover, actual supply chains typically suffer or benefit from accidental conditions that have nothing to do with the quality of their execution. The supply chain personae are the methodological answer to those issues.

2.2 Paris - a fashion brand with a retail network

Our persona ‘Paris’ is a fast fashion brand operating a retail network in Europe. Their annual turnover is 1 billion €, they operate 1000 stores. They roll out four collections a year, each collection comes with 2,000 products, half reconducted, half novel - which becomes 20,000 references once all the variants (size and color) are taken into account.

2.3 Santa Clara - an accessory ecommerce

TBD

2.4 Stuttgart - an aftermarket automotive parts distributor

TBD

2.5 Miami - an aviation MRO

TBD

2.6 Geneva - a hard-luxury watch maker

TBD

2.7 Amsterdam - a cheese brand

TBD

3. Horizons

The mastery of supply chain leans heavily on several other fields. Presenting the supply chain theory as a flavor of applied mathematics is frequent yet misguided. These crash courses are intended to provide the cultural background required for a well-thought-out supply chain practice, which cannot and should not be reduced to a series of “models”.

3.1 Quantitative economics

Laws in economics are not nearly as strong or unified as their physics counterparts, nevertheless, these laws profoundly characterize the landscape in which supply chains operate. These laws provide prior knowledge that turns out to be very useful when quantitatively modeling supply chains. Conversely, they also shed light on why certain methods, while mathematically seductive, are unsound for supply chain purposes.

3.2 Antipatterns

Antipatterns are the stereotypes of solutions that look good but don’t work in practice. The systematic study of antipatterns was pioneered in the late 1990s by the field of software engineering. When applicable, antipatterns are superior to raw negative results, as they are easier to memorize and reason about. The antipattern perspective is of prime relevance to supply chain, and should be considered as one of the pillars for its negative knowledge.

3.3 Iatrogenics

If we are to intervene on a supply chain, we need a solid idea of not only the benefits of our intervention but also of the harm we may cause indirectly. Naïve interventions guided by naïve rationalism routinely cause more harm than good. Iatrogenics are the study of the unintended undesirable consequences. Medical science paved the way. Supply chain, as a field of study, must develop iatrogenics of its own.

3.4 Rationality and science

Wicked problems deserve in-depth introspection of what we can “science”. A positive rational methodology is needed. Strict formalism - rampant among academic circles - is nothing but naïve rationalism. A “visionary” perspective - rampant among “solution” vendors - risks being nothing but “happy talk”. These problems are exacerbated by the nature of supply chains, where replicating results proves very difficult. Yet, supply chains aren’t the first discipline to face such problems. Let’s see how epistemology sheds light on the special case of supply chain knowledge.

3.5 Writing for the web

Supply chains involve the coordination of large teams. It’s a complete platitude to state that good communication skills are critical to ensure the success of virtually anything whenever multinational companies are involved. In this context, written materials are king. Modern supply chains are simply not compatible with any kind of oral tradition. Yet, supply chain practitioners often fare terribly as far as their written communication skills are concerned. Let’s review what usability studies, and some notable experts, have to say on these matters.

3.6 Modern computers

Saying that modern supply chains depend on computers is an understatement. Modern supply chains require computing resources to operate just like motorized conveyor belts require electricity. Yet, sluggish supply chain systems remain ubiquitous, while the processing power of computers has increased by a factor greater than 10,000x since 1990. A lack of understanding of the fundamental characteristics of modern computing resources - even among IT or data science circles - goes a long way in explaining this state of affairs. The software design underlying the numerical recipes shouldn’t antagonize the underlying computing substrate.

3.7 Algorithms

On one hand, we have the layman who has only vaguely heard of algorithms; on the other hand, we have the computer science graduate who is inclined to think that every problem can be solved through algorithms. In between lie vast swathes of enterprise software developers who think they know about algorithms but often fail to see the point as far as supply chain software is concerned.

3.8 Numerical optimization

Numerical optimization - usually referred to as mathematical optimization - is the process of minimizing a mathematical function. Nearly all modern statistical learning techniques - i.e. forecasting, if we adopt a supply chain perspective - rely on numerical optimization at their core. Moreover, once the forecasts are established, identifying the most profitable decisions also happens to rely, at its core, on numerical optimization. We will leverage these techniques extensively in the following.

3.9 Statistical learning

Forecasting problems as faced by supply chain can be seen as high dimensional statistical learning problems. This field of study has undergone dramatic improvements, which remain largely misunderstood among “data scientist” circles. We will journey through this field through the resolution of three paradoxes. First, we need to make “probably approximately correct” statements about data we don’t have. Second, we need to tackle problems where the number of variables vastly exceeds the number of observations. Third, the solution may lie in models where the number of parameters vastly exceeds either variables or observations.

3.10 Programmable environments and compilers

The majority of supply chains are still run through spreadsheets (i.e. Excel), while enterprise systems have been in place for one, two, sometimes three, decades - supposedly to replace them. Indeed, spreadsheets offer accessible programmatic expressiveness, while those systems generally do not. More generally, since the 1960s, there has been a constant co-development of the software industry as a whole and of its programming languages. There is evidence that the next stage of supply chain performance will be largely driven by the development and adoption of programming languages, or rather of programmable environments.

3.11 Software engineering

Taming complexity and chaos are the two cornerstones of software engineering. Considering that supply chains are both complex and chaotic, it shouldn’t come as too much of a surprise that most of the enterprise software woes faced by supply chains boil down to bad software engineering. Numerical recipes used to optimize supply chain are software, and thus, subject to the exact same problem. Those problems grow in intensity along with the sophistication of the numerical recipes themselves. Proper software engineering is for supply chains what asepsis is to hospitals: on its own it doesn’t do anything - like treating patients - but without it, everything falls apart.

3.12 Enterprise software

The applicative landscape of modern companies shapes in profound, and frequently counterintuitive, ways how quantitative methods can be implemented, operated and maintained. The idea that a supply chain theory can somehow be isolated from the contingencies of the economic forces that drive enterprise software markets is misguided on two fronts. First, from an academic perspective, it leads to wasted research efforts on supply chains as problems are not approached correctly. Second, from a corporate perspective, either for the software buyer or the software seller, it leads to broken-by-design supply chain technologies, which don’t deliver the intended results.

3.13 Corporate structure and incentives

For a supply chain to be optimized in any meaningful way, the company’s interests must prevail over the interests of the numerous individuals and parties involved in the execution of the supply chain itself: employees, executives, consultants, software vendors, hardware vendors, etc. The organization itself conditions to a large extent the degree of optimization that can even be achieved. Conversely, the organization can be re-engineered to achieve superior supply chain performance. Once again, a proper theory of supply chain cannot be isolated from the very (human) substrate where the supply chains operate.

3.14 Computer security

Cybercrime is on the rise. Ransomware is a booming business. Due to their physically distributed nature, supply chains are particularly exposed. Moreover, ambient complexity is a fertile ground for computer security woes. Computer security is counterintuitive by design, because it’s precisely the angle adopted by attackers to find and exploit breaches. Depending on the flavors of numerical recipes involved in the supply chain optimization, the risk can be increased or decreased.

3.15 Information theory

Forecasting turns historical data, a type of information, into forecast data, another type of information. The “quantity of information” in the forecasts cannot exceed the “quantity of information” in the original data: the forecasting logic merely transforms the data. Information theory provides profound insights on the nature of information itself. In particular, informational entropy proves to be a tool of primary importance to assess the “depth” of the data which can be used for supply chain optimization purposes.
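
The notion of informational entropy can be made concrete with a tiny sketch (Python here rather than Envision; the probabilities below are made-up illustrations): a SKU whose demand history is almost always zero carries far fewer bits per observation than one with a rich demand pattern.

```python
import math

def entropy(probabilities):
    """Shannon entropy, in bits per observation."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A slow mover: 95% of periods see zero demand -> little information.
slow_mover = entropy([0.95, 0.05])

# Demand spread uniformly over 4 outcomes -> 2 bits per observation.
rich_pattern = entropy([0.25] * 4)
```

Under this lens, a long but near-constant history may contain less exploitable “depth” than a shorter, more varied one.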

3.16 Psychology

TBD

4. Predictive Modelling

The proper quantitative anticipation of future events is at the core of any supply chain optimization. The practice of time-series forecasting emerged in the 20th century and had an enormous influence on most large supply chains. Predictive modelling is both the descendant of time-series forecasting and a massive departure from it. First, it tackles a much more diverse set of problem instances. Second, due to the nature of supply chain problems, a programmatic paradigm is needed. Third, as uncertainty is usually irreducible, probabilistic forecasts are needed as well.

4.1 Differentiable Programming

Differentiable Programming (DP) is a generative paradigm to engineer a very broad class of statistical models that turn out to be excellently suited to address predictive supply chain challenges. DP is the descendant of deep learning, but it departs from deep learning through its intense focus on the structure of the learning problems. DP supersedes almost the entire “classic” forecasting literature based on parametric models. DP is also superior to “classic” machine learning algorithms - up to the late 2010s - in virtually every dimension that matters for practical supply chain usage, including ease of adoption by practitioners.

4.1.1 Demand forecasting for Paris (no cannibalization)

Let’s build our first demand forecasting model through Differentiable Programming (DP) for the Paris persona, the fashion retail network. The lecture is illustrated through code written in Envision, the DSL (domain-specific programming language) developed by Lokad for the predictive optimization of supply chains. As we haven’t formally introduced DP yet, some aspects of this lecture are likely to appear a bit obscure. Those aspects will be revisited in the next session, once we have covered what DP feels like.

4.1.2 Understanding Differentiable Programming (DP)

DP is a programming paradigm where the program itself becomes the very model of interest - either from a learning perspective or from an optimization perspective. DP brings together automatic differentiation and stochastic gradient descent. While DP may appear intimidating, the approach is simpler than most machine learning algorithms. DP embodies the profound shift that has taken place in the machine learning field from ‘programs’ to ‘programming paradigms’.
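
The pairing of automatic differentiation and stochastic gradient descent can be made tangible with a minimal toy, sketched below in plain Python (not Envision; the linear demand model and the synthetic data are made-up illustrations). The practitioner writes only the forward computation; gradients fall out of the recorded operations.

```python
import random

class Var:
    """A scalar that records how it was computed, enabling
    reverse-mode automatic differentiation."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent Var, local derivative)
        self.grad = 0.0

    def _wrap(self, x):
        return x if isinstance(x, Var) else Var(x)

    def __add__(self, other):
        other = self._wrap(other)
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        other = self._wrap(other)
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def __sub__(self, other):
        return self + self._wrap(other) * (-1.0)

    def backward(self, seed=1.0):
        """Propagate the gradient of the final loss back to every input."""
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# Synthetic observations: demand = 100 - 30 * price + noise.
random.seed(0)
data = [(i / 20.0, 100.0 - 30.0 * (i / 20.0) + random.gauss(0.0, 1.0))
        for i in range(1, 20)]

a, b = Var(0.0), Var(0.0)           # model: demand ~ a * price + b
for step in range(5000):            # stochastic gradient descent
    price, demand = random.choice(data)
    err = (a * price + b) - demand  # forward pass builds the graph
    loss = err * err
    loss.backward()                 # reverse pass computes d(loss)/d(a, b)
    a = Var(a.value - 0.1 * a.grad)
    b = Var(b.value - 0.1 * b.grad)
```

A DP runtime automates exactly this at scale: the program is the model, and differentiation plus the optimization loop come for free.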

4.1.3 DP as a first-class citizen of Envision

Over the last five decades, each critical programming paradigm has challenged existing programming languages in profound ways: object-oriented programming, garbage collection, late binding ... DP, too, presents serious challenges for existing programming languages. Envision, developed by Lokad, embraces DP as a first-class citizen in order to make the most of this paradigm. We clarify some of the syntactic aspects of DP as implemented in Envision.

4.1.4 Embedding Structural prior knowledge with DP

Until the late 1980s, a segment of the machine learning community tried - and mostly failed - to embed prior knowledge into “intelligent” software via expert systems, instead of extracting the knowledge directly from the data. However, three decades later, deep learning achieved considerable success via convolutional layers, which represent a middle-ground approach to prior knowledge: the expert provides the “structure” of what is to be learned from the data. This approach proves to be well-suited for supply chains, especially when it comes to data efficiency and whiteboxing.

4.1.5 Client-product affinity for Paris (toward cannibalization)

The raw transaction history that connects clients and products offers a much richer perspective on the demand compared to the classic daily/weekly/monthly aggregated time-series perspective. This history is better seen as a bipartite, temporal graph between clients and products. In particular, this perspective addresses the long-standing question of the “source” of the demand with a more satisfying answer. DP (almost) trivializes graph learning techniques such as collaborative filtering, while providing the foundation to exploit these results for many supply chain purposes such as planning, pricing or assortment optimization.

4.2 Probabilistic forecasting

A forecast is said to be probabilistic, instead of deterministic, if it contains a set of probabilities associated with all possible future outcomes, instead of pinpointing one particular outcome as “the” forecast. Probabilistic forecasts are important whenever uncertainty is irreducible, which is nearly always the case whenever complex systems are concerned. For supply chains, probabilistic forecasts are essential to produce robust decisions against uncertain future conditions.

4.2.1 Deterministic vs. probabilistic forecasting

The optimization of supply chains relies on the proper anticipation of future events. Numerically, these events are anticipated through forecasts, which encompass a large variety of numerical methods used to quantify these future events. From the 1970s onward, the most widely used form of forecast has been the deterministic time-series forecast: a quantity measured over time - for example the demand in units for a product - is projected into the future. However, this perspective dismisses (almost) entirely the notion of uncertainty, while uncertainty is embraced by probabilistic forecasting.

4.2.2 Accuracy of probabilistic forecasts

No matter what happens, a reasonably well-designed probabilistic forecast indicates that there was indeed a non-zero probability for this outcome to happen. This is intriguing, because at first glance it may appear as if probabilistic forecasts were somehow immune to reality, just like a fortune teller making vastly ambiguous prophetic statements that can’t ever be proven wrong, as the fortune teller can always conjure a later explanation about the proper way to interpret the prophecies after the fact. In reality, there exist multiple ways to quantitatively assess the quality of a probabilistic forecast.
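
One such assessment is the Continuous Ranked Probability Score (CRPS), which generalizes the absolute error to distributions. A minimal sketch (Python, integer-valued outcomes; the three forecasts are made-up illustrations) shows the fortune teller does get penalized: a sharp, correct forecast beats a vague one, and a sharp but wrong one fares worst.

```python
def crps(forecast, observed):
    """CRPS for a forecast given as {value: probability} over integers.
    Lower is better: it sums the squared gap between the forecast's CDF
    and the 'perfect' step CDF located at the observed value."""
    lo = min(min(forecast), observed)
    hi = max(max(forecast), observed)
    score, cdf = 0.0, 0.0
    for v in range(lo, hi + 1):
        cdf += forecast.get(v, 0.0)
        step = 1.0 if v >= observed else 0.0
        score += (cdf - step) ** 2
    return score

sharp = {8: 0.1, 10: 0.8, 12: 0.1}        # concentrated near the truth
vague = {v: 1 / 21 for v in range(21)}    # 'anything can happen'
wrong = {3: 1.0}                          # confident and mistaken
observed = 10
```

Here `crps(sharp, observed) < crps(vague, observed) < crps(wrong, observed)`: vagueness has a measurable cost, and so does misplaced confidence.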

4.2.3 Unscheduled repairs for Miami

Unscheduled repairs of components severely impact aircraft maintenance costs. However, the naïve analysis of the problem is largely derailed by the survivorship bias: the history of component repairs does not contain the unscheduled repairs which did not happen - only those which did. Through DP, we can provide a fat-tail model for the unscheduled repairs. In particular, we will illustrate that the notion of MTBUR (mean time between unscheduled repairs) is a fairly leaky abstraction, as many of those distributions don’t even have a well-defined statistical mean.

4.2.4 Algebra of random variables

Uncertainty needs more than merely being estimated: probabilistic forecasts need dedicated tooling to be of any practical supply chain use. An algebra of random variables typically works on explicit probability density functions. The algebra supports the usual arithmetic operations (addition, subtraction, multiplication, etc.) transposed to their probabilistic counterparts, frequently treating random variables as statistically independent. This lecture is illustrated with the ranvar data type in Envision, which provides an effective random variable algebra tailored for supply chain purposes.
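
As a sketch of what such an algebra does under the hood (Python here, not Envision’s ranvar; the probabilities are illustrative), adding two independent discrete random variables amounts to a convolution of their distributions - e.g. turning a daily demand distribution into a two-day lead-time demand distribution:

```python
def add(x, y):
    """Distribution of X + Y for two independent discrete random
    variables, each given as a {value: probability} dict -- a discrete
    convolution."""
    out = {}
    for vx, px in x.items():
        for vy, py in y.items():
            out[vx + vy] = out.get(vx + vy, 0.0) + px * py
    return out

daily = {0: 0.5, 1: 0.3, 2: 0.2}   # demand for one day
two_days = add(daily, daily)       # demand over a 2-day lead time
```

The probabilities of `two_days` sum to 1, and its support spans 0 to 4 units; a production-grade algebra adds compression, bucketing and many more operators on top of this basic mechanism.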

4.3 Probabilistic programming

TBD

4.3.1 Demand forecasting for Miami

TBD

4.3.2 Generative models

TBD

4.3.3 Innovation state space models

TBD

4.4 Interlude, re-thinking forecasting

At this point, we have introduced a short series of forecasting methods - or rather, we have introduced a programming paradigm, differentiable programming, to create them as needed, structurally fitted to the context. It’s time to step back and explore supply chain forecasting angles beyond the technicalities of the numerical recipes themselves.

4.4.1 Feedback loops

Forecasts aren’t produced in a vacuum, they have consequences on the supply chain. Frequently, these consequences end up impacting the very supply chain phenomenon that the forecasts were trying to capture in the first place. In practice, these feedback loops are all over the place, and yet, they are too frequently ignored. Some feedback loops have positive consequences for the company (e.g. self-fulfilling effects on sales) while others don’t (e.g. purchases delayed until the sales period). Either way, the forecasting method should attempt to factor in these effects.

4.4.2 Data sources beyond transactional data

The proper exploitation of the transactional data as found in the company’s systems goes a surprisingly long way as far as supply chain optimization is concerned. However, this data can sometimes be supplemented by ‘external’ data sources, the most notable one being competitive intelligence data. Some data sources are notoriously hard to exploit for supply chain purposes. We review the pros and cons associated with these data sources.

4.4.3 Manual forecasting override (antipattern)

Due to the defective design of the software stack, many companies have developed the practice of routinely applying manual overrides to their forecasts. These manual overrides hurt the supply chain because they are not accretive: all the time invested in identifying the necessary fixes and applying them would be better spent fixing the underlying numerical recipes. Thus, in order to eliminate manual overrides, we need to eliminate the root causes of the problem in the first place.

4.4.4 Naked forecasting (antipattern)

The idea that naked demand forecasts could be produced and shared at large within the company is probably the single most dangerous idea of the “classic” supply chain theory, one that proves outright harmful to companies. In theory it should work, but in practice it doesn’t. Forecasts are intrinsically coupled with the supply chain problems they are trying to solve.

5. Financial Optimization

Every single day, thousands of supply chain decisions (millions in large companies) are to be made as part of the daily routine of the company’s operations. Each decision comes with alternatives. The supply chain optimization’s goal is to pick the options that turn out most profitable when facing uncertain future conditions. This process presents two key challenges that we haven’t addressed yet: first, the quantitative assessment of the profitability of any decision; second, the roll-out of numerical optimization recipes suitable for supply chain problems.

5.1 Store inventory allocations for Paris

Let’s build our first financial optimization model for the stock replenishment operations at the store level for the Paris persona, the fashion retail network. This lecture is illustrated through code written in Envision, the DSL (domain specific programming language) developed by Lokad for the predictive optimization of supply chains. We introduce the economic prioritization scheme and illustrate how this scheme also addresses “for free” problems at the warehouse level.

5.2 Discrete univariate decisions

Many supply chain decisions present themselves as discrete, univariate problems, for example when one has to decide how much to purchase, move, transform, package, scrap, … for any given SKU managed by the company. Due to the prevalence and diversity of these decisions, it is of interest to establish some versatile tooling to cope not with a particular instance of the problem, but with entire classes of problems.

5.2.1 Zedfuncs

In Envision, the zedfunc data type is an algebraic utility that offers the possibility to work on ‘all possible decisions’ at once (univariate case). It is the counterpart of the ranvar data type - introduced in the previous chapter - which offers the possibility to work on ‘all possible futures’. Combined with prioritization techniques, the zedfunc algebra lays the groundwork for an end-to-end financial optimization of supply chains.
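
A crude stand-in conveys the idea (Python, not Envision; the figures are made-up illustrations): a ‘zedfunc’ maps every candidate stock level to a financial outcome, and such functions can be combined algebraically before picking the best decision.

```python
MAX_STOCK = 10
levels = range(MAX_STOCK + 1)

def zadd(f, g):
    """Pointwise sum of two zedfunc-like vectors indexed by stock level."""
    return [a + b for a, b in zip(f, g)]

# Two illustrative economic components, each a function of the stock level.
margin  = [7.0 * min(s, 4) for s in levels]  # revenue, capped by a demand of 4 units
holding = [-0.5 * s for s in levels]         # carrying cost, 0.5 per unit held

payoff = zadd(margin, holding)               # the combined outcome, still a 'zedfunc'
best_stock = max(levels, key=lambda s: payoff[s])
```

Working on ‘all possible decisions’ at once, rather than on one decision at a time, is what makes the subsequent prioritization mechanical.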

5.2.2 Stock reward function

The stock reward function can be seen as a small mathematical framework to combine a probabilistic demand forecast with 3 inventory economic drivers (gross-margin, stock-out penalty and carrying cost) in order to assess the expected financial outcome of every stock level position. The stock reward was pioneered by Lokad a few years ago as a superior alternative to the naive scoring methods that were previously in use.
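
A sketch of the idea (Python; the reward formula and all numbers are illustrative assumptions, not Lokad’s exact definition): combine a probabilistic demand forecast with the three drivers to score every candidate stock level, then pick the best.

```python
demand = {0: 0.10, 1: 0.20, 2: 0.30, 3: 0.25, 4: 0.15}    # P(demand = d)
margin, stockout_penalty, carrying_cost = 10.0, 6.0, 1.5  # per unit, illustrative

def stock_reward(stock):
    """Expected financial outcome of holding `stock` units for the period."""
    total = 0.0
    for d, p in demand.items():
        sold = min(stock, d)
        total += p * (margin * sold                      # gross margin on units sold
                      - stockout_penalty * (d - sold)    # penalty on missed demand
                      - carrying_cost * (stock - sold))  # cost of leftover units
    return total

rewards = [stock_reward(s) for s in range(6)]
best = max(range(6), key=lambda s: rewards[s])
```

Raising the stock-out penalty pushes the best stock level up, raising the carrying cost pushes it down: the drivers, not arbitrary service-level targets, arbitrate the trade-off.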

5.2.3 Action reward function

The action reward function is the spiritual descendant of the stock reward function. It addresses many of the original limitations of the stock reward function such as: non-stationary probabilistic demand forecasts, probabilistic lead times, no-regret perspective, unrecoverable loss of the unserviced demand, etc. The action reward was also pioneered by Lokad in order to take advantage of the newer capabilities offered by the DP predictive models.

5.2.4 MOQs and other constraints

The economic prioritization scheme, despite its simplicity, proves capable of dealing with many non-linear constraints that are prevalent in supply chains. At first glance, this wasn’t necessarily to be expected from a greedy algorithmic process. We illustrate how many common MOQ constraints can be handled with little complication. This also sheds new light on the very nature of the numerical optimization problem in supply chain.

5.3 Economic drivers

Supply chain performance should be assessed financially - not through percentages as is commonly done (e.g. MAPE, service levels). Optimizing percentages brings harm in two ways: first through the illusion of progress, and second through the bureaucracies that invariably emerge to sustain them. In contrast, a financial assessment is engineered to be aligned with the company’s strategic interests. In particular, the financial impact of a decision should be decomposed into its underlying economic drivers.

5.4 Differentiable Programming (2nd round)

Differentiable Programming (DP) embodies the convergence between the two broad fields of statistical learning and numerical optimization. So far, this programming paradigm has been applied to statistical learning problems, and we will now see how it can be applied to numerical optimization problems as found in supply chains.
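The mechanism at work can be conveyed with a deliberately tiny example (plain Python, illustrative constants): once the supply chain cost is expressed as a smooth function of the decision, the decision itself can be optimized by gradient descent - the same machinery that DP applies at scale.

```python
# Hedged sketch: optimizing a decision (an order quantity q) by gradient
# descent on a smooth surrogate cost. The quadratic holding/backlog terms
# are illustrative stand-ins for real economic drivers.

def optimize_quantity(demand, holding=1.0, backlog=4.0, lr=0.1, steps=200):
    q = 0.0
    for _ in range(steps):
        # Analytic gradient of: holding * q^2 / 2 + backlog * (demand - q)^2 / 2
        grad = holding * q - backlog * (demand - q)
        q -= lr * grad                      # descend toward the optimal quantity
    return q

# Closed-form optimum: backlog * demand / (holding + backlog) = 80.0
q_star = optimize_quantity(demand=100.0)
```

In a real DP setting the gradient would be obtained by automatic differentiation rather than by hand, and the objective would aggregate many SKUs and constraints; the descent loop itself is unchanged.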

5.4.1 End-of-collection discount steering for Paris

Let’s build a discount steering model through DP for the Paris persona, the fashion retail network. The lecture is illustrated through code written in Envision. The goal is to make the most of the stock held by the company, while adjusting the level of discount to ensure a timely liquidation by the end of the season.

5.4.2 Competitive pricing for Stuttgart

Let’s build a competitive pricing strategy through DP for the Stuttgart persona, the auto-parts distributor. The lecture is illustrated through code written in Envision. The goal is to implement a well-behaved pricing strategy when confronted with competitors, who also benefit from competitive intelligence data.

5.5 Interlude, re-thinking the optimization

We have introduced a short series of optimization techniques that are capable of approaching very diverse supply chain situations. Much like the way we approached predictive modelling, optimization is approached via programming paradigms as opposed to specific numeric recipes. Again, it’s time to step back and expand our horizon on the optimization perspective.

5.5.1 Robotized execution

Treating experts as the human coprocessors of otherwise dysfunctional enterprise systems is wasteful. The supply chain practice should make sure that the experts’ time is capitalized, not consumed. The robotization of the end-to-end process generating the decisions is the key to achieving an accretive supply chain practice. Indeed, robotization is the key to having the experts focus on the continuous improvement of the numerical recipes, as opposed to fire fighting.

5.5.2 Cultivating options

The options - purchasing, production, etc. - presented to the numerical optimization recipe did not fall from the sky: they were engineered or negotiated in the first place. Thus, these options can be improved upon, independently from the optimization process that later takes place. While improving the options and improving the numerical optimization are complementary, the former tends to be neglected.

5.5.3 Cultivating knowledge

The historical data obtained by the company is the result of its past decisions. The decisions are shaping what may be observed and what won’t. Trying out “risky” options is a mechanism to learn more about the market, yet there is a balance to be found in the amount of risk to be taken. This problem is known as the exploration vs. exploitation trade-off, and lies at the core of a field known as reinforcement learning.
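The exploration vs. exploitation trade-off can be made concrete with the classic epsilon-greedy bandit (a hedged Python sketch with illustrative payoffs): with a small probability, try a “risky” option to learn about it; otherwise, exploit the empirically best one.

```python
# Hedged sketch: epsilon-greedy multi-armed bandit. Each arm stands for an
# option (supplier, price point, etc.) with an unknown average payoff; the
# payoffs and the noise model below are purely illustrative.

import random

def epsilon_greedy(true_payoffs, eps=0.1, rounds=5000, seed=42):
    rng = random.Random(seed)
    n = len(true_payoffs)
    counts, totals = [0] * n, [0.0] * n
    for _ in range(rounds):
        if rng.random() < eps or 0 in counts:
            arm = rng.randrange(n)          # explore: learn about the market
        else:                               # exploit: empirically best option
            arm = max(range(n), key=lambda a: totals[a] / counts[a])
        reward = true_payoffs[arm] + rng.gauss(0, 1)   # noisy observation
        counts[arm] += 1
        totals[arm] += reward
    return counts

counts = epsilon_greedy([1.0, 2.0, 0.5])    # arm 1 has the best true payoff
```

The `eps` parameter is exactly the “amount of risk to be taken”: too low and the best option may never be discovered, too high and known-good options are under-used.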

6. Software infrastructure

Modern supply chains live and die by the quality of their software infrastructure. Delegating the understanding of what makes a good software infrastructure to IT is a recipe for endless production problems. The introduction of non-trivial numerical recipes, as discussed in the previous chapters, magnifies the problem due to their own internal complexity. The software infrastructure itself must be engineered to exhibit properties that are suitable for supply chain optimization purposes.

6.1 The ownership of the numerical recipes

Casual observations of supply chains indicate that the operational success of numerical recipes is heavily dependent on whether there is a small group of people within the organization that owns those recipes. In practice, it turns out that the software infrastructure itself dictates to a large extent whether the numerical recipes can be owned or not. This perspective provides a first round of insights into the software infrastructure’s desirable properties as far as supply chain optimization is concerned.

6.2 Correctness by design

In the 21st century, well-thought-out design is something that we take for granted for virtually every object in our homes. Yet, when it comes to software, especially the “scientific” kind used to implement numerical recipes, the lack of design is stunning. Fail fast and break things might be the proper mindset to roll out a gaming app, but it is certainly not the right mindset for supply chains, where failure tends to be exceedingly costly.

6.3 Hardware mutualization and hardware miscibility

Supply chain optimization vastly differs from its supply chain management counterpart in its highly irregular needs for computing resources. The hardware and software paradigms pioneered by the cloud computing giants turn out to be of prime relevance for supply chain purposes. Hardware mutualization emphasizes a dynamic allocation of resources. Hardware miscibility emphasizes a perspective where one type of resource can be traded for another.

6.4 Integrated stack to deliver predictive apps

Deeply layered design is the silent killer of modern software, hurting on nearly all the fronts that matter: reliability, compute performance, productivity and business performance. The problem is magnified for supply chain, due to ambient complexity both at the hardware and software levels. Worse, the greater the sophistication of the numerical recipes, the greater the number of layers they introduce. De-layering via stack integration turns out to be the solution that supply chains need.

6.5 Relational-first to prepare, learn and optimize

The quasi-totality of supply chain data happens to be highly structured, relational data. Thus, unsurprisingly, a “good” software infrastructure is one that adopts a relational-first perspective on the data. The data frame paradigm is an important step in this direction. However, this paradigm can and should be further extended beyond the single-table case to support most supply chain situations. Further, the relational-first approach must also permeate the statistical learning and numerical optimization aspects of the software infrastructure.

6.6 Deep maintainability

Software maintainability is a matter of survival for supply chains. Yet, both academics and consultants have structural incentives to disregard the problem entirely. Even enterprise software vendors may have incentives to mostly disregard the problem, depending on how their pricing structure is set up. Unfortunately, design-wise, maintainability cannot be an afterthought. A series of design principles vastly improves the software infrastructure’s maintainability.

6.7 Deep security

Corporate ransomware is on the rise, and upon examination of the root causes, there is little reason to believe that this problem will fade away within the next two decades. Supply chain optimization presents some unique challenges with regard to its requirements for programmatic expressiveness. In particular, the urgency frequently associated with mundane corrections of the numerical recipes found in supply chain is not compatible with the usual practices adopted by the software industry to ensure that software is secure. Once more, the solutions are to be found in a secure design of the software infrastructure itself.

7. Tactical execution

An initiative that intends to improve the performance of the supply chain through superior numerical recipes may, if successful, profoundly alter the supply chain itself. This perspective comes with two major caveats. First, the numerical recipes must be engineered, design-wise, to facilitate this process; there is more to it than meets the eye. Second, the very process of introducing numerical recipes reshapes the recipes themselves, which, at first glance, is fairly counterintuitive.

7.1 Establishing a scope

The scope of an initiative must be attached to a mundane, routine decision. Focusing on intermediate numerical artifacts such as forecasts is frequent, but misguided. Furthermore, the curse of supply chain is that local solutions tend to displace problems rather than address them. However, all-encompassing scopes are harmful as well. The scope must be engineered by paying attention to the tradeoffs involved.

7.2 Participants and their roles

While modern companies are incredibly diverse, there are strong organizational commonalities that are near ubiquitous, such as the presence of an IT department under one form or another. The stereotypical roles of the initiative are the coordinator, the data officer, the supply chain executive and the supply chain scientist.

7.3 Typical timeline of an initiative

Assuming that a proper software infrastructure is available, a quantitative supply chain’s timeline has little to do with the technology: the human factor becomes the bottleneck at every stage - which is just as it should be. There is only so much change a complex organization can take in a short period of time while keeping risk under control. Inadequate software infrastructure usually derails the timeline due to the introduction of accidental complexity.

7.4 Data preparation

TBD

7.4.1 Setup of the data pipeline

The freshness of the data matters critically because outdated data discourages supply chain practitioners. Thus a data pipeline is needed - not merely a data extraction - no matter how modest the initiative. The data pipeline must mirror the production data (quirks included) and not attempt to “fix” the data. Also, it’s the opportunity to do a first round of data qualification focusing on the data’s semantics.

7.4.2 Low-level and high-level data health

The notion of data health tries to capture whether the data is suitable to be exploited for supply chain purposes. This approach diverges significantly from the perspective traditionally associated with data cleaning and data preparation. The low-level data health focuses on capturing IT-flavored problems, typically caused by issues at the system boundaries. The high-level data health focuses on capturing business-flavored issues, typically caused by some impedance mismatch between the reality of the business and its digital counterpart in the systems.
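The distinction can be illustrated with a few checks on a toy stock table (Python; the column names and rules are illustrative, not a prescribed schema): low-level checks catch defects at the system boundary, while high-level checks catch records that contradict business reality.

```python
# Hedged sketch: low-level vs high-level data health checks on a stock table.

import datetime

def data_health(rows, today):
    issues = {"low": [], "high": []}
    seen = set()
    for i, r in enumerate(rows):
        # Low-level: IT-flavored, structural defects at the system boundary.
        if r.get("sku") is None:
            issues["low"].append((i, "missing sku"))
        elif r["sku"] in seen:
            issues["low"].append((i, "duplicate sku"))
        else:
            seen.add(r["sku"])
        # High-level: the digital record contradicts business reality.
        if r.get("on_hand", 0) < 0:
            issues["high"].append((i, "negative stock"))
        if r.get("last_receipt") and r["last_receipt"] > today:
            issues["high"].append((i, "receipt in the future"))
    return issues

rows = [
    {"sku": "A1", "on_hand": 12, "last_receipt": datetime.date(2024, 3, 1)},
    {"sku": None, "on_hand": 3},
    {"sku": "A1", "on_hand": -4},
]
report = data_health(rows, today=datetime.date(2024, 6, 1))
```

Note that neither category “fixes” anything: data health surfaces problems for qualification, which is the point of the divergence from classic data cleaning.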

7.4.3 Data inspectors

A strict correspondence should be established between what is seen in the asset management systems (i.e. ERP, MRP, WMS, etc.) and what is seen within the predictive optimization system. Moreover, the information should be consolidated into meaningful units - referred to as “inspectors” - which provide a consolidated view of the first-class citizens within the supply chain (e.g. products, locations, clients, etc.).

7.5 Reverse Cartesianism

Far from the naïve Cartesian perspective, where optimization would simply be a matter of rolling out an optimizer for a given score function, supply chain requires a much more iterative process. Each iteration is used to identify the “insane” decisions that are to be investigated. The root cause is frequently improper economic drivers, which need to be re-assessed with regard to their unintended consequences. The iterations stop when the numerical recipes no longer produce insane results.

7.6 Whiteboxing

Any non-trivial numerical recipe - anything beyond moving averages - should be expected to be opaque by default for supply chain practitioners. Yet, when confronted with numerical opacity, supply chain practitioners should, and in practice will, veto the results that cannot be trusted. Whiteboxing, the answer to this problem, is fundamentally a process; it can, however, be radically simplified if the software infrastructure is properly designed.

7.7 The joint procedure manual

The “initiative’s big book” - technically referred to as the JPM (Joint Procedure Manual) - answers the critical question of intent. The formulas and the source code answer the what and how questions, but they do not answer the why. The JPM ensures that the supply chain scientists understand the problem they are facing. Over time, the JPM becomes the key to ensuring a smooth transition from one supply chain scientist to the next.

7.8 Production rollout

Fail fast and break things is not the proper mindset for graduating the predictive optimization system to supply chain production. Worse, waterfall processes are risky in practice, while giving the impression that risks are under control. Reducing the risks associated with the production go-live is primarily a question of having the right process, supported by the right tooling.

7.9 The daily routine

The day in the life of a “classic” demand and supply planner mostly includes clerical tasks that are entirely automated away through an intentional design of the numerical recipes, as promoted by the quantitative supply chain. Yet, while superior automation drastically reduces the raw man-day requirements to operate a given supply chain, there are classes of operations that will remain beyond automation for the next few decades at least.

7.10 Assessing the success

Counterintuitively, metrics and KPIs tell too little and too late to know whether a quantitative supply chain initiative is a success or not. By the time this sort of numerical assessment is obtained, the go-live decision has already been made. Also, in practice, metrics are too easily gamed to be of any use whenever interpreted literally. Observational heuristics should be used instead, as they are notably more reliable against the ambient chaos of the supply chain.

8. Strategic execution

Supply chain, both as a practice and a field of study, aims at being an enabler and a competitive advantage for the company as a whole. From a top management perspective, two angles dominate: making supply chain an accretive asset and unlocking superior ways to execute the business. In practice, results mostly boil down to the choice of the right team players.

8.1 Hiring a supply chain director

The head of the supply chain has one job: making the supply chain as accretive as possible for the company as a whole. First, this person needs to put an end to fire-fighting practices, if any, as they prevent improvements from being capitalized. Second, this person needs to be capable of hiring a great team, which usually boils down to being able to lead by example.

8.2 Hiring and training a team

Smarts and getting things done should be the two traits that supply chain divisions hire for. Supply chain is competing with the market at large for those profiles. Management consulting firms and software companies have a multi-decade track record of out-competing the rest of the market for those profiles. This should make you wary of very “smart” people (PhDs, data scientists) who happen to still be conveniently available and applying to your positions.

8.3 Cooperation between IT and supply chain

Supply chain must take its software practices into its own hands. This approach profoundly (re)defines the relationship between IT and supply chain. IT remains present to support the core software infrastructure and provide all the necessary coaching whenever system-level expertise is required.

8.4 Beyond S&OP

S&OP is correct in its emphasis on end-to-end company alignment to properly serve the market. However, its practices - generally geared towards meetings - are dated. The capabilities of modern software make it possible to re-think in depth how this synchronization can be achieved by design, in ways that are more accretive. A mere upgrade of S&OP via software tools only brings a fraction of the benefits that can be achieved via a more radical redesign of the practice.

9. Negative knowledge

As a rule of thumb, negative knowledge - things that don’t work - is more stable than positive knowledge. If it has repeatedly failed so far, odds are exceedingly high that it will keep failing forever. Negative knowledge is highly valuable for supply chain, precisely because it is so dependable compared to its positive counterpart.

9.1 Bad execution

Many supply chain initiatives fail due to fairly mundane aspects of their day-to-day execution. In those death by a thousand cuts situations, the staff and its management spend most of their time and energy putting out the fires, while remaining oblivious to their root causes.

9.1.1 Toxic classics

Service levels, safety stocks, ABC analysis, the EOQ formula - alongside many old-time classics - happen to be both widespread and largely dysfunctional in a real-world context. These classic numerical recipes all have in common the idea of being grounded in idealized supply chains, which in reality bear little resemblance to actual supply chains.

9.1.2 Dark UX and dark workflows

Alerts and exceptions are ubiquitous in supply chain software, and yet, these UX (user experience) patterns systematically lead to poor productivity and the near-impossibility of continuous improvements. More generally, many workflows give the appearance of “rationality” because they happen to be fully “specified”, yet they are nothing more than reified bureaucracies.

9.1.3 Porcelain software

Every time a piece of software “glitches” somewhere in the supply chain landscape, operations are disrupted, and teams stop working on continuous improvement to switch to fire-fighting mode. Eliminating these problems should be a top priority, yet most common approaches are misguided and end up producing the opposite result.

9.2 Bad leadership

As with almost every corporate problem, the root cause ultimately stems from poor upper management. Yet, the motto “work smarter, work harder” is less than useful in practice. Nevertheless, a certain set of bad practices has become enormously popular, mostly because they play on the fear, uncertainty and doubt that plague those higher up the hierarchy.

9.2.1 From indecision and ignorance to RFP and prototypes

Supply chain executives may lack depth in their vision of what their own supply chains ought to be. Depth is usually lacking on two fronts: technical and business. On the technical side, a lack of mechanical sympathy leads to software woes. On the business side, a lack of acumen leaves the supply chain stuck in a supportive role at best. Indecision and ignorance manifest themselves through ill-structured RFPs and prototypes.

9.2.2 Inverted Solomon’s Judgment

The judgement of Solomon is, among other things, about keeping whole what was never intended to be split in the first place. Yet, the divide-and-conquer approach is used extensively by large companies to fragment decisions among many silos. This fragmentation is fueled by an incorrect understanding of the benefits of specialization, and generates broken incentives in its trail.

9.2.3 Sacrificial offerings to the forecasting gods

For hundreds of years, ancient Roman generals probed the future via augurs and haruspices. In more recent times, politicians moved toward card reading, which is more conveniently performed indoors. Yet card reading was still lacking in accuracy, and thus some corporate executives modernized the whole affair, which may now go under the name of S&OP.

9.3 Pseudo-science

Pseudo-science runs amok in supply chain circles. Indeed, people facing wicked problems are comparatively easy prey for charlatans, because it is harder to expose them. Also, even when the fraud is exposed, a little bit of bad faith goes a long way toward deflecting any criticism.

9.3.1 Simplistic theories suggesting sophistication

Supply chain problems are both incredibly diverse and technical. As a result, every decade or so, a theory emerges that proposes a radical simplification. These theories have to be just “complex enough” so that they aren’t blatantly simplistic, yet simplistic enough to gain easy traction, as they demand only minimal effort from supply chain practitioners. We will be discussing DDMRP and flowcasting in this context.

9.3.2 Obscure theories to make you feel initiated

Conversely, every decade or so, a supply chain theory or trend emerges that goes in the exact opposite direction from the one pointed out in the previous lecture. As supply chain problems appear opaque, it is not unreasonable to expect that the solution could be opaque as well. By playing the powerful “fear of missing out” card, actors gain traction in supply chain circles by promising an initiation. Three notable recent examples are AI, blockchain and demand sensing.