Project deliverables

The goal of Quantitative Supply Chain is to deliver actionable decisions - suggested quantities for purchase orders being an archetypal example. Below, we further clarify the specific form and delivery mechanism of those decisions. Setting clear expectations for the deliverables is an important step in the journey that Quantitative Supply Chain represents. Also, the optimized numerical results are not the only desirable output: several other outputs, most notably data health monitoring and management KPIs, should be included in the deliverable as well. In practice, the deliverables of a Quantitative Supply Chain initiative are dependent on the flexibility of the software solution used to support the initiative itself. Nevertheless, they are primarily defined by their intent, which is agnostic of the technology being used.

Scripts as deliverable

Quantitative Supply Chain emphasizes fully automated data pipelines. This does not imply that the software setup is supposed to run autonomously. Close supervision is naturally desirable whenever a large-scale supply chain is being considered. Nevertheless, the data pipeline is expected to be fully automated in the sense that no single step in the pipeline actually depends on a manual operation. Indeed, as outlined in the manifesto, whenever manual operations are involved in supporting supply chain data processing, the solution simply does not scale in practice.

As a direct consequence of this insight, the deliverables of a Quantitative Supply Chain initiative are invariably whole pieces of software. This does not imply that the team in charge is expected to reimplement everything: a software solution dedicated to Quantitative Supply Chain offers the possibility to focus strictly on the relevant aspects of supply chain challenges. All the low-level technicalities, such as leveraging distributed computing resources auto-allocated within a cloud computing platform, are expected to be abstracted away. The team does not need to delve into such matters, because those aspects are expected to be suitably managed by the tooling itself.

The deliverables are materialized as scripts, typically written in a programming language able to accommodate the supply chain requirements while featuring a high level of productivity. The term “script” is used here rather than “source code”, but the two terms are closely related: a “script” stresses the idea of a high degree of abstraction and a focus on the task itself, while “source code” emphasizes a lower-level perspective, intended to be an accurate reflection of the computing hardware itself. For Quantitative Supply Chain, it’s obviously the supply chain perspective that matters the most, not the computing hardware, which is a technical aspect of secondary importance.
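
To give this a concrete shape, the minimal sketch below - written in Python, with hypothetical file names and columns rather than in any particular supply chain tool - illustrates what a script-as-deliverable might look like: the whole decision logic fits in a short, readable program that can be re-run on fresh data every day.

```python
# Minimal sketch of a reorder script (hypothetical file name and columns).
import csv

REVIEW_PERIOD_DAYS = 7  # assumption: purchase orders are reviewed weekly

def suggested_order_quantity(stock_on_hand: int, daily_demand: float, lead_time_days: float) -> int:
    """Order up to the demand expected over the lead time plus one review period."""
    target = daily_demand * (lead_time_days + REVIEW_PERIOD_DAYS)
    return max(0, round(target - stock_on_hand))

# Hypothetical flat-file extract refreshed daily from the ERP.
with open("stock_positions.csv", newline="") as f:
    for row in csv.DictReader(f):
        quantity = suggested_order_quantity(
            stock_on_hand=int(row["stock_on_hand"]),
            daily_demand=float(row["daily_demand"]),
            lead_time_days=float(row["lead_time_days"]),
        )
        print(row["product_id"], quantity)
```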

During the last decade, the success of WYSIWYG (what-you-see-is-what-you-get) user interfaces for end-customer apps has led many supply chain software vendors to try to emulate this approach with a WYSIWYG solution for supply chain planning and optimization. However, the lesson from the near-systematic failure of these types of interfaces is that supply chains are complex and cannot dodge the need for programmatic tools. From our experience, expecting a drag-and-drop tool to properly reflect the complex nonlinearities involved in, say, overlapping MOQs (minimum order quantities) is delusional at best. Programmatic expressiveness is required because, otherwise, the supply chain challenge cannot even be expressed within the tool.

Naturally, from the end-user perspective, scripts are not what supply chain practitioners would expect to see as a tangible output of the Quantitative Supply Chain initiative. People will interact with dashboards that contain consolidated KPIs and tables that gather suggested decisions. However, those dashboards are transient and disposable. They are merely obtained by running the scripts again on top of the relevant supply chain data. While the distinction is a bit subtle, it’s important not to confuse the script, which represents the real deliverable, with its numerical expression, which is typically what you can see as an end-user of the solution.

Data health dashboards

Before considering delivering optimized decisions for the supply chain, we must ensure that the data processed by the system that supports the Quantitative Supply Chain initiative is both numerically and semantically correct. The purpose of the data health monitoring dashboards, or simply data health dashboards, is to ensure a high degree of confidence in the data’s correctness, which is naturally an essential requirement for the accuracy of all the numerical results returned by the solution. These dashboards also assist the supply chain team in improving the quality of the existing data.

Numerical errors are straightforward: the CSV file exported from the ERP indicates that the product ABC has 42 units in stock, while the ERP web console reports only 13 units in stock. It is apparent here that we have divergent numbers where they should be the same. The data health dashboards address those relatively obvious problems by simply checking that data aggregates remain within expected numerical ranges.
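
As an illustration, the sketch below - Python, with made-up aggregates and bounds - shows the kind of range check a data health dashboard performs on data aggregates.

```python
# Illustrative range checks on data aggregates (made-up values and bounds).
def check_aggregate(name: str, value: float, low: float, high: float) -> bool:
    """Flag any aggregate that falls outside its expected numerical range."""
    ok = low <= value <= high
    print(f"{'OK   ' if ok else 'ALERT'} {name}: {value} (expected between {low} and {high})")
    return ok

checks = [
    check_aggregate("total units in stock", 1_240_000, 900_000, 1_500_000),
    check_aggregate("distinct SKUs with stock", 38_500, 30_000, 45_000),
    check_aggregate("order lines received yesterday", 4_100, 1_000, 20_000),
]
print("data health:", "green" if all(checks) else "red")
```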

Semantic errors are more subtle and, in practice, much harder to pinpoint. Most of the work done during the data preparation actually consists in identifying and addressing all the semantic errors. For example: the field stockinv in the ERP might be documented as being the stock on hand. Thus, the supply chain team assumes that this quantity can never be negative because, obviously, if those units are within physical reach on the shelf, the quantity has to be positive. Yet, the documentation of the ERP might also happen to be slightly misleading, and this quantity would have been more aptly named stock available, because whenever a stock-out happens and the client issues a backorder, the quantity can become negative to reflect that a certain number of units are already due to a client. This case illustrates a semantic error: the number isn’t wrong per se - it’s the understanding of the number that is approximate. In practice, semantic approximations can generate many inconsistent behaviors, which, in turn, generate ongoing friction costs within the supply chain.

The data health dashboards consolidate numbers that allow the company to decide on the spot whether the data can be trusted as good enough or not. Indeed, as the solution is going to be used on a daily basis for production purposes, it is imperative that any significant data problem be identified through a near-instant inspection. If not, then the odds are that the supply chain will end up operating for days, if not weeks, on top of faulty data. In this respect, the data health dashboard is akin to a traffic light: green you pass, red you stop.

Furthermore, when considering a sizeable supply chain, there is usually an irreducible amount of corrupted or otherwise incorrect data. This data arises through faulty manual entries or through rare edge cases in the company systems themselves. In practice, for any sizeable supply chain, it’s unreasonable to expect the supply chain data to be 100% accurate. Instead, we need to ensure that the data is accurate enough to keep the friction costs generated by those errors quasi-negligible.

Hence, the data health dashboards are also expected to collect statistics on the identified data errors. Those statistics are instrumental in establishing that the data can be trusted. To that end, a Supply Chain Scientist is frequently called upon to establish well-chosen alert thresholds, typically associated with a hard stop of the solution. Care needs to be exercised in establishing the thresholds because, if they are too low, then the solution is unusable, as it is too frequently stopped for “identified data issues”; however, if they are too high, then friction costs generated by data errors may become significant and undermine the benefits brought by the initiative itself.
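
The alert logic can be sketched as follows - Python, with hypothetical error types and thresholds - where the pipeline hard-stops as soon as an error rate exceeds its threshold.

```python
# Sketch of the alert thresholds (hypothetical error types and limits).
import sys

def data_health_gate(error_counts: dict, total_records: int, thresholds: dict) -> None:
    """Compare error rates against their thresholds; hard-stop the pipeline on failure."""
    for error_type, count in error_counts.items():
        rate = count / total_records
        print(f"{error_type}: {rate:.3%} (limit {thresholds[error_type]:.3%})")
        if rate > thresholds[error_type]:
            # Better no numbers at all than numbers computed on faulty data.
            sys.exit(f"Data health failure on '{error_type}', pipeline stopped.")

data_health_gate(
    error_counts={"negative stock on hand": 12, "orphan order lines": 85},
    total_records=250_000,
    thresholds={"negative stock on hand": 0.001, "orphan order lines": 0.002},
)
```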

Beyond red-green signaling, data health dashboards are also intended to offer prioritized insights into the data improvement efforts. Indeed, many data points might be incorrect but also inconsequential. For example, it does not matter if the purchase price of a product is incorrect if the market demand for this product vanished years ago, as there won’t be any further purchase orders for this product.

The Quantitative Supply Chain emphasizes that the fine-grained resolution of data errors, which may involve a considerable amount of manual work, should be prioritized by weighing the estimated financial impact of each error against the labor cost of its correction. Indeed, depending on the situation, the cost associated with correcting a single faulty data point varies enormously, and needs to be taken into account in the suggested prioritization. Finally, when the cost of the corrections is deemed higher than the supply chain costs generated by the errors, the data improvement process can stop.
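
The prioritization can be illustrated with a short sketch - Python, with made-up figures - where errors are ranked by the friction cost they generate net of the labor cost of fixing them, and the correction effort stops once fixing an error would cost more than the error itself.

```python
# Sketch of the data error prioritization (made-up costs).
errors = [
    {"issue": "wrong purchase price, product A", "friction_cost": 4_500.0, "fix_cost": 50.0},
    {"issue": "missing lead time, supplier X",   "friction_cost": 1_200.0, "fix_cost": 200.0},
    {"issue": "stale price, discontinued item",  "friction_cost": 10.0,    "fix_cost": 80.0},
]

# Rank by net gain: the friction cost removed minus the labor cost of the fix.
errors.sort(key=lambda e: e["friction_cost"] - e["fix_cost"], reverse=True)
for e in errors:
    if e["fix_cost"] >= e["friction_cost"]:
        break  # the remaining corrections cost more than the errors themselves
    print(f"fix next: {e['issue']} (net gain {e['friction_cost'] - e['fix_cost']:.0f})")
```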

Prioritized decision dashboards

As we have seen, only supply chain decisions can truly be assessed from a quantitative perspective. Thus, it’s no surprise that one of the key operational deliverables of a Quantitative Supply Chain initiative is the set of dashboards that consolidate the decisions obtained as the final numerical result of the whole data pipeline. Such a dashboard can be as simple as a table that lists, for every product, the exact quantity in units to be immediately reordered. If MOQs (minimum order quantities) are present - or any alternative ordering constraints - then the suggested quantities might be zero most of the time, until the proper thresholds are met.

For the sake of simplicity, we assume here that those numerical results are gathered into a dashboard, which is a specific form of user interface. However, the dashboard itself is only one option, which may or may not be relevant. In practice, it is expected that the software powering the Quantitative Supply Chain initiative be highly flexible, i.e. programmatically flexible, offering many ways to package those results in various data formats. For example, the numerical results can be consolidated within flat text files, which are intended to be automatically imported into the primary ERP used to manage the company’s assets.

While the decisions’ format is highly dependent on the supply chain task being addressed, most tasks require prioritizing those decisions. For example, the act of computing suggested quantities for a purchase order can be decomposed into a prioritized list of units to be acquired. The most profitable unit is ranked first. As stock comes with diminishing returns, the second unit acquired for the same product addresses a smaller fraction of the market demand. Therefore, the second unit for this very product may not be the second entry in the overall list. Instead, the second most profitable unit may be associated with another product, and so on. The prioritized list of units to be acquired is conceptually without end: it’s always possible to purchase one more unit. As market demand is finite, all purchased units would become dead stock after a point. Turning this priority list into the final purchase quantities only requires introducing a stopping criterion and summing up the quantities per product. In practice, nonlinear ordering constraints further complicate this task, but, for the sake of simplicity, we will put these constraints aside at this stage of the discussion.
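
The mechanics can be sketched as follows - Python, with a deliberately simplistic demand-decay model and made-up economics: candidate units are ranked by their expected reward, the reward decays as more units of the same product are added, and a stopping criterion turns the ranking into the final per-product quantities.

```python
# Sketch of a prioritized purchase list (made-up margins and decay factors).
import heapq
from collections import defaultdict

products = {
    "A": {"unit_margin": 12.0, "demand_decay": 0.8},
    "B": {"unit_margin": 30.0, "demand_decay": 0.5},
}

def unit_reward(product: dict, nth_unit: int) -> float:
    """Expected reward of the nth unit; each extra unit covers less residual demand."""
    return product["unit_margin"] * product["demand_decay"] ** nth_unit

# Max-heap of candidate units (heapq is a min-heap, hence the negated rewards).
heap = [(-unit_reward(p, 0), name, 0) for name, p in products.items()]
heapq.heapify(heap)

STOP_REWARD = 5.0  # stopping criterion: ignore units worth less than this
quantities = defaultdict(int)

while heap:
    neg_reward, name, nth = heapq.heappop(heap)
    if -neg_reward < STOP_REWARD:
        break  # every remaining candidate unit is even less profitable
    quantities[name] += 1
    heapq.heappush(heap, (-unit_reward(products[name], nth + 1), name, nth + 1))

print(dict(quantities))  # final quantities per product for the purchase order
```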

Prioritizing decisions is a very natural operation from a Quantitative Supply Chain point of view. As every decision is associated with a financial outcome expressed in dollars, ranking the decisions from the most profitable to the least profitable is straightforward. Thus, many, if not most, of the dashboards that compile the suggested supply chain decisions can be expected, in practice, to be prioritized lists of decisions. These dashboards contain lists with highly profitable decisions listed at the top and very unprofitable ones listed at the bottom. Alternatively, supply chain practitioners may decide to truncate the lists where decisions become unprofitable. However, there are frequently insights to be gained from being able to inspect decisions that happen to be just below the profitability threshold - even if the company obviously isn’t expected to act on those unprofitable entries.

In order to deliver this type of decision-driven dashboard, the software solution supporting the Quantitative Supply Chain needs to numerically explore vast amounts of possible decisions. For example, the solution should be able to consider the financial impact of purchasing every single unit, unit by unit, for every single product in every single location. Not surprisingly, this operation may require substantial computing resources. Fortunately, nowadays, computing hardware is capable of dealing with even the largest global supply chains. Assuming that the underlying software solution is suitably architected for Quantitative Supply Chain, scalability of the data processing should remain a non-issue as far as supply chain teams are concerned.

Whiteboxing the numerical results

Systems derisively referred to as black boxes - in supply chain as well as in other fields - are systems that generate outputs which cannot be explained by the practitioners who interact with them. The Quantitative Supply Chain, with its specific focus on an automated data pipeline, also faces the risk of delivering what supply chain teams would classify as “black boxes”. Indeed, the financial implications of supply chain decisions are very important for a company, and, while a newer system can improve the situation, it can also potentially create disasters. While automation is highly desirable, it does not mean that the supply chain team isn’t expected to have a thorough understanding of what is being delivered by the data pipeline supporting the Quantitative Supply Chain initiative.

The term whiteboxing refers to the effort needed to make the solution fully transparent for the benefit of supply chain teams. This approach emphasizes that no technology is transparent by design. Transparency is the end result of a specific effort, which is part of the initiative itself. Even a simple linear regression can generate baffling results in practice. Putting aside a few exceptional individuals, most people do not have an intuitive understanding of what the linear model’s “expected” output is whenever 4 dimensions or more are involved. Yet, supply chain problems often involve dozens of variables, if not hundreds. Thus, even simplistic statistical models are de facto black boxes for supply chain practitioners. Naturally, when machine learning algorithms are used, as is recommended by Quantitative Supply Chain, they leave the practitioners even more in the dark.

While the black box effect is a real problem, a realistic solution does not lie in simplifying data processing into calculations that are immediately intuitive to the human mind. This approach is a recipe for extreme inefficiency, which entirely demolishes the benefits of modern computing resources that can be used to tackle the raw complexity of modern supply chains. Dumbing down the process is not the answer. Whiteboxing is.

Even the most complex supply chain recommendations can be made largely transparent to supply chain practitioners by simply decomposing the inner calculations with well-chosen financial indicators, which represent the economic drivers that support the recommendation itself. For example, instead of merely displaying a bare table with two columns, product and quantity, as a suggested purchase order, the table should include a few extra columns that aid decision-making. Those extra columns can include the current stock, the total demand over the last month, the expected lead time, the expected financial cost of a stock-out (if no order is passed), the expected financial cost of overstock (the risk associated with the suggested order), etc. The columns are crafted to give the supply chain team quick sanity checks of the suggested quantities. Through the columns, the team can rapidly establish trust in the numerical output, and can also identify some of the weaknesses of the solution that need further improvement.
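
As a minimal illustration - Python, with hypothetical column names and made-up figures - the suggested purchase order below carries its economic drivers alongside the product and quantity columns.

```python
# Sketch of a whiteboxed purchase order line (hypothetical columns, made-up figures).
suggested_order = [
    {
        "product": "ABC-123",
        "suggested_qty": 24,
        "stock_on_hand": 7,
        "demand_last_30_days": 31,
        "expected_lead_time_days": 12,
        "expected_stockout_cost": 410.0,   # financial risk if no order is passed
        "expected_overstock_cost": 95.0,   # financial risk carried by the suggested order
    },
]

columns = list(suggested_order[0])
print("\t".join(columns))
for line in suggested_order:
    print("\t".join(str(line[c]) for c in columns))
```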

Extending the dashboards for whiteboxing purposes is partly an art. Generating millions of numbers is easy, even with no more computing resources than those of a smartphone. Yet, generating 10 numbers worthy of being looked at on a daily basis is very difficult. Thus, the core challenge is to identify a dozen or fewer KPIs that are sufficient to shed light on the recommended supply chain decisions. Good KPIs typically require a lot of work; they should not be naïve definitions, which are usually misleading in supply chain. For example, even a column as simple as the “unit purchase price” can be highly misleading if the supplier happens to offer volume discounts, thereby making the purchase price dependent on the quantity being purchased.
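
A small sketch - Python, with a made-up supplier price grid - shows why such a column is ambiguous: with volume discounts, the effective unit price depends on the quantity being purchased.

```python
# Sketch of a volume discount grid (made-up price breaks).
def effective_unit_price(quantity: int, price_breaks: list) -> float:
    """Return the unit price applicable to the order quantity.

    price_breaks: (minimum quantity, unit price) pairs, sorted by minimum quantity.
    """
    applicable = price_breaks[0][1]
    for min_qty, unit_price in price_breaks:
        if quantity >= min_qty:
            applicable = unit_price
    return applicable

breaks = [(1, 10.0), (50, 8.5), (200, 7.0)]  # hypothetical supplier price grid
for quantity in (10, 60, 250):
    print(quantity, "units ->", effective_unit_price(quantity, breaks), "per unit")
```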

Strategic dashboards

While the focus on small-scale decisions is necessary - as it’s one of the few approaches that lends itself to quantitative performance assessments - the supply chain may also need to be adjusted in bigger, more disruptive ways to ramp up performance to its next level. For example, purchasing more well-chosen units of stock marginally increases the service level. However, at some point, the warehouse is full, and no additional unit can be purchased. In this situation, a bigger warehouse should be considered. In order to assess the impact of lifting this limitation, we can remove the warehouse capacity constraint from the calculations and evaluate the overall financial upside of operating with an arbitrarily large warehouse. The supply chain management can then keep an eye on the financial indicator associated with the friction cost imposed by the warehouse capacity itself, and decide when it’s time to consider increasing the warehousing capacity.

Typically, supply chains operate based on numerous constraints that cannot be revised on a daily basis. Those constraints can include working capital, warehousing capacity, transportation delays, production throughput, etc. Each constraint is associated with an implicit opportunity cost for the supply chain, which typically translates into more stock, more delays or more stock-outs. The opportunity cost can be assessed through the performance gains that would be obtained by removing or weakening the constraint itself. While some of those simulations may prove to be difficult to implement, frequently, they are no more difficult than optimizing the routine decisions, i.e. establishing the purchase order quantities.
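
The assessment of such an opportunity cost can be sketched as follows - Python, with made-up numbers and a hypothetical warehouse capacity expressed in units: the same prioritized purchase logic is evaluated once with the constraint and once without, and the difference in expected profit is the friction cost imposed by the constraint.

```python
# Sketch of the opportunity cost of a capacity constraint (made-up rewards).
def total_expected_profit(ranked_unit_rewards: list, capacity=None) -> float:
    """Sum the rewards of the most profitable units that fit within the capacity."""
    kept = ranked_unit_rewards if capacity is None else ranked_unit_rewards[:capacity]
    return sum(r for r in kept if r > 0)

# Rewards of candidate units, already ranked from most to least profitable.
ranked_rewards = [30.0, 15.0, 12.0, 9.6, 7.7, 7.5, 6.1, 4.9, 3.8, 2.5, -1.0]

with_constraint = total_expected_profit(ranked_rewards, capacity=5)
without_constraint = total_expected_profit(ranked_rewards)  # constraint removed
print("opportunity cost of the capacity constraint:", without_constraint - with_constraint)
```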

The Quantitative Supply Chain emphasizes that the opportunity costs associated with those constraints should be part of the production data pipeline and, typically, should be materialized with dedicated dashboards, which are specifically intended to help the supply chain management decide when it’s time to bring bigger changes to their supply chain. These types of dashboards are referred to as strategic dashboards. This approach differs from the traditional supply chain practice, which emphasizes ad hoc initiatives whenever the supply chain appears to be about to hit an operating limit. Indeed, the KPIs delivered by strategic dashboards are refreshed on a daily basis, or more frequently if needed, just like the rest of the data pipeline. The management does not need to mount a last-minute, last-ditch investigation, because the KPIs are up-to-date and ready to capitalize on the insights gained from a long-lived initiative.

The strategic dashboards support the decision-making process of the supply chain management. As they are part of the data pipeline, whenever the market starts evolving at a faster pace than usual, the KPIs remain up-to-date on the company’s present situation. This approach avoids the traditional pitfalls associated with ad hoc investigations, which invariably add further delays to already overdue problems. It also largely mitigates the opposite problem: hasty strategic decisions that turn out to be unprofitable - a regrettable outcome that could have been anticipated right from the start.

Inspector dashboards

Supply chains are both complex and erratic. Those properties make the debugging of the data pipeline a fearsomely challenging task. Yet, this data pipeline is the spinal cord of the Quantitative Supply Chain initiative. Data processing mistakes, or bugs, can happen anywhere within the data pipeline. Worse, the most frequent type of issue is not the incorrect formula, but the ambiguous semantics. For example, at the beginning of the pipeline, the variable stockinv might refer to the stock available (where negative values are possible), while later on, the same variable is used with a stock on hand interpretation (where values are expected to be positive). The ambiguous interpretation of the variable stockinv can generate a wide variety of incorrect behaviors, ranging from system crashes - which are obvious, hence only moderately harmful - to a silent and pervasive corruption of the supply chain decisions.

As supply chains are nearly always built out of a unique mix of software solutions set up over the years, there is no hope of gaining access to a “proven” software solution that is free of bugs. Indeed, most of the issues arise at the system boundaries, when reconciling data originating from different systems, or even merely reconciling data originating from different modules within the same system. Thus, no matter how proven the software solution might be, the tooling must be able to handily support the debugging process, as those kinds of issues are bound to happen.

The purpose of the inspector dashboards is to provide fine-grained views for a minute inspection of the supply chain datasets. Yet, those dashboards are not simple drill-downs to inspect the input data tables. Such drill-downs, or similar slice-and-dice approaches on the data, would be missing the point. Supply chains are all about flows: flow of materials, flow of payments, etc. Some of the most severe data issues happen when the flow’s continuity is “logically” lost. For example, when moving goods from warehouse A to warehouse B, warehouse B’s database might be missing a few product entries, hence generating subtle data corruptions, as units originating from warehouse A get received in warehouse B without getting properly attached to their product. When numerical results feel odd, those inspector dashboards are the go-to option for the Supply Chain Scientist to perform a quick sample data investigation.

In practice, an inspector dashboard provides a low-level entry point, such as a product code or a SKU, and consolidates all the data associated with this entry point into a single view. When goods flow through many locations, as happens in aerospace supply chains for example, the inspector dashboard usually attempts to reconstitute the trajectories of the goods, which may have transited not only through multiple physical locations but through multiple systems as well. By gathering all this data in one place, it becomes possible for the Supply Chain Scientist to assess whether the data makes sense: is it possible to identify where the goods being shipped originate from? Are stock movements aligned with the official supply chain policies, etc.? The inspector dashboard is a “debugging” tool, as it’s designed to bring together the data that is tightly coupled, not from an IT viewpoint, but from a supply chain viewpoint.
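
A bare-bones sketch - Python, with hypothetical event records and system names - illustrates the core of an inspector dashboard: every record attached to a single entry point, here a SKU, is gathered from the various systems and laid out chronologically, so that the flow of goods can be followed end to end.

```python
# Sketch of an inspector view for a single SKU (hypothetical records and systems).
from datetime import date

events = [
    {"sku": "WHEEL-7", "date": date(2017, 3, 9), "system": "WMS-ASIA", "event": "received", "qty": 2},
    {"sku": "WHEEL-7", "date": date(2017, 3, 1), "system": "WMS-EU",   "event": "received", "qty": 2},
    {"sku": "WHEEL-7", "date": date(2017, 3, 2), "system": "WMS-EU",   "event": "shipped",  "qty": -2},
]

def inspect(sku: str, events: list) -> None:
    """Print the chronological timeline of every event attached to the SKU."""
    timeline = sorted((e for e in events if e["sku"] == sku), key=lambda e: e["date"])
    for e in timeline:
        print(e["date"], e["system"], e["event"], "qty:", e["qty"])

inspect("WHEEL-7", events)
```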

One of the most bizarre issues that Lokad faced while investigating supply chain datasets was the case of the teleported parts. The company - in this case an airline - had aircraft parts stocked both in mainland Europe and Southern Asia. As aircraft security is an absolute requirement to operate, the company kept impeccable stock movement records for all its parts. Yet, using a newly devised inspector dashboard, the Lokad team came to realize that some parts were moving from Asia to Europe, and vice-versa, supposedly in only 2 or 3 minutes. As the aircraft parts were transported by aircraft, the transportation time would have been expected to be at least a dozen hours - certainly not minutes. We immediately suspected some timezone or other computer time issue, but the time records also proved to be impeccable. Then, investigating the data further, it appeared that the teleported parts were indeed being used and mounted on aircraft at their landing location, a finding which was all the more baffling. When the supply chain teams themselves had a look at the inspector dashboards, the mystery was finally uncovered. The teleported parts were aircraft wheels consisting of two half-wheels plus a tire. The wheel could be dismounted by taking apart the two half-wheels and the tire. In the most extreme case, if the two half-wheels and the tire were removed, then there was nothing physically left. Hence, the fully dismounted wheel could be freely remounted anywhere, entirely disregarding its original location.

The inspector dashboards are the low-level counterpart of the data health dashboards. They focus on fully disaggregated data, while data health dashboards usually take a more high-level stance on the data. Also, inspector dashboards are typically an integral part of the whiteboxing effort. When facing what appears to be a puzzling recommendation, supply chain practitioners need to have a close look at a SKU or a product in order to figure out whether the recommended decision is reasonable or not. The inspector dashboard is typically adjusted for this very purpose, by including many intermediate results that contribute to the calculation of the final recommendation.