Initiative of Quantitative Supply Chain


Quantitative Supply Chain Optimization, or Quantitative Supply Chain in short, is a broad perspective on supply chains that, simply put, aims to make the most of human intelligence, augmented with the capabilities of modern computing resources. Yet, this perspective is not all-inclusive. It does not pretend to be the endgame solution to supply chain challenges, but to be one complementary approach that can nearly always be used to improve the situation.

The initiative

Quantitative Supply Chain helps your company to improve quality of service, to reduce excess stocks and write-offs, to boost productivity, to lower purchase prices and operating costs … and the list goes on. Supply chain challenges vary wildly from one situation to another. Quantitative Supply Chain embraces this diversity and strives to address it, all while facing the resulting complexity. However, for supply chain practitioners who are used to more classical approaches to optimizing their supply chains, Quantitative Supply Chain might feel a bit bewildering.

In the following, we review the ingredients that are required to make the most of the quantitative perspective on supply chain. We examine and clarify the ambitions of a Quantitative Supply Chain initiative. We review the roles and the skills of the team tasked with the execution of the initiative. Finally, we give a brief overview of the methodology associated with Quantitative Supply Chain.

The ambition

Except for very small companies, a supply chain involves millions of decisions per day. For every unit held in stock, every day, the company is making the decision to keep the unit where it is, rather than to move it somewhere else. What’s more, the same logic applies to non-existent stock units that could be produced or purchased. Doing nothing is already a decision in itself.

Quantitative Supply Chain is about optimizing the millions of decisions that need to be made by the company every day, and since we are talking about millions, if not billions of decisions per day, computers play a central role in this undertaking. This isn’t surprising since, after all, supply chains were historically one of the first corporate functions, after accounting, to be digitalized back in the late 1970s. Yet, Quantitative Supply Chain is about taking digitalization one step further.

Here we have to acknowledge that misguided attempts to roll out the “supply chain system of the future” have been frequent over the last two decades. Too often, such systems did nothing but wreak havoc on supply chains, combining black-box effects and automation gone wrong, and thereby generating so many bad decisions that problems could no longer be fixed by human intervention.

To some extent, Quantitative Supply Chain was born out of those mistakes: instead of pretending that the system somehow knows the business better than its own management, the focus needs to be placed on executing the insights generated by the management, but with a higher degree of reliability, clarity and agility. Software technology done right is a capable enabler, but, considering the present capabilities of software, removing people entirely from the solution isn’t a realistic option.

This ambition has one immediate consequence: the software that the company uses to keep track of its products, materials and other resources isn’t going to be the same as the software the company needs to optimize its decisions. Indeed, be it an ERP, a WMS, an MRP or an OMS - all such software primarily focuses on operating the company’s processes and its stream of data entries. Don’t get us wrong, there are massive benefits in streamlining data entries and automating all clerical tasks. Yet, our point remains that these tasks do not address in the slightest the challenge at hand, which is to increase the capacity of your company to execute human insights, and at the scale required by your supply chain.

Then, there is no optimization without measurement. Therefore, Quantitative Supply Chain is very much about measurements - as its name suggests. Supply chain decisions - buying stock, moving stock - have consequences, and the quality of such decisions should be assessed financially (for example in dollars) with sound business perspectives. However, having good, solid metrics takes effort, significant effort. One of the goals of Quantitative Supply Chain is to help the company establish such metrics, which also plays a critical role during a project’s later stages, in assessing the return on investment (ROI) of the overall supply chain initiative.

Finally, as mentioned previously, Quantitative Supply Chain is not an all-encompassing paradigm. It does not have the ambition to fix or improve everything in the company’s supply chain. It doesn’t claim to help you find trusted suppliers or reliable logistic partners. It doesn’t promise to help you hire great teams and keep their spirits high. Yet, thanks to its very specific focus, Quantitative Supply Chain is fully capable of delivering tangible results.

The project roles

Quantitative Supply Chain requires a surprisingly low amount of human resources, even when handling somewhat large-scale supply chains. However, such an initiative does require specific resources, which we will cover the details of in this section. But before delving into the different roles and their specificities, let’s start by mentioning one core principle of Quantitative Supply Chain: the company should capitalize on every human intervention.

This principle goes against what happens in practice with traditional supply chain solutions: human efforts are consumed by the solution, not capitalized. In order to keep producing an unending stream of decisions, the solution requires an unending stream of manual entries. Such entries can take many forms: adjusting seasonal profiles, dealing with exceptions and alerts, fixing odd forecast values, etc.

Quantitative Supply Chain seeks to reverse this perspective. It’s not just that human labor is expensive, it’s that supply chain expertise combined with acute business insights is too rare and too precious to be wasted on repetitive tasks. The root cause of manual intervention should be fixed instead: if forecast values are off, then there is no point in modifying the values themselves; it’s the input data or the forecasting algorithm itself that needs to be fixed. Fixing symptoms only guarantees having to deal with the same problems endlessly.

The size of the team required to execute a quantitative supply chain initiative varies depending on the scale of the supply chain itself. At the lower end of the spectrum, it can be less than one FTE (full-time equivalent), typically for companies below $20 million in turnover. At the higher end of the spectrum, it can involve a dozen people; but in this case, several billion dollars’ worth of inventory is typically at stake.

The Supply Chain Leader: Quantitative Supply Chain is a change of paradigm. Driving change requires leadership and support from the top management. Too frequently, supply chain leadership does not feel that it has the time to be too directly involved in what is perceived as the “technicalities” of a solution. Yet, Quantitative Supply Chain is about executing strategic insights at scale. Not sharing the strategic insights with the team in charge of the initiative is a recipe for failure. Management is not expected to come up with all the relevant metrics and KPIs – as it takes a lot of effort to put these together – but management is certainly expected to challenge them.

The Supply Chain Coordinator: while the Quantitative Supply Chain initiative itself is intended to be very lean on staff, most supply chains aren’t, or at the very least, aren’t that lean. Failure to bring everybody on board can result in confusion and a slowdown of the initiative. Thus, the Coordinator’s mission is to gather all the necessary internal feedback the initiative requires and communicate with all the parties involved. The Coordinator clarifies the processes and decisions that need to be made, and gets feedback on the metrics and the KPIs that will be used to optimize those decisions. He also makes sure that the solution embraces the company workflows as they are, while preserving the possibility of revising those workflows at a later stage of the initiative.

The Data Officer: Quantitative Supply Chain is critically dependent on data, and every initiative needs to have reliable access to data from a batch processing perspective. In fact, the initiative does not merely involve reading a few lines in the company system, rather it involves reading the entire sales history, the entire purchase history, the entire product catalog, etc. The Data Officer is typically delegated by the IT department to support the initiative. He is in charge of automating all the data extraction logic and getting this logic scheduled for daily extractions. In practice, the efforts of the Data Officer are mostly concentrated at the very beginning of the initiative.

The Supply Chain Scientist: he uses the technology - more on this to follow - for combining the insights that have been gathered by the Coordinator with the data extracted by the Data Officer in order to automate the production of decisions. The scientist begins by preparing the data, which is a surprisingly difficult task and requires a lot of support from the Coordinator, who will need to interact with the many people who produced the data in the first place, to clarify anything that may be uncertain. He formalizes the strategy so that it can be used to generate decisions - for instance, the suggested reorder quantities. Finally, the Supply Chain Scientist equips the whole data pipeline with dashboards and KPIs to ensure clarity, transparency and control.

For mid-sized companies, having the same person fulfill both the Coordinator and the Data Officer role can be extremely efficient. It does require a range of skills that is not always easy to find in a single employee; however, if such a person does exist in the organization, he tends to be an asset for speeding up the initiative. Then, for larger companies, even if the Coordinator is not highly familiar with the company’s databases at the beginning of the initiative, it’s a big plus if the Coordinator is capable of gaining a certain level of familiarity with the databases as the initiative goes on. Indeed, the IT landscape keeps changing, and anticipating how this change will impact the initiative vastly helps to ensure smooth ongoing execution.

Managed subscription plans of Lokad. Filling the Supply Chain Scientist position might be a challenge for companies that have not cultivated any data science expertise for years. Lokad supports the quantitative supply chain initiatives of such companies by providing an “expert-as-a-service” through its Premier subscription plan. On top of delivering the necessary coaching for the initiative to take place, Lokad also provides the time and dedication it takes to implement the logic that computes the decisions and the dashboards that give the clarity and control that management requires in order to gain trust in, and an understanding of, the initiative itself.

The technology

So far, we have remained rather vague concerning the software technology required to support Quantitative Supply Chain. Yet, Quantitative Supply Chain is critically dependent on the technology stack that is used to implement it. While, conceptually, every piece of software could be re-implemented from scratch, the Supply Chain Scientist requires an incredible amount of support from his stack to be even reasonably productive. Then, certain capabilities, such as forecasting and numerical optimization, require significant prior R&D efforts that go well beyond what the Supply Chain Scientist can deliver during the course of the initiative.

The first requirement of Quantitative Supply Chain is a data platform with programmatic capabilities, and, naturally, having access to a data platform specifically tailored for handling supply chain data and supply chain problems is a sure advantage. We are referring to a data platform because, while any desktop workstation can store multiple terabytes nowadays, it does not mean that this desktop workstation will offer any of the other desirable properties for carrying out the initiative: reliability against hardware failure, auditability for all accesses, compatibility with data exports, etc. In addition, since supply chain datasets tend to be large, the data platform should be scalable, or in other words, capable of processing large amounts of data in a short amount of time.

The data platform requires programmatic capabilities, which refers to the possibility to implement and execute pretty much any arbitrary data processing logic. Such capabilities are delivered through a programming language. Programming is correctly perceived as a very technical skill, and many vendors take advantage of the fear inspired by the idea of having to cope with a solution that requires “programming” to push simple user interfaces with buttons and menus to the users. However, whenever the supply chain teams are denied programmatic capabilities, Excel sheets take over, precisely because Excel offers programmatic capabilities with the possibility to write formulas that can be arbitrarily complicated. Far from being a gadget, programmatic capabilities are a core requirement.
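To make the point concrete, here is a minimal sketch, written in Python rather than in any particular vendor’s language, of the kind of arbitrary, business-specific reorder rule that programmatic capabilities make easy to express and that fixed menus and buttons typically cannot; the field names and the 30-day cover rule are purely hypothetical.

```python
# Hypothetical stock records; the fields are illustrative, not a real schema.
skus = [
    {"sku": "A", "on_hand": 4, "on_order": 0, "daily_demand": 1.5, "moq": 10},
    {"sku": "B", "on_hand": 40, "on_order": 10, "daily_demand": 0.8, "moq": 5},
]

def reorder_quantity(row, cover_days=30):
    """Order up to `cover_days` of demand, net of stock on hand and on order,
    rounded up to the supplier MOQ - an arbitrary rule, trivial to express as code."""
    target = row["daily_demand"] * cover_days
    shortfall = max(0.0, target - row["on_hand"] - row["on_order"])
    if shortfall == 0.0:
        return 0
    return max(row["moq"], int(round(shortfall)))

for row in skus:
    print(row["sku"], reorder_quantity(row))
```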

Finally, there are significant benefits in having a data platform tailored for supply chain. In fact, the need for a data platform of some kind is hardly specific to supply chain: quantitative trading, as performed by banks and funds, comes with similar needs. However, supply chain decisions don’t require sub-millisecond latencies like high-frequency trading does. The design of a data platform is a matter of engineering trade-offs as well as a matter of a software ecosystem, which begins with the supported data formats. Those engineering trade-offs and the software ecosystem should be aligned with the supply chain challenges themselves.

The second requirement of Quantitative Supply Chain is a probabilistic forecasting engine. This piece of software is responsible for assigning a probability to every possible future. Although this type of forecast is a bit disconcerting at first, because it goes against the intuition of forecasting the future as a single number, the “catch” actually lies in the uncertainty: the future isn’t certain and a single forecast is guaranteed to be wrong. The classic forecasting perspective denies uncertainty and variability, and as a result, the company ends up struggling with a forecast that was supposed to be accurate, but isn’t. A probabilistic forecasting engine addresses this problem head-on: rather than denying the uncertainty, it quantifies it with probabilities.
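As a toy illustration (not Lokad’s actual engine), a probabilistic demand forecast can be thought of as a discrete distribution mapping each possible demand level to its probability; the numbers below are invented for the example.

```python
# Hypothetical probabilistic demand forecast over some horizon: units -> probability.
demand_forecast = {0: 0.10, 1: 0.20, 2: 0.30, 3: 0.25, 4: 0.10, 5: 0.05}

assert abs(sum(demand_forecast.values()) - 1.0) < 1e-9  # probabilities sum to 1

mean_demand = sum(d * p for d, p in demand_forecast.items())
prob_stockout_with_2_units = sum(p for d, p in demand_forecast.items() if d > 2)

print(f"mean demand: {mean_demand:.2f} units")
print(f"P(demand exceeds 2 units in stock): {prob_stockout_with_2_units:.2f}")
```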

Probabilistic forecasting in supply chain is typically a 2-stage process starting with a lead time forecast and followed by a demand forecast. The lead time forecast is a probabilistic forecast: a probability is assigned to all possible lead time durations, usually expressed in days. Then, the demand forecast is a probabilistic forecast as well and this forecast is built on top of the lead time forecast provided as an input. In this manner, the horizon to be covered by the demand forecast has to match the lead times, which are themselves uncertain.
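A hedged sketch of this two-stage idea, under simplifying assumptions (daily demand is independent and identically distributed, and independent of the lead time): the demand-over-lead-time distribution is obtained by compounding the daily demand distribution over the probabilistic lead time. All numbers are placeholders.

```python
from collections import defaultdict

lead_time = {2: 0.5, 3: 0.3, 4: 0.2}        # days -> probability
daily_demand = {0: 0.6, 1: 0.3, 2: 0.1}     # units per day -> probability

def convolve(dist_a, dist_b):
    """Distribution of the sum of two independent discrete variables."""
    out = defaultdict(float)
    for a, pa in dist_a.items():
        for b, pb in dist_b.items():
            out[a + b] += pa * pb
    return dict(out)

def demand_over_lead_time(lead_time, daily_demand):
    """Compound the daily demand over every possible lead time duration."""
    out = defaultdict(float)
    for days, p_days in lead_time.items():
        acc = {0: 1.0}
        for _ in range(days):
            acc = convolve(acc, daily_demand)
        for units, p_units in acc.items():
            out[units] += p_days * p_units
    return dict(out)

dist = demand_over_lead_time(lead_time, daily_demand)
print(sorted(dist.items()))  # units -> probability, sums to 1
```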

As the probabilistic forecasting engine delivers sets of probability distributions, its forecasting outputs involve a lot more data than the outputs of a classic forecasting engine. This isn’t a blocking problem per se, but in order to avoid facing too much friction while processing a massive set of probabilities, a high degree of cooperation is required between the data platform and the forecasting engine.

Lokad’s technology stack. We could say that Lokad’s technology has been designed to embrace the Quantitative Supply Chain perspective, but in reality, it happened the other way around. Lokad’s R&D teams made a breakthrough in probabilistic forecasting and uncovered data processing models that were a much better fit for supply chain challenges than traditional approaches. We realized the extent of the breakthrough once those elements had been put into production, as we were able to observe superior levels of performance. This consequently led Lokad to the Quantitative Supply Chain perspective as a way of clarifying what Lokad teams were actually doing. Lokad has both a data platform – codenamed Envision – and a probabilistic forecasting engine. As you can see, Quantitative Supply Chain has very empirical roots.

The project phases

Quantitative Supply Chain is heavily inspired by software engineering R&D and the best practices known to data science. The methodology is highly iterative, with a low emphasis on prior specification and a high emphasis on agility and the capacity to recover from unexpected issues and/or unexpected results. As a result, this methodology tends to be perceived as rather surprising by companies that are not deeply involved in the software industry themselves.

The first phase is the scoping phase, which defines which supply chain decisions are intended to be covered by the initiative. This phase is also used to diagnose the expected complexity involved in the decision-making process and the relevant data.

The second phase is the data preparation phase. This consists of establishing an automated set-up that copies all the relevant data from the company systems to a separate analytical platform. It also consists of preparing this data for quantitative analysis.

The third phase is the pilot phase and consists of implementing an initial decision-making logic that generates decisions, for instance the suggested purchase quantities, which in itself already outperforms the company’s former processes. This logic is expected to be fully automated.

The fourth phase is the production phase, which brings the initiative to cruising speed where the performance is monitored and maintained, and where a consensus is achieved on the desirable degree of refinement for the supply chain models themselves.

The scoping phase is the most straightforward and identifies the routine decisions that the Quantitative Supply Chain initiative intends to cover. These decisions might involve many constraints: MOQ (minimum order quantities), full containers, maximum warehouse capacity, … and these constraints should be closely examined. Then, decisions are also associated with economic drivers: carrying costs, cost of stock-outs, gross-margin, … and such economic drivers should also be studied. Finally, the relevant historical data should be identified, along with the systems from which the data will be extracted.
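For illustration only, here is one possible way, not a prescribed formula, of turning such economic drivers into a dollar score for a single candidate decision (buying one more unit of a product); every number and parameter name below is a placeholder.

```python
def unit_score(sell_price, unit_cost, p_unit_sold, carrying_cost, stockout_penalty):
    """Illustrative dollar score of buying one more unit, combining economic drivers:
    gross margin if the unit sells, carrying cost if it does not, stock-out cost avoided."""
    expected_margin = p_unit_sold * (sell_price - unit_cost)
    expected_carrying = (1.0 - p_unit_sold) * carrying_cost
    avoided_stockout = p_unit_sold * stockout_penalty
    return expected_margin + avoided_stockout - expected_carrying

print(unit_score(sell_price=50.0, unit_cost=30.0,
                 p_unit_sold=0.7, carrying_cost=6.0, stockout_penalty=15.0))
```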

The data preparation phase is the most difficult phase; most failures tend to happen at this stage. Gaining access to data and making sense of data is nearly always an underestimated challenge. Operational systems (e.g. ERP / MRP / WMS / OMS) have been designed to operate the company, to keep the company running. Historical data is a by-product of such systems since recording data was not the reason why those systems were implemented in the first place. Thus, many difficulties should be expected in this phase. When facing difficulties, most companies have an unfortunate reflex: let’s step back and write down a full specification. Unfortunately, a specification can only cover the known or expected difficulties. Yet, nearly all the major issues that are encountered in this phase are elements that cannot be planned for.

In reality, problems typically tend to be revealed only when someone actually starts putting the data to the test of generating data-driven decisions. If decisions come out wrong while the logic is considered to be sound, then there is probably a problem with the data. Data-driven decisions tend to be somewhat sensitive to data issues, and therefore actually represent an excellent way of challenging how much control the company has over its own data. Moreover, this process challenges the data in ways that are meaningful for the company. Data quality and an understanding of data are merely means to an end: delivering something of value for the company. It is very reasonable to concentrate the efforts on data issues that have a significant impact on data-driven decisions.

The pilot phase is the phase that puts the supply chain management to the test. Embracing uncertainty with probabilistic forecasts can be rather counter-intuitive. At the same time, many traditional practices such as weekly or monthly forecasts, safety stocks, stock covers, stock alerts or ABC analysis actually do more harm than good. This does not mean that the Quantitative Supply Chain initiative should run loose. In fact, it’s quite the opposite as Quantitative Supply Chain is all about measurable performance. However, many traditional supply chain practices have a tendency to frame problems in ways that are adverse to the resolution of said problems. Therefore, during the pilot phase, one key challenge for the supply chain leadership is to remain open-minded and not reinject into the initiative the very ingredients that will generate inefficiencies at a later stage. You cannot cherish the cause while cursing the consequence.

Then, the Supply Chain Scientist and the technology are also both put to the test, given that the logic has to be implemented in order to generate the decisions in a relatively short timeframe. The initial goal is merely to generate what is perceived by practitioners as reasonable decisions, decisions that do not necessarily require manual correction. We suggest not to underestimate how big of a challenge it is to generate “sound” automated decisions. Traditional supply chain systems require a lot of manual corrections to even operate: new products, promotions, stock-outs… Quantitative Supply Chain establishes a new rule: no more manual entries are allowed for mundane operations; all factors should be built into the logic.

The Supply Chain Coordinator is there to gather all factors, workflows and specificities that should be integrated into the decision-making logic. Following this, the Supply Chain Scientist implements the first batch of KPIs associated with the decisions. Those KPIs are introduced in order to avoid black-box effects that tend to arise when advanced numerical methods are being used. It is important to note that the KPIs are devised together with the Supply Chain Leader who ensures that the measurements are aligned with the company’s strategy.

The production phase stabilizes the initiative and brings it to cruising speed. The decisions generated by the logic are being actively used and their associated results are closely monitored. It typically takes a few weeks to a few months to assess the impact of any given supply chain decision because of the lead times that are involved. Thus, the initiative’s pace of change is slowed down in the production phase, so that it becomes possible to make reliable assessments about the performance of the automated decisions. The initiative enters a continuous improvement phase. While further improvements are always desirable, a balance has to be reached between the benefits of possible refinements to the logic and the corresponding complexity of those refinements, in order to keep the whole solution maintainable.

The Supply Chain Coordinator, free from his mundane data-entry tasks, can now focus on the strategic insights proposed by the supply chain management. Usually, desirable supply chain process changes that may have been identified during the pilot phase have been put on hold in order to avoid disrupting operations by changing everything at once. However, now that the decision making logic’s pace of change has slowed down, it becomes possible to incrementally revise the processes, in order to unlock performance improvements that require more than better routine decisions.

The Supply Chain Scientist keeps fine-tuning the logic by putting an ever increasing emphasis on the KPIs and data quality. He is also responsible for revising the logic since subtle flaws or subtle limitations, typically relating to infrequent situations, become uncovered over time. Then, as the processes change, the decision-making logic is revised too, in order to remain fully aligned with the workflows and the strategy. Also, even when internal processes do not change, the general IT and business landscapes do keep changing anyway: the Supply Chain Scientist has to ensure that the decision-making logic remains up to date within this constant state of flux.

The deliverables

The goal of Quantitative Supply Chain is to deliver actionable decisions - suggested quantities for purchase orders being an archetypal example. Below, we further clarify the specific form and delivery mechanism of those decisions. Setting clear expectations for the deliverables is an important step in the journey that Quantitative Supply Chain represents. Also, the optimized numerical results are not the only desirable output: several other outputs, most notably data health monitoring and management KPIs, should be included in the deliverable as well. In practice, the deliverables of a Quantitative Supply Chain initiative are dependent on the flexibility of the software solution used to support the initiative itself. Nevertheless, they are primarily defined by their intent, which is agnostic of the technology being used.

Scripts as deliverable

Quantitative Supply Chain emphasizes fully automated data pipelines. This does not imply that the software setup is supposed to run autonomously. A high degree of close supervision is naturally desirable whenever a large scale supply chain is being considered. Nevertheless, the data pipeline is expected to be fully automated in the sense that no single step in the pipeline actually depends on a manual operation. Indeed, as outlined in the manifesto, whenever manual operations are involved in supporting supply chain data processing, the solution simply does not scale in practice.

As a direct consequence of this insight, the deliverables of a Quantitative Supply Chain initiative are invariably whole pieces of software. This does not imply that the team in charge is expected to reimplement everything: a software solution dedicated to Quantitative Supply Chain offers the possibility to focus strictly on the relevant aspects of supply chain challenges. All the low-level technicalities, such as leveraging distributed computing resources auto-allocated within a cloud computing platform, are expected to be abstracted away. The team does not need to delve into such matters, because those aspects are expected to be suitably managed by the tooling itself.

The deliverables are materialized as scripts, typically written in a programming language able to accommodate the supply chain requirements while featuring a high level of productivity. The term “script” is used here rather than “source code”, but the two terms are closely related: a “script” stresses the idea of a high degree of abstraction and a focus on the task itself, while “source code” emphasizes a lower-level perspective, intended to be an accurate reflection of the computing hardware itself. For Quantitative Supply Chain, it’s obviously the supply chain perspective that matters the most, not the computing hardware, which is a technical aspect of secondary importance.
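As an illustration of the shape such a script deliverable takes (written here in Python for readability, not in any specific platform’s scripting language; the file names and columns are hypothetical):

```python
import csv

def load(path):
    """Read one of the flat files refreshed daily by the data extraction."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def compute_decisions(sales, stock):
    """Placeholder for the actual decision logic (forecasting, optimization, ...)."""
    return [{"sku": row["sku"], "reorder_qty": 0} for row in stock]

def write(path, rows):
    """Publish the decision table consumed by dashboards or re-imported into the ERP."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    sales = load("sales_history.csv")
    stock = load("stock_levels.csv")
    write("suggested_orders.csv", compute_decisions(sales, stock))
```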

During the last decade, the success of WYSIWYG (what-you-see-is-what-you-get) user interfaces for end-customer apps has caused many supply chain software vendors to try to emulate this approach with a WYSIWYG solution for supply chain planning and optimization. However, the lesson of the near-systematic failure of these types of interfaces is that supply chains are complex and cannot dodge the need for programmatic tools. From our experience, expecting a drag-and-drop tool to be able to properly reflect the complex nonlinearities involved in, say, overlapping MOQs (minimum order quantities), is delusional at best. Programmatic expressiveness is required because, otherwise, the supply chain challenge cannot even be expressed within the tool.

Naturally, from the end-user perspective, scripts are not what supply chain practitioners would expect to see as a tangible output of the Quantitative Supply Chain initiative. People will interact with dashboards that contain consolidated KPIs and tables that gather suggested decisions. However, those dashboards are transient and disposable. They are merely obtained by running the scripts again on top of the relevant supply chain data. While the distinction is a bit subtle, it’s important not to confuse the script, which represents the real deliverable, with its numerical expression, which is typically what you can see as an end-user of the solution.

Data health dashboards

Before considering delivering optimized decisions for the supply chain, we must ensure that the data processed by the system that supports the Quantitative Supply Chain initiative is both numerically and semantically correct. The purpose of the data health monitoring dashboards, or simply data health dashboards, is to ensure a high degree of confidence in the data’s correctness, which is naturally an essential requirement for the accuracy of all the numerical results returned by the solution. These dashboards also assist the supply chain team in improving the quality of the existing data.

Numerical errors are straightforward: the CSV file exported from the ERP indicates that the product ABC has 42 units in stock, while the ERP web console reports only 13 units in stock. It is apparent here that we have divergent numbers where they should be the same. The data health dashboards address those relatively obvious problems by simply checking that data aggregates remain within expected numerical ranges.
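A minimal sketch of such a check, with purely illustrative bounds and field names:

```python
def check_aggregates(stock_rows, expected_total=(10_000, 1_000_000)):
    """Flag data refreshes whose aggregates fall outside the expected numerical ranges."""
    issues = []
    total_units = sum(r["on_hand"] for r in stock_rows)
    negative = [r["sku"] for r in stock_rows if r["on_hand"] < 0]
    if not (expected_total[0] <= total_units <= expected_total[1]):
        issues.append(f"total stock of {total_units} units is outside the expected range")
    if negative:
        issues.append(f"{len(negative)} SKUs with negative on-hand stock")
    return issues

print(check_aggregates([{"sku": "ABC", "on_hand": 42}, {"sku": "XYZ", "on_hand": -3}]))
```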

Semantic errors are more subtle and, in practice, much harder to pinpoint. Most of the work done during the data preparation actually consists in identifying and addressing all the semantic errors. For example: the field stockinv in the ERP might be documented as being the stock on hand. Thus, the supply chain team assumes that this quantity can never be negative, because, obviously, if those units are within physical reach on the shelf, it has to be a positive quantity. Yet, the documentation of the ERP might also happen to be slightly misleading, and this quantity would have been more aptly named stock available because whenever a stock-out happens and the client issues a backorder, the quantity can become negative to reflect that a certain number of units are already due to a client. This case illustrates a semantic error: the number isn’t wrong per se - it’s the understanding of the number that is approximate. In practice, semantic approximations can generate many inconsistent behaviors, which, in turn, generate ongoing friction costs within the supply chain.

The data health dashboards consolidate numbers that allow the company to decide on the spot whether the data can be trusted as good enough or not. Indeed, as the solution is going to be used on a daily basis for production purposes, it is imperative that a significant data problem be identified through a near-instant inspection. If not, then the odds are that the supply chain will end up operating for days, if not weeks, on top of faulty data. In this respect, the data health dashboard is akin to a traffic light: green you pass, red you stop.

Furthermore, when considering a sizeable supply chain, there is usually an irreducible amount of corrupted or otherwise incorrect data. This data arises through faulty manual entries or through rare edge cases in the company systems themselves. In practice, for any sizeable supply chain, it’s unreasonable to expect that the supply chain data will be 100% accurate. Instead, we need to ensure that the data is accurate enough to keep the friction costs generated by those errors quasi-negligible.

Hence, the data health dashboards are also expected to collect statistics on the identified data errors. Those statistics are instrumental in establishing that the data can be trusted. To that end, a Supply Chain Scientist is frequently called upon to establish well-chosen alert thresholds, typically associated with a hard stop of the solution. Care needs to be exercised in establishing the thresholds because, if they are too low, then the solution is unusable, as it is too frequently stopped for “identified data issues”; however, if they are too high, then friction costs generated by data errors may become significant and undermine the benefits brought by the initiative itself.
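A hedged sketch of this threshold logic, with a placeholder threshold value:

```python
def data_health_status(error_rate, stop_threshold=0.02):
    """Traffic-light signal for the data health dashboard. The 2% threshold is a
    placeholder; the actual value is tuned by the Supply Chain Scientist so that the
    pipeline neither stops too often nor lets costly data errors through."""
    if error_rate >= stop_threshold:
        return "RED: hard stop, decisions are withheld until the data issue is resolved"
    return "GREEN: the data refresh can be used for today's decisions"

print(data_health_status(0.001))
print(data_health_status(0.05))
```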

Beyond red-green signaling, data health dashboards are also intended to offer prioritized insights into the data improvement efforts. Indeed, many data points might be incorrect but also inconsequential. For example, it does not matter if the purchase price of a product is incorrect if the market demand for this product vanished years ago, as there won’t be any further purchase orders for this product.

The Quantitative Supply Chain emphasizes that the fine-grained resolution of data errors, which may involve a considerable amount of manual work, should be prioritized by weighing the estimated financial impact of each data error against the labor cost associated with its correction. Indeed, depending on the situation, the cost associated with correcting a single faulty data point varies enormously, and needs to be taken into account in the suggested prioritization. Finally, when the cost of corrections is deemed to be higher than the supply chain costs generated by those errors, the data improvement process can stop.
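Sketched below, with invented figures, is what such a prioritization could look like:

```python
# Hypothetical list of identified data errors, each with an estimated financial
# impact and an estimated labor cost of fixing it (both in dollars).
errors = [
    {"issue": "wrong purchase price on a live SKU",  "impact": 2500.0, "fix_cost": 50.0},
    {"issue": "missing supplier lead time",           "impact": 800.0,  "fix_cost": 200.0},
    {"issue": "wrong price on a discontinued SKU",    "impact": 0.0,    "fix_cost": 50.0},
]

# Only errors whose impact exceeds the cost of fixing them are worth the effort,
# worked on in decreasing order of net benefit.
worth_fixing = sorted(
    (e for e in errors if e["impact"] > e["fix_cost"]),
    key=lambda e: e["impact"] - e["fix_cost"],
    reverse=True,
)
for e in worth_fixing:
    print(e["issue"], "net benefit:", e["impact"] - e["fix_cost"])
```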

Prioritized decision dashboards

As we have seen, only supply chain decisions can truly be assessed from a quantitative perspective. Thus, it’s no surprise that one of the key operational deliverables of a Quantitative Supply Chain initiative is the set of dashboards that consolidate the decisions obtained as the final numerical result of the whole data pipeline. Such a dashboard can be as simple as a table that lists, for every product, the exact quantity in units to be immediately reordered. If MOQs (minimum order quantities) are present - or any alternative ordering constraints - then the suggested quantities might be zero most of the time, until the proper thresholds are met.

For the sake of simplicity, we assume here that those numerical results are gathered into a dashboard, which is a specific form of user interface. However, the dashboard itself is only one option, which may or may not be relevant. In practice, it is expected that the software powering the Quantitative Supply Chain initiative be highly flexible, i.e. programmatically flexible, offering many ways to package those results in various data formats. For example, the numerical results can be consolidated within flat text files, which are intended to be automatically imported into the primary ERP used to manage the company’s assets.

While the decisions’ format is highly dependent on the supply chain task being addressed, most tasks require prioritizing those decisions. For example, the act of computing suggested quantities for a purchase order can be decomposed into a prioritized list of units to be acquired. The most profitable unit is ranked first. As stock comes with diminishing returns, the second unit acquired for the same product addresses a smaller fraction of the market demand. Therefore, the second unit for this very product may not be the second entry in the overall list. Instead, the second most profitable unit can be associated with another product, etc. The prioritized list of units to be acquired is conceptually without end: it’s always possible to purchase one more unit. As market demand is finite, beyond a certain point all purchased units would become dead stock. Turning this priority list into the final purchase quantities only requires introducing a stopping criterion and summing up the quantities per product. In practice, nonlinear ordering constraints further complicate this task, but, for the sake of simplicity, we will put these constraints aside at this stage of the discussion.
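The following sketch illustrates the mechanism just described with a made-up, diminishing-returns scoring function; it is not Lokad’s actual economic model, only the ranking, cutting and summing logic.

```python
from collections import Counter

def unit_scores(products, max_units=10):
    """Yield (score, sku, unit_rank); the score of each additional unit decreases,
    reflecting diminishing returns (illustrative formula, not a prescribed one)."""
    for p in products:
        for k in range(1, max_units + 1):
            p_sold = p["p_first_unit_sold"] ** k  # k-th unit is less likely to sell
            score = p_sold * p["unit_margin"] - (1 - p_sold) * p["carrying_cost"]
            yield score, p["sku"], k

products = [
    {"sku": "A", "p_first_unit_sold": 0.9, "unit_margin": 20.0, "carrying_cost": 4.0},
    {"sku": "B", "p_first_unit_sold": 0.6, "unit_margin": 35.0, "carrying_cost": 5.0},
]

ranked = sorted(unit_scores(products), reverse=True)               # most profitable unit first
retained = [(sku, k) for score, sku, k in ranked if score > 0.0]   # stopping criterion: positive return

purchase_quantities = Counter(sku for sku, _ in retained)          # sum retained units per product
print(purchase_quantities)
```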

Prioritizing decisions is a very natural operation from a Quantitative Supply Chain point of view. As every decision is associated with a financial outcome expressed in dollars, ranking the decisions from the most profitable to the least profitable is straightforward. Thus, many, if not most, of the dashboards that compile the suggested supply chain decisions can be expected, in practice, to be prioritized lists of decisions. These dashboards contain lists with highly profitable decisions listed at the top and very unprofitable ones listed at the bottom. Alternatively, supply chain practitioners may decide to truncate the lists when decisions are unprofitable. However, there are frequently insights to be gained from being able to inspect decisions that happen to be just below the profitability threshold - even if the company obviously isn’t expected to act on those unprofitable entries.

In order to deliver this type of decision-driven dashboard, the software solution supporting the Quantitative Supply Chain needs to numerically explore vast amounts of possible decisions. For example, the solution should be able to consider the financial impact of purchasing every single unit, unit by unit, for every single product in every single location. Not surprisingly, this operation may require substantial computing resources. Fortunately, nowadays, computing hardware is capable of dealing with even the largest global supply chains. Assuming that the underlying software solution is suitably architectured for Quantitative Supply Chain, the scalability of the data processing should remain a non-issue as far as supply chain teams are concerned.

Whiteboxing the numerical results

Systems derisively referred to as black boxes - in supply chain and other fields alike - are systems that generate outputs that cannot be explained by the practitioners who interact with them. The Quantitative Supply Chain, with its specific focus on an automated data pipeline, also faces the risk of delivering what supply chain teams would classify as “black boxes”. Indeed, the financial implications of supply chain decisions are very important for a company, and, while a newer system can improve the situation, it can also potentially create disasters. While automation is highly desirable, it does not mean that the supply chain team isn’t expected to have a thorough understanding of what is being delivered by the data pipeline supporting the Quantitative Supply Chain initiative.

The term whiteboxing refers to the effort needed to make the solution fully transparent for the benefit of supply chain teams. This approach emphasizes that no technology is transparent by design. Transparency is the end result of a specific effort, which is part of the initiative itself. Even a simple linear regression can generate baffling results in practice. Putting aside a few exceptional individuals, most people do not have an intuitive understanding of what the linear model’s “expected” output is whenever 4 dimensions or more are involved. Yet, supply chain problems often involve dozens of variables, if not hundreds. Thus, even simplistic statistical models are de facto black boxes for supply chain practitioners. Naturally, when machine learning algorithms are used, as is recommended by Quantitative Supply Chain, they leave the practitioners even more in the dark.

While the black box effect is a real problem, a realistic solution does not lie in simplifying data processing into calculations that are immediately intuitive to the human mind. This approach is a recipe for extreme inefficiency, which entirely demolishes the benefits of modern computing resources, which can otherwise be used to tackle the raw complexity of modern supply chains. Dumbing down the process is not the answer. Whiteboxing is.

Even the most complex supply chain recommendations can be made largely transparent to supply chain practitioners, by simply decomposing the inner calculations with well-chosen financial indicators, which represent the economic drivers that support the recommendation itself. For example, instead of merely displaying a bare table with two columns product and quantity as a suggested purchase order, the table should include a couple of columns that aid decision-making. Those extra columns can include the current stock, the total demand over the last month, the expected lead time, the expected financial cost of stock out (if no order is passed), the expected financial cost of overstock (risk associated with the suggested order), etc. The columns are crafted in order to give the supply chain team quick sanity checks of the suggested quantities. Through the columns, the team can rapidly establish trust in the numerical output, and can also identify some of the weaknesses of a solution that needs further improvement.
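For instance, a whiteboxed decision row might look like the following; the column names and values are hypothetical, chosen only to illustrate the idea of publishing the explanatory indicators alongside the suggested quantity.

```python
# One row of a whiteboxed suggested purchase order: the decision itself plus the
# economic indicators that make it checkable at a glance (illustrative values).
suggested_order = {
    "product": "ABC",
    "suggested_qty": 24,
    "current_stock": 6,
    "demand_last_30_days": 31,
    "expected_lead_time_days": 12,
    "expected_stockout_cost_if_no_order": 480.0,  # dollars
    "expected_overstock_risk_of_order": 75.0,     # dollars
}

for column, value in suggested_order.items():
    print(f"{column:38s} {value}")
```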

Extending the dashboards for whiteboxing purposes is partly an art. Generating millions of numbers is easy, even with no more computing resources than those available on a smartphone. Yet, generating 10 numbers worthy of being looked at on a daily basis is very difficult. Thus, the core challenge is to identify a dozen KPIs, or fewer, that are sufficient to shed light on the recommended supply chain decisions. Good KPIs typically require a lot of work; they should not be naïve definitions, which are usually misleading in supply chain. For example, even a column as simple as the “unit purchase price” can be highly misleading if the supplier happens to offer volume discounts, thereby making the purchase price dependent on the quantity being purchased.

Strategic dashboards

While the focus on small-scale decisions is necessary - as it’s one of the few approaches that lends itself to quantitative performance assessments - the supply chain may also need to be adjusted in bigger, more disruptive ways to ramp up its performance to the next level. For example, purchasing more well-chosen units of stock marginally increases the service level. However, at some point, the warehouse is full, and no additional unit can be purchased. In this situation, a bigger warehouse should be considered. In order to assess the impact of lifting this limitation, we can remove the warehouse capacity constraint from the calculations and evaluate the overall financial upside of operating with an arbitrarily large warehouse. The supply chain management can then keep an eye on the financial indicator associated with the friction cost imposed by the warehouse capacity itself, and can decide when it’s time to consider increasing warehousing capacity.
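The underlying calculation can be sketched as follows, with a deliberately naive placeholder standing in for the actual optimization: the same logic is run with and without the capacity limit, and the difference in expected outcome is the opportunity cost of the constraint.

```python
def optimize_expected_profit(candidate_unit_profits, capacity=None):
    """Greedy placeholder optimization: take the most profitable units first,
    stopping at the capacity limit if one is given."""
    ranked = sorted(candidate_unit_profits, reverse=True)
    if capacity is not None:
        ranked = ranked[:capacity]
    return sum(u for u in ranked if u > 0)

# Illustrative per-unit expected profits (dollars) for candidate stock units.
candidate_unit_profits = [12.0, 9.5, 7.0, 4.0, 2.5, 1.0, -0.5]

with_constraint = optimize_expected_profit(candidate_unit_profits, capacity=4)
without_constraint = optimize_expected_profit(candidate_unit_profits)

print(f"opportunity cost of the capacity limit: {without_constraint - with_constraint:.2f} dollars")
```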

Typically, supply chains operate based on numerous constraints that cannot be revised on a daily basis. Those constraints can include working capital, warehousing capacity, transportation delays, production throughput, etc. Each constraint is associated with an implicit opportunity cost for the supply chain, which typically translates into more stock, more delays or more stock-outs. The opportunity cost can be assessed through the performance gains that would be obtained by removing or weakening the constraint itself. While some of those simulations may prove to be difficult to implement, frequently, they are no more difficult than optimizing the routine decisions, i.e. establishing the purchase order quantities.

The Quantitative Supply Chain emphasizes that the opportunity costs associated with those constraints should be part of the production data pipeline and, typically, should be materialized with dedicated dashboards, which are specifically intended to help the supply chain management decide when it’s time to bring bigger changes to their supply chain. These types of dashboards are referred to as strategic dashboards. This approach differs from the traditional supply chain practice, which emphasizes ad hoc initiatives when it is felt that the supply chain is about to hit an operating limit. Indeed, the KPIs delivered by strategic dashboards are refreshed on a daily basis, or more frequently if needed, just like the rest of the data pipeline. There is no need for a last-minute, last-ditch effort, because the KPIs are up-to-date and ready to capitalize on the insights gained from a long-lived initiative.

The strategic dashboards support the decision-making process of the supply chain management. As they are part of the data pipeline, whenever the market starts evolving at a faster pace than usual, the KPIs remain up-to-date with the company’s present situation. This approach avoids the traditional pitfalls associated with ad hoc investigations, which invariably add further delays to already overdue problems. It also largely mitigates the alternative problem, which is hasty strategic decisions that turn out to be unprofitable - a regrettable outcome that could have been anticipated right from the start.

Inspector dashboards

Supply chains are both complex and erratic. Those properties make the debugging of the data pipeline a fearsomely challenging task. Yet, this data pipeline is the spinal cord of the Quantitative Supply Chain initiative. Data processing mistakes, or bugs, can happen anywhere within the data pipeline. Worse, the most frequent type of issue is not the incorrect formula, but ambiguous semantics. For example, at the beginning of the pipeline, the variable stockinv might refer to the stock available (where negative values are possible) while later on, the same variable is used with a stock on hand interpretation (where values are expected to be positive). The ambiguous interpretation of the variable stockinv can generate a wide variety of incorrect behaviors, ranging from system crashes - which are obvious, hence only moderately harmful - to a silent and pervasive corruption of the supply chain decisions.

As supply chains are nearly always built out of a unique mix of software solutions set up over the years, there is no hope of gaining access to a “proven” software solution that is free of bugs. Indeed, most of the issues arise at the system boundaries, when reconciling data originating from different systems, or even merely reconciling data originating from different modules within the same system. Thus, no matter how proven the software solution might be, the tooling must be able to readily support the debugging process, as those kinds of issues are bound to happen.

The purpose of the inspector dashboards is to provide fine-grained views for a minute inspection of the supply chain datasets. Yet, those dashboards are not simple drill-downs into the input data tables. Such drill-downs, or similar slice-and-dice approaches on the data, would miss the point. Supply chains are all about flows: the flow of materials, the flow of payments, etc. Some of the most severe data issues happen when the flow’s continuity is “logically” lost. For example, when moving goods from warehouse A to warehouse B, warehouse B’s database might be missing a few product entries, hence generating subtle data corruptions, as units originating from warehouse A get received in warehouse B without getting properly attached to their product. When numerical results feel odd, those inspector dashboards are the go-to option for the Supply Chain Scientist to perform a quick investigation of sample data.

In practice, an inspector dashboard provides a low-level entry point, such as a product code or a SKU, and it consolidates all the data that is associated with this entry point into a single view. When goods flow through many locations, as happens in aerospace supply chains for example, the inspector dashboard usually attempts to reconstitute the trajectories of the goods, which may have transited not only through multiple physical locations, but through multiple systems as well. By gathering all this data in one place, it becomes possible for the Supply Chain Scientist to assess whether the data makes sense: is it possible to identify where the goods that are being shipped originate from? Are stock movements aligned with official supply chain policies, etc.? The inspector dashboard is a “debugging” tool, as it’s designed to bring together data that is tightly coupled, not from an IT viewpoint, but from a supply chain viewpoint.
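A minimal sketch of such a consolidation, with hypothetical table and field names:

```python
def inspect(sku, tables):
    """Gather every record that touches a given SKU across the extracted tables,
    sorted by date, so that the flow of goods can be eyeballed as one timeline."""
    events = []
    for table_name, rows in tables.items():
        for row in rows:
            if row.get("sku") == sku:
                events.append((row.get("date", ""), table_name, row))
    return sorted(events, key=lambda e: e[0])

# Illustrative extracts from several systems, keyed by the same SKU.
tables = {
    "purchase_orders": [{"sku": "W1", "date": "2017-03-02", "qty": 4}],
    "stock_moves":     [{"sku": "W1", "date": "2017-03-10", "from": "A", "to": "B"}],
    "sales":           [{"sku": "W1", "date": "2017-03-15", "qty": 1}],
}

for date, table, row in inspect("W1", tables):
    print(date, table, row)
```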

One of the most bizarre issues that Lokad faced while investigating supply chain datasets was the case of the teleported parts. The company - in this case an airline - had aircraft parts stocked both in mainland Europe and Southern Asia. As aircraft safety is an absolute requirement to operate, the company kept impeccable stock movement records for all its parts. Yet, using a newly devised inspector dashboard, the Lokad team came to realize that some parts were moving from Asia to Europe, and vice-versa, supposedly in only 2 or 3 minutes. As aircraft parts were transported by aircraft, the transportation time would have been expected to be at least a dozen hours - certainly not minutes. We immediately suspected some timezone or other computer time issue, but the time records also proved to be impeccable. Then, investigating the data further, it appeared that the parts that had been teleported were indeed being used and mounted on aircraft at their landing location, a finding which was all the more baffling. By letting the supply chain teams have a look at the inspector dashboards themselves, the mystery was finally uncovered. The teleported parts were aircraft wheels consisting of two half-wheels plus a tire. The wheel could be dismounted by taking apart the two half-wheels and the tire. In the most extreme case, if the two half-wheels and the tire were removed, then there was nothing physically left. Hence, the fully dismounted wheel could be freely remounted anywhere, entirely disregarding its original location.

The inspector dashboards are the low-level counterpart of the data health dashboards. They focus on fully disaggregated data, while data health dashboards usually take a more high-level stance on the data. Also, inspector dashboards are typically an integral part of the whiteboxing effort. When facing what appears to be a puzzling recommendation, supply chain practitioners need to have a close look at a SKU or a product in order to figure out whether the recommended decision is reasonable or not. The inspector dashboard is typically adjusted for this very purpose, by including many intermediate results that contribute to the calculation of the final recommendation.

Assessing success

It might seem somewhat of a paradox, but while the Quantitative Supply Chain puts significant emphasis on numerical methods and measurements, our experience tells us that metrics tend to tell us too little, and often too late, about whether an initiative is on the right track. Nearly all metrics can be gamed and this usually comes at the expense of the chosen approach’s sustainability. Thus, Quantitative Supply Chain seeks obvious improvements: if improvements are so subtle that it takes advanced measurements to detect them, then the initiative was most likely not worth the effort and should be considered a failure. On the contrary, if improvements are noticeable and consistent across many metrics, and the supply chain as a whole feels more agile and more reactive than ever, then the initiative has most likely succeeded.

Metrics can be gamed

There is a reason why engineers are rarely assessed based on metrics: they are just too good at gaming the metrics, that is, taking advantage of the metrics for their own interests rather than serving the interests of the company. Supply chains are complex and nearly all simple metrics can be taken advantage of, in ways that might be thoroughly destructive for the company. It might feel that this issue is just a matter of closing the loopholes that lurk within the metrics. Yet, our experience indicates that there is always one more loophole to be found.

A tale of metric reverse engineering

Let’s take a fictitious e-commerce company as an example. The management decides that service levels need to be improved and thus the service level becomes the flagship metric. The supply chain team starts working in accordance with this metric, and comes up with a solution which consists of vastly increasing the stock levels, thereby incurring massive costs for the company.

As a result, the management changes the rules: a maximum amount of stock is defined, and the team has to operate within this limit. The team revises its figures, and realizes that the easiest way to lower the stock levels is to flag large quantities of stock as “dead”, which triggers aggressive promotions. Stock levels are indeed lowered, but gross margins are also significantly reduced in the process.

Once again, the problem doesn’t go unnoticed, and the rules are changed one more time. A new limit is introduced on the amount of stock that can end up being marked as “dead”. Implementing this new rule takes a lot of effort because the supply chain team suddenly struggles with “old” stock that will need to be heavily discounted. In order to cope with this new rule, the team increases the share of air transport in relation to sea transport. Lead times are reduced, stocks are lowered, but operating costs rise fast.

In order to deal with the operating costs that are getting out of control, management changes the rules once more, and puts an upper bound on the percentage of goods that can be transported by air. Once again, the new rule wreaks havoc, because it triggers a series of stock-outs that could have been prevented by using air transport. As a result of being forced to operate under increasingly tight constraints, the team starts giving up on leveraging the price breaks offered by suppliers. Purchasing smaller quantities is also a way of reducing lead times. Yet, once again, gross margins are reduced in the process.

Getting the purchase prices back on track proves to be a much more elusive target for management. No simple rule can cope with this challenge, and a myriad of price targets for each product subcategory are introduced instead. Many targets turn out to be unrealistic and lead to mistakes. All in all, the supply chain picture is less and less clear. Pressured from many sides, the supply chain team starts tweaking an obscure feature of the demand planning process: the product substitution list.

Indeed, management realized early on in the process that some stock-outs were not nearly as impactful as others, because some of the products that were missing had multiple near-perfect substitutes. Consequently, everybody agreed that stock-outs on those products could be largely discounted when computing the overall service level. However, the supply chain team, which is now operating under tremendous pressure, starts stretching the purpose of this list a notch or two beyond its original intent: products that are not that similar get listed as near-perfect substitutes. Service level metrics improve, but the business does not.

The pitfall of success

Metrics can be gamed and if teams are given toxic incentives, metrics will most likely be leveraged in a misleading manner. However, the situation is not nearly as bad as it might seem. In fact, our experience indicates that except for really dysfunctional company cultures, employees don’t generally tend to sabotage their line of work. Quite the contrary, we have observed that most employees take pride in doing the right thing even if it means that company policies need to be stretched a little.

Therefore, instead of taking freedom away from the team in charge of implementing the supply chain optimization strategy, it’s important to encourage the team to craft a set of metrics that sheds light on the supply chain initiative as a whole. The role of management is not to enforce rules built on the basis of those metrics, but rather to challenge the strategic thinking that underlies those metrics. Frequently, the immediate goal should not even be to improve the metric values, but to improve the very definition of the metrics themselves.

In reality, not all metrics are equally valuable for a business. It usually takes considerable effort to craft metrics that give a meaningful perspective on the business. This work requires not only a good understanding of the business strategy, but also a profound knowledge of the underlying data, which comes with a myriad of artifacts and other numerical oddities. Thus, metrics should above all be considered as works in progress.

We have found that a strong indicator of success in any supply chain project is the quality of the metrics that are being established throughout the initiative. Yet, it’s a bit of a paradox, but there isn’t any reasonable metric to actually assess the relevance of those metrics. Here are a few elements that can help evaluate the quality of metrics:

  • Is there a consensus within the different supply chain teams that the metrics capture the essence of the business? Or that the business perspectives implicitly promoted by the metrics are neither short-sighted nor blindsided?
  • Do the metrics come with a real depth when it comes to reconciling the numbers with economic drivers? Simplicity is desirable, but not at the expense of getting the big picture wrong.
  • Are the data artifacts properly taken care of? Usually, there are dozens of subtle “gotchas” that need to be taken care of when processing the data extracted from the company systems. Our experience tells us to be suspicious when raw data appears to be good enough, as this usually means that problems haven’t even been identified as such.
  • Do decisions generated from the chosen metrics make sense? If a decision that is otherwise aligned with the metrics does not feel like it makes any sense, then it most probably doesn’t; and the problem frequently lies in the metric itself.

In many ways, crafting good metrics is like orienting gravity towards the pit of success: unless something intervenes, the natural course of action is to roll down the slope towards the bottom, which happens to be precisely where success lies. Knowing the exact depth of where the bottom lies is not even strictly required, as long as every step of the journey towards the bottom is making things better for the company.

Sane decisions lead to better performance

In supply chain, even the best metrics come with a major drawback: numbers are usually late to the party. Lead times might be long and the decisions made today might not have any visible impact for weeks, if not months. In addition, Quantitative Supply Chain, which puts significant emphasis on iterative and incremental improvements, complicates this matter further. Yet, using non-incremental methods would be even worse, albeit for other reasons. Therefore, metrics can’t be the only signals used for assessing whether the initiative is on the right track.

Generating sane decisions is a simple, yet underestimated, signal of superior performance. Indeed, unless your company is already doing exceedingly well with its supply chain, it is most likely that the systems keep producing “insane” decisions that are caught and fixed manually by the supply chain teams. The purpose of all the “alerts”, or similar reactive mechanisms, is precisely to mitigate the ongoing problems through ongoing manual corrective efforts.

Bringing the Quantitative Supply Chain initiative to a point where all the decisions - generated in a fully robotized manner - are deemed sane or safe is a much bigger achievement than most practitioners realize. The emphasis on “robotized” decisions is important here: to play by the rules, no human intervention should be needed. Then, by “sane”, we refer to decisions that still look good to practitioners even after spending a few hours investigating the case; something that naturally can’t be done on a regular basis, due to the sheer number of similar decisions to be made every day.

Our experience indicates that whenever the automated decisions are deemed reliable, performance materializes later on, when those decisions actually get put to the test of being used “in production”. Indeed, the “sanity” test is a very strict test for the decision-making logic. Unless your company is already leveraging something very similar to Quantitative Supply Chain, most likely the existing systems your company has in place are nowhere near passing this test. As a result, uncaught mistakes are being made all the time, and the company ends up paying a lot for this ongoing stream of problems.

Then, from an operational viewpoint, as soon as supply chain decisions become automated, the supply chain teams become free from the servitude of feeding their own system with a never-ending stream of manual entries. Those productivity gains can be reinvested where it actually matters: to refine the fine print of the supply chain strategy itself, or to monitor suppliers more closely in order to address supply chain problems that originate from their side. The increase in performance, achieved through pure quantitative optimization of the supply chain, is intensified by the gains obtained by the supply chain teams who can finally find the time to improve the processes and workflows.