An Overview of Quantitative Supply Chain


Quantitative Supply Chain Optimization, or Quantitative Supply Chain for short, is a broad perspective on supply chains that, simply put, aims to make the most of human intelligence, augmented with the capabilities of modern computing resources. Yet, this perspective is not all-inclusive. It does not pretend to be the endgame solution to supply chain challenges, but rather one complementary approach that can nearly always be used to improve the situation.

Quantitative Supply Chain helps your company to improve quality of service, to reduce excess stocks and write-offs, to boost productivity, to lower purchase prices and operating costs … and the list goes on. Supply chain challenges vary widely from one situation to another. Quantitative Supply Chain embraces this diversity and strives to address it, all while facing the resulting complexity. However, for supply chain practitioners who are used to more classical approaches to optimizing their supply chains, Quantitative Supply Chain might feel a bit bewildering.

In the following, we review the ingredients that are required to make the most of the quantitative perspective on supply chain. We examine and clarify the ambitions of a Quantitative Supply Chain initiative. We review the roles and the skills of the team tasked with the execution of the initiative. Finally, we give a brief overview of the methodology associated with Quantitative Supply Chain.


The ambition

Except for very small companies, a supply chain involves millions of decisions per day. For every unit held in stock, every day, the company is making the decision to keep the unit where it is, rather than to move it somewhere else. What’s more, the same logic applies to non-existent stock units that could be produced or purchased. Doing nothing is already a decision in itself.

Quantitative Supply Chain is about optimizing the millions of decisions that need to be made by the company every day, and since we are talking about millions, if not billions of decisions per day, computers play a central role in this undertaking. This isn’t surprising since, after all, supply chains were historically one of the first corporate functions, after accounting, to be digitalized back in the late 1970s. Yet, Quantitative Supply Chain is about taking digitalization one step further.

Here we have to acknowledge that misguided attempts to roll out the “supply chain system of the future” have been frequent over the last two decades. Too often, such systems did nothing but wreak havoc on supply chains, combining black-box effects and automation gone wrong, and thereby generating so many bad decisions that problems could no longer be fixed by human intervention.

To some extent, Quantitative Supply Chain was born out of those mistakes: instead of pretending that the system somehow knows the business better than its own management, the focus needs to be placed on executing the insights generated by the management, but with a higher degree of reliability, clarity and agility. Software technology done right is a capable enabler, but, considering the present capabilities of software, removing people entirely from the solution isn’t a realistic option.

This ambition has one immediate consequence: the software that the company uses to keep track of its products, materials and other resources isn’t going to be the same as the software the company needs to optimize its decisions. Indeed, be it an ERP, a WMS, an MRP or an OMS - all such software primarily focuses on operating the company’s processes and its stream of data entries. Don’t get us wrong, there are massive benefits in streamlining data entries and automating all clerical tasks. Yet, our point remains that these tasks do not address in the slightest the challenge at hand, which is to increase the capacity of your company to execute human insights, and at the scale required by your supply chain.

Then, there is no optimization without measurement. Therefore, Quantitative Supply Chain is very much about measurements - as its name suggests. Supply chain decisions - buying stock, moving stock - have consequences, and the quality of such decisions should be assessed financially (for example in dollars) with sound business perspectives. However, having good, solid metrics takes effort, significant effort. One of the goals of Quantitative Supply Chain is to help the company establish such metrics, which also plays a critical role during a project's later stages, in assessing the return on investment (ROI) of the overall supply chain initiative.
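As an illustration of what assessing decisions financially can look like, the minimal sketch below scores a reorder decision in dollars: each additional unit ordered earns the gross margin if it sells, and incurs a carrying cost if it does not. All names and numbers are hypothetical placeholders, not a prescribed metric.

```python
# A minimal sketch of scoring a reorder decision in dollars.
# All names and numbers are hypothetical, for illustration only.

def decision_score(units, unit_margin, carrying_cost, p_demand_at_least):
    """Expected dollar outcome of ordering `units` extra units."""
    score = 0.0
    for k in range(1, units + 1):
        p = p_demand_at_least(k)          # probability that the k-th unit sells
        score += p * unit_margin          # margin earned if it sells
        score -= (1 - p) * carrying_cost  # holding cost if it does not
    return score

# Toy demand model: the chance that demand reaches k units decays geometrically.
p = lambda k: 0.8 ** k
print(round(decision_score(5, unit_margin=12.0, carrying_cost=3.0,
                           p_demand_at_least=p), 2))
```

Scoring each incremental unit separately is what makes such a metric actionable: the order quantity can be grown until the expected return of the next unit turns negative.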

Finally, as mentioned previously, Quantitative Supply Chain is not an all-encompassing paradigm. It does not have the ambition to fix or improve everything in the company’s supply chain. It doesn't claim to help you find trusted suppliers or reliable logistics partners. It doesn't promise to help you hire great teams and keep their spirits high. Yet, thanks to its very specific focus, Quantitative Supply Chain is fully capable of delivering tangible results.

The project roles

Quantitative Supply Chain requires a surprisingly low amount of human resources, even when handling somewhat large-scale supply chains. However, such an initiative does require specific resources, which we detail in this section. But before delving into the different roles and their specificities, let’s start by mentioning one core principle of Quantitative Supply Chain: the company should capitalize on every human intervention.

This principle goes against what happens in practice with traditional supply chain solutions: human efforts are consumed by the solution, not capitalized. In order to keep producing an unending stream of decisions, the solution requires an unending stream of manual entries. Such entries can take many forms: adjusting seasonal profiles, dealing with exceptions and alerts, fixing odd forecast values, etc.

Quantitative Supply Chain seeks to reverse this perspective. It’s not just that human labor is expensive, it’s that supply chain expertise combined with acute business insights is too rare and too precious to be wasted on repetitive tasks. The root cause of manual intervention should be fixed instead: if forecast values are off, then there is no point in modifying the values themselves; it’s the input data or the forecasting algorithm itself that needs to be fixed. Fixing the symptoms only guarantees dealing with the same problems endlessly.

The size of the team required to execute a quantitative supply chain initiative varies depending on the scale of the supply chain itself. At the lower end of the spectrum, it can be less than one FTE (full-time equivalent), typically for companies below $20 million in turnover. At the higher end of the spectrum, it can involve a dozen people; but then, in this case, several billion dollars’ worth of inventory is typically at stake.

The Supply Chain Leader: Quantitative Supply Chain is a change of paradigm. Driving change requires leadership and support from the top management. Too frequently, supply chain leadership does not feel it has the time to get directly involved in what is perceived as the “technicalities” of a solution. Yet, Quantitative Supply Chain is about executing strategic insights at scale. Not sharing the strategic insights with the team in charge of the initiative is a recipe for failure. Management is not expected to come up with all the relevant metrics and KPIs – as it takes a lot of effort to put these together – but management is certainly expected to challenge them.

The Supply Chain Coordinator: while the Quantitative Supply Chain initiative itself is intended to be very lean on staff, most supply chains aren’t, or at the very least, aren’t that lean. Failure to bring everybody on board can result in confusion and a slowdown of the initiative. Thus, the Coordinator's mission is to gather all the necessary internal feedback the initiative requires and to communicate with all the parties involved. The Coordinator clarifies the processes and decisions that need to be made, and gets feedback on the metrics and the KPIs that will be used to optimize those decisions. He also makes sure that the solution embraces the company workflows as they are, while preserving the possibility of revising those workflows at a later stage of the initiative.

The Data Officer: Quantitative Supply Chain is critically dependent on data, and every initiative needs to have reliable access to data from a batch processing perspective. In fact, the initiative does not merely involve reading a few lines in the company system; rather, it involves reading the entire sales history, the entire purchase history, the entire product catalog, etc. The Data Officer is typically delegated by the IT department to support the initiative. He is in charge of automating all the data extraction logic and getting this logic scheduled for daily extractions. In practice, the efforts of the Data Officer are mostly concentrated at the very beginning of the initiative.

The Supply Chain Scientist: he uses the technology - more on this to follow - for combining the insights that have been gathered by the Coordinator with the data extracted by the Data Officer in order to automate the production of decisions. The scientist begins by preparing the data, which is a surprisingly difficult task and requires a lot of support from the Coordinator, who will need to interact with the many people who produced the data in the first place, to clarify anything that may be uncertain. He formalizes the strategy so that it can be used to generate decisions - for instance, the suggested reorder quantities. Finally, the Supply Chain Scientist equips the whole data pipeline with dashboards and KPIs to ensure clarity, transparency and control.

For mid-sized companies, having the same person fulfill both the Coordinator and the Data Officer roles can be extremely efficient. It does require a range of skills that is not always easy to find in a single employee; however, if such a person does exist in the organization, he tends to be an asset for speeding up the initiative. Then, for larger companies, even if the Coordinator is not highly familiar with the company’s databases at the beginning of the initiative, it’s a big plus if the Coordinator is capable of gaining a certain level of familiarity with the databases as the initiative goes on. Indeed, the IT landscape keeps changing, and anticipating how the change will impact the initiative vastly helps to ensure a smooth ongoing execution.

Managed subscription plans of Lokad. Filling the Supply Chain Scientist position might be a challenge for companies that have not cultivated any data science expertise for years. Lokad supports the quantitative supply chain initiatives of such companies by providing an “expert-as-a-service” through its Premier subscription plan. On top of delivering the necessary coaching for the initiative to take place, Lokad also provides the time and dedication it takes to implement the logic that computes the decisions, and the dashboards that give management the clarity and control required to gain trust in, and an understanding of, the initiative itself.


The technology

So far, we have remained rather vague concerning the software technology required to support Quantitative Supply Chain. Yet, Quantitative Supply Chain is critically dependent on the technology stack that is used to implement it. While, conceptually, every piece of software could be re-implemented from scratch, the Supply Chain Scientist requires an incredible amount of support from his stack to be even reasonably productive. Then, certain capabilities, such as forecasting and numerical optimization, require significant prior R&D efforts that go well beyond what the Supply Chain Scientist can deliver during the course of the initiative.

The first requirement of Quantitative Supply Chain is a data platform with programmatic capabilities, and, naturally, having access to a data platform specifically tailored for handling supply chain data and supply chain problems is a sure advantage. We are referring to a data platform because, while any desktop workstation can store multiple terabytes nowadays, it does not mean that this desktop workstation will offer the other desirable properties for carrying out the initiative: reliability against hardware failure, auditability for all accesses, compatibility with data exports, etc. In addition, since supply chain datasets tend to be large, the data platform should be scalable, or in other words, capable of processing large amounts of data in a short amount of time.

The data platform requires programmatic capabilities, which refers to the possibility of implementing and executing nearly arbitrary data processing logic. Such capabilities are delivered through a programming language. Programming is correctly perceived as a very technical skill, and many vendors take advantage of the fear inspired by the idea of having to cope with a solution that requires “programming” to push simple user interfaces with buttons and menus to the users. However, whenever the supply chain teams are denied programmatic capabilities, Excel sheets take over, precisely because Excel offers programmatic capabilities through formulas that can be arbitrarily complex. Far from being a gadget, programmatic capabilities are a core requirement.
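To make the notion of programmatic capabilities concrete, the sketch below shows the kind of ad-hoc logic a practitioner routinely needs - here, computing a stock cover per SKU. The snippet is written in Python purely for illustration, and the data and names are hypothetical; in a spreadsheet, the equivalent would be a column of formulas.

```python
# A hypothetical fragment of "programmatic" supply chain logic: compute the
# average daily sales and the resulting stock cover for each SKU.

sales_30d = {"SKU-A": 90, "SKU-B": 15}    # units sold over the last 30 days
on_hand = {"SKU-A": 120, "SKU-B": 10}     # current stock levels

for sku, units in sales_30d.items():
    daily = units / 30                    # average daily sales
    cover = on_hand[sku] / daily if daily else float("inf")
    print(f"{sku}: {cover:.1f} days of stock cover")
```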

Finally, there are significant benefits in having a data platform tailored for supply chain. In fact, the need for a data platform of some kind is hardly specific to supply chain: quantitative trading, as performed by banks and funds, comes with similar needs. However, supply chain decisions don’t require sub-millisecond latencies like high-frequency trading does. The design of a data platform is a matter of engineering trade-offs as well as a matter of a software ecosystem, which begins with the supported data formats. Those engineering trade-offs and the software ecosystem should be aligned with the supply chain challenges themselves.

The second requirement of Quantitative Supply Chain is a probabilistic forecasting engine. This piece of software is responsible for assigning a probability to every possible future. Although this type of forecast is a bit disconcerting at first, because it goes against the intuitive notion of a single forecast of the future, the “catch” actually lies in the uncertainty: the future isn’t certain and a single forecast is guaranteed to be wrong. The classic forecasting perspective denies uncertainty and variability, and as a result, the company ends up struggling with a forecast that was supposed to be accurate, but isn’t. A probabilistic forecasting engine addresses this problem head-on by quantifying the uncertainty with probabilities.

Probabilistic forecasting in supply chain is typically a two-stage process, starting with a lead time forecast followed by a demand forecast. The lead time forecast is a probabilistic forecast: a probability is assigned to every possible lead time duration, usually expressed in days. Then, the demand forecast is a probabilistic forecast as well, and this forecast is built on top of the lead time forecast provided as an input. In this manner, the horizon to be covered by the demand forecast matches the lead times, which are themselves uncertain.
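As an illustration of this two-stage composition, the sketch below combines a lead time distribution with a daily demand distribution to obtain the distribution of demand over the lead time. It assumes stationary daily demand that is independent across days, and every distribution shown is a hypothetical placeholder, not the output of an actual forecasting engine.

```python
import numpy as np

# Hypothetical probabilistic forecasts.
lead_time = {5: 0.2, 7: 0.5, 10: 0.3}    # P(lead time = d days)
daily = np.array([0.3, 0.4, 0.2, 0.1])   # P(daily demand = 0, 1, 2, 3 units)

# Demand over an uncertain lead time: for each possible duration d, convolve
# the daily demand with itself d times, then weight by the lead time probability.
demand = np.zeros((len(daily) - 1) * max(lead_time) + 1)
for days, p_days in lead_time.items():
    dist = np.array([1.0])               # demand over 0 days: certainly 0 units
    for _ in range(days):
        dist = np.convolve(dist, daily)  # add one more day of demand
    demand[: len(dist)] += p_days * dist

print(demand.sum())                      # ~1.0: a proper probability distribution
```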

As the probabilistic forecasting engine delivers sets of probability distributions, its forecasting outputs involve a lot more data than the outputs of a classic forecasting engine. This isn’t a blocking problem per se, but in order to avoid facing too much friction while processing a massive set of probabilities, a high degree of cooperation is required between the data platform and the forecasting engine.

Lokad’s technology stack. We could say that Lokad’s technology has been designed to embrace the Quantitative Supply Chain perspective, but in reality, it happened the other way around. Lokad’s R&D teams made a breakthrough in probabilistic forecasting and uncovered data processing models that were a much better fit for supply chain challenges than traditional approaches. We realized the extent of the breakthrough as we were able to observe superior levels of performance once those elements had been put into production. This consequently led Lokad to the Quantitative Supply Chain perspective as a way of clarifying what Lokad teams were actually doing. Lokad has both a data platform – codenamed Envision – and a probabilistic forecasting engine. As you can see, Quantitative Supply Chain has very empirical roots.

The project phases

Quantitative Supply Chain is heavily inspired by software engineering R&D and the best practices known to data science. The methodology is highly iterative, with a low emphasis on prior specification, and a high emphasis on agility and the capacity to recover from unexpected issues and/or unexpected results. As a result, this methodology tends to be perceived as rather surprising by companies that are not themselves deeply involved in the software industry.

The first phase is the scoping phase, which defines which supply chain decisions are intended to be covered by the initiative. This phase is also used to diagnose the expected complexity involved in the decision-making process and the relevant data.

The second phase is the data preparation phase. This consists of establishing an automated set-up that copies all the relevant data from the company systems to a separate analytical platform. It also consists of preparing this data for quantitative analysis.

The third phase is the pilot phase, which consists of implementing an initial decision-making logic that generates decisions, for instance the suggested purchase quantities, and that already outperforms the company's former processes. This logic is expected to be fully automated.

The fourth phase is the production phase, which brings the initiative to cruising speed where the performance is monitored and maintained, and where a consensus is achieved on the desirable degree of refinement for the supply chain models themselves.

The scoping phase is the most straightforward and identifies the routine decisions that the Quantitative Supply Chain initiative intends to cover. These decisions might involve many constraints: MOQs (minimum order quantities), full containers, maximum warehouse capacity, … and these constraints should be closely examined. Then, decisions are also associated with economic drivers: carrying costs, cost of stock-outs, gross margin, … and such economic drivers should also be studied. Finally, the relevant historical data should be identified, along with the systems from which the data will be extracted.
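As an illustration, the output of the scoping phase can be thought of as an explicit, machine-readable inventory of the constraints and economic drivers attached to one class of decisions. The sketch below is hypothetical; all field names and values are placeholders.

```python
from dataclasses import dataclass

# A hypothetical record of the constraints and economic drivers identified
# during scoping for purchase decisions; values are illustrative only.

@dataclass
class PurchaseDecisionScope:
    moq: int                 # minimum order quantity imposed by the supplier
    container_size: int      # units per full container
    warehouse_capacity: int  # maximum units the warehouse can absorb
    carrying_cost: float     # $ per unit of stock held per year
    stockout_cost: float     # $ of penalty per unit of unserved demand
    gross_margin: float      # $ earned per unit sold

scope = PurchaseDecisionScope(
    moq=50, container_size=400, warehouse_capacity=12_000,
    carrying_cost=2.5, stockout_cost=30.0, gross_margin=11.0,
)
```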

The data preparation phase is the most difficult phase; most failures tend to happen at this stage. Gaining access to data and making sense of data is nearly always an underestimated challenge. Operational systems (e.g. ERP / MRP / WMS / OMS) have been designed to operate the company, to keep the company running. Historical data is a by-product of such systems since recording data was not the reason why those systems were implemented in the first place. Thus, many difficulties should be expected in this phase. When facing difficulties, most companies have an unfortunate reflex: let’s step back and write down a full specification. Unfortunately, a specification can only cover the known or expected difficulties. Yet, nearly all the major issues that are encountered in this phase are elements that cannot be planned for.

In reality, problems typically tend to be revealed only when someone actually starts putting the data to the test of generating data-driven decisions. If decisions come out wrong while the logic is considered to be sound, then there is probably a problem with the data. Data-driven decisions tend to be somewhat sensitive to data issues, and therefore actually represent an excellent way of challenging how much control the company has over its own data. Moreover, this process challenges the data in ways that are meaningful for the company. Data quality and an understanding of data are merely means to an end: delivering something of value for the company. It is very reasonable to concentrate the efforts on data issues that have a significant impact on data-driven decisions.

The pilot phase is the phase that puts the supply chain management to the test. Embracing uncertainty with probabilistic forecasts can be rather counter-intuitive. At the same time, many traditional practices such as weekly or monthly forecasts, safety stocks, stock covers, stock alerts or ABC analysis actually do more harm than good. This does not mean that the Quantitative Supply Chain initiative should run loose. In fact, it’s quite the opposite as Quantitative Supply Chain is all about measurable performance. However, many traditional supply chain practices have a tendency to frame problems in ways that are adverse to the resolution of said problems. Therefore, during the pilot phase, one key challenge for the supply chain leadership is to remain open-minded and not reinject into the initiative the very ingredients that will generate inefficiencies at a later stage. You cannot cherish the cause while cursing the consequence.

Then, the Supply Chain Scientist and the technology are both put to the test as well, given that the logic has to be implemented in order to generate the decisions in a relatively short timeframe. The initial goal is merely to generate what is perceived by practitioners as reasonable decisions, decisions that do not necessarily require manual correction. We suggest not underestimating how big of a challenge it is to generate “sound” automated decisions. Traditional supply chain systems require a lot of manual corrections to even operate: new products, promotions, stock-outs... Quantitative Supply Chain establishes a new rule: no more manual entries are allowed for mundane operations, all factors should be built into the logic.
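To illustrate what building all factors into the logic can look like, the hypothetical sketch below encodes MOQ and lot-size rules once and applies them to every suggested order, instead of leaving practitioners to round quantities by hand. The names and numbers are illustrative, not an actual implementation.

```python
# A hypothetical ordering rule, encoded once instead of applied manually.

def apply_ordering_rules(raw_qty: float, moq: int, lot_size: int) -> int:
    """Turn a raw suggested quantity into an orderable one."""
    if raw_qty <= 0:
        return 0
    qty = max(round(raw_qty), moq)      # enforce the supplier MOQ
    remainder = qty % lot_size
    if remainder:
        qty += lot_size - remainder     # round up to a full lot
    return qty

print(apply_ordering_rules(37.4, moq=50, lot_size=12))  # -> 60
```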

The Supply Chain Coordinator is there to gather all factors, workflows and specificities that should be integrated into the decision-making logic. Following this, the Supply Chain Scientist implements the first batch of KPIs associated with the decisions. Those KPIs are introduced in order to avoid black-box effects that tend to arise when advanced numerical methods are being used. It is important to note that the KPIs are devised together with the Supply Chain Leader who ensures that the measurements are aligned with the company's strategy.

The production phase stabilizes the initiative and brings it to cruising speed. The decisions generated by the logic are actively used and their associated results are closely monitored. It typically takes a few weeks to a few months to assess the impact of any given supply chain decision because of the lead times that are involved. Thus, the pace of change of the initiative is slowed down in the production phase, so that it becomes possible to make reliable assessments about the performance of the automated decisions. The initiative enters a continuous improvement phase. While further improvements are always desirable, a balance between the benefits of possible refinements to the logic and the corresponding complexity of those refinements has to be found, in order to keep the whole solution maintainable.

The Supply Chain Coordinator, free from his mundane data-entry tasks, can now focus on the strategic insights proposed by the supply chain management. Usually, desirable supply chain process changes that may have been identified during the pilot phase have been put on hold in order to avoid disrupting operations by changing everything at once. However, now that the decision making logic's pace of change has slowed down, it becomes possible to incrementally revise the processes, in order to unlock performance improvements that require more than better routine decisions.

The Supply Chain Scientist keeps fine-tuning the logic by putting an ever-increasing emphasis on the KPIs and data quality. He is also responsible for revising the logic as subtle flaws or subtle limitations, typically relating to infrequent situations, are uncovered over time. Then, as the processes change, the decision-making logic is revised too, in order to remain fully aligned with the workflows and the strategy. Also, even when internal processes do not change, the general IT and business landscapes keep changing anyway: the Supply Chain Scientist has to ensure that the decision-making logic remains up to date within this constant state of flux.