Antipatterns in supply chain

Supply chain initiatives often fail. The Quantitative Supply Chain is our answer for dramatically lowering these failure rates. However, because the Quantitative Supply Chain focuses on the practices that we know to work, it puts little emphasis on the practices that we know not to work. Worse, it turns out that most of these undesirable practices happen to be the precise root causes behind the high failure rates associated with traditional supply chain methods.

Below, we review the practices - or patterns - that cause most supply chain initiatives to fail. These insights have been hard-earned: for each one, it typically took us not one but multiple failures to grasp the root cause of the problem. We refer to these harmful practices as supply chain antipatterns. An antipattern is a “solution” that backfires: it is a common approach, it feels like a good idea, and yet it invariably fails to deliver the improvement that was originally sought.

Bad leadership

At Lokad, we certainly don’t want to antagonize key supply chain decision makers: they are our prospects and our clients. Yet, at the same time, we feel it is our duty to refuse to close a deal when the solution is guaranteed to fail by design. Frequently, the issue boils down to how the initiative itself is managed. That being said, we acknowledge that supply chain management is hardly the only culprit. Certain vendors promote all the wrong messages to their clients and unabashedly get away with it. Legacy practices and internal politics may also poison the daily life of supply chain management, which might also be used as a scapegoat when things don’t work out as intended. In this section, we list common pitfalls that could be addressed through revised supply chain leadership.

The RFQ from Hell

There might be many areas where RFQs (requests for quotation) make sense. Unfortunately, software is not one of them. Writing the specification for a piece of software is vastly more difficult than writing the piece of software itself. The task is daunting. Once an RFQ process is engaged, companies manage to make the situation worse by introducing consultants, further complicating what is already an over-complicated specification. The RFQ stifles most problem-solving thinking, because the RFQ process assumes that the client already knows the fine print of the desired solution, while the “problem” is, by definition, largely unsolved at the time when the RFQ is written. Moreover, in practice, RFQs promote an adversarial vendor selection process: the good vendors walk away while the bad ones remain. Finally, software is a fast-paced industry, and by the time your company is done with its RFQ process, your competitor will have already rolled out their solution.

The frail POC

Doing a POC (proof of concept) is fine if what you intend to buy is a simple, near-commoditized service, like printing business cards. A supply chain initiative is complicated by design. A supply chain requires coordination across multiple entities. There are multiple layers of data that should be leveraged. There are tens of workflows that need to be taken into account. POCs or small-scale pilots do more harm than good because, by their very design, they neglect a fundamental virtue of a successful supply chain initiative: its capacity to operate at scale. Most people are accustomed to the principle of economies of scale; however, when it comes to supply chain optimization, we primarily tend to deal with diseconomies of scale, whereby good decisions become harder and harder to make as the complexity of the problem grows. Achieving success for a small distribution center does not at all guarantee that the solution will work just as well when dealing with tens of different distribution centers.

Dismissing uncertainty

The future is uncertain, and that uncertainty cannot be wished away. Similarly, the numerical optimization of the supply chain is a difficult problem that cannot be wished away either. Supply chain optimization requires probabilistic forecasts, which are the direct consequence of dealing with uncertain futures. Supply chain optimization also has to cope with the rather counter-intuitive behaviors generated by numerical optimization. Some vendors exploit the desire to keep things simple and easy in order to sell a fantasy practice where all the hurdles are abstracted away. Unfortunately, those hurdles aren’t mere technicalities: those hurdles define what has a chance of actually working for your supply chain. Uncertainty needs to be embraced from a deep numerical perspective. Supply chain management needs to acknowledge and embrace uncertainty as well.

Trusting the intern

If improving the supply chain is of any importance to your company, then the initiative requires the direct involvement of your top-level management. Too frequently, companies cherish the idea of improvement, but then assign an intern or two to the case. While we have met some very smart interns, we have never seen any intern-driven supply chain initiative get anywhere. We have nothing against interns, mind you. They can be smart, driven, out-of-the-box thinkers; but they are nowhere near what your company needs to drive change in its supply chain. Commitment from top-level supply chain management should be a given; otherwise, teams won’t execute. Teams don’t usually have much free time, if any. Unless management makes it clear, through its direct involvement, that the current initiative is a priority, then the initiative isn’t really a priority for anyone, except maybe the poor intern assigned to the case.

Death by planning

Management seeks reassurance, and when it comes to reassurance, nothing looks as good as a solid roadmap with well-defined phases, roles and deliverables. However, if the history of software has taught us anything at all, it is that pre-defined plans don’t usually survive the first week of the initiative. Sometimes, they don’t even survive the first day. When it comes to supply chain optimization, unexpected things will keep happening, and this is somewhat of a frightening prospect. However, rigidifying the initiative through precise planning only makes things worse: the initiative becomes even more fragile when faced with unexpected issues. Instead, the initiative should be made as resilient as possible against the unknown. The capacity to recover from problems is more important than the capacity to eliminate problems upfront. Thus, supply chain management should focus on making the initiative agile rather than well-planned.

Decoupling forecasting from optimization

The traditional perspective on supply chain optimization separates the forecasting process from the decision-making process. While this separation is feasible from a purely technical perspective - using two distinct sets of algorithms, one for forecasting and the other for optimization - from a functional perspective, the team in charge of the forecasting has to be the one in charge of the optimization as well. Indeed, the decision-making logic, in other words the optimization, is numerically highly sensitive to the forecasting logic. Isolating the two perspectives is a recipe for amplifying whatever flaws may already exist at the forecasting level, wreaking havoc on the resulting decisions. The optimization logic should instead cooperate numerically, as much as possible, with the strengths and weaknesses of the forecasting logic.
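
To make this coupling concrete, here is a minimal newsvendor-style sketch (Python, with hypothetical figures, not a description of any particular product) where the reorder quantity is read directly off the forecast demand distribution; any bias in the forecast propagates straight into the purchase order.

```python
import numpy as np

# Minimal newsvendor-style sketch: the reorder quantity is read directly
# off the forecast demand distribution, so any flaw in the forecast
# propagates straight into the decision. All figures are hypothetical.

rng = np.random.default_rng(42)

# Hypothetical probabilistic forecast: 10,000 scenarios of demand over the lead time.
demand_scenarios = rng.gamma(shape=4.0, scale=25.0, size=10_000)

# Economics of the decision (hypothetical figures).
unit_margin = 12.0       # profit earned per unit sold
unit_holding_cost = 3.0  # cost of a unit left over

# Critical ratio of the newsvendor model: the service level that balances
# the cost of a lost sale against the cost of a leftover unit.
critical_ratio = unit_margin / (unit_margin + unit_holding_cost)

# The "optimization" is just a quantile of the forecast distribution.
order_quantity = np.quantile(demand_scenarios, critical_ratio)
print(f"order quantity: {order_quantity:.0f} units at service level {critical_ratio:.0%}")
```

There is no buffer between the two stages: shift the forecast quantile by 10% and the purchase order shifts by 10% as well.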

Frankensteinization of software

It’s hard to achieve consensus in large companies. As a result, while the majority of the stakeholders involved in supply chain might decide to go for a specific vendor, a minority might remain adamant in enforcing their own vision, or wish to opt for certain features of a different product altogether. Since software customization constitutes a profitable business for large software vendors, the vendor is too frequently happy to oblige, inflating the costs and the perceived value in the process. However, good software takes years to write, and when done correctly the end result represents a fine-tuned trade-off between conflicting goals. The near-systematic result of software over-customization by large companies is to strip away the original properties of the product and to make it not better, but worse, by adding more and more layers to it, thereby turning it into a monster. There is no shortage of software vendors. If the solution doesn’t fit your company, move on and pick another vendor. If no vendor fits your company, then either your business is truly unique - which is rare - or perhaps you should revise your requirements.

Buzzword-driven initiatives

Around 2010, it was all the rage in retail to figure out how to leverage weather forecasts in order to refine demand forecasts. In 2012, it was all the rage to factor social media data into demand forecasting. In 2014, Big Data was dominant, only to be replaced by machine learning in 2016. Every year that goes by comes with its new wave of buzzwords. While there is never much harm in revisiting an old problem with a fresh perspective - quite the opposite, actually - losing sight of the core challenges is a near-certain way to endanger whatever initiative is already underway. If it sounds too good to be true, then it probably is. Supply chain improvements are hard-earned. Make sure that any novelty you want to bring in really addresses the core challenges faced by your supply chain.

Bad IT execution

IT is too frequently blamed for project failures. IT is difficult - much more difficult than most people outside of IT ever realize. Yet, it also sometimes happens that IT teams, with the best of intentions, create so much friction through their processes that the initiative is slowed down to the point where the rest of the company just gives up. IT teams need not only to embrace change in the general sense, but also to embrace the specific sort of change that doesn’t compromise positive future changes. Easier said than done.

Beware of IT defense mechanisms

Since IT teams might have felt like the scapegoat more than once in the past when company projects failed, they may have developed certain “defense mechanisms”. One of the most common consists of requesting detailed written specifications for every new initiative. Yet, specifying a software solution tends to be more difficult than actually implementing it. Consequently, this amounts to replacing a complex problem with an even more complex one. Other defense mechanisms consist of drawing a hard line of “requirements”, such as: the software should be hosted on-premise, the software should be compatible with XYZ, the software should have specific security features, and so on. Good software takes years to write. Once the long list of requirements is written down, usually only two types of software vendors remain: those that aren’t compatible with your requirements, and those that lie about being compatible with your requirements.

Underestimating the data effort

While this might seem like a paradox, supply chain initiatives may also fail because IT is too involved in devising the solution and takes it upon itself to prepare the data. Indeed, since IT is insanely complex, and may therefore include rather talented individuals, it may also happen that some IT teams come to think that they know the business better than the rest of the company. The primary undesirable consequence of this line of thought is a constant underestimation of the challenges involved in doing anything with the company data. Crunching data in a meaningful way isn’t about moving megabytes of data back and forth. Rather, it’s about establishing a subtle understanding of how this data reflects the company’s processes and workflows. It is also about understanding the subtle twists, biases and limits of the data, as they happen in the company systems at any given time. IT teams taking over the data preparation is a surefire way to incur unexpected delays, as they gradually realize how much depth was missing from their original vision. Taking all this into account, the reasonable option consists of delegating this role upfront to someone outside of the IT department.

The temptation of the extensible platform

When it comes to enterprise software, there is one thing that vendors have mastered: the art of being an “extensible” platform that comes with many modules, which happen to represent many upsell opportunities. However, platforms don’t play well together, and functional overlaps - that is, two platforms internally competing for the same function within your company - appear very soon. Two overlapping platforms are an IT nightmare for any company, and typically result in synchronization mechanisms that are hard to set up and even harder to maintain. Thus, while it’s tempting to go for an all-encompassing solution, the reasonable option nearly always lies in opting for narrow applications that do one thing and do it well. Having dozens of narrow applications to maintain is straightforward, while managing two large platforms - with equally large functional overlaps - is hellish.

Unreliable data extractions

Data is like blood to a Quantitative Supply Chain initiative: stop pumping, and it dies. The initiative needs to be fed with fresh data all the time. Too often, IT considers that performing a couple of one-time data extracts will be good enough to get things started. After all, chances are that this one initiative is going to be terminated soon anyway - remember, most supply chain initiatives fail - and thus, there is little point in investing too much effort during this early data-extraction stage. However, following this line of thought, the implementation of an automated process for reliable data extractions gets delayed, and this delay becomes one of the primary root causes for the failure of the initiative itself. Here, IT needs to be proactive and start automating data extractions from day one. Furthermore, the IT team also has a coaching role to play in convincing the rest of the company that this extra effort is a key success factor for the initiative, and that the disposable option for data extraction leads nowhere.
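
Automating the first extraction does not have to be a large project. As a rough illustration - a minimal sketch assuming a relational system reachable through SQL and a nightly scheduler; the database, table and column names (sales_orders, order_id, ...) are hypothetical - a daily incremental extract dumped to a flat file can be as simple as:

```python
import csv
import sqlite3
from datetime import date, timedelta

# Minimal daily-extraction sketch. The table and column names are
# hypothetical; in practice this would point at the ERP / WMS and
# be triggered by a scheduler every night.

def extract_daily_sales(db_path: str, out_dir: str) -> str:
    yesterday = date.today() - timedelta(days=1)
    out_path = f"{out_dir}/sales_{yesterday.isoformat()}.csv"

    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT order_id, sku, quantity, order_date "
            "FROM sales_orders WHERE order_date = ?",
            (yesterday.isoformat(),),
        ).fetchall()
    finally:
        con.close()

    # Dump the day's rows to a flat file, ready to feed the initiative.
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["order_id", "sku", "quantity", "order_date"])
        writer.writerows(rows)
    return out_path
```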

Bad numerical recipes

Optimizing a supply chain is primarily a game of numbers. Naturally, company vision matters, leadership matters, discipline matters, but our experience indicates that most companies are doing more than a fair job in these areas. Yet, when it comes to numbers, it looks like the entire supply chain trade is overrun by bad numerical recipes. Not all supply chain practitioners realize that all formulas and models - referred to here as numerical recipes - depend on fairly strict assumptions. Break any of the assumptions, and the numerical recipe falls apart. In this section, we list the most common offenders in this regard. For the sake of concision, we assume that the reader is already familiar with the recipes themselves.

ABC analysis

The ABC approach to inventory was devised at a time when computers weren’t an option for driving the supply chain. The primary benefit of the ABC analysis is that it keeps the analysis so simple that it can be done by hand. However, considering the bewildering processing capacity of modern computers, using the ABC analysis no longer makes sense today. There is zero benefit in framing thousands of SKUs into 3 or 4 arbitrary classes. There is a full continuum between the best-selling products and the very long tail. The logic that optimizes the supply chain should embrace this continuum, instead of denying that this continuum even exists in the first place. In practice, we also observe that the negative effects of the ABC analysis are made worse by market changes that lead to class instabilities, with products that keep changing classes over time.
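
For reference, a minimal sketch of the classic ABC cut (Python, with hypothetical sales figures and the usual 80% / 95% thresholds) shows how arbitrary cut-offs slice a smooth continuum:

```python
import numpy as np

# Minimal ABC sketch: rank SKUs by sales, cut the cumulative share of volume
# at arbitrary thresholds (here 80% / 95%). The cut points are conventions,
# not properties of the demand itself. Sales figures are hypothetical.

rng = np.random.default_rng(7)
sales = np.sort(rng.pareto(a=1.5, size=1000) * 100)[::-1]  # long-tail sales volumes

cumulative_share = np.cumsum(sales) / sales.sum()
classes = np.where(cumulative_share <= 0.80, "A",
          np.where(cumulative_share <= 0.95, "B", "C"))

for label in ("A", "B", "C"):
    count = int((classes == label).sum())
    print(f"class {label}: {count} SKUs")

# A SKU sitting near a threshold flips class with a tiny change in sales,
# even though nothing meaningful changed about the product.
```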

Safety stock

There is no such thing as “safety stock” in your warehouse. Safety stock is a fictional concept that divides the stock on hand into two categories: the working stock and the safety stock. From a historical perspective, the safety stock was introduced as a simplistic way to deal with varying demand and varying lead times. The safety stock is modeled based on normal distributions - also referred to as Gaussians. However, even a quick examination of pretty much any supply chain dataset clearly shows that neither the demand nor the lead times are normally distributed. Back in the early 1980s, when computers were still very slow, normal distributions might have been a valid trade-off between complexity and accuracy, but nowadays, there is no point in holding on to something that was designed as a “hack” for coping with the limitations of early computers.
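
For reference, the textbook safety stock recipe looks like the sketch below (Python, hypothetical figures); every term presumes Gaussian demand and Gaussian lead times, which is precisely the assumption that real supply chain datasets break.

```python
from math import sqrt
from scipy.stats import norm

# Textbook safety stock formula under the normality assumption.
# Both daily demand and lead time (in days) are assumed Gaussian,
# which real supply chain data rarely is. All figures are hypothetical.

service_level = 0.95          # target cycle service level
avg_daily_demand = 40.0       # units/day
std_daily_demand = 15.0       # units/day
avg_lead_time = 10.0          # days
std_lead_time = 3.0           # days

z = norm.ppf(service_level)   # normal quantile, the "safety factor"
safety_stock = z * sqrt(
    avg_lead_time * std_daily_demand ** 2
    + avg_daily_demand ** 2 * std_lead_time ** 2
)
reorder_point = avg_daily_demand * avg_lead_time + safety_stock
print(f"safety stock: {safety_stock:.0f} units, reorder point: {reorder_point:.0f} units")
```

The symmetric, thin-tailed distribution baked into the safety factor is exactly what falls apart on skewed lead times and intermittent demand.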

Manual forecast corrections

Some practitioners may pride themselves on being able to “beat the system” and generate better forecast numbers than what the system can produce. If this is indeed the case, the system should be considered dysfunctional and fixed accordingly, typically by leveraging the experience and insights of the practitioner. Optimizing a supply chain of any significant scale involves generating thousands, if not millions, of forecasts per day. Relying on manual data entries from the supply chain teams to cope with the deficiencies of the system should not even be considered a valid option for the company. Given the progress made in statistics over the last 20 years, there is zero reason to think that, when given the same data inputs, an automated system cannot outperform a human who, realistically speaking, won’t have more than a few seconds to spare on every number that needs to be produced. If the human had days to spend on every decision the company needs to take, then the situation would be radically different. However, the vast majority of decisions that the supply chain needs to take on a daily basis don’t fit in this category.

Alerts and bad forecasts monitoring

Classic forecasts emphasize one single future - i.e. forecasts aimed at the mean or the median - as if this one single future were going to happen with any meaningful probability. Yet, the future is uncertain and forecasts are approximate at best. In certain situations, classic forecasts are just plain wrong. Immense costs are typically incurred by the company because of those large forecasting errors. As a result, alerts are put in place to keep track of those large errors. However, the primary culprit is not the forecasts themselves but the classic forecasting perspective, which emphasizes one single future, whereas all futures are possible, just not equally probable. From the probabilistic forecasting perspective, the forecasting errors are largely known in advance and are represented as distributions of probabilities, finely spread over a large range of possible values. Probabilistic forecasts emphasize an approach whereby the company proactively de-risks its supply chain activity when dealing with a higher degree of uncertainty. In contrast, putting alerts on classic forecasts is the symptom of an approach that is broken by design, as it denies the uncertainty altogether.
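
As an illustration - a minimal sketch in Python, with hypothetical numbers - a probabilistic forecast lets the risk be quantified before the decision is made, instead of being alerted on after the fact:

```python
import numpy as np

# Minimal sketch: with a probabilistic forecast, the "forecast error" is
# known in advance as a spread of scenarios, and the risk can be priced
# before the decision is made rather than alerted on afterwards.
# All figures are hypothetical.

rng = np.random.default_rng(0)
demand_scenarios = rng.poisson(lam=120, size=10_000)  # hypothetical forecast scenarios

point_forecast = demand_scenarios.mean()
stock_on_hand = 130

prob_stock_out = (demand_scenarios > stock_on_hand).mean()
expected_lost_units = np.maximum(demand_scenarios - stock_on_hand, 0).mean()

print(f"point forecast: {point_forecast:.0f} units")
print(f"probability of stock-out: {prob_stock_out:.1%}")
print(f"expected lost sales: {expected_lost_units:.1f} units")
```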

Duct taping the historical data

When biases, such as stock-outs or promotions, are found in historical data, it is tempting to somehow “fix” those biases by modifying the historical data, so that the data better reflects what the history would have looked like without the bias. We refer to this process as the “duct taping” of historical data. The cornerstone assumption behind duct taping is that all forecasting models are designed as variants of the moving average. If all you have are moving averages, then indeed, the historical data needs to be adjusted to accommodate them. Yet, duct taping is not the solution. In reality, the solution lies in expanding the horizon and looking for better forecasting models that are not as dysfunctional as the moving average can be. Better statistical models should be used to successfully handle “enriched” historical data, where the biases themselves are treated as data inputs. While such statistical models may not have been available decades ago, this is definitely not the case anymore.
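
As a rough illustration of what “enriched” data means - a minimal sketch in Python, with hypothetical data - the stock-out flag enters the model as an input instead of being patched away in the sales history:

```python
import numpy as np

# Minimal sketch of "enriched" historical data: the stock-out flag is an
# input to the model rather than a reason to rewrite the sales history.
# Data below is hypothetical.

rng = np.random.default_rng(1)
n_days = 365
stock_out = rng.random(n_days) < 0.10            # roughly 1 day in 10 is a stock-out
baseline_demand = 50 + rng.normal(0, 8, n_days)  # latent demand
observed_sales = np.where(stock_out, baseline_demand * 0.3, baseline_demand)

# Regression with an intercept and a stock-out indicator: the model learns
# how much a stock-out depresses observed sales, no manual patching needed.
X = np.column_stack([np.ones(n_days), stock_out.astype(float)])
coef, *_ = np.linalg.lstsq(X, observed_sales, rcond=None)
print(f"baseline level: {coef[0]:.1f} units/day, stock-out effect: {coef[1]:.1f} units/day")
```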

Lead times as second-class citizens

For reasons that are not entirely clear to us, lead times are too frequently treated as a given data entry rather than as something that needs a forecast of its own. Future lead times are uncertain, and nearly always, the best way to reliably estimate future lead times is to use the lead times observed in the past. Thus, lead times require a forecast of their own. Furthermore, the supply chain implications of correct lead time estimations are much greater than many practitioners realize: the quantities held in stock are precisely there to cover the demand over a given lead time. Change the lead times and the stock quantities change as well. Therefore, lead time forecasts cannot be given the role of a second-class citizen in your supply chain efforts. Nearly all supply chain plans put an emphasis on the need for precise demand forecasts, but our experience indicates that, in practice, precise lead time forecasts are just as important.
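
A minimal sketch (Python, hypothetical figures) of a lead time forecast built from observed lead times - keeping the whole empirical distribution rather than collapsing it to an average:

```python
import numpy as np

# Minimal sketch: forecast the lead time from observed lead times, keeping
# the empirical distribution instead of collapsing it to a single average.
# Figures are hypothetical.

observed_lead_times = np.array([7, 8, 8, 9, 9, 10, 10, 11, 12, 14, 15, 21])  # days

mean_lead_time = observed_lead_times.mean()
p90_lead_time = np.quantile(observed_lead_times, 0.90)

print(f"average lead time: {mean_lead_time:.1f} days")
print(f"90th percentile lead time: {p90_lead_time:.1f} days")
# Stock sized for the average lead time is exposed on the slow tail;
# the quantile makes that tail explicit so the stock can cover it.
```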

Pseudo-science

Pseudo-science has all the hallmarks of science: it feels rational, it comes with numbers, it’s said to be proven, and very educated people are defending its case. However, pseudo-science does not stand the test of achieving repeatable results. Usually, an experimental setup is not even required to detect pseudo-science, and the pseudo-scientific materials start to fall apart under the scrutiny of an impartial expert peer review. Supply chains are expensive to operate and complex to comprehend. These two attributes alone explain why supply chain methodologies are quite difficult to challenge: not only do experiments carry a lot of risk, but it is also difficult to correctly assess what truly contributes to a perceived improvement.

Fantasy business cases

Supply chain solutions are certainly not the only area of enterprise software where vendors make bold claims, but as the old saying goes: if it looks too good to be true, it probably is. We observe this ourselves pretty much every January at the NRF trade show in New York, one of the largest retail trade shows in the world, which has been running for over a century now. We can usually spot a very large vendor boldly claiming that inventory levels can now be halved with the help of their new solution. If only 1/10th of those claims were true, the entire industry would have achieved near-perfect stock levels a decade ago. There are so many ways to game the business case numbers that most vendors aren’t even actually lying. The most common case is that the company advertised as the “poster child” for the solution had a massively dysfunctional supply chain to begin with, and hence equally massive improvement figures were possible to obtain once things were back to normal one year later.

Trust the Sales team with forecasting

It remains a mystery whether the people who task their Sales teams with producing accurate demand forecast numbers have ever worked with an actual sales team before. In the very best case, these numbers can be seen as an honest piece of guesswork, but more often than not, they are just made up by the Sales team trying to game whatever financial incentive they are given. This gives way to the widespread practice known as “sandbagging”, where everyone sets their objectives as low as possible in order to exceed expectations later on. Further down the line, supply chain teams often pretend to pay attention to those figures, while actual operations remain entirely separate from the inputs provided by Sales. Ignoring the figures suggested by the Sales team is the only reasonable option, as the supply chain would stop working altogether if it actually had to operate based on numbers that poor.

Proven solutions

Seeking a proven solution that has managed to deliver tangible benefits for a company very similar to yours might seem like a very rational perspective. Anecdotally, this is exactly what Nokia did - along with countless other businesses - until they were no more. Most large companies don’t act fast when it comes to choosing a complex solution. The vendor selection process can easily take up to 1 year. Then, achieving cruising speed with the chosen solution might also take another year. Monitoring and gaining trust in the results may take 1 or 2 more years; especially for those supply chains where not all solutions are sustainable, and where the supply chain may quickly revert to its previous level of performance once the vendor is no longer constantly present onsite to tweak the solution. Following this, it may take 1 more year for the solution vendor to ultimately reach your company with this hard-earned proof. The fatal flaw in this line of thought is the assumption that your company can afford to come to the party 5 years late. Where software is concerned, 5 years is a very long time. Most software is actually considered obsolete by year 5; why would your supply chain solution be any different?

Bad metrics, bad benchmarks

The Quantitative Supply Chain is all about numbers you can trust. As a result, we tend to be heavily inclined towards metrics and benchmarks. However, we observe that, in supply chain, the vast majority of benchmarks and metrics are so badly designed that they qualify as pseudo-science in our book. Good supply chain metrics require a lot of effort. Good supply chain benchmarks require a near-insane amount of effort. Too often, metrics and benchmarks are dumbed down for the sake of simplicity, but at the expense of all actual relevance for the business. As a rule of thumb, if operating a benchmark does not feel like an incredibly difficult task for your supply chain teams, then chances are that the challenge is vastly underestimated.