Full transcript

Conor Doherty: This is Supply Chain Breakdown, and today we will be breaking down why you should be forecasting more than just demand. My name’s Conor; I’m Communications Director here at Lokad. And to my left in studio, as always, Lokad founder Joannes Vermorel.

Now, before we start, two questions. One, where are you watching from? We’re in Paris. And two—the key question that frames the entire discussion—do you agree that it is important to regularly forecast more than just demand? On that point, do join in. I know time is pressing today, so we’ll get straight into it.

This conversation was inspired by one of the lectures I went back and re-watched—lead time forecasting, lecture 5.3. In that you argued, and I quote, “Anything that isn’t known with a reasonable degree of certainty deserves a forecast.” Now, I know you a bit better than most; I know that you don’t think anything is certain except death and, in France, taxes for sure. So, to get the ball rolling: what are the known unknowns—the things we know we don’t know—in supply chain?

Joannes Vermorel: If you look at the supply chain literature, it’s all about forecasting sales. I mean, it’s literally a thousand papers to one when it comes to forecasting demand—which means time series forecasting for sales in practice—versus forecasting anything else.

At the time I was surveying the literature, the ratio I ended up with was: for a thousand papers discussing the forecasting of sales, there was one paper discussing the forecasting of lead time. Obviously, lead time is very important. It’s always, I would say, a known unknown, because if you want quality of service you need to ask: for how long do I need to serve this amount of demand? Depending on your lead times, if it takes six months for the stuff to arrive, you need to cover six months’ worth of demand. If the supplier can deliver in 48 hours, you only need to cover a much shorter window.

The reality is that in virtually all industries suppliers are not perfectly reliable. And that’s just the first one—lead time is the very obvious one. Then you have prices. You have the prices of your own suppliers; they can bump their prices up or down depending on market fluctuations. You also have the prices of your competitors, which can force your hand: a competitor can lower their price, forcing you to lower yours in turn, or—conversely, if you’re lucky—a competitor goes into bankruptcy and suddenly you can bump your price upward because there is one less source of pressure in the market.

Those sorts of things are happening all the time, and what I’m saying is that if you do not take into account those also very consequential sources of uncertainty, then supply chain decisions—allocation of scarce resources—are going to be very misaligned with reality. It’s like risk mismanagement. If you decree that a class of risk does not exist while in fact it does, then any calculation you make will be off, and that means more overhead compared to what it should be.

Conor Doherty: Thank you, and I do want to again very carefully frame the discussion. You made the point that when you were preparing your lecture, you surveyed the landscape in academia and found an enormous disparity between what was being written about demand and something like lead times. Okay, that’s academia; in terms of boots-on-the-ground supply chain planning, is it even worse? How common is it to have people forecasting lead times, prices, returns, scrap rates—in reality, not in the classroom?

Joannes Vermorel: In reality, even below 0.1%. You pick any mid-market ERP and you will have a demand forecasting module—even if it’s a crude one. As far as I know, pretty much none of them have any kind of lead time forecasting. Pricing volatility analysis—again, as far as I know—none.

If you look at the application landscape, the fact that those capabilities are completely absent directly reflects the fact that they are also very much absent from academic papers. In enterprise software, most vendors are literally copying what is found in academic textbooks. They are not necessarily super inventive when it comes to tech; they tend to just roll out the numerical recipes given in the big textbooks.

Conor Doherty: We don’t have them here—I forgot to bring them, mea culpa—but we did actually look at some of the textbooks in your office yesterday, and just doing a very quick scan through the index to find where lead time forecasting is mentioned, you might get a paragraph at most.

Joannes Vermorel: They don’t even mention lead time forecasting. At most, supply chain professionals will acknowledge that lead times vary. The most I have found in the literature—again, I’m talking about practical textbooks, not some random paper lost on arXiv—was the assumption that lead times are normally distributed, which is very strange and weird, because it means you’re assigning a non-zero probability that an order you place today will arrive yesterday. You have positive probabilities from negative infinity to positive infinity.

At face value it’s a very weird take on lead times, but that gives you the state of the art of what you can find in the literature. And in the application landscape inside companies, this is just completely absent. Usually you have a hard-coded value for the lead time, and if you’re lucky it is revisited once a year; if you’re not lucky, it has never been revisited.
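
To make the oddity concrete, here is a minimal sketch—the numbers are assumed for illustration, not taken from the conversation: a lead time modeled as Normal(21 days, 7 days) assigns a small but non-zero probability to the order arriving before it was even placed.

```python
# Minimal sketch with assumed numbers: a normally distributed lead time
# (mean 21 days, standard deviation 7 days) puts probability mass below zero,
# i.e. on the order arriving before it was placed.
from scipy.stats import norm

mean_days, std_days = 21.0, 7.0
p_negative = norm.cdf(0.0, loc=mean_days, scale=std_days)
print(f"P(lead time < 0) = {p_negative:.4%}")  # ~0.13% with these numbers
```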

Conor Doherty: Again, let’s just say for the sake of discussion that in most companies planning is based purely on demand forecasting. What’s the problem with that? If you’re going to focus on one source of uncertainty, wouldn’t demand be the one you’d want to focus all of your efforts on, or at least the quasi-totality of your efforts?

Joannes Vermorel: Think of any other domain. Supply chains are very opaque, so that can make it complicated, but imagine you’re selling insurance. For house fire insurance, say, you have to take into account the probability that the house will burn, but you also need to take into account the likelihood that the client will stay, so that you can actually make a profit.

You need to factor in those uncertainties. If you do not factor them all in, you’re blind. What are the odds that, when you ignore something very consequential, your economic calculation will end up correct? I’m not talking about a subtle, elusive pattern; I’m talking about something that is in your face, with massive impact, like lead times or the price at which you sell.

For example, if I sell products with 90% gross margin because it’s an accessory—clients don’t care—I can be much more generous with my overstocks, because selling one unit covers the cost of ten others. If, on the other hand, I am a wholesaler selling with 2% gross margin, then overstocks are absolutely deadly and I need to be very careful.

Here we are making projections of gross margin, but again that depends on the price. If you’re not paying attention to the price, you can have massive fluctuations in profitability, which has dramatic consequences for whether something is profitable to produce, to buy, or to keep in stock somewhere.
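
A small worked example of the margin point above, with assumed prices—nothing here comes from the conversation beyond the 90% and 2% gross margins:

```python
# Assumed prices: how many unsold units one sale's gross margin pays for.
def unsold_units_covered_by_one_sale(selling_price: float, unit_cost: float) -> float:
    gross_margin = selling_price - unit_cost
    return gross_margin / unit_cost

# 90% gross margin accessory: price 100, cost 10.
print(unsold_units_covered_by_one_sale(100.0, 10.0))   # 9.0 -> roughly the "ten others" above
# 2% gross margin wholesale item: price 100, cost 98.
print(unsold_units_covered_by_one_sale(100.0, 98.0))   # ~0.02 -> overstock is deadly
```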

Conor Doherty: It’s important to underline here that we’re essentially making a risk-management, or economics, argument about the importance of acknowledging the full array of uncertainties.

Joannes Vermorel: Exactly. Here, we are talking about looking into the future to say: what do I need to know, what do I need to quantitatively assess, to have some kind of evidence-based decision that is rational for the company?

The classic mainstream theory just assumes a time series of the sales and you’re done; at best it has incredibly simplistic ideas for the lead time—a fixed value—and that’s it. Depending on the vertical, you have many more uncertainties. For e-commerce, you have returns. If you’re in textile—fast fashion—you have quality control, and some of your production coming from, say, Bangladesh may not pass it. So you ordered a thousand units; in the end you only have six hundred, because four hundred did not pass quality control.

Those are the known unknowns. People who work in those industries know that. Where it gets insane is that the typical way to take into account those uncertainties that have nothing to do with demand is to reverse-engineer the demand forecast so that it indirectly accounts for this other uncertainty.

For example, if you think your lead time has a lot of variability, people will bump the sales forecast upward so that you order more, earlier, thus covering your lead time risk. But it’s a very roundabout way to think about it, and suddenly you end up with this super-weird situation in which making your sales forecast worse makes your company more profitable. This is very inconsistent. Operationally, I can understand why people end up doing that, but it is a much more reasonable approach to tackle those other uncertainties separately and try to predict them in their own right.
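
A minimal sketch of that contrast, with all numbers assumed: option (a) inflates the demand forecast to buffer lead time risk, while option (b) keeps the demand forecast honest and covers a pessimistic lead time quantile instead.

```python
# All numbers assumed; both options aim to buffer against lead time risk.
daily_demand = 10.0          # honest demand forecast, units per day

# (a) The roundabout way: make the demand forecast deliberately "worse".
inflated_demand = daily_demand * 1.3
nominal_lead_time_days = 21
coverage_a = inflated_demand * nominal_lead_time_days

# (b) Keep demand honest, model lead time separately and cover its 90th percentile.
lead_time_p90_days = 27
coverage_b = daily_demand * lead_time_p90_days

print(coverage_a, coverage_b)  # similar buffers, but (b) leaves the demand forecast intact
```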

Conor Doherty: I do want to provide a little bit of pushback here, because when I started advertising this episode, some people pointed out—and these are friends of the channel, people we’ve interviewed—shout-out to Jonathan Karrel at Northland and Meinolf Sellmann at InsideOpt. They pointed out that what we’re discussing today—I’m paraphrasing here—is not new. The idea of forecasting, say, lead time, scrap rates, returns, etc., has been part of the literature for decades and is, in fact, standard operating procedure in some places or in some industries. How do you respond to that pushback?

Joannes Vermorel: The fact that it was in the literature—absolutely, I’m pretty sure we can find papers dating from the era of operations research in the 1950s discussing that. As I said, the ratio of papers is a thousand to one; it’s extremely shallow. Most of what you get is just references in passing.

My casual observation, after talking for a decade and a half to hundreds of supply chain directors, is that those things are absent in 99% of companies. If I were to say that in practice roughly 0% of companies are doing that, it would only be a slight approximation. Out of the million companies worldwide that have a supply chain of some kind, yes, there are probably dozens doing it; but again, that’s vanishingly small in relative terms.

Conor Doherty: Underlining the point of the difference between academic awareness and the boots-on-the-ground reality. But to give the benefit of the doubt, let’s say that the vast majority of companies are aware of the sources of uncertainty you described. Then why would companies focus on demand and largely ignore or undervalue the other sources?

Joannes Vermorel: In this mainstream paradigm, the demand forecast is not really a forecast but a commitment. The company commits itself to serve this amount of demand. Under the hood, between divisions in the company, you have turf wars—fiefdoms—over who gets how much money to support their own fiefdom. Those are the sorts of battles that take place in S&OP: marketing versus sales versus operations, etc. Everyone wants a bigger slice.

Behind the demand, the thinking is that it’s not exactly a statistical forecast; it’s also a commitment and a prophetic statement. The company says, “We project that,” and, in a self-fulfilling way, it allocates the right amount of resources to make it happen.

When you consider other sources of uncertainty, you don’t have that. Forecasting lead time: there is no fiefdom battle over the nature of the forecast. As a consequence, these things get completely sidelined while people come to the big melee for the S&OP master forecast that defines how much money every single division, every single product range, will get.

Those other sources of uncertainty are extremely consequential for the decisions, but they are not consequential in the same way for the inner politics of the company. That’s why I believe they generally get completely sidelined. It’s not that people prefer statistical models for forecasting demand; it’s that the demand projection is at the core of the internal battles that come with S&OP, between divisions competing for the internal resources of the company.

Conor Doherty: Listening to you now, we’re talking about a vast array of uncertainties. Yet whenever you say “other uncertainties,” your go-to example is lead times. In that lecture I referenced before—lecture 5.3; Alex, please drop that in the live chat—you said that of all the sources of uncertainty, lead times are one of, if not the, most important, and are “incredibly underappreciated.” You stressed “incredibly.” What is it about lead times that makes them so important, and why are they so underappreciated?

Joannes Vermorel: Why underappreciated? We just covered that—there is no fiefdom to fight over—so it’s really a matter of pure risk management. The outcome of this modeling will not define how much money marketing versus sales versus production actually gets; nevertheless, it is extremely consequential for the profitability of the company.

Why very important? Because lead times are not “nice” distributions. It’s not like, “I have a supplier that always delivers in 21 days.” At Lokad we have worked with hundreds of companies and company datasets for lead times. Lead times are very, I would say, almost always bimodal in nature. You have a sharp peak that represents the delivery when the stars are aligned and everything goes according to plan. That might be, if you’re very lucky in some industries, say 95% of the time; in other industries where it’s not so reliable, say 80% of the time. That’s when you have perfect alignment and you get the delivery in the specified time frame.

Then there is the case where the planets are not aligned. The typical case is that your supplier is having a stock-out at that moment, so they don’t have the stuff at hand and can’t ship it to you. You have situations where the transporter has a problem, or the warehouse at some intermediary is full, or there is a problem at customs with some kind of delayed inspection. In this second mode—which happens anywhere from 5% to 20% or even 30% of the time depending on the vertical—the delays get extremely long. The worst case is that the stuff literally never arrives.

If you compute your average expected lead time—take the mathematical definition literally—you will very frequently get infinite values, simply because some stuff never arrives; when you average in an order that never arrives, you’re averaging in infinity. Obviously that’s a bit of nonsense, but it points out that these things—the technical term is fat tails—mean that when events do not go according to plan, they can go very wrong and take much longer. That’s something normal distributions, for example, will never capture. They do not capture that something expected to arrive in three days takes one year, and yet that frequently happens.
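
Here is a minimal simulation of such a bimodal, fat-tailed lead time—the mixture weights and delays are assumed purely for illustration: the mean gets dragged far above the “planets aligned” mode, while the median stays close to it.

```python
# Assumed mixture: 90% of orders arrive around the promised 21 days,
# 10% hit a stock-out / customs / transport problem and take much longer.
import random
random.seed(42)

def sample_lead_time_days() -> float:
    if random.random() < 0.9:
        return random.gauss(21.0, 1.5)            # the sharp "all went well" peak
    return 21.0 + random.expovariate(1.0 / 90.0)  # the fat second mode (mean +90 days)

samples = sorted(sample_lead_time_days() for _ in range(100_000))
mean = sum(samples) / len(samples)
print(f"mean   = {mean:5.1f} days")             # ~30, pulled up by the tail
print(f"median = {samples[50_000]:5.1f} days")  # ~21, the 'planets aligned' case
print(f"p99    = {samples[99_000]:5.1f} days")  # several months
```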

Conor Doherty: This ties into something—I’m going to read a quote again from the same lecture. Talking about lead times, you said people typically treat them as “variability,” and that variability is not something that can be controlled through compliance. People don’t typically view it as a source of uncertainty to be addressed technologically; it’s something to be approached through face-to-face or manual intervention—like, “I pick up the phone, I call my supplier.” Could you expand on that?

Joannes Vermorel: This is mostly the mainstream view of supply chains, where the future demand is not exactly a forecast carrying uncertainty; it’s a commitment. Once you have this commitment, it’s all about compliance—you want minimal deviation from the plan. Any deviation is seen as deficient compliance. People think of process excellence and those sorts of things, so this uncertainty is never really addressed, because the mindset is that this variability is just a defect—something that next year, when we have finally polished the process, will be gone.

Why would you forecast something that next year will be gone because we will finally have fixed the process? Unfortunately, what I’m describing—this uncertainty—is irreducible. Why? Because it doesn’t depend on you. It’s decisions made by other people. Your supplier may have the inventory, but they may decide they are going to serve another client first, not you. Too bad. It’s not a very good supplier, but it’s the supplier that you have; it’s a decision made against you.

Same thing for the prices of your competitors. It would be so nice if all your competitors could raise their prices so that you could raise your price too. But guess what? Someone is going to lower the price. Again, it’s not up to you.

If we put aside things like weather, tsunamis, earthquakes—all the natural events that will disrupt—ultimately those sources of uncertainty are irreducible because they boil down to decisions that have not been made yet and will be made in the future by other people. Fundamentally you are trying to guess decisions that will be made in the future by other people. That’s exactly what happens when you forecast demand: you are literally forecasting the decision that those people will buy your product in the future—they can change their mind. When you forecast the lead time, you assume your supplier is going to maintain the same level of investment so that they can serve you in a timely manner and that they won’t discontinue their products. All of that is guesswork; that’s why you end up needing those predictions.

Conor Doherty: Quoting you back from multiple sources on the variability inherent to certain classes of uncertainty—you said it’s not up to you; previously you argued variability is not something that can be controlled with compliance. If you can’t control these sources of variability with compliance—i.e., people manually intervening—what options do people actually have to contend with them? One is to ignore them; we’ve covered that. What else is there?

Joannes Vermorel: Very frequently people will reverse-engineer the master forecast. When I say master forecast, I mean demand forecast, because in companies, when you say “forecasting”—although we have seen that forecasting should be applied to all the sources of uncertainty: demand, lead time, prices, returns, quality problems, production yields, etc.—in practice “forecasting” just means demand.

What they will do is reverse-engineer the demand forecast, moving it up or down to readjust indirectly the commitment, because behind the master forecast you have all those commitments taken by the company—allocation of resources—and they are reverse-engineering the master forecast so that those commitments make a little bit more sense with regard to those risks. That’s what is happening in practice. Because it’s a very roundabout way to do it, it’s very inefficient; it is an incredibly indirect way to steer the company.

Conor Doherty: I’m mindful of time, so I’ll push on, but I want to pose one question that came from another friend of the channel, Jeff Baker—if you’re watching at MIT, hello. He pointed out that in many large companies, certainly in manufacturing, the kinds of approaches we’re describing today are common. People are aware of this array of uncertainties; they’re also actively forecasting them, but he made the point that often they lack the planning tools to utilize the kind of info that is being forecasted. What do you think about this, and why can you have very large, very profitable companies that are aware of the uncertainties and are actively, regularly forecasting them, yet not folding them into the decision-making process?

Joannes Vermorel: First, I really challenge “they are forecasting them.” They have a data science team somewhere that is forecasting hundreds of things and nobody pays any attention to what they’re doing. We’ll touch on data science—next episode. That’s my take: when people say, “Oh, yeah, yeah,” there is a data science team which is completely isolated; nobody cares about what those guys are doing. I would say: irrelevant.

Now, as for the planning tools—again, the planning tools merely reflect what you find in the academic literature, which is nothing on these topics. The planning tools mostly reflect the dominant paradigm, which is “the sales forecast is king,” and that’s it. And when people say “we don’t have the tools”: when you’re selling software as an enterprise vendor, it’s really client-driven, especially in the enterprise segment. Enterprises list their requirements, and vendors just comply, providing whatever the clients demanded. If those capabilities are not there, it is first and foremost because the client companies themselves didn’t care and didn’t ask.

Conor Doherty: I want to push on because there are some private questions and some public comments. Last question, then we’ll move to the audience. We’ve covered a lot today. What parting advice would you give to companies who agree with your take but might lack the planning tools or the software to actually get into this? Technology and attitude—what are your takes on those two, and how can they make a difference?

Joannes Vermorel: The first step is really to do a back-of-the-envelope calculation to assess, in euros and dollars, how much it costs. Because those things are never assessed, people just take them as the cost of doing business, and that’s it. Is it a minor thing for the business or something really major? It varies. My take is that for most companies at scale it’s a lot of money.

Do the back-of-the-envelope, and then I would suggest: see the C-level, the executives, and try to get an agreement on the magnitude of the problem. The rest—the technicalities—we could discuss exactly how to do it, etc. I think most of the problem most of the time is that the problem is not acknowledged. Nobody has ever really tried to put any kind of dollarization on that. Yes, some people did, but very few. Thus there is no awareness, and top management can’t sort out whether it’s something really important or just a gadget.

Imagine you’re a top executive in a very large company. You have so many people knocking on your door saying, “You have this piece of technology that you can’t ignore”—twenty people a day knocking with something like that. My suggestion is: build a very clear business case—simple. I’m talking about things that are not super advanced or super technical—just enough to form a firm conviction about the true scale of the problem. Present that to whoever’s in charge, and the rest will follow. People do not become powerful executives in large companies by being idiots—that’s very rare. Large companies are actually quite good at filtering the people who rise high in the hierarchy; that’s how they survive. Once there is awareness, things will take their course according to the modus operandi favored in that large company.
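
As an illustration of what such a back-of-the-envelope calculation might look like—every number below is assumed, and a real case would use the company’s own figures:

```python
# Back-of-the-envelope sketch, all numbers assumed: rough yearly cost of
# lead-time-driven stock-outs for a single product.
stockout_days_per_year = 30        # days the shelf sits empty because an order ran late
daily_demand_units = 10
lost_sales_share = 0.5             # half of that demand walks away instead of waiting
gross_margin_per_unit = 20.0       # in euros

yearly_loss = stockout_days_per_year * daily_demand_units * lost_sales_share * gross_margin_per_unit
print(f"~{yearly_loss:,.0f} EUR of gross margin lost per year, for this one product")
# 30 * 10 * 0.5 * 20 = 3,000 EUR; multiply across thousands of SKUs to gauge the magnitude.
```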

Conor Doherty: Thank you. I’m mindful of time because I know you have a hard commitment, so I’m going to prioritize the public comments, and the rest of the questions that were sent privately we’ll answer on LinkedIn tomorrow.

This is a comment and a question from Murthy—I hope I’m pronouncing that correctly. “Joannes, one of the key challenges faced by CPG companies and retailers is congestion at their distribution and fulfillment centers due to seasonal demand changes. Could we organize a session to explore best practices for predicting congestion and developing effective de-congestion strategies?”

Joannes Vermorel: The short answer is: absolutely yes. The longer answer: this is typically a problem you get by design with deterministic point forecasts. You project the average demand—or the average flow if we’re talking FMCG—and the average flow is just at, or slightly below, the capacity, and people say it’s kind of okay.

But the reality is that—especially in FMCG/CPG—the flow is very spiky. In theory, at the weekly level you’re just below 100% utilization, but with the fluctuations you are routinely above it. Yes, there are plenty of techniques to deal with that. We should discuss a technique called “shadow valuations,” where the idea is that you want to smooth things over time, and to do that you need to introduce a notion of opportunity cost that reflects the risk of saturating your DC, your production unit, your transporter, or whatever is actually the bottleneck.
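
As a rough sketch of that idea—the functional form and numbers below are assumptions for illustration, not Lokad’s actual recipe—an opportunity cost can be attached to each additional unit, growing sharply as the DC approaches saturation, which pushes marginal flow toward less congested days:

```python
# Assumed functional form: a per-unit opportunity cost that stays small while the
# DC has slack and climbs steeply as projected utilization approaches capacity.
def congestion_cost_per_unit(projected_units: float, capacity: float, base_cost: float = 1.0) -> float:
    utilization = projected_units / capacity
    return base_cost * utilization ** 6   # steepness is an arbitrary modeling choice

for units in (5_000, 8_000, 9_500, 10_500):
    print(units, round(congestion_cost_per_unit(units, capacity=10_000), 3))
# Marginal units scheduled on already-busy days carry a visibly higher cost,
# so an economic optimization naturally smooths the flow over time.
```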

That’s slightly different from the topic of the various sources of uncertainty.

Conor Doherty: On that note, if there are certain sessions people are interested in seeing us cover, absolutely comment below or connect with us privately on LinkedIn and suggest if you don’t want to do so publicly.

We might as well push on. Forgive me if the pronunciation is incorrect. Kaizen—excuse me. “In my market, which is luxury products, demand is intermittent and very low quantity. What is your best advice to enhance accuracy? Note: we already forecast at an aggregated level.”

Joannes Vermorel: We only have a few minutes, so briefly: it’s just nonsense to pretend that time series will fit. The bottom line is that the time series approach is broken—it’s just broken. What people do when they face a situation that doesn’t fit time series is try to modify the problem so that it will fit time series. Here, that means “let’s aggregate everything by quarter, by region,” and then you’re back to fat time series that have some meat and that you can forecast.

The short answer: you need to give up on time series. They are completely inappropriate for luxury. We have luxury clients, and time series simply do not work there. That is something that works for, say, Unilever, but it doesn’t work for sparse, intermittent demand. It will not work for retail; it will not work for aviation; it will not work for oil and gas; it will not work for luxury and fashion in general.

That’s the short answer: give up on time series. There are alternative approaches, but—

Conor Doherty: I wasn’t going to make a cheap plug for ourselves, but we do actually have further resources on that. Alexey, if you can hear this, please drop some of the forecasting-for-luxury-markets learning resources in the chat. Very helpful.

We have time for one last comment, related—also on the topic of luxury products: “We already forecast demand as distributions. What’s the quickest, least disruptive way to add lead time distributions into our buying logic?” It’s a very big question, I understand.

Joannes Vermorel: It really depends on your setup—where your current numerical recipe, the decision-making engine, sits. You can sort that out with Excel; we even have an Excel spreadsheet showing that you can deal with probabilistic settings in Excel. It’s kind of ugly, but if you’re patient, it’s doable.

If you’re really in a rush, you need to find heuristics—arbitrary numerical calculations—that roughly do what you want. I would say: just find a heuristic that is better than reverse-engineering the demand itself. Go one step further—it’s still duct tape—but it’s one step better than tweaking the demand. Then you can have a look at how we do probabilistic forecasting in Excel if you have nothing else.

If we want to go further, specialists like Lokad manage to do these things in just a few months. It is not a big project, but that means revamping the whole pipeline so that we have an actual, proper numerical recipe. At some point you can’t dodge the fact that you have to embrace a programmatic numerical recipe for your supply chain—but that’s a different topic.
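
For what such a combination can look like in its simplest form—a minimal Monte Carlo sketch, with both distributions assumed rather than taken from any real dataset—the lead time distribution is folded into the demand distribution by simulation, and the order quantity is then read off a chosen quantile:

```python
# Assumed distributions: fold lead time uncertainty into demand-over-lead-time
# by simulation, then buy up to a chosen quantile of that combined distribution.
import random
random.seed(0)

def sample_lead_time_days() -> int:
    # 85% of orders arrive in 14 days; the rest are delayed by 10 to 60 extra days.
    return 14 if random.random() < 0.85 else 14 + random.randint(10, 60)

def sample_daily_demand() -> int:
    # Intermittent demand: most days sell nothing, a few days sell 1-3 units.
    return random.choice([0, 0, 0, 0, 1, 1, 2, 3])

def sample_demand_over_lead_time() -> int:
    return sum(sample_daily_demand() for _ in range(sample_lead_time_days()))

samples = sorted(sample_demand_over_lead_time() for _ in range(20_000))
order_up_to = samples[int(0.90 * len(samples))]   # cover 90% of simulated scenarios
print(f"order-up-to level covering 90% of lead-time demand scenarios: {order_up_to} units")
```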

Conor Doherty: I’ve been told to get you out within forty minutes, so we have time for a very terse closing thought. Based on everything we’ve covered—just thirty seconds—what is your pitch to people when it comes to forecasting beyond demand?

Joannes Vermorel: Think of it as risk management. Decision-making in supply chain is risk management. If you only forecast demand, you’re saying that the only risk you face is whether customers show up or not, and you’re ignoring all the other risks. That’s not good; that’s not proper risk management.

My final closing thought would be: assess how much money is left on the table by looking at those other risks. Look at how much your company is wasting, and bring that to your boss. I’m pretty sure people will react and seek solutions once they acknowledge the magnitude of the problem.

Conor Doherty: Thank you. We’re out of questions and we’re also out of time. As always, thank you for joining me—and to everyone else, thank you for attending and for your questions. As I said earlier, be sure to connect with Joannes and me on LinkedIn if you want to discuss these issues privately. We’ll see you next week for the next episode of Supply Chain Breakdown.

And on that note, to you all I say, get back to work.