00:00:00 Introduction to heuristics in supply chain
00:01:14 Examples of heuristics: min-max inventory, FIFO, ABC analysis
00:03:15 Origins and informal use of heuristics in companies
00:06:28 Human vs. algorithmic approaches to problem-solving
00:09:58 Heuristics from a computer science perspective
00:13:27 The problem with layman perspectives on heuristics
00:17:22 Supply chain heuristics and the illusion of causality
00:22:00 The need for metrics to assess heuristic effectiveness
00:26:35 Difference between algorithms and heuristics in practice
00:30:26 Experimental validation and empirical optimization
00:36:33 Misleading intuition in supply chain decisions
00:41:27 Example of airline boarding strategies and intuition
00:46:47 The absence of financial metrics in supply chain decisions
00:53:05 Human limitations in complex scheduling vs. algorithms
00:58:47 Closing thoughts and takeaways
Summary
In a recent LokadTV episode, Conor Doherty, Communication Director at Lokad, interviewed Joannes Vermorel, CEO of Lokad, about heuristics in supply chain management. They discussed the use of simple problem-solving tools like FIFO and ABC analysis, highlighting their limitations and the need for more robust mathematical approaches. Joannes explained that while heuristics offer straightforward solutions, they often lack consistency and empirical validation. He emphasized the importance of distinguishing between true heuristics and arbitrary numerical recipes, advocating for real-world assessments and experiments to validate supply chain practices. The conversation underscored the necessity of critical evaluation and empirical evidence in optimizing supply chain decisions.
Extended Summary
In a recent episode of LokadTV, Conor Doherty, the Communication Director at Lokad, engaged in a thought-provoking discussion with Joannes Vermorel, the CEO and founder of Lokad, a French software company specializing in predictive supply chain optimization. The conversation delved into the use of heuristics in supply chain management, exploring their limitations and contrasting them with more robust mathematical approaches.
Conor began by introducing the concept of heuristics, which are simple problem-solving tools like FIFO (First In, First Out), LIFO (Last In, First Out), and ABC analysis, commonly used in supply chain decisions. He highlighted that these heuristics are often employed to navigate uncertainty and asked Joannes to elaborate on what supply chain practitioners mean when they refer to heuristics.
Joannes explained that in the industry, heuristics are essentially formalized rules of thumb used to guide decisions. For instance, a min-max inventory policy, where the maximum inventory is set to three months’ worth of demand, is a heuristic. These heuristics offer simple solutions to complex problems, but they are often arbitrary and lack consistency across different planners and companies.
Conor probed further, asking about the origins of these heuristics. Joannes responded that they are the simplest solutions people can think of to address specific problems. For example, FIFO ensures that all items are eventually picked and processed, preventing decay. However, he emphasized that these heuristics are not necessarily optimal solutions.
Joannes then introduced a critical distinction between heuristics as understood by economists and those in supply chain management. In natural tasks, like grabbing a glass of water, humans use heuristics effectively because evolution has equipped us with the necessary instincts. However, supply chain problems are discrete numerical challenges that do not exist in nature, and our innate heuristics are not suited for these tasks.
Conor and Joannes discussed the limitations of traditional heuristics like FIFO and ABC analysis. Joannes argued that these methods are often arbitrary numerical recipes rather than true heuristics, as they lack metrics to measure their effectiveness. He stressed the importance of distinguishing between heuristics and arbitrary numerical recipes, which can be misleading.
Conor presented a retailer’s perspective, suggesting that simple methods like ABC analysis work because they are profitable. Joannes countered that profitability does not validate every practice within a business. He used Apple as an example, noting that some practices may not directly contribute to profitability but are still followed.
The conversation shifted to the challenges of validating heuristics in real-world supply chains. Joannes explained that while algorithms have provable properties, heuristics require empirical assessment through experiments. He cited the example of stochastic gradient descent, a heuristic that gained recognition for its practical performance despite lacking formal proof.
Conor and Joannes discussed the difficulty of assessing the goodness of heuristics without clear metrics. Joannes emphasized the need for companies to validate their numerical recipes through experiments, rather than assuming their effectiveness. He referred to his lecture series on experimental optimization, highlighting the importance of discovering optimization targets and the difference between empirical and mathematical validation.
Joannes also addressed the psychological bias of falling in love with one’s own ideas, which can lead to the adoption of arbitrary policies without proper validation. He warned against assuming that traditional methods are inherently good just because they haven’t led to bankruptcy.
The discussion concluded with Joannes advising that the term “heuristic” should be reserved for simple, effective numerical recipes with empirical evidence of their success. He emphasized the importance of real-world assessments in financial terms and the need for companies to critically evaluate their methods.
Conor wrapped up the interview by thanking Joannes and the audience, encouraging viewers to subscribe to the LokadTV YouTube channel and follow them on LinkedIn for more insightful discussions on supply chain optimization.
Full Transcript
Conor Doherty: Welcome back to LokadTV. Heuristics are at the heart of most of the decisions that people make in supply chain terms.
Heuristics are simple problem-solving tools that guide us through moments of uncertainty. Think FIFO, LIFO, and ABC analysis.
Today with Joannes Vermorel, we will discuss the limits of these heuristics and contrast them with a more robust mathematical perspective.
As always, if you like what you hear, subscribe to the YouTube channel and follow us on LinkedIn. And with that, I give you heuristics in supply chain.
As I mentioned in my introduction, we’re here to talk about heuristics, particularly in supply chain. So just to set the table, when supply chain practitioners, you know, at the office, when they’re talking about heuristics, what are they talking about? What do they mean?
Joannes Vermorel: I mean, most supply chain practitioners would probably not use the term heuristics. It’s already a little bit fancy. I think in the industry in general, when people say heuristic, it just means that they have some kind of formalized rule of thumb that is used to steer a decision.
So an example of that would be we have a min-max inventory policy, and the max is defined as being equal to three months’ worth of demand. That’s it. That is my heuristic.
And the interesting aspect of heuristics, supposedly, and that’s the common perception of heuristics, is that you have a complex problem, but your heuristic delivers a simple solution to this problem.
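(For illustration only, here is a minimal Python sketch of the kind of min-max rule Joannes describes, with the max set to three months’ worth of demand. Every number in it is hypothetical, and, as the conversation goes on to argue, nothing in the rule itself tells you whether it is any good.)

```python
# Minimal sketch of a min-max reorder rule; all figures are hypothetical.
# "Three months of demand" is the arbitrary rule of thumb from the discussion,
# not a recommendation.

def min_max_order(on_hand: float, monthly_demand: float,
                  min_months: float = 1.0, max_months: float = 3.0) -> float:
    """Return the quantity to reorder under a simple min-max policy."""
    reorder_point = min_months * monthly_demand   # the "min"
    order_up_to = max_months * monthly_demand     # the "max": 3 months of demand
    if on_hand <= reorder_point:
        return order_up_to - on_hand              # refill up to the max
    return 0.0                                    # otherwise do nothing

# Example: 40 units on hand, 50 units of monthly demand -> order 110 units.
print(min_max_order(on_hand=40, monthly_demand=50))
```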
Conor Doherty: Well, I just wrote down literally the words “arrived at by committee.” So my follow-up there is: when you say, in the example of min-max, that the max will be three months of demand, that’s just an arbitrary decision. Is that what makes it just a general rule of thumb?
Joannes Vermorel: Yes, that’s pretty much all there is to it. I mean, maybe people have loosely tried a few alternatives, and their gut feeling is that two months is not enough, six months is too much, and so they converge to something.
Or, even more frequently, there is no consistency whatsoever. Every demand and supply planner has their own set of rules of thumb, their own collection of heuristics that is being used.
It is rare that companies enforce any kind of practice with regard to heuristics. At least when companies think and say we have heuristics, it usually means that it’s not enforced and that it’s relatively informal and that there is a large degree of leeway in how you cherry-pick all the parameters of those heuristics.
Conor Doherty: Well, I mean, you gave the example of min-max. There’s also things like FIFO, there’s LIFO, there’s ABC analysis. There’s an entire array of heuristics. Where do these come from? Like out of what ether do they emerge?
Joannes Vermorel: I mean, they are just the simplest solutions that you can think of to give you a solution to the problem that is being faced. So let’s consider, for example, FIFO.
One of the most basic problems if you have to iteratively process stuff that is incoming is how do you avoid leaving something on the side forever. That’s it.
If you don’t decide on an order and you just pick stuff randomly, then you may well end up with one item that is never picked. It’s just pushed on the side and it is never processed.
And this is bad because then this item will ultimately decay. Whether you call a product perishable or not, all products perish given enough time.
Thus, you just want a process that at least guarantees that ultimately anything that has been flowing your way is going to be picked and processed and shipped somewhere down the line.
Thus, if you say first in, first out, for example, that’s just a basic way to ensure that everything will be picked. Is it a good policy? I mean, it depends, but it certainly gives you this one property.
And thus, you can say that it is certainly a solution to this problem. Is it a good solution? That’s a completely different question.
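(For illustration, a minimal Python sketch of FIFO processing with a queue. The items are hypothetical placeholders; the only property the rule buys you is the one just described, namely that nothing that arrives is left aside forever.)

```python
# Minimal sketch of FIFO processing; item names are hypothetical placeholders.
from collections import deque

inbound = deque()                      # items join at the back as they arrive
inbound.extend(["engine_A", "engine_B", "engine_C"])

while inbound:
    item = inbound.popleft()           # first in, first out
    print("processing", item)          # stand-in for the actual repair or shipping step
```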
Conor Doherty: Well, that is exactly the next question because you didn’t use the term optimality or you didn’t say optimal decision. Of course, in those situations like the one you just described, you’re in a repair shop, two engines come in, or you come in in the morning and there are many engines, and you have to decide what am I going to repair first, what schedule am I going to go with, and you’re trying to arrive at what at least looks like a good or optimal decision.
So my question there is what is the upper bound, in your opinion, what is the upper bound on the optimality that can be obtained through these kinds of heuristics? Take, for example, FIFO.
Joannes Vermorel: I don’t think that it’s the correct way to even frame the problem. I think we have to step back and see that when we think in terms of heuristics, there are in fact two radically different perspectives to think about them, and we need to pause and consider that.
First, there are heuristics as, let’s say, economists think about them. For example, I need to grab this glass of water. I can reach out and take it.
A physicist could say, “Oh, there is like a million calculations that are involved to compute the exact trajectory of my hand, of every one of my fingers, of the exact mass, of the exact force,” and that would be all the calculations that I would need to have if I wanted to have robots making a perfect calculation on how to move a robotic arm and take the glass.
But it turns out that a human being doesn’t work like that. Instead, we’re using tons of heuristics such as, you know, dead reckoning. “I’m too much to the right, oh, steer to the left,” and “Does the pressure feel enough? Oh no, the glass is slipping, press more.”
So you have plenty of heuristics that will let you achieve a very complex task with underlying processing that is much more basic. Fundamentally, when you’re grabbing a glass of water, your brain is not solving differential equations in real time. It is just a whole bunch of heuristics that work beautifully, and then you can successfully grab your glass of water.
And it turns out that for tons of stuff that happens in the real world, nature, the universe, whatever, gave us beautiful solutions for seemingly incredibly complicated problems, solutions that just work.
By the way, standing upright on two legs also requires all sorts of heuristics. When people try to engineer a robot that walks on two legs, they realize that it’s actually very, very difficult because we don’t know those heuristics.
Now, this is not the situation in supply chains. Here, I’m describing heuristics for tasks that have represented challenges for the last half a billion years for any living creature that needs to move.
Conor Doherty: They’re also unconscious. I’m talking about decisions.
Joannes Vermorel: Yes, I mean, grabbing the glass of water is a decision. Moving your hand is a decision. But here, what we’re talking about is discrete numerical decisions. That is something that does not exist in nature.
In nature, you do not think in terms of discrete numerical decisions such as how many products do I need to supply tomorrow, the day after tomorrow, etc. Those are discrete numerical decisions that are completely unlike whatever you find in nature.
So the first point that I make is that if we adopt this implicit perspective that comes from, let’s say, the natural world about heuristics, we can say that humans are just gifted with the capacity to apply very simple solutions to complex problems that work beautifully.
And my counterpoint is that this does not work for man-made situations such as supply chain where we are talking about solving discrete numerical problems. These classes of problems are completely unlike what we face in nature, and we cannot assume that we have any kind of innate sense of what will work there.
Evolution did not gift us with the capacity to assess the optimal replenishment schedule for a complex supply chain network. It is a fantastical claim to say that evolution gave us anything with regard to such a problem.
So what I’m saying here is that we need to go to a different perspective on heuristics, the one adopted by computer scientists. In computer science, when we have a problem, if we have a solution that is provably correct with nice properties for this problem, we call it an algorithm.
That’s what an algorithm is. An algorithm in computer science is a numerical recipe where we have formal elements of proof.
For example, sorting a list. You have an unordered list of elements, you want to sort them from the smallest to the largest. You have many ways to sort a list, but some ways will give you solutions that require a minimum number of steps and a minimum amount of memory to be able to sort all those numbers.
So that’s an algorithm for you. An algorithm is a solution that is provably correct and on top of being correct comes also with extra properties that are nicely behaved according to the problem at hand.
A heuristic, again from a computer science perspective, is a numerical recipe that works very well in practice even if you do not know formally why it works or why it works so well.
And it turns out that there are classes of solutions that are like hidden jewels that work beautifully, that are extremely simple, and yet nobody really knows why.
So, an example applied to supply chain? Yes, a lot of them apply to supply chain. There is, for example, stochastic gradient descent. It was a process that was discovered. It is very simple conceptually. You can write it down in like four lines. It was discovered most likely in the 50s, although it’s a little bit unclear. The idea is so simple that it had probably been invented multiple times.
And yet, generally speaking, the community didn’t really pay any attention to stochastic gradient descent until about 15 years ago. Why? Because people had not really noticed how well it was performing in practice when used.
Conor Doherty: On what problems?
Joannes Vermorel: All the learning problems, all the optimization problems, and tons of other situations as well. So it is a semi-universal heuristic that is working on an extremely wide array of situations.
This is even baffling, the sheer spectrum in itself of applicability of the stochastic gradient descent is baffling. And yet, we don’t really have any mathematical proof to explain why it works so well. It just does.
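(For illustration, a minimal Python sketch of stochastic gradient descent, roughly the “four lines” Joannes mentions: fit the slope of y = w·x by nudging w against one randomly drawn observation at a time. The data, learning rate, and iteration count are hypothetical.)

```python
# Minimal sketch of stochastic gradient descent on a toy one-parameter model.
import random

data = [(x, 3.0 * x) for x in range(1, 10)]   # noise-free toy data, true slope is 3
w, lr = 0.0, 0.01                             # initial guess and learning rate

for _ in range(10_000):
    x, y = random.choice(data)                # draw one observation at random
    grad = 2 * (w * x - y) * x                # gradient of the squared error on it
    w -= lr * grad                            # take a small step against the gradient

print(round(w, 2))                            # ends up very close to 3.0
```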
So that’s very interesting. And here you have to think of, when computer scientists speak about heuristics, they are referring to heuristics as hidden jewels. And by the way, if we have to circle back to your initial question, heuristics by definition is something, at least the precise definition as given by computer scientists, it is a numerical solution where you have no proof.
So a heuristic, by definition, you don’t know how far you are from the optimal. That’s pretty much a given. If you knew, then by definition it would be an algorithm. Because an algorithm is literally when you can prove the correctness plus extra behavior, your numerical recipe graduates to become what is called an algorithm.
Conor Doherty: Algorithm, okay, so I’m going to try and summarize all of that and let me know where I may miss the mark. But again, as I understood all of that, the problem with traditional heuristics like FIFO, for example, the problem with that is when people try to apply that, it’s a very hasty solution to a problem that the human mind can’t possibly comprehend.
Joannes Vermorel: No, I would say the mistake that is made with heuristics, when they are approached from a layman’s perspective rather than a computer scientist’s perspective, is to attribute some degree of goodness to your numerical recipe. That’s why I prefer to use the term numerical recipe, which is completely neutral. It can be complete crap, it can be excellent, it just is. It is just a series of calculations that give you a result. We do not presume that it is good for anything; it just does a calculation.
The problem when people use the term heuristics is that they will come up with something that is very arbitrary and apply this qualifier as if it were a given that the numerical recipe is any good. Again, in the natural world, those heuristics, those instinctive ways to fetch an object, for example, are very good. And how do we know they are very good? Well, because when we try to engineer a robot that does the same thing, it fails miserably, and it takes immense amounts of engineering effort to even come close to what we can do instinctively.
But that creates a sort of bias that makes people think, okay, I can, for example, say, “Let’s say that the max in my min-max inventory policy is three months’ worth of demand.” Why am I calling that a heuristic? Is this thing any good? It can be completely nonsensical. I do not know. It’s not enough that I have some intuition; where does this intuition come from? Usually it comes from nothing. And that’s where I think the mistake is.
Due to the fact that we have other communities, like computer science communities, where heuristic is used as a term to refer to something that is surprisingly good, you know, we have a loose positive attribution, you know, a sort of halo effect that would grant more value to those numerical recipes than what they really deserve.
Conor Doherty: But a retailer would simply respond to that if they heard what you just said and say, “Well, I perform an ABC analysis. I know where the vast majority of my sales come from. I keep a certain high service level of those SKUs in stock and I make money. It doesn’t need to be more sophisticated than that and it works because I’m still in business. I’m making money and I’m making more money than I did last year.”
Joannes Vermorel: Yes, and you can have a store that is leaking water and you’re making money. Thus, if you had more stores leaking water, maybe you would make more money. You see, again, that’s the problem. Supply chain is only one ingredient in a big picture. So, the mistake is to think that it’s not because you are making money that every single thing that you do is making sense or contributes positively to the fact that you’re profitable.
Even companies, for example, like Apple, are known for keeping most of their employees in the dark when it comes to the future of the company. That’s one of the well-known traits of Apple. When it comes to future product releases, everybody is kept in the dark and they will even go so far as leaking internally false roadmaps to various teams so that if a roadmap is leaked, you will know who got the false roadmap. Okay, is it truly an aspect that is improving the profitability of Apple? Maybe, maybe not. Is it something that you want to emulate for another business to make this other business more profitable? Maybe not.
So, you see, I’m saying that if you tell me, “I use ABC analysis, my business is profitable,” the only conclusion is that ABC is just not so bad that it would bring your company to bankruptcy. But that’s the only thing that you can know. The only thing that you can say about an ABC analysis.
Conor Doherty: You could also say, going back to what you said about looking for the optimum, the best possible point, that it’s nowhere near that, that it leaves money on the table. But it sounds like you’re taking a binary position there, as if doing it is either 100% stupid or 100% good.
Joannes Vermorel: But here you see that’s where, again, the vision of a computer scientist versus a layman really diverges. In computer science, people acknowledge that a heuristic, I mean a numerical recipe, acquires the capacity to be called a heuristic only if it exhibits some kind of empirical goodness. You see, so it means that not every numerical recipe that I can invent is a heuristic. In order to qualify for heuristic, it has to be surprisingly good at doing something.
Conor Doherty: Which some people might contend.
Joannes Vermorel: And this surprising goodness requires a metric. It requires a measurement.
You see, for the vast majority of, let’s say, ABC analyses, there is no metric that qualifies anything about them. It is just about assigning a letter to every product: A, B, C. Then an extension of that is to have a uniform inventory policy for each class. But this uniform inventory policy might be something completely different from a service level. Your uniform policy per class might be: for class A, I keep three months’ worth of inventory in stock; for class B, two months; for class C, one month. You know, that works too.
So service levels are not necessarily an integral part of ABC analysis; the availability of these goods simply corresponds to their perceived importance. ABC analysis is just about attributing a class of importance to every product, and yes, the way you do that is by weighting the sales, but that’s it. So again, what is the problem that you’re trying to solve? That’s why I say ABC analysis is a numerical recipe, not a heuristic: you don’t know what problem you’re solving, and you don’t have any reference for what is optimal.
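(For illustration, a minimal Python sketch of the kind of ABC recipe just described: rank products by sales, cut the ranking into classes by cumulative share, then attach an arbitrary months-of-cover rule per class. The figures and thresholds are hypothetical, and, as Joannes points out, nothing here measures whether the resulting policy is any good.)

```python
# Minimal sketch of an ABC classification with an arbitrary per-class stock rule.
sales = {"P1": 500, "P2": 300, "P3": 120, "P4": 50, "P5": 30}   # hypothetical sales
months_of_cover = {"A": 3, "B": 2, "C": 1}                      # the arbitrary rule

total = sum(sales.values())
cumulative = 0.0
for product, amount in sorted(sales.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += amount / total
    abc = "A" if cumulative <= 0.80 else ("B" if cumulative <= 0.95 else "C")
    print(product, abc, f"{months_of_cover[abc]} months of cover")
```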
Conor Doherty: So, yep, keep going.
Joannes Vermorel: Again, that’s the problem. We need to separate heuristics versus just arbitrary numerical recipes. An arbitrary numerical recipe can be completely unmotivated. I just compute that. Why? Because I can compute it. So I just do the calculation, that’s it.
If you want to have a heuristic, you need to have, let’s say, a target that explains or a way to assess the goodness of it. Again, another example would be if I look for a heuristic from computer science. Let’s say I’m using XOR shift to generate pseudo-random numbers. Very good. There are metrics that will tell me the quality of a sequence of numbers to be considered as random. Plenty of metrics for that.
Thus, if I use a heuristic like XOR shift, I can then assess whether it is any good at generating what is understood as random numbers, according to the metrics that assess the randomness of a newly generated set of numbers. So you see, I have a metric, I have a target, and I can say whether it is any good or not, and thus whether it is a heuristic or not. If it is, then you will say, okay, it’s a heuristic. But if you have no idea what you’re doing, then I think it is a mistake to call it a heuristic, because you don’t know if it’s any good.
You just have made up a numerical recipe and you call it a heuristic.
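(For contrast, a minimal Python sketch of a 32-bit xorshift generator, the kind of computer-science heuristic Joannes refers to: a few lines with no formal proof of quality, judged instead by how the output scores on statistical randomness tests. The shift triple and seed follow one commonly cited parameterization from Marsaglia’s xorshift paper; treat the exact constants as illustrative.)

```python
# Minimal sketch of a 32-bit xorshift pseudo-random number generator.
def xorshift32(seed: int):
    state = seed & 0xFFFFFFFF
    while True:
        state ^= (state << 13) & 0xFFFFFFFF   # shift-and-xor steps; the triple
        state ^= state >> 17                  # (13, 17, 5) is one commonly cited
        state ^= (state << 5) & 0xFFFFFFFF    # choice, kept within 32 bits
        yield state

gen = xorshift32(2463534242)                  # seed value taken from the literature
print([next(gen) for _ in range(3)])          # a few pseudo-random 32-bit integers
```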
Conor Doherty: So, just to make it very concrete: when people conduct an ABC analysis and then arrive at decisions on the basis of it, for example keeping three months of inventory for the A class or setting service levels, whatever decisions are taken after that, and then they see positive results, that’s a logical fallacy. They’re ascribing causality to previous actions. And assessing the goodness, as you said, how can you do that if the metric is not clear?
Joannes Vermorel: You just can’t. And again, I think that’s the thing with what people call heuristics. I prefer to call them with a neutral term, numerical recipe, because in fact, they have not even tried. You see, the thing is that very frequently there has been not even an attempt at qualifying whether this was any good or not, quantifying if it was good in any shape or form.
And there are plenty of examples like that. You can have, for example, businesses that decide that their prices will be round numbers. Some will prefer prices that end in 99, some will prefer prices that end in 95. You can have a policy that adjusts your prices, rounding them to just below, to the 99 or the 95 or the 97, or rounding them up to the next round number.
The vast, vast, vast majority of businesses doing that have no clue which one of those options is any better for them, and they still pick one.
Conor Doherty: So they are guessing causality essentially.
Joannes Vermorel: Yes. And again, I’m not challenging that sometimes, you know, taking a policy in a way that is completely arbitrary and just sticking to it for the sake of simplicity is okay. But then you should not ascribe to this arbitrary choice your success. That’s just what I’m saying.
Conor Doherty: Well, again, that’s again when we talk, certainly in the economics perspective, when you talk about heuristics, most people, yeah, they’re trying to simplify a problem and arrive at a decision. And also then the way they view that result is also a very simplified version. So, for example, “I did a thing, I made all the prices round numbers or 99, and sales went up or sales went down. Therefore, post hoc ergo propter hoc, what I did earlier caused it.” And of course, that’s impossible. The problem is that you did that while there were a hundred other people at the same time trying to disentangle causality, and it’s very, very difficult.
Joannes Vermorel: Yes, it is very difficult, especially in supply chain, where you have a system in which everything is kind of interconnected. And my point is that heuristics, if correctly understood, can be absolutely fantastic. By the way, they can literally be a way to outcompete your peers, if you have what computer scientists call a heuristic, something that is like a hidden gem.
The difference between an algorithm and a heuristic is that an algorithm is something where you have a numerical recipe. You can read the numerical recipe and then as a mathematician, you can prove its properties. It’s fantastic. It is very cheap. You see, the thing with algorithms, algorithms are incredibly cheap. You do not need to make any experiment in the real world to prove that your algorithm is nicely behaved. That is fantastic. So that means that you can have a mathematician working in his office and bam, you have your nicely behaved algorithm that brings value to your company.
A heuristic, well, the only way to discover a heuristic is to do experiments. It is an empirical assessment, and this is very difficult. That’s why stochastic gradient descent, for example, although it was known to thousands of people, remained completely ignored for literally decades, just because nobody had truly realized that in practice it was working beautifully. That’s the nature of a heuristic: it might exist, but until people have actually tested the numerical recipe and seen that it works beautifully on certain classes of problems, they will not identify it as a valuable heuristic.
Conor Doherty: It occurs to me though, with some of the statements that you’ve made, so for example, again, just to summarize before I get to the question, that I give the example of “I did a thing, therefore I presume that what I did caused a spike in sales or possibly caused a loss in sales.” And you said, “Well, but a hundred other people did a hundred things or a thousand things, whatever.” It occurs to me that almost sounds like you’re setting an unfalsifiable standard because even if you were to use mathematical heuristics, how would you ever know that what you chose to do or the tools that you used made a positive difference once you take it out of theory and you put it into the real interconnected network of supply chain decision making?
Joannes Vermorel: No, again, you can do experiments and you can validate the goodness of any numerical recipes that you have. I’m not saying that you can’t. I’m just saying that most companies do not even try.
Conor Doherty: Well, how would a company try that? What would it look like?
Joannes Vermorel: So, but that’s exactly what we have in this series of lectures about experimental optimization. There is a whole, I have a one hour and a half lecture on that, and it’s called experimental optimization. So you cannot, the gist of it is that you do not know what you’re optimizing for, and the first step will be to discover what you’re trying to optimize for. And that’s very different from the classic mathematical optimization perspective where your target is already a given.
But what I’m saying is that if I want to go back to heuristics, fundamentally, there is no reason to think that the best numerical recipe is necessarily something where you can have a mathematical proof. The fact that a mathematical proof exists has nothing to do with the fact that your numerical recipe is good or bad. You see, fundamentally, these are two completely different perspectives. It just happens that if you can have a mathematical proof, at least you know something that’s nice. And in certain conditions, you can know a lot and you would say, “Oh, that’s very interesting because I know so much that at least compared to all the other numerical recipes where I know nothing, I prefer to use one where I have elements of proof. It is better than nothing.”
But then, if in practice, with a proper experimental setup as covered in this lecture about experimental optimization, you have an empirical demonstration that it is superior, then a mathematical criterion cannot trump the feedback from the real world. So if I have two methods, one where I have many mathematical proofs and another where I have none, but the second gives me better results in practice, then I should prefer the second one, even if it doesn’t come with nice mathematical properties.
And what makes heuristics very interesting, at least from the computer science perspective, is that the things that qualify as heuristics can sometimes operate with a tiny, tiny fraction of the computing resources that you need for, I would say, more provable solutions. For example, stochastic gradient descent is fantastically efficient at optimizing all sorts of problems. And when I say fantastically efficient, I mean that to achieve a comparable level of optimization with other methods, you would need thousands, millions, billions of times more computing resources.
So it is very, very efficient, but you don’t have a formal proof for it.
Conor Doherty: Understood. And well, again, if you’re talking there about the allocation of resources and the return on investment from resources, FIFO, oh, I just applied it in my head, it costs zero. What is the differential in terms of cost with the arrangement you just described there?
Joannes Vermorel: I would say there is no, you cannot bypass thinking carefully about the situation at hand. Is FIFO going to make a difference? It varies enormously from one business to another. For some businesses, it is completely inconsequential. You don’t care. It has no impact whatsoever. For some other businesses, it is massively consequential.
If you are indeed an MRO and you want to repair aircraft engines, the order in which you pick the engines will be exceedingly consequential in whether your operations are running smoothly or not. If we are talking of just organizing a transit for a logistical platform and you want to do it FIFO, it is inconsequential because at the end of each day, you will clear your platform. You don’t want anything left on the platform when you do your transfers and whatnot. So the ordering is pretty much inconsequential in this situation.
Conor Doherty: Well, I really like the example you gave there. Again, that was if you’re an MRO, you’re working on engines, you have to choose which engines to work on. And I want to come back to something you said earlier, which was typically, what did you say, people are not optimizing what they think they’re optimizing or they’re not optimizing the right thing. So in the scenario that you just described, when people apply FIFO, they think, “Well, I’m getting engines out, I’m optimizing the repair of engines.” Are they at least thinking about the problem correctly, even if they’re not executing a heuristic well?
Joannes Vermorel: No, that’s another problem. You see, usually the numerical recipe, and I’m using not the term heuristic but the numerical recipe, is a placeholder for the problem and the solution. You know, we just do that. The situation is not framed as what is the problem and what are the class of possible solutions and what would be the various qualities of those varying solutions. You just pick a way to do it and that’s it. And then whether it is any good, maybe, maybe not, it just is.
Conor Doherty: Not “it just is.” I like what you said about confusing the way they’re trying to fix things with the problem and the solution. Can you expand on that again?
Joannes Vermorel: It is orders of magnitude simpler to think of a solution instead of thinking about a problem. So when people want to think about quality of service in a store, it is very difficult to think about what that quality of service means. Quality of service would basically mean getting into the heads of your customers, trying to see your store as they see it, and assessing whether they’re going to be satisfied or not, considering all their loose plans and desires and whatnot, all of which are constantly changing. So that’s the problem, and it is very difficult.
It is much easier to focus on the solution, which is five units for this product, five units for this product, two units for this product. You see, I’m just giving you a solution by saying each product has how many units and bam, I’m done. So inventing, conjuring a solution is usually vastly simpler as opposed to thinking about the problem. But what you’ve not addressed when you do that is that you don’t know the goodness of your solution. You just have a solution, and if this solution is kind of working, maybe you will say it’s a good solution, but you don’t know.
And maybe your store is just working very nicely not because you have the right inventory levels but because someone else in your company has managed to negotiate fantastic prices that happen to be lower. Thus, even if your stock levels are kind of garbage, your stores are still quite competitive. What I’m saying is that there is no such thing as something self-evident in supply chain, not really, not in these games where you are dealing with discrete optimization problems.
And I think the first step is to acknowledge that what you have, until proven otherwise, are not heuristics, that is, recipes that have been assessed as being good, surprisingly good. What you have are numerical recipes. Are they good? Are they bad? You don’t know.
Conor Doherty: Because I’ve had a very similar conversation about this recently with Simon Schott at Lokad, and we were talking about scheduling optimization. And again, he also used the term self-evident. One of the problems with certain heuristics or numerical recipes, whatever term you want to use, like FIFO, is it ignores the immediate externalities or because it’s just beyond the human mind.
So for example, three engines: you arrive Monday morning, there are three engines. Which one to repair? Which was the first one in? I can’t compute all the interrelated steps and the interdependencies: working on this requires 100 parts, this requires 68, this requires 67. I need 20 tools for that, 10 of which I need on this as well. This needs to go over there when it’s done, that needs to go over there when it’s done. Joannes is sick, he’s not in today, so he can’t do step 20 of 30. Conor is doing an interview, he’s not available to complete step 99 of 100. There are all of these interdependencies, and they’re not self-evident to the human mind. Thus, as opposed to total inaction, you just fall back on which came in first.
And it’s not that it’s wrong, it’s in the absence, and this is Simon’s words, in the absence of something superior, you just use something that at least makes things work to a degree. And it seems like, having listened to you, you’ve used a much more mathematical way to describe that. But would that still align with what your sentiments are?
Joannes Vermorel: Yes, but again, the challenge is that you pick a solution and you have no clue whether it is good or not. And very frequently, you cannot let yourself be guided by your intuition. That’s the thing: in the natural world, the heuristics that are given to us, like how I can actually fetch and grab an object, are good. But there is no translation of these nature-given gifts into the man-made world of supply chain decisions. They are just completely different things.
There was, for example, a very interesting paper that was published. People did benchmark boarding strategies for airplanes. And, you know, something like a decade ago, companies started to say, “Oh, we want to speed up the boarding, and so we will call passengers of the first rows first, and then second rows after, and then other rows, etc.” And people said, “Oh, it’s logic, it will speed up the boarding process.” It turned out that people, some researchers, did actual experiments. They said, “Okay, if we slice passengers in three groups and we call them in orders from rows 1 to 10, then 11 to 20, and 21 to 30, versus alternative policies, do we have one that is working better?” And the interesting thing they showed was that not having any policy, so letting people randomly fill the plane, was actually faster. It is not intuitive, but it was an empirical result.
So again, what I’m saying is that the goodness is, for those complex phenomena that are very man-made, because you see, fetching my glass is very complex in the sense that there are so many moving variables. I have five fingers, and then I have many joints, so it’s like a problem with something like probably 50 degrees of freedom if I’m just doing this simple movement to fetch my glass. So it is very complex, but our intuition works. But there are other classes of problems where our intuition is not naturally working, and I say in supply chain, it’s mostly games of dealing with discrete problems, dealing with randomness. Our mind is not very good. Our mind is typically very good at dealing with patterns, not very good at dealing with randomness. And thus, I would say do not trust your intuition too much. It may be very misleading.
And that’s very interesting because nowadays, despite us having now evidence that letting people board a plane randomly is faster, nowadays most companies have a policy of calling people in the order, although it has now been proven that it is actually slower.
Conor Doherty: True, but again, that does not by itself demonstrate the point you mentioned earlier, because it depends on what you’re optimizing for. If you’re optimizing for efficiency of boarding, you’re correct. If you’re optimizing for profitability, you want to sell seats or priority access: say zone 1 covers rows 1 to 9 and costs $3,000, zones 10 to 15 cost $1,000, and we will fill the plane at that rate, and I’m optimizing for profit.
Joannes Vermorel: But that even applies to planes where all the seats are at the same price. You even have these policies applied in low-cost airlines where you don’t have a business class, there is no first class, and everybody is pretty much charged at the same price, no matter which seat they have.
Conor Doherty: So then there’s no need for the advanced boarding.
Joannes Vermorel: But they still do it.
Conor Doherty: So they shouldn’t?
Joannes Vermorel: Again, what I’m saying is that they conjured in their minds a numerical recipe, which was, “We’re going to call people in slices because it seems that if we impose more order, it will work more efficiently.” And then people did the actual experiments and concluded that no, it actually degrades performance compared to what you were doing previously, which was to not even attempt to solve the problem and just let people sort themselves out when boarding the plane.
You see, that is the thing. Again, what you think to be, that’s the difference. It’s very easy to conjure a numerical recipe, but if you have no clue whether it is any good, you should not assume just because it was the first thing that came into your mind that it’s going to be good. And you should not assume just because it looks plausible that it’s going to be any good.
Conor Doherty: Well, you can also expand that idea in terms of arbitrarily setting any KPI and assuming that that makes a difference.
Joannes Vermorel: Yes, and again, people, there is this psychological bias that people tend to fall in love with their own ideas. Like, “We need higher quality of service, so we need to push the service levels from 97% to 98%,” and then it becomes a company-wide policy. Does it make sense? Maybe, maybe not. I told you about this idea of min and max. We need to put three months’ worth of inventory, and then it becomes company policy. It is very easy to conjure a numerical recipe because all you have is to take the variables that are in front of you and do something with them, and you will compute something.
Here it’s a mistake done as called, I would say, naive rationalism. It’s not because you compute something with the variables that happen to be in front of you that this calculation is any correct. It might be correct in the numerical sense that you’re not doing a mistake in the addition and multiplications, but the formula that you’ve just conjured doesn’t really reflect anything.
Conor Doherty: But this naturally rubs against, or conflicts with, people’s natural tendency towards the fundamental attribution bias. They just assume, “I have agency, I’ve done a thing, I set a policy, I set a KPI, I set a rule, and we made money. Therefore, not only am I great, but I am responsible for what has happened.”
Joannes Vermorel: Yes, but again, you go for “we made money,” but the reality is that most companies, especially most supply chain departments, have zero financial KPIs. You see, very frequently what will happen is that you just check whether you are compliant with the rules that you’ve made up for yourself, and that’s it. You see here, you’re saying, “We are profitable,” but again, most supply chain divisions are just checking whether they are compliant with their own percentages.
So, for example, they would say, “Oh, we need 97% service levels,” and then they will do things, and at the end of the day, they would say, “Oh, we are very good, look, we have achieved the 97% level. We’ve lost a lot of money, but we have 97% service.” Whether you earned money or lost money is treated as irrelevant; you’re counting percentages, not dollars. I mean, very few companies that I know, beyond our clients, are actually considering any kind of financial metrics for their supply chain. It’s usually completely absent. They will think in terms of inventory turns, they will think in terms of service levels, they will differentiate those service levels by ABC classes and whatnot.
But you see, it’s not because you set for yourself an arbitrary target of service level and then you declare victory when you reach these arbitrary targets. You can assume that being compliant with your own target is somehow correlated with the profitability of the company, but that’s a very bold assumption.
Conor Doherty: Well, again, this demonstrates an overarching point which has been made time and time again in various forms: people’s understandable tendency to take seriously complex problems, whether scheduling, how much to order, or where to send, and to decompose them into a form that fits into the human mind. So, for example, “Well, if I just go from 95 to 97% service level, bish bash bosh, problem solved.” Once I hit that target, well, it’s self-fulfilling. But of course, that ignores a lot of the interrelations and interdependencies of the decision-making process that we described earlier.
Joannes Vermorel: Yes, but as I said, going for a solution is typically much easier than going for the problem. If we look, for example, at the maintenance of an aircraft, the reality is that if one part is missing during a maintenance operation, the aircraft is going to be grounded. That’s relatively obvious, unless you get the part at the last minute. So the solution becomes, “I just want to have a nonzero stock level of serviceable parts for everything,” and that’s going to be your simple answer. You see, as long as I can maintain a nonzero stock level of serviceable parts at any point in time, I’m good.
Thus, this is my solution. The problem is that it completely bypasses the fact that the solution you propose is way too expensive, because it would require way too much stock, and so it’s not really a feasible solution. So that’s where you need to go back to the numerical recipe: I need to characterize this numerical recipe and make sure it is formalized properly, so that I can assess its goodness and then decide whether it’s an algorithm, a heuristic, or something else.
My point is merely that it is dangerous to assume that something that was done, that was an arbitrary numerical policy, that this thing has some inherent properties just because it was done that way before. The only thing that you can say is that it was not so bad that it brought the company to bankruptcy, but this is a very low bar. You can have things that are very, very bad and still not be sufficient to kill the company, especially if your competitors are also doing things that are very, very inefficient.
Conor Doherty: You’ve mentioned actually, in one of our conversations before, just again, talking about the implications of bringing into reality arbitrary policies or arbitrary KPIs. So, for example, the amount of money that it takes to go from 95 to 97% service level is approximately an order of magnitude more than it takes to go from 85 to 87% service level. So you say, “Oh, I just want to increase by 2%,” but there’s a law of diminishing returns.
Joannes Vermorel: Yes.
Conor Doherty: And the costs propagate exponentially once you get to a certain level. And again, people will look and say, “I just want to go 2%,” and it’s not self-evident how all of this ripples out.
Joannes Vermorel: The human mind is not a computer. As I told you, the human mind is not very good with randomness, for example, but it is not very good with geometric growth either. Things that compound exponentially, the human mind just does not really comprehend. We don’t have the mechanism.
Yes, if as a mathematician I take the time, take a pen and paper, and do my calculation, I will understand it. But I don’t have an instinct; nobody has an instinctive intuition of the difference between a thousand, a million, a billion, a trillion. We don’t have the plumbing to feel these sorts of things, just like we don’t have the plumbing inside our brain to tell the difference between Gaussian noise and any kind of non-Gaussian alternative. If I show you all sorts of randomness, Gaussian or otherwise, unless you’ve been specifically trained to identify them, most people would say, “Oh, it seems very random.” We don’t have, I would say, an instinctive perception of the class of statistical noise, but mathematicians have uncovered plenty of different types of noise, of random behavior.
Conor Doherty: On that point, you mentioned earlier scheduling for repairs, for example in aerospace. Well, imagine someone said, “Well, you know, we’ve got really, really smart people, and anytime we need to regenerate a sequence of actions for the repair of an engine, we have 10 really smart people. They sit down and they figure it out in-house by themselves.” Of course, it’s unreasonable. As Simon previously put it, it’s unreasonable to even expect a hundred super smart people with a pen and paper or an Excel spreadsheet to outperform, at scale and repeatedly, all the calculations required to arrive at the optimal new schedule, given all the interdependencies, the number of parts, the skills required, and the time it takes.
And you have to factor in that, as you said before, in the case of MRO, you don’t have the luxury of time. So even if it were possible, and we’ll just grant for the sake of discussion that it is, even though it isn’t, it would take a practically infinite amount of time versus an algorithm that can do it in a few minutes. And there’s a dollar cost to all of that. The point that I come back to, and again this is how I understand it and try to explain it, is that it’s not a matter of smart or dumb. It’s just that there are externalities that are, by definition, invisible to the human eye.
Joannes Vermorel: Unfortunately, we also have to take into account the fact that most software vendors are utterly incompetent. That’s another factor. The argument would be: well, if 10 people sit down and find a solution and it doesn’t work out, because, for example, a part is missing or whatnot, then in the next minute they will go to an alternative. They will keep exploring low-quality solutions until there is one that fits, a little bit like a rat going through a maze: oh, a wall, okay, change direction; another wall, okay, another direction.
The problem with many software implementations is that the software doesn’t even have any kind of escape hatch if you’re hitting a wall. So if you’re stuck, you’re just stuck with something nonsensical, and that’s it. And many companies had that experience. That was part of the promise of operational research in the 50s, and a lot of the initial hopes did not turn into great achievements precisely because the software vendors were somewhat incompetent. The supposedly optimal, supposedly superior software-driven solutions were in practice so poorly implemented that they were completely impractical.
But we have to separate a little bit whether the problem was “this cannot be approached by computers, and the human mind is doing some kind of voodoo that is impossible to replicate with a computer yet” versus “this problem was approached by a completely incompetent software vendor, and the solution they delivered was terrible.”
Conor Doherty: But on that point, how can non-specialists—that’s the term, I’m a non-specialist—how can a non-specialist know if what they are listening to or what the vendor is telling them is incompetence or dishonesty? Or how can you verify any of these claims?
Joannes Vermorel: That’s a huge problem. So here, there is another lecture for that. It’s adversarial market research, but that’s another hour of explanation on how you can actually detect incompetent vendors.
Conor Doherty: Any heuristics, any rules of thumb off the top of your head?
Joannes Vermorel: Yeah, I mean, there’s actually a heuristic given here, but it’s one that is proven. So, remember, it’s a simple solution that surprisingly and empirically works well, better than you would expect. And so the heuristic given in adversarial market research is: how do you know? You ask, when you have a vendor, you ask the competitors of this vendor what those competitors think of this vendor. And this is adversarial.
So if you want to have a correct opinion on a vendor, you do not ask the vendor because the vendor is just going to [ __ ] you. You ask its competitors what they think of this guy. And then you also do the symmetric: you ask all the vendors what they think of the other vendors. It’s called an adversarial assessment and it proved to be very, very robust. Warren Buffett made his fortune based on this very simple principle. And the idea was that if everybody agrees—and Buffett has this one question which was: “If you had a silver bullet to eliminate one of your competitors magically, who would be your target of your silver bullet?”
And that was a very interesting question because, if all the competitors end up designating the same company, you end up with a situation where this company is obviously the one that threatens all the others. Those vendors are the most knowledgeable about the trade, so the most competent actor is the one being pointed at by all its competitors. So that’s a heuristic: until you’ve tested this adversarial market research, you don’t realize how well it works. It’s not even obvious that it works at all, but it has been tested and it does work beautifully, as demonstrated, among others, by the success of Berkshire Hathaway.
Conor Doherty: Well, Joannes, I have no further questions, but in terms of closing thoughts, like takeaways for people from today, because we’ve covered a lot of ground. But in terms of heuristics in supply chain, what would be your executive summary for people?
Joannes Vermorel: What you do are most likely just numerical recipes, arbitrary numerical recipes. Reserve the term heuristic for something that is a hidden gem, something that is simple and works beautifully, but you have empirical evidence that it works. Not just “I do it and the company didn’t go bankrupt, so it works.” That is too much of a low bar. So reserve that term.
If you identify such a recipe, one that works way beyond what would reasonably be expected from such a simple numerical recipe, then treasure it. It is extremely valuable. But again, this value must be rooted in a real-world assessment expressed in dollars or euros, not just your gut feeling about the value of this numerical recipe.
Conor Doherty: Well, Joannes, thank you very much. I think we’ve solved that problem, another one in the bank. Thank you very much for your time and thank you very much for watching. We’ll see you next time.