00:00:00 Series premise: reader’s questions, chapter three
00:04:55 Supply chain learning means learning to think
00:09:50 Popper’s falsifiability: science risks contradiction
00:14:45 Einstein versus Marxism: attitudes toward refutation
00:19:40 Why theories can’t be proven true
00:24:35 Safety stock idea attacked, not stock levels
00:29:30 Fashion end-of-season: service levels backfire
00:34:25 Thought experiments as cheap falsification filter
00:39:20 Kuhn’s incommensurability: decisions versus plans
00:44:15 Adversarial incentives demand meta-analysis evidence
00:49:10 Why this book avoids vendor-centric case studies
00:54:05 Occam’s razor: one metric, not 300
00:59:00 Most firm success isn’t supply-chain driven
01:03:55 Amazon as supply-chain-led success exception
01:08:50 Case-study falsification: search for negative results
01:13:45 Vulnerable theory, minimal math, invites refutation
01:18:43 Takeaway: mental model to bootstrap practice

Summary

Chapter 3 argues that supply chain can’t be learned like a phone book of algorithms and templates; it must be learned as disciplined thinking. Vermorel borrows Popper’s falsifiability: real knowledge risks being proven wrong, unlike theories patched to avoid contradiction. He uses safety stock as an example—useful outcomes don’t vindicate a bad concept, like a broken clock being right twice a day—and offers thought experiments (fashion end-of-season) to show contradictions. Case studies, he says, are mostly infomercials, evidenced by the near-absence of negative ones despite high project failure rates.

Extended Summary

Conor frames the conversation as a stand-in for the “ordinary” supply chain practitioner: someone picking up Joannes Vermorel’s book without prior allegiance to Lokad or its worldview. Chapter 3, “Epistemology,” is presented as the book’s attempt to fix a problem most supply chain literature simply ignores: it offers either endless algorithms (academia) or endless templates (consulting), as if stockpiling procedures were the same thing as understanding. Vermorel argues the opposite: to learn supply chain is to learn how to think about it, which requires deciding what counts as legitimate knowledge in the first place.

To make that case, he borrows from Karl Popper’s falsifiability principle. Popper’s point, illustrated through the contrast between Einstein-era physicists and Marxist theorists, is that genuine science exposes itself to disproof. Physicists propose theories and then actively seek experiments that could destroy them; Marxists, when contradicted, patch the theory so it becomes immune to contradiction. The result is not “better understanding,” but a protected belief system.

Vermorel applies this standard to supply chain concepts and claims that many “cornerstone” ideas survive mainly because they are not subjected to serious attempts at falsification. His flagship example is safety stock. He distinguishes between a stock level that happens to work and the concept used to justify it: a broken clock can be right twice a day, but that does not rehabilitate the clock. He then offers a thought experiment—end-of-season fashion retail—where maintaining high service levels via safety stock produces an obviously self-defeating outcome: stores clogged with winter inventory as summer approaches, followed by heavy discounting to clear obsolete goods. The lesson is not merely that safety stock has “exceptions,” but that adding exceptions endlessly is how bad theories evade reality.

Conor challenges whether thought experiments are a sufficient evidence standard in a confounded, adversarial domain like supply chain, and whether this becomes an impasse when practitioners see profits rising under mainstream methods. Vermorel replies that thought experiments are a cheap first filter: if a theory collapses under basic reasoning, it doesn’t deserve costly real-world trials. For harder questions, he expects messy, medicine-like evidence—meta-analyses over many independent studies—precisely because incentives distort reporting.

This leads to his attack on case studies: they function as infomercials. His falsifiable prediction is that, despite high failure rates in supply chain projects reported by auditors, vendor-published negative case studies are near-zero. Until the field routinely publishes large numbers of negative results, he argues, “success stories” should be treated as marketing, not knowledge.

The practical payoff of the first three chapters, he claims, is a mental model for triage: what’s relevant, what’s secondary, what to scrutinize, and what to dismiss—so practitioners stop memorizing phone books and start reasoning.

Full Transcript

Conor Doherty: Welcome back. This is episode three of a very special series where Joannes Vermorel and I discuss, chapter by chapter, his new book Introduction to Supply Chain. Now, for this series I adopt a very specific posture: that of somebody who doesn’t know Lokad, doesn’t know Joannes. I am but one of the 10 million or so practitioners in the world who might see this book, pick it up, start reading, and possibly have some questions.

Now, this is episode three. As I said, if you have not seen the first two episodes, I strongly encourage you to go back and watch them, because some of the things we discuss today will build on what was said before—naturally, because it’s a discussion of a book. And on that note, Joannes just fixed my mic.

Chapter 3: epistemology. Before we get into that—and we’ll get to what I think is possibly one of the foundational concepts of the entire book in a bit—but first, epistemology: what is the goal of this chapter?

Joannes Vermorel: The goal is to start learning: how do I even think about supply chains.

You see, we want to learn, but we need to think. You cannot just learn. It’s not like a phone book. It’s not just about having, “Here is the stuff that you need to know by heart,” and there is no understanding. So fundamentally, to learn about supply chain is to learn how to think about supply chain. So it is a thinking exercise.

Very well. It turned out that the quasi-totality of the literature just completely disregards, dismisses this super important problem. They just literally jump into, “Here is my—here is a phone book. Here is the stuff that you can record in your mind.” That would be academia with an infinite list of algorithms: you want this, there is this algorithm; you want this, there is this algorithm. Or that would be the consulting literature with, “You have an organization, here are the templates that you should follow,” and here is another template, and here is another template, et cetera.

And I’m saying, “Oh, hold on. You’re telling me that I should memorize a million algorithms and 20,000 templates, and that will be what it takes to become proficient in supply chain?” Obviously every single author has their own perspective, and they would say, “No, no, no, you don’t need the one million algorithms, you don’t need the 10,000 templates, you only need those 20 algorithms and these three templates,” and again no two authors agree on which ones.

So, for me, the first thing that I wanted to do is: okay, we need—and I know it’s very meta— we need to assess how we are going to even think about supply chain. And we will have to assess one question that is absolutely fundamental: what counts as supply chain knowledge, as valid, relevant, useful supply chain knowledge, and what does not.

Because if we cannot answer those questions, how are we supposed to proceed with the science of supply chain? We would just end up with an endless list of anecdotes, or even an endless list of falsehoods. And this is not an obvious problem. This is a very, very fundamental problem.

That’s why I say we need to tackle this epistemic problem, because my peers, I believe, didn’t treat it seriously.

Conor Doherty: What are some examples of the knowledge that you think qualifies as valid supply chain knowledge, and examples that are— I think you use the term—corrupt?

Joannes Vermorel: So, a valid piece of supply chain knowledge would be, for example, that your allocations should individually maximize the long-term rate of return for the company. This is a proposition I am making: every allocation should. Obviously there is a coordination problem between all those allocations.

Conor Doherty: Are you talking about the allocation of financial resources, or do you mean allocations in the economic sense?

Joannes Vermorel: In the economic sense—so that can be any resource: money, inventory, people, etc.

So what I’m saying is a principle; I’m putting it forward as a principle: every allocation must maximize the long-term rate of return for the company. Now, this is a statement. Is it true or is it false?

First thing: we have to assess whether this statement is even relevant for supply chain. That would be the first thing to assess. Does it belong to supply chain, or does it belong to something else, like for example general economics? Does it belong to sociology? Or is it something that is true, but not even within the boundary of what we want to call supply chain?

There is a problem of knowledge management. If we include everything in everything, then it’s just going to be a big mess. So we need to have a criterion just to say: is it within the boundary of what we want to have as supply chain, or not. That’s the first filter.

The second filter: okay, now that we admit that it is within the boundaries, we will have to explain why we admit it. It should not just be accidental: “Joannes says it’s within supply chain, so it’s supply chain.” We need to have a reasoning to clarify what is inside and what is outside.

And then, once we admit that it is inside, we need to have a mechanism to say: is it a good proposition? Is it a valid piece of really useful, powerful, profound, fundamental knowledge? Or is it false, or is it completely secondary?

So we need to have a mechanism that, once we are within the boundaries of supply chain, we need to have a mechanism to understand: okay, what’s up with this proposition? Should it be in the introduction of supply chain, or is it something that should be a secondary note in a distant annex that is only going to be recalled when it’s really fringe edge cases, etc.

So those problems might seem a little bit abstract, but it is fundamental. Again, it’s an exercise to think. We need to start being able to think about supply chain: thinking how we are even going to organize this knowledge. That means prioritization of the knowledge, that means also setting boundaries for ourselves so that we don’t get lost on infinite tangents, etc.

Conor Doherty: Well, again, my understanding, having read it, is that it’s not just a taxonomical exercise, it’s not just classification. You make the point—and in my opinion, if I were reading this, this is where the very strongest and clearest example of a “tip in the playbook” appears—and that is falsification.

I think that, when you take that—if you were to take your definition of supply chain and the importance of falsification—you could even put that in chapter one, because I think that is really foundational. And I would say discussing falsifiability is even more foundational than your own definition, because it actually informs your ability to evaluate definitions.

So I’m not the author. Please explain the importance of falsification.

Joannes Vermorel: So, here, falsifiability, I would say, is something that is above supply chain. Yes. We have the goal of supply chain, and then we have a separate discipline—that’s why I call it epistemology—that is really about the science of human knowledge. How do we characterize what counts as knowledge in general, and what does not?

So I’m not inventing new principles to characterize knowledge. I am borrowing from epistemology, and I’m borrowing from one of the most incredible breakthroughs of the 20th century, which is the falsifiability principle from Popper, who was an Austrian philosopher.

Long story short: this philosopher was really asking a simple question: what counts as science? What counts as knowledge we can trust? Just a basic question. And it goes for everything: biology, finance, anything. He was just asking this basic question: what is true knowledge? What is actually good science? What are all those things? And obviously he came up with an answer.

And the interesting thing: I’m going to borrow this answer for supply chain. So let’s discuss a little bit what was this answer.

So, the falsifiability principle: Popper, he was living at a point in time where, in his early years—he was at some point in Vienna, in Berlin—he was meeting different groups of intellectuals. And there were two groups that were really striking. We are talking of the early 1920s, yes.

Two groups. The first were the physicists gathered around Einstein. Popper was watching them, and they were incredible people. They were in the process of inventing essentially quantum physics. Einstein had developed relativity, and then there were plenty of developments on quantum physics afterwards.

And so that was a bunch of incredibly sharp people. Popper was very baffled: how did those people operate intellectually? And the way they operated: they were inventing theories all the time, but they were doing even more than that. They were inventing ways to destroy one another’s theories. They would invent, and they would relentlessly invent.

Einstein himself was relentlessly inventing ideas for experiments that could be done and potentially invalidate his own theories. So you see, Einstein spent pretty much his entire life trying to destroy his own theories. That’s very puzzling. For Popper, that was: what the hell is going on? You have people who are advocating for a theory, but what do they do in practice? They try to destroy them. Strange.

And people were doing it empirically. What Einstein was doing, he was saying, for example: if my theory of relativity is correct, that will mean that during the next— I will be able to observe strange movements in such and such ways for the orbit of Mercury. Or, for example, I will be able to observe pairs of stars that will be just optical illusions, due to light traveling different paths. And if I can’t observe that, that will prove that my theory is wrong.

He was actually designing very clever experiments to prove that his theory was wrong, and those experiments failed. They failed to prove that his theory was wrong.

So Popper came up and said, “Oh, that’s very interesting.”

And there was a second group, for contrast. So that was the physicists. The second group were people who were essentially Marxists. Marx was not around anymore, so they were followers. Those people—again, Marxism was treated at the time not as political doctrine, but as a science. It was supposed to be a scientific explanation of society, and it was approached like a science.

And thus, like any good science, it was making predictions—very precise predictions about the future. If the Marxist theory is true, then we have very specific things that we can predict about the economy.

And, for example, in the 1910s, Marxists made a prediction which was completely correct according to the Marxist theory: if the proletarian revolution must happen—and it will happen, it will happen—then it will happen in countries in a very precise order. The theory is very precise: it will happen in a very precise order.

It will start in the United Kingdom. Why? Because it is the one country on Earth at the time where you have the biggest proportion of the population that is proletariat. The UK was really, really ahead in terms of industrialization versus all the other countries. Thus the revolution will start in the most industrialized country first, and then gradually happen in less and less developed countries.

And the theory was very precise. There was complete agreement: this is what will happen.

First World War happened. What is the first country to go through the Marxist revolution? Russia. Russia, which was the least industrialized, and at the time an incredibly backward economy—completely agrarian. They had literally almost no factories whatsoever, I mean very, very few.

So it flies in the face of this theory. It completely—again, everything, the way it unfolded, it goes completely against all the predictions of the theory.

Now, what is the response from the Marxist circles? The answer is: they just duct tape the theory to retroactively explain why, in fact, the theory was predicting this outcome.

And then Popper was looking at that and said, “Hold on. So you’re telling me that, as things contradict your theory, what you do is that you patch your theory, you modify your theory, and you’re making it immune to contradiction. This is weird.”

This is weird. You have people on one side, the physicists, who are trying to seek contradictions relentlessly, and who are ready at any minute to give up on all the theories—throw them out of the window—if there is a contradicting experiment.

And you have other people who say, “We are going to preserve the theory no matter what. We’re just going to make it gradually more and more immune to reality.”

Okay, they can’t both be right at the same time. One must be right, the other must be wrong. Popper asked: who is right? The Einstein camp. He said: obviously, the physicists are doing it right. The Marxists: no, that is not the right intellectual attitude, this is crazy.

Conor Doherty: And then, when you wanted to explain—just not to cut you off, but just be clear—you do cover all of these historical examples in the book. But to bring this back on topic to the 10 million practitioners: it’s interesting, and I know where you’re going, but anyone else might not.

Joannes Vermorel: Sure. Long story short: he actually devised the notion of falsifiability. Falsifiability is like the summary of why Einstein is right and why Marxists were wrong.

The long story short is that your knowledge, your theory, must be at risk of contradiction. Yes. If you make it immune to contradiction, then you don’t have anything that is scientific at all.

By the way, there is a caveat: there might exist truths in this universe that cannot be contradicted but are still true. Popper said: fine, they just don’t belong to the realm of science. This is a fundamental limitation of science. Science can only deal with stuff where contradiction is possible, and those things are not all that is true. It is just all that we can call science.

So it’s very interesting, because Popper, in one go, defined pretty much what is the gold standard of what we can call science, and also showed to the world that science cannot be everything. It’s the exact opposite move that people would think, that science is all-powerful and all-knowing. No. Popper clarified once and for all that essentially science had its very natural limit.

But within those limits, what we have is much stronger. It is not all that is true.

And so, fast forward to supply chain: now we have the gold standard for scientific knowledge, and it applies to all domains. It applies to everything—from geography, biology, you name it. It applies to everything, not to the same degree. I think that’s something we’re going to get into, and we touched on.

And thus what I say is: because this thing has been the gold standard for how to approach science—for all sciences—supply chain should abide by the same standard. That is the point that I’m making.

Conor Doherty: Okay. So again, the historical examples are good, they’re correct, and they’re certainly interesting. In the context of supply chain—again, the target audience, 10 million practitioners—contextualize the importance of falsification, and what that looks like for them on a day-to-day basis. Why is that so important, and what does it look like to a supply chain practitioner, or how could it be inserted?

Joannes Vermorel: So we are talking about knowledge. I know it’s very meta, it’s a little bit abstract, but we are talking about the invalidation of knowledge.

So let’s start with a piece of knowledge. Let’s talk about safety stock. That’s an example in the book, and that’s why I say safety stock is an invalid proposition.

Conor Doherty: You call it hazardous stocks, actually.

Joannes Vermorel: Yeah, exactly. And again, I don’t say: your safety stock in your company—the stock level that you pick—is invalid. That would be a confusion. What I’m saying is that the very idea of safety stock is incorrect. That has to be unpacked.

You see, that’s a distinction, because safety stock ultimately is just a characterization of a stock level. So, in your company, you may accidentally have a stock level that is quite good for your business. Fine.

What I’m saying is that safety stock, as an idea to get there, is invalid.

So that’s the proposition. Now, like a broken clock that is correct twice a day: you may, with safety stock, end up with a fairly valid answer. But again, this is the broken clock fallacy. You just happen to be in a situation where the stars align, and in your situation safety stock gives you a satisfying answer.

That is a broken clock situation, where twice a day the broken clock still gives you the right time of day. It would be incorrect to think that, just because of that, the clock is not broken. The clock is still broken, even if, accidentally, in this very specific situation, it gives the correct answer.

Conor Doherty: You see—but how do you falsify that position? Because you’ve just, okay, isolated yourself from criticism.

Joannes Vermorel: Yes, exactly. So, in order to falsify, we have to go further in the falsification principle.

What Popper says—the key idea—is that you can never prove that any theory is correct. You cannot. Why? Because that would mean that you would need to verify an infinite number of situations.

You see, your theory is supposed to apply to a vast number of situations, and if it’s a theory that is non-trivial, it’s going to be infinite. So fundamentally, when you talk about feedback from the universe—being able to verify and whatnot—you will only be able to verify a finite set.

So you can show that things work, not necessarily that they’re true.

And so Popper says: if I can have only one instance that demonstrates that the theory fails, then the theory is falsified. It is rejected. It is categorically false. It must be discarded entirely.

So he says: that’s the falsification principle, and that’s very interesting. That’s exactly what Einstein was trying to do: to prove that relativity is false.

You don’t need a thousand astrophysicists. You don’t need billions of budgets. You just need to imagine one simple experiment that will prove it wrong. That’s it. That’s the beauty of falsification: you can falsify an incredibly advanced, sophisticated theory—potentially with thousands of man-hours invested in it—with just a simple experiment, if you can do it.

So how do we falsify safety stock? The idea is that we need to come up with an experiment that proves that safety stock gives you crap results. Okay. We just need one situation where we get crap results from safety stock. Just one.

And in order to make it good faith, it must be safety stock done right. Safety stock done right. And then we prove that, in this situation where safety stock is done just right—because it must be a good faith attempt—we just prove that despite being done just right, it will give you nonsense results, results that go counter to your long-term interest of the company.

Again, for that, we go back to epistemology. We have to agree somehow that the long-term interest of the company is the relevant criterion. That’s tricky, because you see the problem is that I define supply chain—I define that. I just said that the long-term financial interest of the company is what it is.

If you tell me, “No, no, no, Joannes, I disagree with your statement. I believe that the long-term interest that supply chain should maximize is the happiness of my employees,” we have two different conflicting definitions. They are not compatible.

And by the way, this problem was addressed by another philosopher, Thomas Kuhn.

Conor Doherty: But you’re kind of blurring the lines there between teleology and epistemology: what is the goal of a company, and what constitutes knowledge. They’re not necessarily the same things.

Joannes Vermorel: Yeah, yeah, yeah. So, go back. Let’s go back to the falsification of safety stock.

So now we just need to think of a situation that would demonstrate that safety stock blows up in your face. That’s it.

For example, let’s take a fashion retailer. End-of-season situation, with your safety stocks.

So what does the safety stock perspective say? It says: you need to maintain high-ish service level. That’s the perspective. How high? It can be anything between, let’s say, 85 and 100%. We’re not going to be too specific. It’s fine. You pick your call.

But I’ve never seen people say you should have a safety stock with, let’s say, a 15% service level. So, my good faith attempt: we have a fashion brand operating a retail network, and they have, for their stores, a safety stock policy with service levels that are, let’s say, 85% and above.

I don’t mind the detail. It doesn’t matter. I just say: if you’re not in this situation—in the fashion retailer—this is not safety stock, this is something else.

So situation is: we have this fashion retailer, they have stores, and their stores have articles where the stock level is controlled through safety stock, and those safety stocks are 80% and above. That would be my situation.

Now, I believe it is a good faith representation of what safety stock actually means.
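
For concreteness, here is a minimal Python sketch of the textbook safety stock recipe being attacked here: one common formulation (a service-level z-score times demand uncertainty over the lead time), with purely hypothetical numbers. It is not the book’s or Lokad’s method.

from statistics import NormalDist

def safety_stock(service_level: float, demand_std: float, lead_time: float) -> float:
    """Common textbook recipe: z-score of the target service level times
    demand uncertainty over the lead time (assumes i.i.d. normal demand)."""
    z = NormalDist().inv_cdf(service_level)  # e.g. 0.85 -> ~1.04, 0.95 -> ~1.64
    return z * demand_std * lead_time ** 0.5

def reorder_point(mean_demand: float, lead_time: float,
                  service_level: float, demand_std: float) -> float:
    """Reorder point = expected lead-time demand + safety stock."""
    return mean_demand * lead_time + safety_stock(service_level, demand_std, lead_time)

# Hypothetical article: 100 units/week mean demand, std of 30, 2-week lead time.
print(round(safety_stock(0.95, 30, 2), 1))     # ~69.8 units of safety stock
print(round(reorder_point(100, 2, 0.95, 30)))  # ~270 units reorder point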

And now what I want is to demonstrate that this will blow up. It will blow up for several reasons, in fact.

The first one is that at the end of the collection, you don’t want to maintain those safety stocks. Why? Because remember: at the end of the winter collection, if you maintain your service level, by definition you will have the store that is full of winter clothes while you’re about to enter the summer season. This is crazy.

First, you will not even be able to put the summer clothes in the store because it’s full of winter clothes. And then, during the sales period that will soon arrive, you will have to do crazy discounts to get rid of all the winter stuff that is now going to be very tough to sell.

Thus, my demonstration is completed. I have a situation where I took the safety stock perspective and I showed a situation that contradicts it.
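
To see the mechanics of that thought experiment in miniature, here is a toy Python simulation: a service-level-driven order-up-to policy chasing fading end-of-season demand. The demand figures, the 95% service level, and the lag-one forecast are all invented, and the policy is deliberately simplistic; this only illustrates the argument above, it is not a model of any real retailer.

from statistics import NormalDist

# Toy end-of-season scenario: weekly demand for a winter article fades to zero.
weekly_demand = [100, 80, 60, 40, 20, 10, 5, 0]
z = NormalDist().inv_cdf(0.95)   # 95% service level, as in the setup above
demand_std = 30                  # demand uncertainty assumed by the policy
on_hand = 150.0

for week, demand in enumerate(weekly_demand, start=1):
    # The policy targets last week's demand plus safety stock, so it lags the
    # collapse in demand and keeps the shelves full of winter articles.
    forecast = weekly_demand[week - 2] if week > 1 else demand
    target = forecast + z * demand_std
    order = max(0.0, target - on_hand)
    on_hand = on_hand + order - demand
    print(f"week {week}: ordered {order:.0f}, ended with {on_hand:.0f} units on hand")

# In the final weeks demand is ~0, yet roughly 50 units are still on hand:
# the "store full of winter clothes" that then has to be discounted away.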

And what Popper showed is that you should not adopt the Marxist perspective. The Marxist perspective would be: “Oh, you have a contradiction for my safety stock theory. You know what? I’m going to shuffle this safety stock theory, amend the theory, so that safety stock survives.” This is what Marxists were doing with their Marxist theory: whenever they were contradicted, they would just duct tape the theory to make it completely immune.

That is not the right intellectual stance. This is a dangerous stance, in the sense that that’s a recipe to get garbage knowledge. This is epistemology.

So when you have an obvious contradiction with a concept—where you use this concept, you use it in good faith, you do not make any mistake in the analysis—then you should say: your theory has just been falsified. It must now be discarded.

And that is tough. People don’t realize how demanding this falsifiability is. It says that literally you need to discard the stuff. And thus, safety stock must be discarded.

Conor Doherty: We touched on this in the discussion on chapter 2, when we discussed how robust a science can supply chain aim at being. We went back and forth on chemistry, and then at the end we agreed on the idea that medicine would probably be a reasonable standard.

There are many situations where—using the example that you gave in the last episode—you take medicine, you take very similar profiles: it works for one person, it doesn’t work for another. Medicine wouldn’t just discard entirely: “Well, that medicine doesn’t work, abandon completely.” They would say: it works most of the time.

So here the question is: did my falsification destroy safety stock for fashion, or for all verticals? Because that might be a question.

Joannes Vermorel: Again, Popper would tell you: beware. When you have a theory that is falsified, it is all too easy intellectually to minimize the impact of the falsification. That’s a problem of psychology. It is all too easy to minimize the impact and just duct tape your theory so that it endures despite this falsification.

So, for example, I could have a revised theory that says: safety stocks are good and well and useful—except in fashion.

Okay. Now, I can come up with a very similar example in aviation. I can come up with something that demonstrates that safety stock will work extremely counter to the long-term interest of the company in aviation. I can do that in automotive. I can do that for fresh food.

Some might say: these are cherry-picked examples, that in aggregate you make more money than you lose. Again, once you have 20 verticals where you have contradictions, when do you stop?

Are we going to say: safety stock is valid except in fashion, except in automotive, except in aviation, except in fresh food, except in luxury, except in blah blah blah?

That’s the Marxist way of thinking. You have a theory that is growing absurd, with an absurdly large list of edge cases.

And that’s also the beauty—again, there is an element that is very important in knowledge, and here it’s not Popper who described it, it’s more Einstein: there must be beauty in your theory. It needs to have this sort of crystal-like purity.

If your theory is just an infinite list of edge cases, if it has no structure whatsoever, if to describe your theory you need an endless phone book of stuff, it’s not a very good theory.

So if we say the supply chain theory is: “Oh, it’s safety stock, with two pages long of caveats,” it is just a very, very crappy theory.

Conor Doherty: Well, so this is where we come back to the idea of what standard of robustness do you expect off of a discipline like supply chain. You’re talking about Popper, and you’re applying it to a field that is governed completely by confounding factors, as you yourself would say in chapters one and two: total uncertainty everywhere—motivations, the weather, everything.

How can you falsify to the standard that you’re describing?

Joannes Vermorel: First, there are different levels of falsification. One of the most basic is the thought experiment, what we just did. That is very much what Einstein did. Most of Einstein’s breakthroughs were thought experiments. This is incredibly useful in science.

Thought experiments are super cheap. You can do them in your mind. They are not sufficient, because if you reason incorrectly you can have subtle flaws, and the only thing that will catch them is feedback from the universe.

But as a way to get to a correct theory faster, and to be conservative with how many resources you want to spend on actual falsification in the real world, you need to do those thought experiments. Those thought experiments are essential so that you don’t do your falsification experiments at random, which is very costly.

So what I’m saying is that the first bare minimum that we need to ask from supply chain is that it resists thought experiments.

If I give you an element, a theoretical element like safety stock, and within two minutes I can give you thought experiments—very convincing contradictions—come on. You don’t even need to do it in real life. Once you understand it, you say: okay, it is completely bogus.

That would be the super quick level. And then later on we can argue on much more difficult elements where we will need an empirical assessment.

But here, there are entire classes of propositions from the mainstream supply chain theory which can be destroyed by simple thought experiments. And by the way, this is exactly what Einstein did for Newtonian physics. To destroy Newtonian physics, Einstein didn’t have to do any actual physical experiments. He did thought experiments, and he proved that the theory had contradictions in itself, and bam—done. You don’t even need to do the experiments.

So you see, this is very powerful. Thought experiments are very, very powerful to at least get rid of the easy stuff. You can discard stuff when you have a theory that is so deeply flawed that you can just, through thought experiments, discard it.

If the theory is very mature—for example quantum physics—thought experiments alone will probably not be sufficient anymore. But that’s the second stage: that is the maturity of a science that has already been designed with the proper criteria in mind.

When you have a science that has been designed with the proper criteria in mind, carefully, all the thought experiments have already been done by your predecessors. Thus it doesn’t— it’s not so useful anymore because the low-hanging fruit have been consumed.

Here we are at a stage where those low-hanging fruits—thought experiments—can dismiss the mainstream supply chain theory. It’s very much possible because, well, people didn’t do it carefully enough before.

Conor Doherty: And again, to try and contextualize what we’re talking about from Lokad’s perspective: it would be decisions. That’s fundamentally what people are making on a day-to-day basis: supply chain choices. Why? Why? Why?

Joannes Vermorel: Again, that’s why we need this epistemological discussion, because we have a disagreement. The mainstream theory says absolutely not. It disagrees profoundly: “No, I don’t care about that. I do not care.”

And you see, by the way, we have to talk about Thomas Kuhn and the incommensurability of theories. The problem, Thomas Kuhn says, is: fundamentally, if you look at Newtonian physics and Einstein physics, you cannot say one is better than the other. They are not commensurable. They are radically distinct.

The questions that make sense in Newtonian physics do not make sense in Einstein physics, and vice versa. You have two completely incompatible sets of questions and answers, and they cannot be compared.

So that is the problem. But now, how do we decide between classical physics and Einstein physics? It’s not even the same questions, it’s not even the same answers.

The answer is: once you start thinking about Einstein physics, it gives you ideas on how you can invalidate Newtonian physics. It gives you an experiment that you can give to your fellow Newtonian physics professor and say: “Please do that and explain it to me.”

And that is an experiment that will blow in the face of the other professor, and that’s it. That’s how Einstein did it.

Now, here, in supply chain: we have this mainstream supply chain. The mainstream supply chain doesn’t care at all about those decisions. They are like a second-class citizen. It says: the plan. The plan is what matters. The plan is both a forecast and a commitment. The plan is the first-class citizen.

And then what you call decisions, it’s irrelevant. It is just adequate execution of the plan. That’s the mainstream theory.

So if we say we want better decisions, this is already a statement about how do we think supply chain, because from the mainstream perspective this question is not even relevant. This is not a relevant question.

That is strange, but that’s the problem. When you have a paradigm shift from one theory to the next, you have plenty of questions that are not even relevant. In the quantitative supply chain paradigm, I ask questions about decisions. The mainstream theory does not ask questions about those.

And from the mainstream theory, they say your question is irrelevant. It’s like you’re asking: “What is the optimal color of the shirt of the person making the plan?” Both theories would tell you: we don’t care.

So you cannot judge the questions that I’m asking through the lenses of the former mainstream supply chain theory, and vice versa.

Conor Doherty: It’s interesting that you mentioned incommensurability. I think The Structure of Scientific Revolutions is probably the second biggest landmark in the science of science in the 20th century.

What occurs to me is: to someone listening—and again from the perspective of the average practitioner—there seems to be a bit of an incommensurability here in terms of the standards of evidence that you value.

So you’ve talked about what you see as the purpose of a company, or the purpose of supply chain: maximizing long-term financial return. But the standard of evidence that you’re using—even in the book—is thought experiments. You use a lot of thought experiments to demonstrate your point, not a lot of real-life examples.

And I don’t mean case studies from the marketing perspective. I mean literally: you cite the Marxist revolution example, you cite Einstein; those are concrete examples. In this chapter, there are not a lot of real-life examples in supply chain that demonstrate your point.

And that’s where the incommensurability can come in, because you’re valuing the Popperian theoretical, epistemological knowledge: “It’s a thought experiment, therefore I’ve done it.” But someone can say: “But look, my financial performance with this model is getting better and better and better.” There you have an impasse.

Joannes Vermorel: Yes. First, there is a timing element. This thing was published very recently; it was practiced at Lokad. If you judge Einstein’s physics by its results in, say, 1907, it’s still very, very limited because it’s too early. People didn’t have time to digest it, apply it, etc.

So I would argue— it’s a weak argument— but I would say: give it time.

The second thing is: we have a problem that physics didn’t have, but supply chain has, which is adversarial behavior. This is exactly what I talked about earlier.

I think the experimental proof will come, and that will be a messy process, just like medicine. That’s why ultimately, if I project myself 50 years in the future with my theory accepted, people will do meta-analyses.

They will start to say: okay, we have this new paradigm. It’s a correct one. But due to the adversarial incentives, we cannot trust any one study. We need a meta-analysis.

And that’s exactly what, for example, the Cochrane Library is doing for medicine. They literally take, let’s say, AIDS, and they take 8,000 papers and put them all together, and they say: okay, we do a meta-analysis of all of that. There are like 100 distinct research organizations that have produced those studies independently or semi-independently, and thus we can reasonably hope that despite all the adversarial incentives—in medicine it’s pharmaceutical companies—despite all of that, the meta-analysis will make something better emerge.

In practice, it kind of works—with limits. It’s a messy process. It is slow, painfully slow, but it works.

And that’s what I’m talking about: thousands of papers that need to be meta-analyzed, and that’s how you build the next generation of medical knowledge.

Now, back to supply chain: that means right now, in this book, as an introduction, I didn’t want to start this battle yet. This book is already 500 pages long. There is already so much to convey into basic ideas.

For example, conveying the basic idea about what Popper is, why it’s relevant, and whatnot: there is so much to say that at some point I had to pick my battles. I decided—and yes, it’s a weakness, a theoretical weakness—that I would not go onto the terrain of very precise real-world examples, because they would all be coming from me.

Even if I put 50 examples, because they all come from me, people would say it’s cherry-picked.

Thus I say: okay, we need to have a second stage where there will be examples pushed not just by me, but by other people. And then, a decade from now, do a meta-analysis on those many, many cases, pushed not just by me, but by people who have read the book, applied it, and say: it works, it doesn’t work. That’s the standard we can have.

Meanwhile, at this stage, what I can offer is thought experiments. And also— but that’s an authority argument— I’m just saying it is not theoretical, it’s what Lokad has done for the last decade. I know it is an authority argument, but if I want to add an element of credibility: Lokad didn’t raise money from venture capitalists.

So for us, if it doesn’t work, we don’t have a plan B. We don’t have money coming from— we haven’t raised half a billion like some of my peers. Thus we cannot operate for a decade with a completely unproven, unprofitable, and maybe not working model. That is not possible.

We were constrained, and the reason Lokad survived is really that we achieved a sizable degree of success, which led us to this book.

And again, my take is that when it comes to epistemology, I was talking about the beauty of Einstein. What convinced most of the community of physicists was not the experiments, it was the beauty of Einstein’s theory.

Relativity is incredibly beautiful as a theory. People were saying: okay, it is very interesting, intriguing. People were smart, so they were saying: it’s so beautiful that I want to learn about it. I will learn, but I will still wait for the experimental proof just to be sure.

But still, it was so beautiful that it is how Einstein managed to capture the heart of the physics community. It was not through successful experiments. It was through a certain degree of beauty in his theory that was completely unprecedented.

Obviously Einstein was a super genius, probably one of the top 50 most brilliant people that Earth ever had. That’s a very high bar. But at least, for inspiration, that gave us an example of the sort of thing we should aspire to.

Again: epistemology. We should aspire to that kind of knowledge for supply chain. That is the aspirational perspective.

Conor Doherty: Well, again, it’s interesting because in the book you talk about Occam’s razor. Anyone who studies philosophy will be familiar with Occam’s razor: essentially, if you have two explanations for a problem, the simplest one is the one you should pretty much go for first.

Again, to apply that to your discussion on safety stocks: you have, let’s say, 50-plus years of companies applying them, becoming more and more profitable year on year. Yes, you can point to examples, maybe thought experiments, to disprove it, but they’ll say, “Well, hold on. I’m doing this. I’m making money.” Occam’s razor would say: “Well, in the main it works overall.”

And you’re saying: no, it’s broken clock. When it works, it’s not that it works, it’s just a broken clock.

Joannes Vermorel: Okay, let’s apply Occam’s razor, but at the theory level, before jumping to conclusions, because here we have so many confounding factors.

So let’s go to the mainstream theory. You’re saying we need to have theories that have an element of simplicity, where you don’t need to have so many things.

The mainstream theory: if I look at what the ASCM—Association for Supply Chain Management—is producing, they have their SCOR rulebook, and they propose over 300 metrics. 300.

How many metrics do I say in this book are relevant? I give a few metrics, but it’s like five, and maybe even four. Ultimately, the only metric that I say is the rate of return. So I put the rate of return on a pedestal, and then there is maybe three or four that are direct consequences of that, and this is it.

So, okay. Occam’s razor: my theory is elegant. It needs one metric as king: one rate of return, and you have a few, like the half-life of a decision, which is an immediate consequence—but not obvious. It’s not intuitive, but it’s fairly immediate once you’re nudged in the right direction.

And this is it. It’s a very, very limited set.

Then if I look at the mainstream theory, we are talking 300-plus metrics. So if we take Occam’s razor, I would say, on the grounds of assessment of the theory itself: yes, mine is orders of magnitude leaner and more conservative. No one would dispute this.
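
To make the “one metric as king” idea concrete, here is a toy Python sketch of the principle that every allocation should maximize the rate of return: rank candidate uses of the next unit of capital by their expected return. The candidate decisions, costs, and payoffs are invented for illustration; this shows the principle only, not Lokad’s production logic.

from dataclasses import dataclass

@dataclass
class Allocation:
    name: str
    cost: float              # cash tied up by taking this decision
    expected_payoff: float   # expected cash generated over the horizon

    @property
    def rate_of_return(self) -> float:
        return (self.expected_payoff - self.cost) / self.cost

candidates = [
    Allocation("restock fast-moving SKU in store A", cost=1_000, expected_payoff=1_300),
    Allocation("restock slow-moving SKU in store B", cost=1_000, expected_payoff=1_020),
    Allocation("early purchase ahead of a price increase", cost=5_000, expected_payoff=5_600),
]

# Spend the next unit of capital where the expected rate of return is highest.
for a in sorted(candidates, key=lambda a: a.rate_of_return, reverse=True):
    print(f"{a.name}: {a.rate_of_return:.0%}")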

Now, for the rest: companies can succeed. That’s a philosophical statement. I can’t prove it. I’m just saying: companies can succeed for a whole variety of reasons, or fail for a variety of reasons.

And my intellectual stance is: I assume whenever I see a success or fail, that by default supply chain has very little to do with that. Why? Because the amount of stuff that exists outside supply chain is enormous.

So when you see a company succeeding or failing, having this intellectual stance— “I am not going to attribute to my pet interest, supply chain, their success or their failure”—I think it’s a reasonable intellectual stance, aligned with Occam’s razor.

So I’m just saying: when I see something—because the problem is that when you are a specialist, you want to explain everything with your theory. That is true for everybody. You don’t want your specialization to taint your view of the world.

Supply chain, if I look at humankind in the grand scheme, is very important, but if I were to give a completely made-up percentage: how big supply chain is for humankind compared to all the rest? I would say probably like 2 or 3%. It is of civilizational importance, because 2 or 3% means there are not that many things that can compete at this number.

But it also means that 97% is stuff that is non supply chain.

Thus, when you look at a business succeeding and failing, I would say you should assume—this would be my intellectual stance—97% chance that it has nothing to do with supply chain, unless you have explicit elements that prove that on this case their success or their failure is intimately attached to their supply chain execution.

Conor Doherty: But doesn’t that shackle your own perspective? Let me rephrase: you say Lokad’s perspective is better than this, but then you also say simultaneously, “Well, any success, 97% of the time it’s not supply chain.”

Joannes Vermorel: Without in-depth information about the company.

I read the news. I read LVMH has seen its profit skyrocketing this year. I am not a specialist of LVMH. I don’t have insider knowledge. Let’s assume I’ve not been working on their supply chain ever. I have very little information.

So I’m just saying: if you do not have in-depth insider information, that is the intellectual stance that I recommend. That’s it.

If you have privileged information, if you know a lot more about the company, then it’s a completely different thing.

Example: Amazon, which is incredibly public. Their memos are getting leaked all the time. If you’re willing to follow Amazon, you can have tons of information. It’s a fairly transparent company. Not on the level of an insider, but still: they are not very secretive, unlike Apple, for example.

So you can have tons and tons of information. And if you’ve spent a lot of time studying Amazon—and I have also discussed with many Amazon employees, private conversations, and whatnot—then I can revise my assessment.

On average, I assume success or failure is 97% due to other causes. For Amazon, my assessment would be: their success is—again, made-up number—70% supply chain.

But why do I have this assessment? It’s not because I read a case study. It’s because over the last two decades I had people telling me what is happening at Amazon, what sort of stuff they are doing, what is the mindset, how they conduct projects, etc. Thus I can forge this assessment that yes, Amazon’s success is very much supply chain-driven, and what they’re doing is really right, and it is inspirational for whatever you want to do supply chain-wise.

That takes a long time to arrive at that.

Conor Doherty: Yeah, I know, I know, but the problem is that it is so important how you get there—it’s very different with a company you know nothing about versus one where you have deep information.

Well, again, this ties into the criticism that you have of case studies—not just from the marketing perspective. In the book, I think it would be fair to say, someone could listen to most of this and say it’s kind of building a wall between you and pushback, in ways.

And I will unpack that: if one of these mainstream ideas works, it’s a fluke; companies make money, and there’s a whole bunch of reasons why they make money. If a company publicly publishes a case study saying, “Hey, we used this approach. Here’s how we did it”—survivorship bias, totally nonsense.

Like, you’re making it impossible to falsify your own position.

Joannes Vermorel: No. Let me give you an example. For the case studies: you can falsify my theory. It’s very straightforward. Super easy, actually.

So I invite the audience to falsify my theory. My theory predicts that the number of negative case studies that will be published by vendors will be extremely rare.

This is a very important prediction. I am making a prediction. We have millions of case studies published, and I am making a prediction—not intuitive—that the percentage of negative case studies will be extremely low.

By “negative,” I mean people saying, “We failed.” Yes.

Now, we can have a counterargument: “The percentage of negative case studies is extremely low because those projects are working so well that they never fail.” So you’re looking for something that does not exist.

So the question is: why so little?

Now fortunately for me, I have plenty of other people who are surveying the success rate of supply chain projects. There were big studies done by Deloitte, PricewaterhouseCoopers, and whatnot. They conclude that we have between 80% and 90% of projects failing.

So where is the truth? I don’t know. But let’s be conservative. Let’s say those auditors are inflating numbers. It’s only 20% of projects that fail. Fine.

Now my theory predicts that despite the fact that we have 20% failing, the amount of negative case studies is going to be approximately zero.

That’s a strong prediction. How do you disprove my prediction? You disprove it by coming to me and saying: “Look, I have sampled a thousand case studies, and there is 20% that are negative case studies.” Then what I told you about incentives preventing negative case studies from happening is disproven.

Try it. You will not find 20% negative cases. In my entire career, I have only found a number of negative case studies I can count on my hands. I’ve reviewed thousands and thousands of case studies, and even the negative case studies that I found were not published by the vendors themselves. They were published by investigative journalists, usually under immense pressure from the vendor not to publish.

So bottom line: my theory is very falsifiable. All it takes is go to Google, look up how many negative case studies you can find.

If you can’t find them—if you’re completely overwhelmed by the positive ones—then that’s exactly what my theory predicts.

Again, it’s very important to judge a theory on its capacity to make correct predictions that cannot be easily invalidated by observation.

Conor Doherty: Again, if you’re espousing or encouraging skepticism around case studies, that’s fine. But you make the point that—and I’m paraphrasing now, but not very far from what you say—they’re trash, they’re poison, they’re marketing delusions.

That’s slightly different from what you’ve just said. Two things could be true simultaneously: they could be infomercials, and they could be about things that actually work. Again, your theory predicts that all you will ever see is infomercials. Not necessarily.

That’s the difference between skepticism and absolutism.

Joannes Vermorel: No, no. Skepticism: what are you skeptical of? It’s the veracity of the claims.

Normally, again, we go back to Deloitte and PricewaterhouseCoopers: people can experience in their own careers that the vast majority of projects are failing. Every single time I talk to a practitioner, those statistics that the auditors publish match the reality of our industry. Even on LinkedIn: I talk to someone, and that’s their experience.

I’ve been talking to literally hundreds of supply chain directors. It has always been the experience. I’ve never met a supply chain director that told me, “Failure? What are you talking of? The last 50 projects have been perfect, perfect hits.”

If I asked the person in charge of a factory manufacturing airplanes about that kind of failure rate for their aircraft, they would say, “What the hell? The last 50 planes that we delivered were perfect. 100% performance, they fly just fine.”

But I talk to a supply chain director: “Oh yeah, of the last 20 initiatives that we had, 19 were catastrophes.” That is how it is.

So I think, at some point, we need to have this reality principle: we should not just be armchair philosophers. Sometimes we should accept that some stuff is so obvious that we need to move on, otherwise we are going to be stuck in pure abstract philosophy.

Conor Doherty: If I were to apply this standard to medicine—because again that’s the example you used during the discussion on chapter 2—there are far more papers about clinical trials where the drug worked than there are of “oh, it failed.” Now I’m sure there are some, but there are far more papers of “this drug,” or “this practice,” “this methodology works,” or at least appears to work.

Now obviously the same principle applies: companies aren’t going to say, “We wasted money.”

Joannes Vermorel: Wait, wait, wait. Medicine is fully aware of the problem. That’s a big difference. They are fully aware. They have journals of negative results.

So the medical science community has acknowledged the bias, and they are doing efforts. And it’s fairly recent, in the history of science. I would say it only started like a decade ago in medicine that they really started to acknowledge this problem systemically.

The problem has been understood for, I would say, 30 years in medicine. It’s the emergence of the study of iatrogenic effects.

And it’s only over the last decade, in my casual amateur observation, that the community started to have systemic correction mechanisms: people do meta-analyses, people have journals of negative results, etc.

And if you want negative results in medicine: yes, the positive ones dwarf the negative, but the negatives are in the hundreds of thousands.

Conor Doherty: So again, you see, this is proving my point. I’m talking about skepticism. We should be skeptical.

Your position was: absolute nonsense—drivel—ignore—poison—it’s delusion. These are your words.

Joannes Vermorel: But again, the day supply chain sees high numbers—yes, not percentages, because maybe we will not get there, but high absolute numbers—of negative case studies, like thousands a year, then I will revise my position.

Until we have done this work as a community to have at least a few thousand negative case studies published every year, I will remain absolutist, saying this is complete delusion. Why? Because if we don’t have this counterbalancing force, knowledge-wise what we have is complete delusion.

In medicine, the very fact that one paper can be published negative is acting as a counterpower to 4,000 positive papers. It is a counterpower mechanism.

This thing is not in place yet in supply chain. So there is complete imbalance. Thus, knowledge-wise, what we have is complete delusion.

I will remain absolutist until there are enough negative papers published per year. If I have to give a number, I would say a thousand would be sufficient to put pressure on the wider community to tone down the marketing-driven case studies.

Conor Doherty: When you say “the community,” do you mean companies? Vendors? Academics? What do you mean?

Joannes Vermorel: Everybody who is publishing stuff under the umbrella term of supply chain.

Which is tricky, because not everybody has the same definition of supply chain. So that’s a loose definition, because how do you group together people who do not even agree on the definition?

Conor Doherty: Are you familiar with Hitchens’ razor? Christopher Hitchens, the British writer. Hitchens’ razor—an offshoot of Occam’s razor—is that: that which can be asserted without evidence can be dismissed without evidence.

Joannes Vermorel: I agree. I very much agree.

Conor Doherty: The problem: one can apply that to a great many of the claims that you make in this chapter, and I would say overall, because they’re just stated.

We’ve talked for 85 minutes, and a lot more context has emerged here. Obviously that’s the difference between a podcast and a book, I understand that. But again, the absence— which by your own lights was a choice— the absence of real-world examples in supply chain to buttress what you’re saying: someone could say, “Well, you apply Occam’s razor, I’ll apply Hitchens’ razor, and carry on with what I’m doing.”

Joannes Vermorel: I would very, very much disagree.

The fact that we just demonstrated that a cornerstone of mainstream supply chain theory—safety stock—has a problem, even if this problem is limited, is of massive significance. No one would dispute this.

Just the fact that we can point out flaws in the mainstream—even if all the rest is wrong—the fact that you can point flaws in the main pillars of the mainstream supply chain theory deserves a lot more attention. Just that.

That doesn’t prove my theory. It’s just proof, for the audience, that you should read the book and make up your own mind. If you can, within a few chapters, come to the realization that the mainstream theory is deeply flawed and needs something else, that may not prove that my theory is correct.

But I believe it is a sufficient reason to read the book. Even if my theory is wrong, maybe the criticism is right.

Obviously my theory is right and the criticism is right, but a weaker argument is: at least the criticism is right.

Conor Doherty: Fair. And to give credit: you are acknowledging it. Vulnerable. It’s open to pushback.

Joannes Vermorel: Yes. I am trying to have a theory that is very vulnerable. That is a strange thing. I am not trying to come up with a theory that is immune.

For example, the books on supply chain theory that I have on the shelf behind my desk, from academia, written as a series of mathematical puzzles: they are indestructible. I cannot ever prove anything in them wrong. They are untouchable.

That’s what I discuss in chapter 3: those books are forever. Because they are applied math, they have the purity of elementary geometry. They will still be completely true, in a very specific sense of mathematical truth, 3,000 years from now. Nothing that happens in the real world can ever touch those things. Those books are fully immune to any criticism from the real world.

And I say: this is a problem. I invite the reader to contemplate: are you dealing with a theory that has made itself immune to any kind of criticism, or are you dealing, like mine, with a theory where it is very vulnerable?

I believe this vulnerability, as far as knowledge is concerned, is its greatest strength. It means that I am making tons—literally tons—of predictions, predictions that, just like poor Marx, might turn out to be completely incorrect.

If Marx had produced his theory immune to reality day one, he would not have had to suffer all the problems that his later followers had to deal with, where they had to patch the theory in so many ways. He should have made it fully immune day one. That would have prevented so many problems for the followers of his theory.

Here what I’m saying is: this is not what I did intellectually. I tried to have this approach—inspiration from the physics of the 20th century—to have a theory that is maximally exposed.

And by the way, this is one of the reasons why there is so little math. Because whenever I do math, I know that I’m making myself immune. If I put mathematical formulas in this book, they will be correct, and nobody will ever be able to contradict me, because that will be inner consistency, math-wise.

That’s not what I was trying to do. I was trying to have something that is an introduction, and demonstrate this vulnerability through the book itself.

I know, it’s very meta.

Conor Doherty: I understand. Again, the chapter is called epistemology, so you do signal your intentions right at the outset.

Well, again, we’ve been going for a while, so I’m going to close. We’ve talked now for, I think, somewhere in the region of four to five hours on the book. There’s more to come. It’s been a pleasure.

From the first three chapters—which again, in a book that you call a playbook—what are the tips and tricks that people can take if they tap out at, I think, page 66, the end of chapter 3? If they read the first 66 pages of the book and they get no more, what can they take away from it?

Joannes Vermorel: They have the mental model to bootstrap their thinking about supply chain.

That’s the key: you need—if you go until the end of chapter 3—you have all the elements to rediscover on your own, given time, all that follows. That’s the beauty of it. It’s the mental model that you need to discover on your own everything that follows.

And what I would say is that if you were to stop—you’re smart, you’re dedicated—at the end of chapter 3, everything that follows is what you would rediscover on your own. That’s a prediction that I make, for most people, just by applying this thinking model.

So in a way, the rest of the book is just saving you time—like 10 years worth of my time—because that’s what it took at Lokad. It’s saving you a decade of thinking process, just because someone has already kind of speedrun the game. So you get directly the good stuff, the conclusions.

But fundamentally it is a thinking model. Then, as a practitioner, suddenly you will be able to see your own domain in a new light, which lets you say: okay, this is relevant, I need to memorize; this is not relevant, dismiss and ignore it.

It’s very important. It will suddenly give you a whole new capacity to sort out, in your head: what do I need to memorize? What do I need to learn? Should I be spending an hour thinking about this problem, or should I be spending 30 seconds?

Those are very important questions. Once you understand that, you can speedrun the rest. I’m just saying: this is the foundation, and the rest is—you may not have the patience to do 10 years of thinking, so I’m giving you what the result looks like when you do that, and that’s the chapters that follow.

Conor Doherty: All right. Well, I believe the next chapter is the one on economics, which is something I’m quite looking forward to discussing, but that is a discussion for another day.

Thank you very much for your time and your patience. And again, I appreciate the candor, because I am pushing you quite a bit, but most people would not be as receptive to being grilled like this. So I do appreciate it, and thank you for watching.

If you want to continue the conversation, as I say every single time, you can reach out to Joannes and me on LinkedIn. Ask us questions, connect. We do love to talk.

But on that note, we’ll see you next week—and get back to work.