00:00:00 Chapter two setup and reader perspective
00:04:30 Terminology drift and practitioner confusion
00:08:55 Why ‘planning’ in ERP misleads
00:13:25 Avoid ERP planning traps; Excel isn’t shameful
00:17:55 Buzzword critique and Lokad’s neologisms
00:22:25 Using flawed language without empty promises
00:26:55 Operations research became supply chain, then diverged
00:31:25 Pre-scientific supply chain and falsification standard
00:35:55 Stochastic decisions require repeated trials
00:40:25 Broken automation promises and shifting goalposts
00:44:55 Optimality claims fail when better methods appear
00:49:25 Historical evidence of decades-long stagnation
00:53:55 Why real breakthroughs spread; supply chain lags
00:58:25 Will blunt claims alienate practitioners?
01:02:55 Mud-theory critique: shapeless claims resist refutation
01:07:25 Practical science: judge methods by track record
01:11:10 Graph databases as cautionary hype example

Summary

Conor tests Joannes’s “History” chapter as a practical tool, not a detour. Joannes argues terminology is a battlefield: meanings drift, vendors exploit confusion, and bad labels misdirect budgets—ERP “planning” being the prime case. Conor challenges Lokad’s own jargon (“holimization”); Joannes says the difference is substance and transparency, not fashionable word-swapping. The bigger claim: mainstream supply chain theory is “pre-scientific” because it avoids falsification; decades of “optimal” papers aren’t adopted, so the theories don’t work. Practical rule: trust history, not hype.

Extended Summary

Conor frames the episode as a stress test of Joannes Vermorel’s book: not as a friendly internal chat, but as what an average practitioner—one of millions—would challenge or misunderstand. The immediate friction is structural: chapter two is “History” in a book marketed as a practical playbook. Joannes argues the history is practical precisely because supply chain is flooded with terms whose meanings drift over decades, and vendors exploit that drift. If you don’t control the vocabulary, you can’t even search effectively, let alone evaluate claims.

The core example is “enterprise resource planning.” Modern ERPs largely do not “plan,” yet the name persists because of market games and branding battles from past decades. The cost of believing the label is not academic—it’s financial and organizational. Companies try to force planning into systems built as records, burn years, and eventually fall back to spreadsheets. The lesson is not “words matter” in a poetic sense, but “words mislead” in an operational sense: the wrong label directs budgets, attention, and expectations into dead ends, while vendors and consultants rarely say no.

Conor then presses a second contradiction: Joannes attacks buzzwords, but Lokad coins terms like “holimization.” Joannes’s defense is not that Lokad avoids jargon entirely, but that it introduces new terms sparingly and attaches substance—public methods, documentation, and testable claims—rather than swapping fashionable nouns (blockchain yesterday, generative AI today) through marketing copy with no operational change.

The argument escalates to Joannes’s thesis that mainstream supply chain theory is “pre-scientific.” His benchmark is falsification: claims should be exposed to failure by reality. He points to decades of academic “optimal” inventory papers that are rarely used in production and treats that absence of adoption—despite widespread corporate awareness of research—as evidence that the theories don’t work in practice. Conor objects that supply chains are variable, distributed, and human, making chemistry-like falsification unrealistic. Joannes concedes the noise is high, but insists falsification is still necessary, even if it requires repeated trials and comparative testing.

The practical advice is blunt: use history as a proxy for truth. Track what has been promised for decades, notice when promises shrink even as technology gets “more advanced,” and discount long-running ideas that still haven’t delivered (he cites graph databases). In short: follow incentives, distrust marketing language, and treat stagnant results as the most decisive data point.

Full Transcript

Conor Doherty: So Joannes, welcome back to the Black Lodge. This is episode two of an ongoing series where we take your new book, Introduction to Supply Chain, and we go chapter by chapter and we discuss it, we analyze it, and importantly, I try to push you on points of potential confusion or disagreement. And for that, I take the perspective of one of the, let’s say, 10 million or so practitioners in the world who might pick up your book. They don’t know you. They don’t know Lokad. They’ve never heard of quantitative supply chain. They don’t know me—perish the thought—and they just pick up your book and they start reading.

What would their reaction be if they could sit here? What would they say to you when they read certain things? So then you get to respond to, essentially, the potential readership of this book.

So given that—and this is the second episode—in the first episode we covered chapter one. If you have not watched that, I encourage the audience to do so because this is an evolving conversation. We'll probably make callbacks through the series to previous chapters. So to understand the context, read the book—or better, read the book and watch the episodes in order.

But anyway, let’s push on. So today it’s chapter two. Chapter two that you call “History.” Now, a key detail here: this is not a preface. This is not a prologue. There’s none in the book, actually, and there’s not even an index in the book. So this is chapter two, called “History,” in a book that you claim on the back is a playbook to improve service quality and margin under real-world constraints. There are other claims, but again, that’s the primary stated claim.

So my question is: chapter two is “History,” in which you go back 200 years and chart the origins of supply chain. You quote French mathematicians from the 18th century. You talk about operations research. You talk about vendors and their fetish for inventing terms. So my question is: does this history practically contribute to a playbook for practitioners?

Joannes Vermorel: It does, because the landscape is full of words—keywords—that don't necessarily mean the same thing, and that certainly did not mean the same thing depending on the decade you're looking at.

And so, one of the challenges for practitioners is to just make sense of all those terms. So what are we talking about? We’re talking about things like logistics, or supply chain, or operations research, and those things.

The reality is that vendors are still using those terms. Companies are still using those terms. And you will find tons of online documents that are also using those terms, but sometimes with different meanings.

So when I tried to clarify the landscape a little, I came to the realization that taking a light historical lens—this is a relatively short chapter—was probably the easiest way to make sense of where we are right now in terms of terminology, and to clarify what people mean by those terms, or what they used to mean by them.

And again, for a practitioner it matters, because they will need to make sense of the landscape. Supply chain is not something that is done, I would say, in strict isolation like geometry. You will need to interact with tools, systems, vendors, partners, and whatnot, and thus making sense of that is important.

Additionally—and these are the concluding thoughts—it is also an interesting exercise, because it's the first step on a journey toward an adversarial mindset when it comes to supply chain.

Conor Doherty: We’ll go back to adversarial—well, you might be able to answer that with this question. So when you’re talking about the terms, can you give any examples on why they’re so important? Exactly. So again, from the practitioner’s perspective, you’re saying vendors are misusing language. You give examples in the book—I’ll let you cite your own—but what examples can you give? And importantly, why are they so problematic that you need to dedicate time in a playbook for practitioners to learn about them?

Joannes Vermorel: So again, if you don't know the terms of your own domain, you're blind. You have no clue what to look for. You can't even use Google, and even ChatGPT will be a little bit at a loss if you're asking the wrong questions, just because you don't have the words.

It took me, when I started with Lokad, a few years to realize how much I was missing, just because I was lacking the words. I had not realized, for example, that some words meant something radically different three decades ago.

So you read a document and it just adds confusion because: what is happening here? They’re talking about logistics, but in a way that just doesn’t make any sense compared to what I’ve seen online. What is going on? And the reality is: terminology has shifted over the decades.

So for a practitioner it is very important. I think mastering the vocabulary is probably step one of actually becoming good. You need to understand the terms, otherwise you won’t even be able to make sense of the materials that you can find online. You won’t even be able to search efficiently with Google, and you won’t even be able to ask relevant questions.

So you see, those terms are immensely important. And we have to journey through that because, due to the incentives at play, a lot of those terms were introduced not out of scientific purity, but with an agenda in mind.

Conor Doherty: Again, can you give me an example of a term? Because again, like what do you mean—like forecasting? I presume you don’t mean forecasting.

Joannes Vermorel: No. I think, for example, the term “planning.” If you start thinking about planning and “enterprise resource planning,” okay, that’s the sort of thing where it becomes very confusing.

Because if you think about the vast majority of ERPs nowadays—and that's why people are not saying "enterprise resource planning" anymore—that software is not about planning. So they are using "ERP" as a standalone noun precisely because, well, there is no planning anymore, and the full name just does not really make sense.

Now, to understand a little bit: okay, so we have this piece of software that is very important. It’s called “planning,” but it doesn’t do planning. Why do I have that? The answer is essentially games played by market analysts in the ’90s.

And at the time, to understand a little bit, those things should have been called “enterprise resource management.” But due to games played by market analysts and software vendors, they tried to make a push before retreating on the planning front. And thus, we are stuck with this terminology of “ERP.”

But this is the sort of thing that is important to understand because, for example, that would mean that within your ERP, despite its technical name, there is no planning—just like in “business intelligence” there is no intelligence.

So again, initially, when you come into this domain, the terminology is so bad. And to make sense of it, it's just easier to understand a little bit the historical anecdotes that led us here. It's easier to make sense of, and easier to memorize.

Conor Doherty: Oh, I don’t disagree with—I don’t think anyone would disagree with the presentation. Again, just assume that it’s all correct and that the charting of history is correct. I don’t think anyone would argue about the semantics of the point.

What I’m asking about is the impact of this point. So if I say to you, “Yeah, yeah, I agree. There’s no planning in ERP.” Okay. What does that change for me as a practitioner on day one? Day zero is: I read the book. Day one: I now know that these terms are being misused. What changes?

Joannes Vermorel: I will not try to actually do planning within the ERP. You see, it's immensely consequential, because some companies do try to do the planning there—and the poor software was never designed for these sorts of things. It's a relational database, a system of records; it's completely at odds with this undertaking, and it goes extremely badly.

And they waste literally half a decade on these sorts of things. You can look at Lidl: they wasted half a billion, quite literally, through a seven-year journey with SAP, trying to make a system of records do what it was never designed to do, which is any kind of advanced analytics. It went extremely poorly.

Through the journey at Lokad, I’ve noticed that I’ve seen so many companies facing the same problem. Again, that’s something that starts with the vocabulary. If you are misled and you think that your ERP is supposed to do planning, then that’s going to be the place where you’re going to look for it.

And lo and behold, you will find a vendor that will pretend that they can, because enterprise software vendors—just like consultants—never say no. "Can your software do this?" The answer is invariably yes. And if it can't, there will be a module or a small customization for it.

So again, having the right terminology is a starting point to direct your attention and direct your efforts in areas that make sense. This is extremely consequential.

Again, for practitioners, or for the people who purchase the software: if I'm a CFO and I greenlight a budget for an ERP, and someone says, "It's half a billion euros for a system of records," everyone can see that makes no sense. Because, again, those software products are hyper-granular. You have thousands and thousands of features, and to make sense of them you need a high-level understanding.

So again, even if you’re not going to change the software, at least once you know, you know that if all that your company has is an ERP, then so be it. Your planning will be done in Excel. Don’t be ashamed of it. It’s just the way it is and it’s fine.

You should not be spending years trying to do that within your SQL database. It will not go smoothly. It will be extremely painful and tedious, and ultimately it will be replaced by a spreadsheet.

So that’s also the sort of thing that is very important to understand. And again, the incentives are extremely strong, and so that means that you will be facing tons of people who are constantly lying to you.

And again, I think the grandest lies are typically told through the language itself. The way many software vendors have been so successful is literally to distort reality through terminology. You try to shape the language—the English language—so that it puts you on a pedestal somehow. And it works, to some extent. I mean, to some extent, but it does work.

And that’s the sort of thing that needs to be addressed. Otherwise, as a practitioner, you will be losing an immense amount of time and effort into essentially going into directions that are dead ends.

Conor Doherty: Well, when you talk about incentives, and one's ability to tease them apart, diagnose them, and—essentially—smell the BS. So, you know, ERP: it's not really planning. It should be management. And it's a system of records. It's not a system of decision-making or of intelligence.

You’ve come to that because you have a lot of experience and you have a lot of cross-disciplinary knowledge. Now in chapter one—and this is why it’s important to watch these in order—in chapter one you do make the claim that there is a lack of formal training in supply chain.

So my question is: how exactly does the reader—again, the average practitioner, one of 10 million, the order of magnitude we agreed on—who doesn't have your level of expertise take the information in these chapters and apply it in the real world? I mean, they don't have your computer science, economics, maths, or whatever.

Joannes Vermorel: That’s precisely what this book is for: to provide the minimal cultural background so that you’re not completely lost, and also that you understand a little bit the sort of battles that are happening in this field, and you frame them correctly.

For example, take ERP again: if you look at most of what you will find online about ERP versus APS—advanced planning systems—you will find that most of the people involved, the vendors in those battles, frame it as "generalist versus specialist": as if ERP were the generalist solution and APS the specialist solution.

And here, again, that’s the point being made in this chapter: it is just a completely wrong way to phrase that. Because then it gives the impression that systems of records are actually a valid candidate for planning. They never were and they never will be.

So you see, that’s where, again, we need to revisit those sort of terminology, because they are so bad that it is extremely confusing. They must be replaced with something that is more consistent and makes more sense.

And for planners, again, in terms of call to action: if you can’t think of your domain at least somewhat clearly, it is going to be extremely, extremely confusing.

Conor Doherty: All right. Well, on that note, let’s push on a little bit because again, a key part of the chapter is you attack, perhaps fairly, vendors for the use of language.

And I do have to set a little bit of context here because I do want to quote in context. So you use the term—you say that supply chain sits between Euclidean crispness—so very, very precise language—and AI’s shape-shifting jargon. And you say labels change every decade. You’ve established that.

You point out that vendors utilize buzzwords to, quote, “flatter org charts and marketing pitches.” You write, and this is important: “A now classic move is to coin, or at least adopt, new terms every few years to refresh the offering and brand.”

You also wrote, quote: “Supply chain terminology remains in flux and vendors are more active than ever. Some consultancies advocate agile, dynamic, and holistic supply chain.”

You agree with all of that?

Joannes Vermorel: That’s fair. Yeah. Exactly. That’s literally what is in the book.

Conor Doherty: Exactly. Now, you also tend to sprinkle obscure neologisms and academic terms not only in the book, but in your overall marketing pitches.

For example, last week you posted a blog on holimization, which is a portmanteau of holistic and optimization. Holistic, as I just quoted to you, is an explicit example of buzzwords that vendors—of which we are also one—use. So on one hand we criticize vendors for using certain language and kind of bamboozling people using opaque language.

But how do you respond to the claim that someone might make, if they’re familiar with your work, that it kind of looks like we’re doing that too?

Joannes Vermorel: Yes, but better. Okay.

No, the reality is that most of the co-optation of language by enterprise software vendors is just incredibly shallow. And, to my knowledge, Lokad is one of the very, very few vendors even attempting to be precise in its terminology: what is meant by planning, by forecasting, by optimization, and by the limits of optimization.

And what Lokad is doing with holimization, for example—this degree of attention—is, I think, one of the key features of the Lokad initiative.

My peers—again, I might be biased here—but my perception is that my peers are nowhere near as precise in their vocabulary as Lokad is. They don't even try, and they probably don't even care.

Conor Doherty: But they would say the same thing about their literature to you. They'd say, "We define our stuff very clearly. Look at our website." You say, "Look at my book." These are authority arguments.

Joannes Vermorel: No, no, no. They don't even try. Lokad is doing something that is fairly unique for supply chain—not unique across human knowledge in general—which is to actually invest in making a lot of public documentation available to support those claims.

Because you see, the typical move by my peers is to say: there is a buzzword, “blockchain.” You produce, I would say, 50 pages where you have the word “blockchain.” And then there is a new buzzword, “generative AI,” and you can just do a cut-and-replace: replace “blockchain” by “generative AI,” and the pages that have just been published just work.

That’s my problem. And Lokad does not do that.

So you see, imagine I give you a brochure—50 pages—that says that Lokad is so great because it has adopted the blockchain. And now there is a new buzzword, "generative AI." I can literally do Ctrl+H, find and replace all the blockchain keywords with "generative AI," and your document just works exactly the same.

That’s the crux. And Lokad does not do that. So you see, where we do introduce some terminology, I would say Lokad does it very sparingly. We are not introducing tons of buzzwords. It’s probably like half a dozen over the course of almost two decades of existence.

But they’re very consequential concepts. And then whenever we do, we try to be very, very detailed. For example, this concept “holimization,” it came after several lectures where I describe, for example, experimental optimization, which is a methodology.

So it came with tons of support materials, and I just decided to coin this umbrella term to package what is essentially an entire book chapter plus a lecture plus a lot of tooling plus a lot of articles.

Conor Doherty: The question—just to be clear, because I don't want to get sidetracked—I'm not asking you, "Why is holimization better?" I'm not asking that. That's orthogonal to the discussion.

Let me refine my question to perhaps help. Do you think it’s wise to use the same language that you criticize, even if you’re doing it brilliantly? Do you think it’s wise to use contaminated language—marketplace-contaminated language?

Joannes Vermorel: But you see, at some point you need to be able to understand. I cannot reinvent the entire language. So I need to use stuff that is flawed.

Because, for example, if I use the appropriate term, "system of records," the problem is that people are not familiar with my terminology, and so they will probably not get what I'm talking about. Then I say, "Okay, I'm talking about your ERP and your CRM and your WMS." And they say, "Ah, okay."

But you see, that’s not really buzzwordy. A “system of record” is very clean. I can understand what that means. But “holistic”—what does that mean in the context of a supply chain? It’s more buzzwordy than the other.

And for example, “holistic,” Lokad is using this word very little, and when it appears on our website, we clarify that we mean “end-to-end,” that we mean very specific things.

So you see, the thing also—where I believe Lokad differs—is that, out of all the flaws that we might have, we are not name-dropping buzzwords without context and substance.

Because, you see, again, people would say they have blockchain, and then they are completely non-specific: okay, what is your exact list of features? What are the consequences? What are you doing from a technological perspective? Is it even possible to understand what is going on?

And I believe that Lokad is extremely thorough in making available publicly what goes under the hood. In the M5 competition, when we landed number one worldwide at the SKU level, we published the actual algorithm.

There is a degree of transparency that is very high. And again, I believe the main problem with software vendors is that very frequently it’s exceedingly opaque. And so you have buzzwords that are used, but without any context, without any substance.

And you will have a lot of what I would call “happy talk,” corporate happy talk, that surrounds that. And the terminology that we have at the present day is unfortunately what was left after decades of corporate battles between many, many software vendors and large consultancy groups.

Conor Doherty: Fair. And to somebody—again, the average practitioner—suppose I put two brochures in front of people and say, "These are from vendors; it doesn't matter who they are." One says "holistic supply chain" and one says "holimization (holistic optimization)." They don't go any further; they just have to decide: I have limited time, which one am I going to dedicate it to?

How are people supposed to know if the language is so contaminated?

Joannes Vermorel: No, it doesn’t work like that. It doesn’t work like—you can’t, again, it’s not the magic pill. You can’t, by having one definition given by Joannes, it’s not the way it works.

It’s about how you understand the very reality in which you operate. It’s how you organize in your head the landscape. And you start, and you have to.

For example, you will have tons of university courses that teach operations research, and they contain a lot of good things. They are not called supply chain. Is it supply chain? Is it not? That is exactly what this chapter is about, and it clarifies the relationship between what is called operations research and supply chain.

It turns out that operations research, as it was practiced in the ’50s, ’60s, ’70s, is supply chain. Nowadays, if you look for more recent materials also named operations research, it now becomes something completely different.

So when you see operations research: is it supply chain? Well, it depends on the date of the document. And it varies a little bit because some universities are clinging to the old terminology while some have adopted the new terminology.

So again, it is very critical to understand that. Otherwise, the whole domain is a little bit baffling with things that seem completely disconnected, but they are literally the same thing.

So for example, operations research is supply chain, but only up to essentially the late ’70s. Afterward, operations research became a branch of computer science, which is completely different, which is about mathematical optimization.

Conor Doherty: All right. I’m going to push on because there’s a lot to cover. We may come back to that.

I think one of the standout claims—because again, I tried to read this with fresh eyes—one of the standout claims that you make in this chapter pertains to the idea that mainstream theory is just completely broken. In fact, you use the term “pre-scientific.”

And again, it’s important, just to be fair, in full context—and this is at the conclusion of the chapter—you say: “A simple proposition is that mainstream theory is inadequate.”

Now, you arrive at that on the basis of everything we’ve just discussed, and chapter one which we’ve already recorded. So: “A simple proposition is that mainstream theory is inadequate. Practice deviates because the theory is flawed and the anticipated profits fail to materialize. In other words, despite its vast literature, supply chain remains in a pre-scientific stage where knowledge fails to yield consistent and predictable results.”

So, I do want to touch on that, because there are two issues here. Even if you're right, there is the strength of the language; that's one point. The other is the veracity of the claim itself.

So a casual reader might find that categorizing supply chain, with all of its technologies and approaches—however flawed—as pre-scientific presents a couple of problems.

What are you comparing that to? And I’ll say the rest of the questions, but what do you mean when you say “pre-scientific,” compared to what?

Joannes Vermorel: Any field where falsification actually works. Falsification means that reality can invalidate the claims.

And just look at where this bar is met—take chemistry, for example. It is very much scientific. If a chemistry textbook says that you take this product and that product and mix them, and the mixture is going to heat up: well, you can do the experiment.

And there is a risk of falsification because if it doesn’t turn out as it is written in the book, the theory—the chemical theory—is bogus. The reality is that chemistry is now so well established and verified that if you don’t get the result of your chemistry textbook when you do the experiment, most likely you did it wrong.

But precisely, you can have so much trust in this specific scientific theory because it has resisted falsification for so long. The fundamentals—acid plus base, you mix them and you get a solution that heats up—are so well verified that there is very, very little chance you will make a major discovery in chemistry by doing something basic.

In contrast, when it comes to supply chain, the vast majority of the literature is not passing that bar. First, the vast majority of what has been published cannot even be falsified. So if you apply this criterion, it doesn’t even belong to science from the start. That’s one big problem.

But then, for me, the true way to see that supply chain is not a scientific theory is to look at history. You need to look at the fact that we have a million-plus papers. You need to look at the fact that those things have been floating around for literally half a century, and that for half a century automation has been promised and not delivered.

And again, you need to look at this entire history to see that the problem is not that we lacked computers that were powerful enough. That is not a valid explanation.

So when you take this historical stance, you end up with: okay, I have 70 years of publications. I have at least 50 years of modern computing environments—when I say modern, I mean things that are able to crunch data well beyond what a human can do.

Starting from half a century ago, we could crunch a million-plus numbers with a computer. So this capacity has been around for half a century.

And for half a century, we have been having literally hundreds of thousands—now we have probably in total several million—papers that claim, to various extents, to have optimal solutions to manage inventory and perform all sorts of tasks for supply chains.

They are not being used in companies. And so what I say is: Occam's razor. The simplest explanation is that this theory does not work.

You can go through the very tedious process of going through each one of these million-plus papers and seeing why each paper is flawed—that's going to take forever—or you can just take this historical stance and say: it has been around for so long, tried by so many people, with so many publications, and with so few results.

The most reasonable, simplest explanation is just that the mainstream theory is broken. That explains why you have so little that actually works in the real world nowadays.

Conor Doherty: So as is often the case, multiple things could be true simultaneously. You’ve made the claim that it’s broken. I want to come back to the idea that it’s pre-scientific, and especially to the claim—the standard—of falsification.

And again, I let you go there for quite a while. I anticipated you would bring up chemistry. So just to set the table again: chapter one, your definition of supply chain—your definition—“mastery of optionality under variability in managing the flow of physical goods.”

Okay. So first of all, when we take the geographically distributed nature of supply chain, we take the meteorological forces that act upon it, we take the hundreds—possibly thousands—of different agents who all have different incentive structures, who all have to collaborate with whatever available time they have in order to get a pen from wherever this was made into my hand.

To compare that to the robustness of chemistry might be a bit difficult for some to grasp, considering you can add an acid and a base here or there or in any other room and you’ll have a very predictable response.

The idea that supply chain can only be considered a robust science if it passes the falsification barrier, to some—certainly people who might read the first two chapters—could very easily seem like a literally impossible standard, considering what supply chain is, as you just defined it: variability.

So how can you falsify a constantly shape-shifting entity?

Joannes Vermorel: It is very difficult, but nevertheless, if you have something that qualifies as scientific, certainly after half a century, it should have results. It should have transformed the entire domain.

If you go back to the claims since essentially the ’50s, right after World War II, the claim was very simple: we are going to devise quantitative methods that will let us automate the decisions for companies. And that’s it.

And people were very specific. The interesting thing is that the community kind of got lost along the way. But the operations research community of the postwar, they were very, very clear: we are going to devise quantitative methods—mathematical methods—to optimize our operations, to drive our allocation of resources, to make decisions for us.

And initially, they were even like: the fact that we have computers or not is a non-issue. If we have a very solid mathematical model, it will be humans who will just do the calculation by hand and that’s it.

And guess what? When they identified mathematical models of actual relevance—which was the case during World War II—those models were actually used by people. People would say: "I have the very best method, a mathematical instrument, and lo and behold, I am going to do the calculation by hand, because I need to take this decision, and this method is the best way to allocate my scarce resources."

So the claims were very clear.

Conor Doherty: And then my question—which is: is it too strong?

Joannes Vermorel: Sorry, sorry. Yeah. Is it too strong? I would say no.

I understand that you cannot have something that is as easy to falsify as in chemistry. Granted, the problems are distributed, you need to involve a lot of persons, etc. So we are going to fall closer to soft sciences.

But nevertheless, when we are trying to establish supply chain as a science, we should strive to make our claims maximally falsifiable.

Conor Doherty: No one would disagree with that. Sorry not to cut you, but that is a fine claim, but that’s not the one that you make in the book.

Joannes Vermorel: No, no, no. That’s the one that I’m trying to make.

Conor Doherty: Again, I do need to hold you to that one. If I could paraphrase what you just said, you basically mirrored Oscar Wilde: "We might be in the gutter, but at least we can look at the stars." So we're oriented in the right direction—yes—but that's not what you say in the book.

You said—and I can read it back to you—“Despite its vast literature, supply chain remains pre-scientific.”

Joannes Vermorel: Yeah.

Conor Doherty: And then you said falsification is the only standard. And then you just said: "Well, supply chain probably can't really be falsified like chemistry, or maths, or engineering."

No. Again, it’s the degree of confidence that you can have in your proof, and that’s different. And again, no, it is not.

If you have stuff that cannot be falsified, it’s completely pre-scientific. That’s a standard.

And then the quality of the proof can improve as you progress through the field, as you mature. For me, the scientific maturity of the field is as people find better and better ways to falsify their claims. It’s a work in progress.

It’s not like a zero or one: something can be falsified or it can’t. It’s not. I agree completely. And what I’m saying is that right now it stands in a pre-scientific stage because it’s not even something that is even remotely a concern.

You see, we have millions of papers. This idea that things could be actually tested, to make sure that we have a way to have reality tell us that something is wrong: it’s not even on the radar.

Out of this million-plus papers on “optimal” inventory optimization, none of them is even exposed to the risk of being falsified.

So this is why I say it’s really pre-scientific. And now we have to move to something where we find better and better methodologies to potentially make claims and theories falsifiable.

Also, that’s the point of this history: it’s to assess when we can have an empirical criterion where I just say: history is literally falsifying your theory for you.

You see, it’s an empirical claim. But fundamentally, what I’m saying is that if something actually works, then—but no, it doesn’t, because again we have to go back to the promise.

The promise was complete automation since World War II. It has been around for a long time. And the interesting thing is that when you have pre-scientific theories, their evolution is not to become better scientific theories. It’s to make themselves immune entirely to all criticism.

Conor Doherty: You think that’s possible in supply chain?

Joannes Vermorel: It’s exactly what happened.

If we go back to post World War II, people said: “We are just going to come up with mathematical methods that will give us the exact mathematical answer to our problem. And those methods will be the very best.”

And even if the only thing that we have is a human to do the calculation, it’s still worth doing it by hand.

Fast forward the ’70s: those enterprise software vendors at the time say: “We are going to automate all your mundane inventory decisions.” At the time, inventory management is thought as decisions. It’s really what is being sold.

And if you go back to SAP, in the '70s they would say: "We are going to robotize those decision-making processes completely—completely." I don't mean half of it. No, no: completely.

And then it didn’t work. And so people just backtracked on all of this. And now we say: “Oh yeah, I mean, we are just going to kind of have practitioners to do all the things for us.”

You see, so there is an amount of backtracking on what you're supposed to deliver—what your theory is supposed to deliver—that is absolutely immense.

And for me, this is exactly the path that pre-scientific theories take: as they evolve, instead of becoming better, they just mutate to become more and more immune to reality.

It’s something that Karl Popper identified even in the early 20th century: fields that are not on a scientific track, they are not improving to become more scientific. On the contrary, they tend to have their theories, most of the time, to make them completely immune to any kind of contradiction.

Conor Doherty: Okay, by that standard, let’s be concrete here then. Because to be fair, this is going a little bit off piste from the book, because you don’t go into that level of detail in chapter two.

So a certain degree of latitude here. I’m putting you somewhat on the spot, but you’re talking about consistent—even if we say reasonably predictable—results. You said supply chain fundamentally is decisions.

Decisions that you take at any given moment reflect a very unique constellation of factors, variables, forces, incentives, prices, actions, people, availability, disposition, motivation, all of those things.

So if I take a decision today reflecting all of that, how do I falsify the decision that I just made? Because I won't be able to repeat it. So again, given this stochastic aspect, if you take one decision, you can't falsify anything.

Joannes Vermorel: Okay, let’s go with that. If you take thousands of decisions as an iterated game, you can very much falsify something in the whole, or in the main, so to speak.

The fact that you have some variability means that you need to do repeat trials, and the level of ambient noise is high. So you need a lot of rounds to conclude.

But because it’s a soft science, this is not just that. For example, you would have the exact same problem if you want to survey rare opinions.

For example: how many people do you need to survey in France to decide which monarchist branch is more popular? Because among people who are pro-monarchy in France, different people favor different branches of the monarchy.

The problem is that people who are pro-monarchy in France are probably something like 0.5% of the population. So if you want a reliable survey of who favors the Bourbons and who favors another branch, you will probably need to survey several thousand people.

So it’s just the ambient noise. And here, you see, it is not about soft science. It is just the fact that every single decision comes with a lot of noise. So you need a lot of decisions just to be able to average that out and to be able to conclude with not too much uncertainty that method A is better than method B.

But that’s it.

Conor Doherty: Again, you use the example of chemistry. We talk about molecular structure, or what happens when two very well-known compounds interact; we can predict that whether we have a lot or a little. And you're then saying the true equivalent in chemistry would be: what if we want to do an experiment, but the quantity we have is nanograms? So your measurements are super crap just because the quantity is so incredibly tiny.

And the only way to get a measurement that is roughly correct is to do it thousands of times and average it out. So you see, this is a situation where your experiment comes with a lot of noise.

But far less noise than a supply chain—and that's the whole point.

So again, I like the example and I’m going to build on that example. You’re talking about a low-resolution experiment that can be conducted in a very precise field, versus a supply chain which, by your own definition, is subject to way more variability.

Joannes Vermorel: Yes. But nevertheless, you can A/B test. If you have two methods to allocate your resources, you can A/B test.

Conor Doherty: But again, that’s comparative. That’s not falsification. Falsification is absolute.

Joannes Vermorel: No, the falsification will be: it works or it doesn’t.

If you have a method that claims, "This is optimal"—and optimal means there will never be anything better, ever—then if I find something that is better, it's not optimal anymore.

It doesn’t matter what happens with the math.

I just said that we have a million papers that claim to have the optimal inventory policy. They might all be wrong—but what does that say about the supposed hardness or softness of the field?

What I’m trying to achieve is to have something where we can have an arrow of progress, and we do progress through this arrow. And here, when you have a million papers that say they have optimal inventory policies, and none of them is actually used in production after decades, this is a problem.

This is not an arrow of progress. This is a field that is very stagnant, where people keep publishing things that are supposed to be optimal, but they are not used for good reasons.
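
To illustrate what such a falsification looks like in practice, here is a hedged toy sketch. Nothing in it comes from the book: the newsvendor setting, the lumpy demand distribution, and both order quantities are invented for the example. The point is that "optimal" is an absolute claim, so a policy advertised as optimal is refuted the moment any challenger beats it on the same trials.

```python
import random

# Toy newsvendor setting (all numbers are illustrative assumptions):
# each unit costs 1.0, sells for 3.0, and unsold units are worthless.
COST, PRICE = 1.0, 3.0

def profit(order_qty: int, demand: int) -> float:
    """Profit of one period for a given order quantity and demand."""
    return PRICE * min(order_qty, demand) - COST * order_qty

def average_profit(order_qty: int, demands: list) -> float:
    return sum(profit(order_qty, d) for d in demands) / len(demands)

random.seed(7)
# Real demand is lumpy: mostly zero, occasionally a small or large order.
demands = [random.choice([0, 0, 0, 5, 20]) for _ in range(100_000)]

claimed_optimal = 10  # "proven optimal" under an assumed smooth demand model
challenger = 5        # a quantity someone simply tried against history

p_claimed = average_profit(claimed_optimal, demands)
p_challenger = average_profit(challenger, demands)
print(f"claimed optimal: {p_claimed:+.3f}  challenger: {p_challenger:+.3f}")
if p_challenger > p_claimed:
    print("Optimality claim falsified: something strictly better exists.")
```

Run against the lumpy demand, the "optimal" quantity loses money on average while the challenger is profitable; the optimality claim does not survive contact with the data.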

Conor Doherty: And it’s not about—so the question is: that’s another way to look at it, assuming the conclusion. You’re assuming that the papers aren’t used because the papers are crap, not because you might not actually be able to obtain the kind of structure that you want in this field. It’s like trying to grab water. “Oh no, water, I should be able to grab it. Science should be holdable.”

Joannes Vermorel: No. That’s the point of this chapter: to think of those last maybe 70, or at least 50, years. If we have, let’s say, a million—more than that—but a million papers about inventory optimization techniques that claim optimal inventory optimization techniques, and most practitioners have not read them, by the way. Most people don’t know.

If we just fast-forward 50 more years and we have one million more papers, will that make any difference?

And my answer is: again, if we look at history, probably not. And you see, that's why we need this history chapter: to realize that there has been very little progress. The field has been very stagnant.

And the only way to realize that the domain has been stagnant for decades is to look at history. This is a high-level judgment.

And I believe that part of the problem that we have is the lack of falsification, and so we cannot really—we don’t have an arrow of progress. The arrow of progress in this field is kind of broken.

And that’s what I’m trying to point out and I’m trying to put forward. Falsification is just one instrument we need. It’s a critical instrument, but that’s not the only one.

And I do believe that, ultimately, if you have things that cannot be falsified, you are in very dangerous territory.

Conor Doherty: So, just to be clear, because that's a slight amendment to the position that was presented earlier. Earlier it was: something is pre-scientific if it resists falsification. Then you said falsification is one of the tools of science. So are you saying that, because science is more demanding, supply chain might not be falsifiable but could still be scientific?

Joannes Vermorel: No, no, no. Falsification is necessary, but it’s not sufficient.

For example, a theory has to be falsifiable, but also it has to be minimally concise—Occam’s razor. If you have two theories that would be equivalent in their capacity to predict the world, to give you the best decision, and one is vastly simpler than the other, then you should pick the one that is simpler.

But sure, it doesn’t say anything about the fitness or rightness of the other. It’s a way of differentiating or making a choice, but it doesn’t necessarily mean—because you chose A instead of B, A was shorter or simpler to understand—it doesn’t actually comment on the actual veracity of B at all.

Conor Doherty: No, but that’s how science—both would be right.

Joannes Vermorel: Yes. That’s literally how science works. If you have two theories that are exactly equivalent but one is way simpler, scientists would just say: we pick this one. That’s it.

They know that they have alternative theories that can be equivalently correct, but you just pick the simplest one.

Conor Doherty: And you see that as achievable in this field?

Joannes Vermorel: Yes. Again, I believe that the transition toward something scientific requires criteria to get rid of the stuff that does not work—instruments to reject, clear rejection criteria—and to bring back this arrow of progress, where the stuff that is uncovered and considered actually scientific by the community is indeed being used.

You see, when a discovery is made in chemistry, or many other actually scientific fields, it’s being used by practitioners. If you develop a new chemical process that is much more efficient than a previous one—and again, all those terms are correctly understood, you know, the yield of a chemical reaction, and whatnot—people will actually use it.

In computer science, when people uncover a better algorithm, this better algorithm is usually put into production within months by companies interested in that class of problems.

So you see, this is where I go back to this idea of pre-scientific: if we look at domains that are scientific, when there is a breakthrough, it ripples through the entire domain. People adapt. They are furiously chasing the latest paper, the latest discovery.

If you look at, for example, what has been happening in generative AI: when a team publishes a paper and says, “Oh look, I was able to do this, but faster, easier,” everybody scrambles to implement that as fast as possible. And the reason is that it works. It’s something that is valid.

Conor Doherty: It’s apples and oranges though. I understand the example, I understand the point that you’re making about—

Joannes Vermorel: No, it’s not apples and oranges. That’s the point.

Conor Doherty: How is choosing what to order, when, how much, from where, expedite or not, comparable to what you just described?

Joannes Vermorel: Because if the field had matured into something scientific, when people would publish a method, people would just A/B test it, and within a month, if the method was actually superior, it would be adopted.

And people don’t use the papers, as you pointed out. But again, my answer is: they don’t use it because it doesn’t work.

And again, that’s why we need to have this historical stance: assuming that those 10 million supply chain practitioners are just all ignoramuses, who are just so ignorant of all this academic literature—I think it is a completely unreasonable assumption.

It’s not the case.

Every single large company that I’ve been working with has at least a few people who are keeping an eye on the academic literature. They all do. All those large companies, they’re not idiots. They have collaborations with their local universities. They keep an eye on what is being published.

If it’s not adopted, it’s just that it’s not working. It’s the simplest explanation, and I believe it is the correct explanation.

And when this community reaches a scientific stage, we will see what already happens in other fields: a paper is published, it claims something significant, and then everybody scrambles to replicate and adopt it. That's exactly what we're seeing in computer science. That's exactly what we're seeing in generative AI. That's exactly what we can see in materials science and whatnot.

And for me, this sort of behavior is the anecdotal evidence that you have a field that really operates on scientific premises, where the arrow of progress is clear.

When people uncover something and say, “We have something big in our hands,” everybody tries to replicate, and if the replication succeeds, they just adopt.

Conor Doherty: We’ve been talking now—it’s been over an hour. We’ve done two hours so far on this. It’s great. I really enjoyed the conversation.

And there’s a thought now I think it’s appropriate to put to you, which is: so I’ve read the book, and obviously I can’t fully switch off the fact that I know you. I can’t fully—no, but I can’t. It’s like you can’t—it’s like you understand English. You can’t not understand English when I say it because you understand English. So you can’t switch that part of your brain off completely. You can ignore it, but you still understand. Even if you try to make your brain not understand.

And similarly, obviously, when I read, even if I try to look at it through completely fresh eyes, I still know what you're saying. I might have phrased it slightly differently, but I know what you're getting at. And obviously I know what you mean by all of this.

That said, this is two hours of conversation where we’ve taken maybe five or six points and we’ve gone back and forth quite a bit. And while I don’t pretend to be on your level of expertise on this, I’m not completely green on this either.

So, do you think that all of the information that we’ve exchanged—do you think that the average practitioner will read between the lines in your book and see, “That’s what Joannes meant when he said it’s pre-scientific”?

He didn’t mean that I’m an ignoramus, a self-taught surgeon—that’s chapter one—who doesn’t really have it, my training is all crap. But you know all the constructive stuff we just said, like the arrow of progress.

Do you think they’ll see that, or do you think they might be a bit turned off by the extremeness? And it’s an emotional question. There’s no right or wrong.

Joannes Vermorel: In my discussions with practitioners, I find that the fact that their field is stagnant is something probably 90% of supply chain practitioners would agree with.

People who have been around for 30 years, they would say: “Oh, when I joined this company 30 years ago, we were already doing the same stuff on an IBM mainframe at the time. And now we have a web app, but it’s still the same logic, still crappy, and still the same spreadsheets to duct tape the results. That makes little sense.”

So you see, the fact that the domain has been stagnant: most practitioners who are above 50 would probably agree, just because for them their entire life career has been around the same ideas and they have seen very little change over the course of the last three decades, even if the look and feel of the software has changed dramatically, going from black-and-white terminals on mainframes to web apps.

But again, while the user interface has changed dramatically, the math, the logic, the sort of understanding did not.

So you asked: would they read between the lines? I don’t know.

What I think is that, because they will probably share the same intuitive diagnosis—a stagnant field, a broken arrow of progress—reading this history section will kind of crystallize what they already perceive. And it will give them some interest; it will increase their interest in actually learning more and trying to challenge the status quo.

You may feel that your domain is stagnant on a day-to-day basis, but you don’t really think about it on a daily basis. So it’s just something at the back of your mind. And then maybe through this book, you realize: “Oh crap, actually, it has been stagnant for half a century now. Something needs to change. And I need to maybe learn a little bit more.”

So, can they read between the lines? I don’t know. But can it still trigger the sort of reaction that is needed to actually resume this arrow of progress in this field? I hope so.

Conor Doherty: Well, I don’t disagree.

Maybe allow me to reframe from a more meta perspective, because again, you’re an author. You wrote a book. I write as well. So there’s an implicit social contract signed between author and reader: the purpose is, I want to convey a thought. I am the author. You are the reader. You are the audience for my book.

The implicit social contract is: as much as possible, I will meet you where you are. So for example, you didn’t write in Latin. There actually are some Latin phrases, but you didn’t write in Latin. That would have been an insane thing to do because then no one would understand it. So you didn’t write in Latin. You didn’t write in Aramaic.

And more pragmatically: you put all the math in the very last section of the book. You made those choices. There were systematic choices made: okay, I want to adapt to the audience at least somewhat.

And what I’m saying is—let me frame it this way—we agreed on an order of magnitude of 10 million practitioners. Let’s say I, through magic, put the book in front of all 10 million people, and maybe 50% of them go, “Awesome. Yeah, this resonates with me.”

And 50% go, "I'm a bit troubled by some of the extremeness of the claims," and they haven't heard this conversation. So they're literally just reading: mainstream theory is broken, disastrous, not to be trusted, prisoner's dilemma—what was the exact phrase that tipped this off—"pre-scientific."

You’re being very precise, and I know in your head you are being very precise with what you mean. But again, if we take 10 million and we take an average, do you think they’ll go, “He doesn’t actually mean I’m an idiot. He means that it lacks the Popperian theory of falsification. That’s what he means.”

Joannes Vermorel: But also, you see, I don’t think that the community is even thinking of itself as doing science. And that’s kind of the problem.

It’s the fact that university professors publish papers that are not science at all is definitively a problem. But in terms of self-awareness, it is quite high, and very few practitioners would think: “What I have learned in supply chain is proper science, and now there is a guy who is challenging that.” No.

I think most practitioners know that it’s a hodgepodge set of heuristics, tribal lore, and things that kind of work for their company, but it’s very inconsistent. It has a lot of inner contradictions that are not resolved, that are a big pain, and as a consequence, everything requires more meetings than it should, etc.

So I believe that many people would probably not be too shocked by those claims.

But also, I think part of trying to bring this field to the status of a science is to have sharper distinctions. And that's why we need to be a little bit more extreme, because part of the problem I have with this sort of pre-scientific material is that very frequently it is impossible to falsify, precisely because what it says is just so soft.

So you can read the supply chain 4.0 content one way or the other. Events can unfold in any way; it's never going to be contradicted, because it's not sharp. It doesn't have this crystalline structure where it can break and you will see it.

It’s like a big ball of mud. If you present a theory that is a big ball of mud, this thing can be distorted in all sorts of ways—it’s still a big ball of mud—as opposed to if you have something that is kind of a crystal, where if it breaks, it breaks, and it would be fairly obvious that you broke it.

I know it might sound terrible, but we just published a criticism of a paper co-authored by 42 authors. The title is something like “Supply Chain in the Era of Generative AI,” or something.

And this paper, when I say “big ball of mud,” this is exactly it. This paper is the archetype of something that is a big ball of mud. It has no shape, no consistency. You can just move it apart, it’s still going to be a big ball of mud. Nothing that will ever happen in the future can really contradict that, because it is so formless and crappy.

That’s what I mean by being pre-scientific: once you have something that has a much clearer structure, you realize, “Oh, this thing was a big ball of mud.” It takes the science to emerge to realize what you were doing so far was pre-scientific.

Conor Doherty: Okay. Well, we’re not going to finish the discussion—we’re not going to be able to close the discussion on whether or not supply chain can attain chemistry levels of falsification.

But as a close, one last question: in terms of practical advice for people, how can they insert a little bit more science into their supply chain from day one? Day zero is: they've started reading the book; they've only read a couple of chapters. Day one: they start applying them. How can they put science into what they're doing?

Joannes Vermorel: Going back to this historical perspective: I believe you need to have one. It's very important, because again, if you don't want to be fooled, you have to look at the history of the claims that have been made.

When people have been claiming stuff for decades, and every decade they claim stuff that is less ambitious with more buzzwords than the decade prior, you’re not on a good track. Just think of it.

Major ERP vendors now make claims that are much more modest about what they will deliver, while shipping technology that is even fancier and more sophisticated. If my claim in 2025 is more modest than what I was claiming in 1975, despite my having access to generative AI, what is going on? What is going on?

So you see, that’s why I think it’s very important to study history: to understand who can be trusted.

And again, we have seen that doing those experiments is very costly. That is true. Falsifiability is costly; it's slow, it's messy, it is difficult.

But I believe that the historical perspective gives you something that is a proxy. It’s not a super good proxy, but it’s a proxy nonetheless.

And you can look at stuff that has been around for decades making grandiose claims, with very little to show for it. At some point, you can say: okay, this is just bogus; otherwise it would have worked. It would have worked.

Otherwise some people would have managed to convert all this knowledge into something useful.

For example: graph databases for supply chains. Graph databases have been around for almost three decades. Open-source graph databases have been around for two decades. This thing has not gone mainstream. There is very, very little use that I know of.

I don’t think I’ve ever met an actual company using, in production, to support anything supply-chain-related, a graph database.

Can I now conclude, from this historical knowledge, that graph databases are never going to have any impact on supply chain? It is very empirical—a historical argument based on observation. It's not exactly a proof, but still, it counts. For me, it counts as a very solid indication that this is a very unlikely direction for future breakthroughs in supply chain.

So what I’m saying is: you need to have a look at the history. Look at the stuff that has never delivered much. If things have been around for a very, very long time and they don’t seem that they have delivered much, probably they will never deliver anything.

And that would be the essence of this chapter: pay attention to history. The stuff that has not been working for half a century, most likely it will never work.

Conor Doherty: All right. Well, thank you very much for your time. I’m out of questions. That brings chapter two to a close. I’ll see you, presumably next week—2026, at least—for chapter three.

And to everyone for watching, thank you. If you want to get in touch with us, or if you want to escape the giant ball of mud you find yourselves in, connect with us—and me—on LinkedIn, or send us an email at contact@lokad.com. And on that note, do please get back to work.