00:00:00 Supply chains exist to anticipate uncertain futures
00:04:55 Taleb exposes planning’s illusion of certainty
00:09:50 Human agency defeats deterministic supply chain forecasting
00:14:45 Preparation differs from rigid, time-series planning
00:19:40 Greengrocer example illustrates opportunistic resource allocation
00:24:35 USSR planning absurdities reveal compliance’s hidden costs
00:29:30 Means fail when human behavior turns extreme
00:34:25 Human systems vary far beyond natural systems
00:39:20 Fat tails justify insurance and downside protection
00:44:15 Uncertainty also creates profitable upside opportunities
00:49:10 Intuition often outperforms broken mathematical instruments
00:54:05 Companies survive by quietly ignoring defective plans
00:59:00 Probabilities better capture risk than time series
01:03:55 Airline delay story makes fat tails tangible
01:08:50 Personal choices already rely on probabilistic thinking
01:13:45 Practitioners should stop Gosplan planning immediately
01:18:35 Closing thoughts on subtraction, pragmatism, and chapter seven
Summary
Supply chain is not about obeying a plan. It is about making decisions under uncertainty. Vermorel argues that mainstream planning treats the future as if it were already known, reducing reality to forecasts and time series. But markets are shaped by human choices, not mechanical laws. That is why averages mislead and extreme events matter so much. Good practitioners prepare, adapt, and seize opportunities. Bad systems cling to rigid plans. The lesson is simple: stop confusing numerical sophistication with practical intelligence.
Extended Summary
Chapter seven argues that supply chain is, at bottom, a matter of making decisions under uncertainty. That may sound obvious, but Joannes Vermorel’s point is that much of mainstream supply chain thinking proceeds as if the future were already known and management’s task were merely to comply with a plan. In that view, the world is reduced to time series, forecasts, and targets. What disappears is choice, judgment, and adaptation.
His objection is not to preparation, but to a particular conceit masquerading as science: the belief that elaborate numerical plans can reliably capture a future shaped by human decisions. Markets are not celestial mechanics. Customers, suppliers, competitors, and regulators are not particles obeying fixed laws. They change their minds, react to each other, and create outcomes that are often discontinuous, lopsided, and surprising. Hence the central role of uncertainty.
The practical difference between planning and preparation is illustrated through ordinary examples. A greengrocer who buys produce each morning does not succeed by rigidly following a twelve-month target. He succeeds by noticing quality, price, seasonality, novelty, and customer appeal. He acts opportunistically. Likewise, a taxi driver does not need a theory of time series to know where demand is likely to emerge. He uses judgment. In both cases, readiness matters; rigidity does not.
A major theme is that averages are often misleading. In systems shaped by human behavior, extreme events matter disproportionately. A promotion may do almost nothing, or it may empty the shelves in an hour. A grounded aircraft may be a nuisance in one city and a financial disaster in another. These are not rare curiosities. They are the events that dominate profit and loss. To plan around the “average case” is to risk being repeatedly blindsided by the cases that matter most.
Vermorel therefore favors a probabilistic view of the future over deterministic planning. Not because probabilities are perfect, but because they at least acknowledge ignorance, asymmetry, and risk. The larger lesson is subtractive as much as additive: many firms would improve not by adding more planning, more process, and more numerical theater, but by stopping practices that create recurring economic damage. Bad planning survives not because it works, but often because everyone is doing it badly together. The first gain, then, may come simply from refusing to confuse scientism with science.
Full Transcript
Conor Doherty: Welcome back. This is episode seven in a special series where Joannes and I take his new book, Introduction to Supply Chain, and we discuss and debate the ideas chapter by chapter. Now, as you might recall, for this series I assume a very specific posture: that of someone who doesn’t know Lokad, doesn’t know Joannes, and certainly has not worked here for four years.
I am essentially a proxy for the 10 million or so practitioners in the world who might see Joannes’s book, possibly on Amazon, buy it, read it, and have certain questions. And then Joannes and I come here to Lokad and we debate them. Now, I said this is episode seven. That of course means that there were six episodes before this.
I strongly recommend you go back and watch those because a lot of what we discuss today will probably reference that. And with that out of the way, Joannes, good to see you. Welcome back.
So, chapter seven, “The Future.” Before we get into the core ideas, and I have lots of questions about the core ideas, what is that about at a high level, in a few sentences?
Joannes Vermorel: Every single supply chain decision is an allocation of resources that anticipates a future state of the market. If a company starts producing anything, it’s because this company expects that later on clients will show up to buy the products. If a retailer puts a product on the shelf, same thing, they expect future clients to show up and buy.
Essentially, pretty much any decision that concerns the flow of physical goods is reflecting some kind of projection of a future state of the market. And because sourcing materials takes time, producing takes time, transporting takes time, because all of those operations take time, you need to think about the future and plan your decisions so that you can act ahead of time and be ready in time.
So fundamentally, that’s why this notion of the future is omnipresent in supply chain problems. It’s because it is fundamentally about acting now correctly, and the correctness is defined by whether it will match the future state of the market and whether it will be a good decision in retrospect a few weeks, a few months from now, once we can see how the future unfolds.
Conor Doherty: Well, for me, reading it, what struck me as a core through-line that comes up with all the little subsections and all the examples is the idea specifically of uncertainty. Because I don’t think anyone disputes the idea that making decisions in supply chain is essentially a forward-looking bet. I mean, everyone fundamentally understands that. Like, I buy, I order a thing today, it won’t arrive now. I’m planning for future states.
But you focus a lot on the uncertainty of the future.
Joannes Vermorel: Yes. And the problem is that the mainstream supply chain perspective does not actually think like this at all. Literally, the mainstream supply chain view does not think in terms of decisions. Decisions do not exist. There are none.
The mainstream perspective says the only thing that exists is a future that I already know. And then comes the orchestration. I know how many units will be demanded by my customers tomorrow, the day after tomorrow, one year from now. I have the numbers. So there is no… you see, a decision implies that you have a choice.
The classic perspective, the mainstream supply chain theory, does not make this statement. They do not assume that you have any choice. They just say the future is known now. You can be compliant or non-compliant. And there are tons of statistics that revolve around that.
But it’s fundamentally a vision that is extremely binary, where it is just about the future being known, and there will be compliance, and that’s it. And that’s why decisions feel so alien, because once you have a plan, it’s either you are compliant with your plan or you’re not compliant. The idea that you would have decisions doesn’t even really fit.
And that’s why, in classic mainstream supply chain books, the notion of decisions is just absent, because it doesn’t fit the paradigm.
Conor Doherty: Well, even at the start, you used the terms plan and planning. Now again, we’re going to get into that in a moment, but you have slight reservations, or rather robust reservations, around the term planning. And we’ll get into the implications of, well, if we can’t use the word planning, what do we mean?
But at a high level, on the topic of uncertainty in planning, you actually quote Nassim Taleb in the book, and you say, and again just because it is quite a nice quote, Taleb writes in Antifragile, great book, I know you love it, I’ve read it myself, “the illusion that you know exactly where you are going, and that you know exactly where you were going in the past, and that others have succeeded in the past by knowing where they were going.”
This is Taleb’s challenge to the idea of planning. Now, what exactly about Taleb’s idea there resonates with you, and why is it relevant to your supply chain vision?
Joannes Vermorel: So, I would say there are two very different ways to think about planning. There are many other ways, but let’s say there are two dominant ones. First, there is the layman intuition. The layman intuition is that I act now to improve my readiness for whatever the future will throw at me. That’s preparedness. This is making plans to be prepared.
It is fundamentally an intuition of what you should do now so that later things happen in a more beneficial way for you. That can be something as simple as a taxi driver who decides, “This hour I will drive for half an hour to end up in this neighborhood, and that will be a good neighborhood to pick a customer.” That is the sort of intuition of planning.
And then you have another sort of planning, which became very popular in the 20th century, and that’s Gosplan planning. The Gosplan is fundamentally a mathematical intuition. So it is not the one of the taxi driver. It is fundamentally saying, “I can project time series into the future.”
For everything, for every single consumption of a resource, any resource that is demanded or requested by my clients, I can take note of how many units I need per day, and I can extend that into the future indefinitely. And that gives me a sort of baseline. Then once I have this data that can be projected into the future, I can just say, “Now I orchestrate my resources to fulfill this future demand.”
And this is really the Gosplan perspective on planning. The central idea is: everything is about time series. Everything is seen through time series. It is fundamentally, I would say, a mathematical projection of what the future even means. This is not what the taxi driver has in mind.
The taxi driver doesn’t think, “Oh, I have a time series of people showing up in this neighborhood.” He is not having a mathematical intuition. Most likely, if you ask a taxi driver why he is doing it, he would say, “I don’t know, my experience.” Just pattern recognition. It would not be a mathematical formula.
Okay? And that’s fine. What I’m saying is that we have this other way to think about the future, which is really the Gosplan: let’s use time series everywhere, project everything, make those time series accurate, and then once we have that, we freeze the forecast and we say this becomes the plan.
And that’s where I think this vision, which is very, I would say, for me, very much scientism… it looks scientific because there are a lot of numbers, because you have time series projections and whatnot. It looks scientific, but for me it’s complete scientism. It doesn’t work. But it completely captured, I would say, the interest of intellectuals for pretty much an entire century. I mean, this captivation for this idea even predates computers.
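To make the “project and freeze” mechanics concrete, here is a minimal Python sketch of the Gosplan-style logic being described. Everything in it, numbers included, is invented for illustration; it is not from the book or from any real planning system.

```python
# Toy sketch of Gosplan-style planning: project a demand time series
# forward, then freeze the projection as a binding plan.
# All numbers are invented for illustration.

history = [12, 9, 11, 14, 10, 13, 12]  # units demanded per day, last 7 days

# Naive projection: tomorrow looks like the historical average, forever.
baseline = sum(history) / len(history)

# "Freeze the forecast": the projection becomes the plan, 365 fixed targets.
plan = [round(baseline)] * 365

def compliance_gap(actual: float, target: float) -> float:
    """Deviation from plan: the only question a plan-centric view can ask."""
    return abs(actual - target) / target

# A day where demand doubles registers as 100% non-compliance,
# not as an opportunity or a risk, which is exactly the blindness at issue.
print(plan[:5])                     # [12, 12, 12, 12, 12]
print(compliance_gap(24, plan[0]))  # 1.0
```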
Conor Doherty: Okay. But how does that actually inform the worldview that you’re pushing? I understand your objection to Gosplan, but I’m still unclear from that answer. How does that fit into your conception of planning versus policy versus supply chain decision?
Joannes Vermorel: So the first thing, and that’s what I discuss in this book, is that we have to think of what the future is made of. What is the very structure of the reality we’re looking at? I know it sounds like, what? But okay, if we are looking at the propagation of radio waves through space, we have the Maxwell equations to tell us what will happen in the future.
If we look at celestial bodies, we have Newton’s laws. And if we want to be very fancy, we can go for relativistic stuff, and that will tell us where those celestial bodies, like planet Mars, will be in the future. We don’t have that luxury in supply chain.
I mean, the problem is, you might say, that’s obvious. But again, the mainstream supply chain theory makes the exact opposite assumption. They say, “Oh yes, we are just one mathematical formula away from knowing the future perfectly. It’s just a matter…” And the interesting thing is that the very first economic forecasters, literally at the beginning of the 20th century, were very explicit.
They would say, “We will capture the future of humanity perfectly.” And when I say perfectly, Roger Babson, for example, said, “I’m going to do what Newton did for celestial bodies, and I will predict, literally with five digits of precision, the future demand for every single commodity or for every single product.” That was the intent.
And if you go into science fiction, for example Asimov, with his Foundation series, that’s again the same idea. He was half a century after Babson, so he understood that the idea of capturing every single thing would be maybe too demanding. But if we do statistics, we might capture the aggregates very accurately, even if individuals are a little bit too noisy.
And Asimov has this idea that just like statistical physics, you can near-perfectly predict the behavior of gases, even if you don’t have the individual position of every atom in the gas. Now, I know it might sound very abstract, but it is fundamental, because this idea, I would roughly say, the idea that you can project the future with near-zero uncertainty, doesn’t work in supply chain.
And the very simple reason, and I’m circling back on the nature of reality, is that in supply chain the relevant future is praxeologic. Praxeologic means it is the study of human actions. The future is made of decisions that have not yet been made by people who have agency.
So the entire future is the aggregation of decisions that have not yet been made. And as long as those people are people you do not control — your clients, your suppliers, your transporters, your competitors — as long as all those people have agency, then the idea of a certain forecast doesn’t even make sense.
Because if you had a certain forecast, all it would take for, let’s say, a client to defeat this forecast is to decide that this person is going to do otherwise. If you predict that this person is going to buy today, and the person is made aware that you made this prediction, then this person can decide that finally, really no, out of a spirit of contradiction, they just change their mind.
It may sound silly, but in fact the effect is very real. People have agency, and thus depending on what you do, they will alter their behavior, especially your competitors.
Conor Doherty: Okay. I’m going to slightly rephrase my question, I think, to make it a little bit more concrete. And again, I’m going to quote you so then there’s a bit more context. So, you write again in chapter seven, “The future is irreducibly messy. You need not be perfect, only better. And decisive actions beat elaborate planning. And luck rewards the prepared. Preparation, then, is the art of cultivating opportunities.”
Now, you wrote that. Someone reading that and listening now might struggle to parse the difference between preparation and planning. So again, in a concrete context, how do those two differ?
Joannes Vermorel: Is your plan reified as a collection of time series? That’s the question. And again, in 99.9% of enterprise software, time series is the only option. So it doesn’t matter what you think. Your software is rigidly geared toward this model, and the supply chain books that you have in your library, if you have any, are also rigidly codified around this idea.
And pretty much that’s everything. Since World War II, there have been a million-plus papers on inventory optimization, and the immense majority just adopt this perspective.
Conor Doherty: And that is not a concrete explanation of the difference. I didn’t ask for a methodological or an atomistic answer; I didn’t say under the hood. I meant practically, on a day-to-day basis, for the average practitioner reading a playbook. What’s the difference between your view of preparation and a dedication to planning, to Gosplan planning? How does that cash out in the real world on a day-to-day basis?
Joannes Vermorel: So, how it cashes out is that if you adopt an incredibly rigid view of the future, then you are blind to any opportunity. For example, you plan to produce a certain quantity of a product. Okay, that’s a 12-month plan. Now we are two weeks into the plan, so there are still 50 weeks to go in this one-year plan, and your supplier improved their tech, and they tell you, “Now we can actually divide the price by two.”
You’re lucky. So you have something that becomes much, much cheaper to produce. That’s good. Should you maybe change your plan? I mean, obviously, if you had a very nice opportunity, something where you had to buy a component for your own product, and it just turned out that one of its main ingredients is suddenly massively cheaper… this can happen. The other way can happen also.
Now obviously, if you do nothing, you can be undercut in so many ways. Your competitors might decide to lower their price massively and just completely outcompete you, and so the demand that you will observe will drop to zero. Or you can maybe negotiate with a supplier that you will capture most of this supply at this price point and literally buy the stuff for up to one year ahead.
And then you will be able to capture a lot of market share by selling at a lower price point but producing a lot more. Again, those things are very simple.
Conor Doherty: Mhm.
Joannes Vermorel: But they are inconceivable if you operate with the classic planning perspective, because for the classic planning perspective, those opportunities just do not fit. There is only compliance to the plan.
Conor Doherty: Okay. Now I think we’re getting a lot more concrete there. And again, I realize the example that I’m about to give is not from your book, and it’s actually about logistics. But we recently spoke with Adam DeJans Jr. and John Elam, and in their book, The Decision Factory, it begins with a logistics company. They have a plan for how to deliver all of their products to all of their customers. They have an optimal plan. The solver runs at midnight.
But then six hours later, reality occurs. The plan is completely null and void. It’s operationally infeasible because people are sick, there’s traffic, trucks have broken down, etc., etc., etc. So in that situation, there’s chaos, there’s uncertainty that presents possible opportunities, but your ability to capitalize or react to those opportunities or those events is contingent upon: do you have a fixed, very deterministic planning worldview, or a more flexible, reactive — I believe the term you use is policy-based — worldview? You have ways to respond.
So tease that apart again, because we are getting concrete there: the difference between if your worldview in supply chain is governed by planning versus more flexible and you have a policy-based approach.
Joannes Vermorel: Again, the simplest way is to imagine a simple situation. You are running, let’s say, a small store selling groceries: fruits, vegetables, very basic stuff. So you go to your wholesaler every day at like 4:00 a.m. to buy the stuff, and then that will be on your shelves during the day. And at the wholesaler, because again this is fresh stuff, sometimes you have opportunities.
You will see, for example, very, very good-looking lettuces that are very, very cheap. Or on the contrary, you look at what you have and you say, “They are really not good, and they are quite expensive.” So do you want to behave by saying, “I have my plan. I will buy, let’s say, 10 kilograms of lettuces for my store no matter what”?
Or will you actually look at the price point of what your wholesaler has, and the quality that he has, because again it varies day to day, and appreciate whether it is a nice opportunity? And maybe there is some stuff that I usually don’t buy, let’s say strawberries, but on some days the strawberries are cheap, look very good, and then I decide, “Yeah, I’m going to take that because my clients will like it.”
I usually don’t have those, but I can bet on that because it sounds like a good typical supply for this period. Again, if you were to go to the wholesaler at 4:00 a.m. and say, “I want 5 kilograms of strawberries today for my little store,” what if your wholesaler has no strawberries? Or he has them, but they’re super expensive?
Conor Doherty: Yeah.
Joannes Vermorel: Or they are just not good-looking. That’s the sort of situation where maybe, for example, if they are super expensive, I will not take my target 5 kilograms. I have a small store. I will maybe just take 500 grams, because maybe I have a few clients that will be willing to buy those strawberries no matter the price, but they will maybe buy just 100 grams because they are gourmets and they really don’t care about the price.
So you see, what this grocery manager is doing in what I’m describing is being incredibly opportunistic. This person is thinking about the future, but he’s not thinking with rigid targets. Everything is very in flux.
Conor Doherty: Yes.
Joannes Vermorel: And the only way to appreciate that is to see that he’s thinking about his whole assortment. He wants vegetables. He wants fruits. He wants good-looking stuff. He wants diversity, assortment, maybe some novelty, maybe some stuff that is for the season. He has like a hodgepodge mix of constraints, a fuzzy view of the future.
And then he is also very self-interested. He wants to make money. He’s not in the business of making his store look good. He’s in the business of making his store very appetizing so that people buy a lot of stuff and he can make a nice profit at the end of the day.
So that is planning as I advocate it. It’s a mindset that looks at the future while keeping track of the euros and dollars that you’re going to win or lose. You’re making bets. Being prepared means yes, you have to take the right actions ahead of time, but every single action is a bet, and you do not try to rigidify your future with hard targets.
That is kind of the path that I advocate. And what I’m saying is that for the last 50 years or so, the entire supply chain theory and pretty much all the enterprise supply chain software in existence are adopting a completely different perspective, where they rigidify, Gosplan style, the future into time series. And that doesn’t work.
Exactly. If we take this guy going to his wholesaler, it’s as if he had already made a list, for every product, of how many kilograms he will buy, regardless of what opportunities are there, and disregarding even basic availability at the wholesaler. The list would say, “Oh, for whatever reason, you need watermelon today,” and the wholesaler doesn’t have any.
So on his list he has watermelon, he doesn’t get it, and he just doesn’t spend the money on anything else, because it’s not on the plan.
Conor Doherty: Yes.
Joannes Vermorel: And so that’s a very nice example. And again, when you think about it, the mathematical plan is crazy. That’s literally saying… imagine again, let’s go back to this taxi driver. Okay, I have a plan that I need to go from point A to point B, and I have the exact directions I will take: turn right, turn left, exactly, I can follow the roads.
And then at some point the road is blocked.
Conor Doherty: Mhm.
Joannes Vermorel: And the mathematical planning says, “Oh, the road is blocked. I don’t know, take explosives and just blow up the obstacle and move forward. You have to go through this road no matter what.” If the road is blocked, no, we don’t care. Just muddle through.
And that’s insanity. I mean, obviously this is so insane. But the interesting thing is that the example may sound like complete insanity, go through no matter what, but this is exactly what was happening under the USSR with Gosplan. They were making a plan. Reality was happening. The plan was becoming completely infeasible.
Conor Doherty: Yeah.
Joannes Vermorel: And then people would end up doing crazy, crazy things just to make the plan work. There was even an anecdote — I’m not even sure it really happened, but it was really the style of things — that the only way, for example, for a shoemaker in the USSR to fulfill their quota was he decided he would just do left shoes, give up on the right shoes, just do one type of shoes, because that would expand the amount of shoes that would be produced, and so you could meet the quota.
The quota wasn’t specifying that it should be right and left shoes. And that’s the sort of absurdity that you get through rigid planning.
So the concrete benefit is that as soon as you decide officially that you give up on that, your company improves. If you want, as a practitioner, to just benefit from this perspective, just say: we give up on time series, we give up on this dumb, dysfunctional planning. We can just do it with intuition. And the thing is that it will work better because a broken mathematical model is something that gives you incredibly crappy results.
You do not trust a broken mathematical model. It is better to do things that at least seem to make directional sense, even if they are very approximate, as opposed to following a model that is completely wrong. Again, just imagine you want to travel north and you have a very, very fancy compass, but the compass is pointing east.
And should you follow this compass? Or just, even worse, the compass, whenever you look at it, points a random direction. That would be a purely broken model. So you have your compass, but it’s completely broken. Every time you look at it, it points a different direction.
I mean, are you better off trying to navigate north by just vaguely looking at the sun and making a super, super guesstimation of where the north might be, assuming that you can see the sun, versus following your compass, which every time you look at it points a different direction? I mean, looking at a broken compass that gives you a random direction, you would say, “But it’s madness. Anything will be better than that.”
And that’s what I say. When you use a mathematical model that is broken, it is just madness. It’s just like having a compass that gives you a random direction and saying, “Oh, but at least I’m doing something scientific. I’m using a compass.”
No. You’re using a broken compass. And that’s even worse than doing something extremely fuzzy, because at least what you’re doing fuzzy is approximately correct.
Conor Doherty: I really like the analogy of the greengrocer wanting to buy fruit and vegetables. I think it does illustrate the limitations of approaching your supply chain decision-making, even if you don’t agree with the concept of decisions in isolation, approaching what you’re going to do with your resources from a purely static plan perspective versus a more agile, reactive approach. Opportunities will reveal themselves on a day-to-day basis after my plan has been made, because the future is uncertain.
Now, in order to implement that, we don’t have to get into the software of it now, but in order to implement that, there are a few core changes or assumptions you have to accept regarding uncertainty, and a few of them you cover in the book. And I think it’s time we get into them. One of them is planning, whatever language we want to use, making decisions around the mean, around the average scenario. You’ve pointed out in the book that that is not great.
Yes, please expand on why it is not great to make decisions around the mean.
Joannes Vermorel: So, the average… we have to go back to why we have this uncertainty of the future, and the root cause is agency. People can act.
Conor Doherty: Okay.
Joannes Vermorel: And it’s not even just people; stuff happens in the world in general. You see, those people are not atoms bouncing randomly off walls. People communicate. They tend to listen. And for example, if people think that the world is going to run out of toilet paper, they all go and buy toilet paper. I give the example of Johnny Carson, the anecdote in the book.
Conor Doherty: Yes. And it did happen, by the way, several times over the last few years.
Joannes Vermorel: So again, the consequence of people having agency, but also people communicating with each other, is that the mean is bad, because it acts as if you had a central behavior and only statistical deviations around it. This is not how the world behaves when you have people with agency. When you have deviations, deviations can be incredibly large.
And when I say incredibly large, I mean, for example, the price of oil can become negative. That did happen. There was a point in time, back in 2020, where weird stuff was happening and for a brief moment the price of oil per barrel became negative, because essentially people were stuck at the ports. They could not discharge their oil.
So they were stuck with tankers that were costing money by the hour to run. There was a cost of storage and no immediate opportunity to sell. Owning oil was suddenly dead weight. And so, strangely enough, the price of oil, for a very brief period, became negative.
Again, that is the sort of thing that will not happen if you’re in the realm of natural sciences. If you’re looking at gases and, for example, you have two containers with a pressure difference and you make a hole, the gas with the highest pressure will flow into the container with the lowest pressure. It always happens like that.
But with humans, no, it doesn’t, because people do not obey simple physical laws. They can change their mind. And thus, if we go back to this mean, what happens is that you can have much more extreme behaviors.
Again, if you look, for example, at promotions in grocery stores, most promotions in terms of increase of sales volume have almost no uplift. So you do a promotion and the demand is just the same. It barely moves. And sometimes you do a promotion and you run out of stock in the first hour after opening the store.
So literally, your store opens at 8:00 a.m., let’s say at 9:00 a.m. the stock of the thing that was promoted is gone. And you thought initially that this thing was enough to last, let’s say, a week. It lasted an hour. So the demand was 100 times higher than what you expected. It happens.
You even have funny situations in the Netherlands where, for example, a supermarket does a promo and literally the entire area is completely blocked because there are like a million people who converge with their cars toward the one hypermarket doing the promo, and you have kilometers of traffic jams around the hypermarket. That’s the sort of thing that, if you think in terms of statistical average, normal deviations, and whatnot, you will never get that. You will never get that.
You will definitely get that again in supply chain. If we look outside retail, in aviation for example, one of our clients, a few years back, had parts to maintain the Boeing 737 Max, and it turned out that there were crashes, deadly tragedies, and people, rightly so, were absolutely scared of flying those aircraft, because the question was: are they safe? And the answer was unclear.
So what did all the airlines do? They grounded, at the same time, their 737 Max. So the demand for parts specific to this aircraft went from super steady to absolutely nothing, and stayed at nothing for pretty much an entire year.
So you see, it’s the sort of thing where, if you think in terms of averages and deviations and assume your mean is significant, this is not the world we live in, in terms of supply chain.
And again, another way to think of it is: think of a stadium. If you look at the heaviest man and the skinniest man, you don’t even have a factor 10 of difference. If you’re incredibly skinny, pathologically skinny as a man, you’re going to be, what, like 40 kilograms? Maybe 35 if you’re literally starving.
And if you are morbidly obese, you can be, what, 300 kilograms? If you’re 300 kilograms, I don’t know if you can even walk into a stadium.
Conor Doherty: It will be difficult.
Joannes Vermorel: Yes. So probably the maximum is going to be something like 150, and then you can’t even go into a stadium. So you see, the range of variation is in fact very, very small. So from the skinniest man to the fattest man in the stadium, we have not even a factor of 10, maybe a factor of five.
Now let’s have a look at bank accounts. The guy who is the poorest in the stadium is most likely, let’s say we are in the US, a student with student debt: minus $200,000. And then the wealthiest guy, a billionaire.
Okay, so we are literally talking of… it’s not that the billionaire is a million times richer; it’s that the other guy is actually negative. So how do you even compare that? And that’s the realm of human affairs. When people can act, the differences can be absolutely enormous numerically. We are talking about differences of a factor of a million.
Again, in nature, a factor of a million does not happen. Look at the smallest cat and the biggest cat: it’s not even going to be a factor of 10. And it’s like that for everything. For example, how many days of sun do you have on average in Paris or in Ireland? If you look at the variation from year to year, it’s not going to be that big.
So that’s the sort of thing. But again, if you’re looking at decisions, things can be wild.
Conor Doherty: Yes.
Joannes Vermorel: Again, price of silver, for example, for the last 20 years, it was multiplied by 20. So we have now better mining techniques. So we can actually extract more silver than we ever did from the earth, and for reasons that have nothing to do with our capacity to produce silver, silver is now 20 times more expensive than it was two decades ago.
And that’s exactly what you should expect from praxeology, from a perspective where it’s human agency. If people collectively decide that silver is worth more, then it is worth more. It doesn’t need any kind of explanation for that. It’s just belief.
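The contrast between natural and human-driven variation can be shown with a small simulation. A sketch only: the distributions and parameters below are invented, chosen merely to put a thin tail next to a fat one.

```python
# Thin-tailed body weights versus fat-tailed wealth in a stadium.
# Distributions and parameters are invented for illustration.
import random

random.seed(42)
N = 50_000  # people in the stadium

# Thin tail: weights roughly normal around 80 kg, floored at 35 kg.
weights = [max(35.0, random.gauss(80, 12)) for _ in range(N)]

# Fat tail: wealth drawn from a Pareto distribution; alpha close to 1
# gives the extreme inequality the praxeological view predicts.
wealth = [10_000 * random.paretovariate(1.1) for _ in range(N)]

# Typically a single-digit factor for weights, orders of magnitude for wealth.
print(f"weight max/min: {max(weights) / min(weights):.1f}x")
print(f"wealth max/min: {max(wealth) / min(wealth):,.0f}x")

# The mean is informative for weights, nearly meaningless for wealth:
print(f"share of total wealth held by the single richest person: "
      f"{max(wealth) / sum(wealth):.1%}")
```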
Conor Doherty: So, this actually brings us to what I think is probably one of the most consequential points, certainly in the chapter if not the book itself. And there’s a bit of context here I want to get out, and I have a couple of quotes, and that is the uncertainty surrounding fat tails.
And again, some quotes. “One of the biggest pains associated with uncertainty lies in the tails.” To quote you, “profits and losses are dominated by extreme events. Fat-tail distributions guarantee extreme events will routinely appear.” Now, quick primer for any non-stats people: a fat-tail event is simply an extremely damaging event that you feel is very, very, very unlikely, but in fact happens much more often than you would think.
So it’s not one in a million. It’s maybe one in 100, one in 200. But of course, a one-in-100 or one-in-200 event, like extremely high or extremely low demand, might happen one to three times a year. And if it’s a fat-tail event, then the damage of being wrong on those days is catastrophic.
So again, thinking in terms of the fat tails is really where the risk lies. If you plan toward the mean, the average, you’re probably going to be fine when the mean or the average occurs. But if you plan toward the mean or the average value and you get struck on one of those days, one of those fat-tail events, you’re basically screwed. Is that more or less…?
Joannes Vermorel: Yes, exactly. And again, if we go back to our grocery story, what is the fat-tail event? The fat-tail event is your store gets vandalized.
Conor Doherty: Mhm.
Joannes Vermorel: And if it’s bad enough… if it happens and imagine you have a situation where it gets vandalized, you have no insurance. So you don’t even have the money to rebuild your store afterward. So game over. You’re not a grocer anymore, because now you’re just having a vandalized store and it’s not in a condition to even sell anything anymore.
If you have insurance, then at least you can recover from that. So you see, that’s a very different perspective. You could say, for example, “I am a grocer. On average, in my city, stores are getting vandalized very rarely. So I don’t need insurance.” But the other way to look at it is, “I’m a father of three. My family depends on me having sustained revenue coming from this business, and I don’t want to be in an unlucky situation where suddenly I have completely lost everything because I have lost my way to earn money through an event that I cannot recover from. Thus I take my insurance.”
You see, again, that’s the intuition. The simple intuition is very correct: most people who operate a business would take out insurance against fire, insurance against this and that. And this is the correct sort of thinking, but again, in terms of risk.
Conor Doherty: Yes.
Joannes Vermorel: In terms of risk. And if you start thinking about supply chain the classical way, imagine what this vandalism event looks like from a time-series perspective. You would just take the probability and count how many events you will have. So you have a time series that is going to be zero all the time.
Every single day it’s going to be near zero, something like 0.000001. And that’s going to be your average expected number of events. And that’s just the wrong way to look at it, because it’s an event that is so impactful that unless you’re prepared, you cannot recover from it.
You see, that’s why I say the classic planning perspective is nuts: it is completely blind to stuff that will happen, for practical purposes, with probability one, and more often than you think. For example, vandals breaking into your store: the probability on any given day is probably less than one in a thousand.
Conor Doherty: Mhm.
Joannes Vermorel: But if you’re a grocer and you operate a store for 40 years, that’s a lot of days. If you say that you’re open, let’s say, 250 days a year, for 40 years, we are talking of something like 10,000 days. So the probability that you will be vandalized is like one.
Conor Doherty: Yep. So the point here is, I agree with the philosophy, the practicality, and the risk dimension of what you’re saying. The example might not resonate as much, though, so let’s move to aerospace. I remember you’ve given the example before that there are situations where certain items are so cheap, and the cost of not having them when you need them is so catastrophically damaging, that it makes sense to carry years’ worth of inventory even though you might never use it.
Because the carrying cost is a dollar, I’m just picking numbers, but the differential between that carrying cost and the cost of not having it, the stockout, might be hundreds of thousands or millions. Therefore, even though you know you might not actually use it, from a pure risk or insurance perspective, it makes absolute financial sense to carry an unseemly amount.
Joannes Vermorel: Yes. And when people think of rare events, they usually think of negative events. But in fact you also have plenty of positive events. For example, in aviation, you frequently have airlines that dismantle aircraft, meaning you can pick up parts, and temporarily the market is completely flooded with certain parts. And because the demand is not immediately there, people feel they only need to buy parts to maintain their fleet right now, so very frequently we have seen that they do not jump on those opportunity buys. But they should.
Because ultimately, even if it’s two years from now, they will need the parts. They don’t need them right now, but if they’re at half price, and if you have the storage capacity to keep them safely for a long period of time, then this opportunity is really something you should grab.
And that’s interesting. Again, when people act based on intuition, they do that. For example, if you’re a man and your favorite brand of white shirts is doing a massive discount, you would say, “Oh, I’m just going to take four. It will be useful, maybe one year from now.” It’s like a slow consumable.
So you could do those opportunity buys. And the interesting thing is that again, if you look at supply chain textbooks, those things that most people would do intuitively when it is their money and their life, as soon as they enter into a corporate setting, they would say, “Oh no, we don’t do that. We have a plan. The plan does not include this opportunity buy, so we don’t do it.”
And again, that is not very smart. This is not smart. So classic planning, which ignores this uncertainty of the future, makes you very vulnerable. Focusing on the mean makes you very vulnerable to bad events, but it also makes you blind to the profits you could capture by being smart with the good events.
Conor Doherty: Well again, that’s the whole point here. It is worth, I think, teasing it apart a bit. Again, just using simple round numbers: 10 days in a row, you plan to the average value. Average value is 10. It’s fantastic. And you’re right. Let’s say you’re right. You have 100% accuracy, 100% service level, 10 days in a row. On the 11th day, you’re out of stock. You have zero.
The cost of lost sales or expediting or back orders or things like that could easily eat up the previous 10 days of profit of what you’ve taken in. So again, that might represent an extreme event. There was a probability that that extreme event might occur. You chose, essentially intentionally or not, to ignore the possibility that that would occur, plan to the average, and when that extreme event occurs, essentially you’re kind of wiped out.
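To put those round numbers into a worked form, here is a tiny sketch; every figure in it is invented, and the penalty term is a placeholder for expediting, back orders, and lost goodwill.

```python
# Ten "right" days versus one fat-tail day. All figures invented.

daily_demand = 10        # units, the mean you planned for
margin_per_unit = 5.0    # profit per unit sold
good_days = 10

profit_good_days = good_days * daily_demand * margin_per_unit  # 500.0

# Day 11: a demand spike of 100 units hits zero stock. The loss is the
# missed margin plus a placeholder penalty for expediting and churn.
spike_demand = 100
lost_margin = spike_demand * margin_per_unit  # 500.0
penalty = 200.0                               # placeholder, invented

loss_day_11 = lost_margin + penalty
print(profit_good_days, loss_day_11)  # 500.0 700.0
# One tail event more than erases ten days of planning "to the average".
```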
Joannes Vermorel: Yes. I mean, again, large companies survive. So usually their processes are not so broken that they would be wiped out.
Conor Doherty: Well, the profits I just described are wiped out. You can lose money, because the damage of being wrong is greater than the profit.
Joannes Vermorel: So, I say… and that’s very interesting, because the reality is that practitioners intuitively know that. So again, large companies play a very strange planning game, which is: the theory is completely broken, the commitments are broken, and practitioners are not abiding by the plan.
And people say, “Oh yes, but next year, next year we will be compliant with the plan. But this year, no, we need to do something differently.” And the reality is that the practitioners who just decide, “Screw the plans,” are almost invariably correct.
So we have this sort of schizophrenic approach where if you have to discuss with your colleagues, you say, “Oh yes, we are going to implement safety stock, we are going to be compliant with the plan, we are going to do this and that and that and the other thing.” And then when it comes to actual resource allocation, people do things that are completely different, and it turns out that those completely different things are actually what makes a company profitable.
And what I’m saying and explaining is that you should not trust theories that have a broken perspective on the future. Whatever theory you have must treat the future in a way that is sane. That means understand that uncertainty is irreducible, it’s the agency of other people, and understand that when those people change their mind, it’s usually not in isolation.
And thus you can be hit by positive and negative events that will be much stronger than what a naive mathematical model would give. But again, for example, taxi drivers know that. The intuition of a taxi driver: how much money are you going to make on any given day? Well, if there is a massive sports event, maybe as a taxi driver you will do three times the normal amount of money, because there is so much demand.
And same thing for the grocer. If there is an event happening in the neighborhood and there is a massive influx of tourists, he will be selling a lot more on this day. It wasn’t perfectly predictable, but at the end of the day, that was a record-holding sales day. The grocer is not stupefied by this outcome. They say, “Yeah, I mean, it’s not every day that is like this, but those days happen once in a while.”
So the surprise is only in the eyes of the people who are using simplistic mathematical models. For people who are relying on their intuition, for them it is not that surprising. That’s the interesting thing. Again, I say that the main problem is those time-series-plus-normal-distribution sort of models that are extremely simplistic. Those are the broken mathematical models. Those are the broken compass that gives you a random direction every time you’re looking at them.
That’s the focus of my criticism.
Conor Doherty: Well, that’s the thing. And just to slightly tighten the phrase I said earlier, because I realized I was a bit loose with the phrasing, what I was trying to say was: the damage of being wrong in the event of a fat-tail event can wipe out a wide series of being right in terms of average values. And again, that comes to the idea of the intuitions that we have around these decisions and also the asymmetry of costs.
Particularly, and again because I want to inject as many concrete examples as possible, take the example of being a little bit overstocked. Let’s say you’re a fashion retailer. Being a little bit overstocked, like having a few shirts too many that you don’t sell on a given day: okay, my carrying costs are what? They’re capped, because I know what those shirts cost. What I can lose there is not theoretically limitless, because I have a certain amount of extra stock left over and I know what it’s worth.
However, the cost of not having shirts has theoretically no limit. It depends on how many people come into the store, and you operate, let’s say, 10 hours a day. Theoretically, a thousand people could come in wanting the exact same thing. You don’t have it. They don’t buy the white shirt. Then they also don’t buy the blue jeans, the belt, the hat, the flip-flops, whatever. And they possibly don’t come back.
So with the direct losses plus the indirect losses, both now and stretching forward in time, there’s theoretically no upper limit to how damaging that can be. Being out of stock versus being just a little bit overstocked: the asymmetry in cost there is just not something we intuitively feel.
Now, do you see that as a flaw of human evolution, that we’re just not evolved to think in terms of that?
Joannes Vermorel: I think the human mind is actually quite good at intuitive risk management when it comes to their own money.
Conor Doherty: That’s the point you make, when it comes to their own money.
Joannes Vermorel: Yeah. I mean, when it comes to something where you have information. That’s the problem: supply chains are very large and very distributed, so you need to fly with instruments. You cannot fly by looking at where you’re going, because fundamentally you cannot even see the thing. There are too many products in stock, too many locations. So you need to fly by instruments.
And that’s where the problem with human intuition comes in. I say human intuition is good, but if your instruments are complete crap, then do not expect anything to come out right. And what I’m saying very specifically is that when it comes to those asymmetries, the instruments, which are software in nature because this is information processing, the software stack produced over essentially the last five decades, are complete crap.
And they all rely on incredibly simplistic takes on the future that are completely bogus: time series upon time series, normal distributions, fixed lead times. They ignore cohort effects and any inter-product effects like substitution, cannibalization, or the basket perspective.
Conor Doherty: Because we buy baskets, in combination, not in isolation.
Joannes Vermorel: Exactly. All of that is entirely absent. And for me, the crazy thing is to expect that a system that is so dysfunctional in terms of information processing… again, let’s go back to this analogy of flying with instruments. You are flying with instruments that are so overly broken, and you expect that you will be able to fly your airplane safely. This is insane. You see, this is insane.
And the interesting thing is that companies survive. And the answer is: why do they survive? It’s because, first, people keep doing things that are not… they don’t respond mechanically to the instruments. They know that the compass is broken. They know that the gauge to assess the altitude is also broken. They know their own instruments, and they don’t trust them.
So they tend to do a lot of other things precisely because they don’t trust them.
Conor Doherty: Yes.
Joannes Vermorel: And also because, well, the companies suffer a lot of ongoing costs, and because their competitors have the same problems, nobody goes bankrupt. Because, you see, even if you do something very badly, as long as all your competitors are equally bad, all is well. Markets do not prove that you are good at doing your job. The market only proves that the other guys are just as bad as you are, or worse.
And what I’m saying is that as far as the game of supply chain is concerned, it’s not a game that is played in a very efficient way. The instruments are terrible and it has been very, very wasteful. It turned out that we have so many other areas of the economy where the progress has been so incredible that it completely offsets those massive inefficiencies.
But nevertheless, if you try to coldly assess the contribution, positive and negative, of supply chain, the picture is fairly dismal.
Conor Doherty: Mhm. Well, you mentioned the idea of information. You said information. I know that is a chapter, I think it’s chapter five.
Joannes Vermorel: Yeah. Chapter five is information. Chapter three is epistemology. Chapter six is intelligence.
Conor Doherty: So the idea of information, and here we’ve been talking about uncertainty, about future information, uncertainty about risk, and you’ve multiple times, not only in the chapter but throughout the book and obviously in many lectures, criticized time series, using time-series forecasting to try and approximate that information.
In your estimation, is the only way to estimate that risk, to estimate the information that you lack, to use probabilistic forecasting? And if so, why?
Joannes Vermorel: I mean, it is not the only way. It is the most tractable way considering the sort of technologies that we have. Amazon has one or two papers on essentially reinforcement-learning-style techniques that bypass the need to explicitly make a forecast. So essentially you can craft a policy, a decision-making process, where the risk management is buried in the parameters of the model.
So the probabilistic forecast is never made because it’s implicit and it’s buried in the policy itself. My take is that those approaches suffer from massive drawbacks in practice due to massive opacity. I mean, you end up with deep-learning-style numerical recipes. It is extremely, extremely opaque, even for the data scientist who is wielding the method.
What I’m saying is that if we want a method that is not too much of a black box to the data scientist who operates the recipe, then something that makes an explicit statement about the future is better. And then, what sort of statement are we going to make about the future? It’s going to be probabilities.
Again, maybe a century from now, smart mathematicians will figure out a way to think about risk in a way that is smarter than probabilities. But for now, let’s say it’s the gold standard of the sort of mathematical instruments that are available to us. So what I’m saying is that we need to have those probabilities.
And I’m saying that the focus should be on the stuff that matters business-wise. The time-series view says: take your measurements of the past and focus on extending them into the future. I say, hold on, this is not the right perspective. Because of those asymmetries, we need to pay attention to the stuff that matters.
So what does that mean? If I run a grocery store, people vandalizing my store and my store burning down are two types of events that I want to characterize. They are very different: the nature is different, the sort of prevention is different. So I don’t want to just do a time series. I want to think about the priority of those events.
For example, if you’re in aviation and you want to think about what will be the cost of an aircraft on ground…
Conor Doherty: Yeah. Classic example.
Joannes Vermorel: Well, the cost is going to vary massively depending on the day and location. The worst case, which was I think achieved by an airline company a few years back, was to have an A380 grounded in Dubai on New Year’s Eve.
Conor Doherty: Okay.
Joannes Vermorel: So you had to basically put your customers in hotels, and the hotels that still had rooms were essentially palaces.
Conor Doherty: Yeah, yeah.
Joannes Vermorel: So it was a multi-million-euro event where not only did you have something like 700 passengers, which is about twice as many as a normal jetliner, but on top of that the only rooms still available in the city were at the very top of the range in terms of quality, with crazy prices.
So that’s the sort of thing where, okay, is an aircraft on ground going to be as impactful if it’s in a city where there is a lot of spare hotel capacity, where hotel prices are cheap, and where there is a lot of spare capacity for rerouting my passengers through other aircraft? Maybe there is a factor of 10 difference between the worst case of an aircraft on ground and the easiest case.
And again, that means that I need to focus on… don’t think time series, think adequate characterization of the good and bad events. That’s what I’m saying. Don’t be distracted by the neat mathematical formulation. You need to keep your eyes on what matters.
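What “characterizing the events that matter” could look like numerically, as opposed to extending a time series, is sketched below with a small Monte Carlo simulation. The scenarios, probabilities, and costs are all invented for illustration; they are not from Lokad or from any airline.

```python
# Monte Carlo sketch: characterize aircraft-on-ground (AOG) costs by
# scenario instead of extending a time series. All numbers invented.
import random

random.seed(7)

scenarios = [
    # (probability, typical cost in EUR), given that an AOG occurs
    (0.70, 50_000),     # cheap city, spare hotel and rerouting capacity
    (0.25, 200_000),    # constrained city or bad timing
    (0.05, 3_000_000),  # the Dubai-on-New-Year's-Eve kind of day
]

def sample_aog_cost() -> float:
    """Draw one AOG cost: pick a scenario, add lognormal noise."""
    r, acc = random.random(), 0.0
    for prob, scale in scenarios:
        acc += prob
        if r <= acc:
            return scale * random.lognormvariate(0.0, 0.3)
    return float(scenarios[-1][1])

draws = sorted(sample_aog_cost() for _ in range(100_000))
mean_cost = sum(draws) / len(draws)
worst_1pct = draws[int(0.99 * len(draws))]

print(f"mean AOG cost:   {mean_cost:,.0f} EUR")
print(f"99th percentile: {worst_1pct:,.0f} EUR")
# The mean hides the tail, and the tail is what drives insurance-style
# decisions: stock positioning, rerouting contracts, hotel agreements.
```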
Conor Doherty: That’s a really, really good example. And I can give a personal anecdote that demonstrates it quite well, with a little more detail. Twice this has occurred to me, again an AOG, basically because planes can only fly so many flight hours in a day, and even when the plane is waiting, it’s still technically in use.
Twice, flying from Dublin to Paris Charles de Gaulle, that window was exceeded, so the plane was grounded. It could not fly. Now again, that’s for operational reasons, whatever. The thing is, when that happens, all the passengers have to be accommodated.
Now I, being from Ireland, know that there is an incredible shortage of not only housing but hotels in Dublin. And I also know my rights: if this occurs, there is European legislation that means you will be reimbursed for the meals and accommodation you incur due to the delay. So I immediately cracked open booking.com, not sponsored, and picked one of the most expensive rooms in the Hilton.
Joannes Vermorel: You’re a terrible passenger.
Conor Doherty: It’s not my fault.
Joannes Vermorel: Exactly.
Conor Doherty: But I knew this was coming. So I knew, okay, and not only that, but I also know there’s dynamic pricing. Suddenly you have hundreds of people…
Joannes Vermorel: Yes.
Conor Doherty: …who are all going to try and clamor for the very scarce hotel resources at 9:30 at night on a weekend in Dublin. That is… sorry, excuse me, I’m so excited, I’m animated. That’s a fat-tail event from the airline’s perspective. That is huge losses. Plus, you have to reroute all those people. So you’re not selling those seats, you’re rescheduling those seats.
So, you know, I ate very well for, I think it was two days in fact, actually, and stayed in a very, very, very nice hotel. Basically had a nice suite to myself. All of it paid for by the airline.
And again, so what’s the cost? What’s the cost of having an AOG event in Dublin, not even that exotic of a location? Well, it depends. Is it the weekend? Is there a sporting event already occurring that weekend that has already eaten up a huge volume of available accommodation? Is it the beginning of the day or the end of the day? Because again, if it’s 9:00 in the morning, then you have all day to accommodate people.
But there were hundreds of people waiting. And of course many passengers just went to the customer help desk thinking that, forgive me, the person on the other side would be able to divine hundreds and hundreds of hotel rooms out of nowhere. They're not going to be able to. So I have no idea what happened to those people or where they were shuttled to on the island of Ireland, but it was not in Dublin.
Because I know there were not many places left; maybe hostels, or otherwise just sleeping on the floor. But basically, that's the situation. Because I work at Lokad and we have a lot of aerospace clients, I knew this was coming. So as I sat on the tarmac around hour three, I thought: very likely we're going to need a room, so I'll book a refundable room while I'm on the plane.
Joannes Vermorel: Yes.
Conor Doherty: And then the flight was canceled. I just walked off, collected my bag, and went straight to my hotel room. Other people were not as lucky. So the whole point is that this does happen. And from both the company's perspective and the customer's perspective, be prepared, because that fat-tail event happens more often than you think, it's very damaging, and you should be aware of that risk.
Joannes Vermorel: Yes. And again, the “much more frequently than you think,” I think it’s a little bit misleading. People don’t have a very quantitative intuition, so if you ask them to express that as a percentage, they would give you mostly bogus numbers. But still, despite the fact that you don’t know the probabilities, you can still end up with a fairly decent decision.
For example, booking a room where you can get a refund is a no-brainer.
Conor Doherty: Exactly.
Joannes Vermorel: Okay. If it happens, I win. If it doesn't happen, I don't lose much except spending a few minutes on the website canceling the booking. So the loss for you was 10 minutes of your time. The win was avoiding massive discomfort.
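That asymmetry can be made explicit with a tiny bit of arithmetic. A minimal sketch in Python, with hypothetical numbers (the 500-unit scrambling loss and the candidate probabilities are made up for illustration):

```python
# Minimal sketch (hypothetical numbers): the refundable-room bet.
# The payoff asymmetry makes the decision robust even though the
# true cancellation probability is unknown.
REFUND_COST = 0            # the room is refundable: cancel for free
EFFORT_MINUTES = 10        # time to book and, if unneeded, cancel
LOSS_IF_UNPREPARED = 500   # rough cost of scrambling late: worse or no room

for p_cancel in (0.01, 0.05, 0.20):
    # Expected saving from booking ahead, for each guess of the
    # probability that the flight gets cancelled.
    expected_saving = p_cancel * LOSS_IF_UNPREPARED - REFUND_COST
    print(f"P(cancellation) = {p_cancel:.0%}: expected saving ~ "
          f"{expected_saving:.0f}, for ~{EFFORT_MINUTES} minutes of effort")
# The bet stays favorable whether the true probability is 1% or 20%:
# no precise probabilities are needed to reach a decent decision.
```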
Conor Doherty: Yeah, yeah. Exactly.
Joannes Vermorel: Insurance. Insurance is what we're talking about. Risk and insurance. And again, what I'm saying is: imagine trying to operate your life through a time-series perspective. What would that look like? A time-series perspective would say: every hour I have a time series that gives me the GPS coordinates of my position.
Conor Doherty: Mhm.
Joannes Vermorel: Just imagine you have two time series, latitude and longitude, and you extend them hour by hour for the next week. Well, your plan would be: at this hour I should be flying over the ocean. But you're not. So what happens to the plan?
Again, imagine you have your two time series, latitude and longitude. What are these time series telling you about what you should or should not do? It is so incredibly dysfunctional. They will never tell you about booking a hotel, or making a phone call to your wife to say, "I'm going to have a problem."
The interesting thing is that the time series might sound very scientific — your exact position at every hour in time — but in practice it is completely non-operational for the basic decisions that you have to make, which is: inform your relatives, manage with the hotels, think of your baggage, think also of your own tiredness. Can I endure sitting in the airport for the next four hours, or do I need to sleep, eat, and whatever?
You see, all of that, again, when you take a very individualistic perspective, a subjective perspective, it’s obvious. And when it becomes corporate, suddenly people say, “Oh no, time series are good.”
Conor Doherty: Yeah.
Joannes Vermorel: No, no, no. Time series didn't suddenly graduate into something functional just because we use them in a corporate setting. They were dysfunctional from the start. Using them in a corporate setting doesn't make them a better way to think about the future.
Conor Doherty: That reminds me: a few years ago I was at an ISF conference, I think in Dijon, moderating a panel, and we were talking about this exact point, the difference between what people will do personally and what the same people will often do once you put them in an office.
And I gave the example of planning a holiday. Let's say you've already booked flights to an exotic location, and you look at the probabilistic weather forecast for a week from today, which is when you're going to travel, and it shows a 20% chance of, let's not say a tsunami, let's say an absolutely horrible storm, the kind that would ruin the entire trip. Your tickets are refundable. How would you proceed?
Or make it 20%, 30%, 40%, 50%, whatever. At a certain point, if what you're optimizing for is "I desperately want sun, I want the beach", that's the objective function, are you going to keep going to that location? You have the option to pull the money out and redirect it, but you've made a plan, and you have a probabilistic input telling you that the plan really might not play out the way you intended, which is to have a nice time when you get there.
Personally, what would you do? And most people agreed: yeah, I would probably reevaluate. So you trust the probabilistic input for your own finances, but then you also advocate the time-series perspective when you go into…
Joannes Vermorel: And again, you can also be incredibly sensitive to subtle questions. For example, do you have an infant with you?
Conor Doherty: Yeah.
Joannes Vermorel: That will completely change your assessment. Two adults traveling alone versus two adults with a six-month-old baby: it's completely different if you're caught in a difficult storm. Imagine the airplane being shaken around in the storm with an infant on board. It's going to be hell for the baby and for you.
And what I was saying is that it's the same thing for supply chains. Very subtle nuances make all the difference in how you assess a risk, because you can have substitution, cannibalization, partners that can support you, clients that can wait.
We even have situations in aviation where you are supposed to repair a component. Think of an MRO, and the MRO is lacking certain parts. But the client might, in certain circumstances, be willing to send the parts themselves.
Conor Doherty: Mhm.
Joannes Vermorel: Because the client also holds a stock of parts. So you are sending a component to a company to be repaired, and this company is normally supposed to have its own stock of spare parts, but in some circumstances it doesn't. And because it's your component, and you're still very interested in the outcome, you might end up sending some of your own spare parts to this supplier so that they can complete the repair and send you back a component that is serviceable again.
So those are situations where it doesn't fit a time series, but if you think of it from a pragmatic perspective, it's a no-brainer. And that's really the crux of what I mean by thinking about the future: people need to think about the future in a way that is not dysfunctional. And if they want to use math, that's fine. Math is necessary if you want to translate your intuition into software.
So if you want to use math, which is completely fine as a formalism, just use a formalism that is not completely broken. And I'm saying: probabilities. That's a safe bet, a very reliable tool. There are more exotic options, but as a safe way to get started with risk management, just use probabilities.
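As an illustration of what that formalism buys, here is a minimal sketch in Python of a newsvendor-style stocking decision, with entirely hypothetical demand and margin figures: demand enters as a probability distribution, not as a point forecast.

```python
# Minimal sketch (hypothetical figures): a probabilistic view of demand
# versus a point forecast, for a one-product stocking decision.
DEMAND_DIST = {0: 0.05, 5: 0.15, 10: 0.40, 15: 0.20, 20: 0.10, 50: 0.10}
MARGIN_PER_SALE = 10.0     # profit per unit sold (service-heavy economics)
COST_PER_LEFTOVER = 1.0    # loss per unit left unsold

def expected_profit(stock: int) -> float:
    """Expected profit of a stock level under the demand distribution."""
    total = 0.0
    for demand, prob in DEMAND_DIST.items():
        sold = min(stock, demand)
        total += prob * (sold * MARGIN_PER_SALE
                         - (stock - sold) * COST_PER_LEFTOVER)
    return total

average_demand = sum(d * p for d, p in DEMAND_DIST.items())  # ~14.8 units
best_stock = max(range(51), key=expected_profit)
print(f"average demand: {average_demand:.1f} units")
print(f"profit when stocking the average ({round(average_demand)} units): "
      f"{expected_profit(round(average_demand)):.2f}")
print(f"profit-maximizing stock: {best_stock} units "
      f"({expected_profit(best_stock):.2f})")
# The rare demand spike of 50 units, combined with a high margin and cheap
# leftovers, pulls the optimal stock far above the average: the point
# forecast hides exactly the event that drives the money.
```

The design point is that the distribution carries the fat tail all the way into the decision; an average would throw it away before the decision is even made.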
Conor Doherty: All right. Well, we've been going for quite a while now, so I think it's fair to start drawing this to a close. But I will finish with my customary question. We're now seven chapters into the book, and we've covered a lot about uncertainty, which, as I said right at the top, is a central part of Lokad's philosophy, of the book, and obviously of this chapter.
What can people take from this chapter and immediately apply going forward?
Joannes Vermorel: Identify Gosplan-style planning in your company and just stop doing that.
Conor Doherty: You see, there you go. Heard it here.
Joannes Vermorel: Once you understand how you should think about the future, which is not too difficult, by the way…
Conor Doherty: It will just occur.
Joannes Vermorel: Yes. And you have to be opportunistic. So when something favorable happens, you need to seize the opportunity. And when there is a possibility that something extremely damaging may happen, you have to prepare so that you minimize the damage when it happens.
This is not an incredibly sophisticated idea. What I'm doing in this chapter is clarifying: this is the correct way to think about the future. Do not think that those very elaborate, sophisticated plans hold a superior truth. They do not. They are dysfunctional. They work very badly.
Many practitioners look at those plans and say, "Oh yes, those numbers are so, so bad. But it's science, you know, trust the science, we need to do it. I can't really use those numbers, but I recognize that this is the true science." And the message of this chapter is: no, this is not true science. This is completely bogus. This is scientism.
And if you understand that, this chapter will give you the arguments to make the case to your top management that the company should stop doing something that is creating economic damage on an ongoing basis. People underappreciate this: if you stop doing something that is damaging your company, your company is better off. You don't need to add anything good. Just stop doing something bad: more profit.
And that, I think, is what's very difficult, especially in large companies: thinking subtractively. They think that to make money, we have to do more things, we have to do something better. But my observation is that for large companies, that's usually not the problem. The magic sauce for making a profit is already there. They are already doing the right thing on many, many things, otherwise they would not even exist.
Conor Doherty: Yes.
Joannes Vermorel: And to unlock the next stage of profitability in established companies, the opportunity in just removing the bad stuff is absolutely gigantic compared to improving the good stuff. For your average large established company with a supply chain, subtractively removing the bad stuff is almost always the easiest and simplest path.
But it is counterintuitive, because human intuition says: we want to add stuff, to make things better by doing more.
Conor Doherty: Yes.
Joannes Vermorel: Very few people would think, “If we just remove all those people and we don’t replace them, everything will be better,” or just remove a crappy practice.
Conor Doherty: Yeah.
Joannes Vermorel: Or just the people. That’s again what Elon Musk did with Twitter. He started by firing 80% of the company, and then half of the 20% that remained just quit. And it turned out that now they are like 90% down in headcount, and X is actually working better than ever before.
They brought tons of features that they didn’t have before, like videos and whatnot.
Conor Doherty: Mhm.
Joannes Vermorel: I mean, I've been a longtime Twitter user, now X, and this network, after a 90% headcount reduction, is a much better software product than it was before. So you see, that is the power in action: if you just stop doing things that are harming your company, you will be much better off.
And I think for established companies, bureaucracies are accretive. You accumulate a lot of people and a lot of processes, and it keeps going. Every manager wants to add new stuff. And the idea of just removing stuff is a little bit terrifying.
And in the software industry we have the same problem. When someone asks, "Should we just delete this code?", people say, "Ah, this code might be useful in the future," or "Oh, we invested so much to write it in the first place." Sunk cost, essentially.
Conor Doherty: Yes, sunk cost. Should we really delete it?
Joannes Vermorel: And the answer is: mercilessly. You just delete, delete, delete. And this chapter about the future will give you the mental instruments to look at your planning practices and say, "We need to delete that, that, that, and that," and things will just be better.
Conor Doherty: All right. Well, Joannes, thank you. I don’t have any more questions. Thank you for joining me, and I’ll see you soon for chapter eight.
And to you, thank you very much for watching. As always, I say at this stage: if you want to continue the conversation, you can reach out to Joannes and me. We’re both on LinkedIn, or you can send us an email at contact@lokad.com.
And with that, we’ll see you next time for chapter eight. And yeah, get back to work.