

00:00:00 The hidden cost of manual forecast overrides
00:04:34 A shortage of expert planners makes override practices worse
00:09:08 Software granularity inflates the review burden
00:13:42 Fixing the forecasting software to enable rapid corrections
00:18:16 Embedding practitioner insights into the system
00:22:50 Vendor incentives preserve override-driven complexity
00:27:24 Organizational inertia preserves outdated planning paradigms
00:31:58 Technology adoption follows historically predictable patterns
00:36:32 Shifting accountability to systems and scientists
00:41:06 Companies and vendors share algorithmic responsibility
00:45:40 Clients must communicate domain insights to the vendor
00:50:14 Objectivizing facts to resolve management conflicts
00:54:48 The Supply Chain Scientist owns the quality of the numerical recipe
00:59:22 Quantitative supply chain will percolate through the industry
01:03:56 Daily refreshes reduce dependence on long-term forecasts
01:08:29 Delays increase risk, and the market filters out laggards

Summary

Manual overrides are a tax that consumes attention, time, and money. They turn scarce experts into custodians of temporary fixes. Human insight should be embedded in the numerical recipe, not in individual outputs: an hour of code or parameter changes should replace masses of overrides. If you are paying twice for the same decision, you are running a casino, not a supply chain.

Extended summary

Manual overrides look like prudence, but in reality they are a tax imposed by the system to keep people busy. In mainstream planning, the forecast becomes the plan, and the plan allocates resources. Whoever controls the plan controls the company's purse.

The alternative is not to reject human insight; it is to place that insight inside the numerical recipe. The vendor is responsible for system integrity, and the Supply Chain Scientist owns the rules that turn facts into decisions. Daily probabilistic decisions lower latency more than chasing small accuracy gains through weeks-long processes.

Full transcript

Conor Doherty: This is Supply Chain Breakdown and we are back, and today we are breaking down the hidden cost of manual override. My name is Conor, Communications Director here at Lokad, and to my left as always, Lokad’s founder and a tonic for sore eyes, Joannes Vermorel. First of all, before we get started, comment down below. One, is manual override a daily part of your planning process? And two, are there any situations where you think it is appropriate for humans to challenge or override the system?

Get your questions in as soon as possible and I will pose those to Joannes in about 20–25 minutes’ time. And with that out of the way, Joannes, good to see you again. The hidden cost of manual override, our first topic of the new year. What is the scope of our criticism today?

Joannes Vermorel: The scope of the criticism is that the supply chain game is played wrong. You know, that’s pretty much it. And when I say it’s pretty wrong, by wrong I mean those forecast overrides, because we are talking about that specifically. It’s not any kind of override. Those forecast overrides are directly harming your capacity to play the supply chain game, right?

And what do I mean by supply chain game? Fundamentally, you convert your precious dollars or euros into stuff, you know, physical stuff. You transform them, transport them, repackage them, distribute them, and at the end of the chain you sell those things, transformed and transported, at a higher price point, and you rotate. You know, you just keep repeating that. That is the supply chain game and you want it to be maximally profitable. That's the game that is being played.

And what I’m saying—and the scope of the discussion is really—if you are overwriting a forecast, I would say anytime you do it, you are literally harming your company. You are doing something that negatively impacts its capacity to play this game profitably. That’s what I’m saying.

Conor Doherty: Okay, well just a side question here because again the topic is manual overrides and obviously by definition manual overrides can apply to any tweak to a system. But the thing is, as you said, let's be real, it's almost always about forecasts, and specifically demand forecasts. It's not scrap rates. It's not lead times. It's not pricing. It's not allocation. It's simply demand forecasting tweaks.

And I’m just curious, why is it—it might as well have just been called manual overrides to demand forecasts—but why do they happen? And why are they so common on that very specific issue?

Joannes Vermorel: So first, why such a focus? Because in the mainstream supply chain theory, the demand forecast is not just a forecast. It is in fact the foundational layer of the plan, and the plan is both a forecast and a commitment. Those numbers that you put in the future, implicitly you are saying: this is it.

This is the future and then as soon as you say that, this is not a forecast anymore. This is a commitment. Everybody has to do whatever it takes so that this future becomes real. Thus the plan in the mainstream supply chain theory is put on a pedestal. It is like the king and central and thus fundamentally overriding that is overriding the plan.

And overriding the plan is so important because it overrides the resource allocation that will happen in your company. Thus whoever controls the plan ultimately controls how the company allocates its resources. Thus the temptation to just tweak things endlessly, because the downstream effects are so important. That explains why people are so frequently doing that. It doesn't explain why it's actually a perverse sort of practice and why it hurts the company.

So I can proceed with that too. So now why does it hurt the company? Essentially for two reasons. The first one is that ultimately the scarcest resource in your company, supply-chain-wise, is people who know how to play the supply chain game. That is the bottleneck. You have a lot of people who can do a lot of mundane stuff. But if you want your supply chain to support your company and to be a profit center, not a cost center, you need people who can play this game profitably.

Those people are rare. Those people are scarce. Those are the scarce resources of your company supply-chain-wise. Now whenever they do an override, this is not accretive. They do something, so they spend an hour, and then the next day the system is just going to regenerate the forecast. So this thing is going to be gone.

You see that fundamentally it’s not accretive because instead of trying to use those scarce resources, those people who know how to play the supply chain game right, to do something that would be of lasting value, they just do the correction and this correction comes with like an expiration date which is exactly the date of what is corrected in the forecast. You see, if I correct manually the forecast for next week, then at the end of next week my correction is gone. It is like it comes as a perishable thing by design; thus it is not accretive. That’s the first reason.

So we have two—there’s two pillars there. Yes, two pillars: the first one is again the scarcest resources are the people who can play the supply chain game and if they do manual overrides what they are doing is essentially everything that they do is thrown out of the window and discarded within a few weeks or a month at most, so it’s not accretive.

The second thing is, the second problem is, what about the overrides themselves? What made them increase? It turns out that the evolution of the software technology itself, the planning software, really inflates the amount of overrides that you need. Why? Let's go back to the 70s. In the 70s, your forecasting software is super basic: forecasts at the category level, very high-level, and in monthly buckets.

So your software is not sophisticated and thus it produces very few forecasts and you have very few edits. But keep in mind the vendor wants to improve the products. So what will they do? They will say, oh our forecasting tool was so basic in the early 80s, only categories and only monthly buckets. Now we can do weekly buckets and we can do subcategories as well. So you went from a forecast that was like, let’s say, 500 numbers to 5,000 numbers.

And by the way, how does the software vendor make money? They sell licenses based on seats. So the planning software vendor is increasing the specification of their forecasting tool. They generate more forecasts, more granular in time, in geography, and then suddenly, because there is so much more forecast, they need more people to review that and override. And by the way, more people mean more seats. Yes.

So this is—it was not intentional—but from the software vendor perspective this is a virtuous cycle. You increase the sophistication of your tool. You can sell it at a higher price because it's more sophisticated. But in turn, clients need more people to deal with those endless corrections because your software is going to regenerate those forecasts. So the amount of people you need is going to grow endlessly. Thus they need more seats, more seats, more revenue for the vendor, more sophistication, etc., etc., etc.

And also we have a very, very perverse incentive where under the guise of “we are going to give you more accurate forecasts,” which is kind of tangentially true. Yes, the forecasts are slightly, slightly, but ever so slightly more accurate. Certainly not to the extent that vendors claim. You know it’s more like, yeah, the next generation of forecasting tool is like 1% more accurate than the previous one. Vendors would tell you, oh our new forecast is like 99% more accurate than the previous one. Certainly not.

It’s incredibly incremental and we have reached, I would say, a plateau about 15 years ago, a plateau of how good you can get a time series forecast. It has not really been moving much for the last 15 years. So bottom line anyway, there’s still improvement but it’s very small, but what you can do with scalability with cloud, you can explode the granularity of a forecast. You can do it at the SKU level; you can do it at the SKU level per store, per site, per factory, per everything. So the amount of forecasts that your software planning vendor generates explodes. The amount of people that you need to do the correction explodes.

And vendors make profit. Does the company make profit? No, because the company is spending more resources than ever to do something that will never end, which is just fixing an endless, self-inflicted stream of forecasts. You see that? That's why I say this is where it harms a company. It harms your capacity to play the supply chain game.

You are not playing the supply chain game right. You are playing the game of the supply chain software vendor.

Conor Doherty: Okay. Well, when I announced this—thank you—I want to push back, much like we've been doing in the other series on your book. I do want to push back with some of the information I've elicited anonymously from conversations with our audience when I announced this topic. And again, I'm just going to sort of collate and compile some of it. I don't want to straw man the other perspective here because, again, a lot of people who are listening and who will listen to this will say things like: yeah, okay, we do have a sophisticated system, maybe not as sophisticated as the system of intelligence you, Joannes, have espoused in your book, but we do have a pretty sophisticated model that generates forecasts and we use that.

Sorry—we use that primarily, and very sparingly overrides are applied where we perceive, you know what, it's lacking that little bit of je ne sais quoi, it's lacking that little bit of market insight or intel that, you know what, only exists in my head, the planner's, and those are the rare occasions where I manually intervene. To situations like that, do you still think, well, that adds no value, that's a cost, it's a waste, you shouldn't be doing that?

Joannes Vermorel: Yes. Okay, yes, because again, think again about the consequences of your action. The consequences are that the more corrections you do, the more people are involved with this software, and the more your software vendor will prepare the situation so that it needs even more. You know, it's a vicious cycle. It's a virtuous cycle from the software vendor perspective. It's a vicious cycle from the perspective of the client company on the receiving end, running the supply chain. So you should not play that game.

It’s a game. It’s like going to the casino. You’re not going to get rich at the casino. The house always wins. That is the sort of thing where, if you play this game, the only winner will be the software vendor. I can guarantee you that. I’ve been there, done that, seen my peers doing that for almost 20 years now. The only winner—it’s like a casino. The only winner will be the software vendor.

And you cannot—again, thinking that you can beat the vendor at that is just like thinking you can beat the house in a casino. You might be clever, but you’re not going to beat the house. You know, maybe occasionally because you’re super lucky, but if you’re rational and if you think in terms of what are my odds and will I make money on that? No, you won’t.

You may enjoy the process. You know, it might be like a casino, the process might be very enjoyable. Why? Because the software vendors, they are not fooled. They will make it pleasant for their users. You know, they want more seats. To have more seats, the process has to be pleasant and whatnot and whatnot.

But so what is the alternative? You see, because the thing is that people would, I think the main objection would be, but Joannes, you cannot leave those forecasts, they’re broken as they are. And I agree. And then so the solution needs—you need to think of a way to correct the tool that generates the forecasts in the first place. This is where you should intervene.

Well, it’s fixing the disease versus the recurring symptoms. That’s the way I think about it. Those forecasts are produced by a piece of software. Yes. You need to fix this piece of software. And yes, and people say, “Oh, but I’m a practitioner, a supply chain practitioner. I’m no software specialist.” I say, “Yes, but you have a software problem.” And guess what? That’s also what I say in the book.

Yes. To a large extent, a modern practice of supply chain is the mastery of the modern way of doing software. Yes. And that’s just the way it is. And people already touch forecasts which are generated using software anyway. So it’s a very limited argument in my opinion.

Exactly. Exactly. I mean, so the solution, and it’s relatively straightforward, is that the software needs to be changed. And now there is another side effect of that: how frequently will you change the software? The answer is all the time. Yes. Because why? Because those insights—I don’t, you see, I do not dispute the fact that supply chain practitioners may have insights, yes, that only they have. You see, I’m not disputing that.

So, you know, the US president publishes something on Truth Social or whatever and then the madness breaks loose or whatever. It can happen. These sorts of things can happen. You know, there are things, it will not be in the system; it’s not possible. Okay, it’s not possible. You cannot have every single crazy contingency in your system.

Thus the system—because it’s a piece of software, it’s not Skynet, it’s not like omniscient AI—the system that you have to generate your plan is automation. So it’s a piece of software, limited. It doesn’t know everything. Fine. So what you want is just a way to be able to intervene in this software, and the software itself needs to be designed in a way that the timeframe for a correction is one hour.

Mhm. Roughly end-to-end correction to add a new critical insight because the rule changed, because it was announced online or whatever. The rule changes. Sometimes it can be tragically a disaster—there is a ship in a port that just blocked the port and whatnot. You know, strange things can happen. Tragedies happen.

And it can be a massive snowfall that is surprising everybody, whatever. I agree with also crises. Yes, they are part of the game. You just need to be able to modify software and the software itself should be amenable to such correction within a one-hour timeframe. That’s what we’re talking about. If you have that, you’re good.

Conor Doherty: Well, again, this is something I wrote down earlier and it's an important way to frame the discussion because, again, I don't want our position to be straw manned or misrepresented. Correct me where I'm wrong, but we're not making the argument that there will never be a time when a supply chain practitioner has a valuable insight. What we're saying is: is that a once-off, or is that something that reveals a pattern the system should absorb, so that we know going forward, so that the machine, the system, won't continuously generate a broken recommendation that you, the genius, must come in every day, every single time and go, "Ah, that should be a five, not a four."

And we end up with very strange patterns, which is: those insights, for large companies, come in daily and so the system is changed daily. Yes. And you would say, but how is it different from forecast overrides, because here you're overriding the system itself instead of overriding the forecasts.

I would say, well, the difference is that you have probably four orders of magnitude of difference in the volume of what is overridden. You see, if you override the outputs, it is potentially thousands and thousands of numbers that you need to change. If you override the software, very frequently it's just one meta-parameter or just one line of code to add a condition, and that is done, and the rest of the software stays untouched.

You see, so you can have something that is extremely complex happening in your market, but the impact of all of that is a judgment call and you modify one line of code and you're good-ish. When I say good-ish, again, approximately right rather than exactly wrong. This is the game being played now in supply chain. And here I say if you displace that from editing the output to editing the numerical recipe itself, you are reducing the amount of friction by, again, probably about four orders of magnitude.
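
To make that contrast concrete, here is a minimal sketch in Python; the recipe, the region codes, and the adjustment factor are hypothetical placeholders for illustration, not Lokad's actual numerical recipe:

```python
# Minimal sketch, hypothetical names: instead of hand-editing thousands of
# SKU-level forecast outputs, a single condition is added to the recipe
# that generates them.

def baseline_forecast(history: list[float]) -> float:
    """Stand-in for the real forecasting model: average of recent demand."""
    recent = history[-13:]  # last 13 periods, for illustration only
    return sum(recent) / len(recent) if recent else 0.0

# The "one line of code" style fix: a judgment call encoded once.
REGION_ADJUSTMENT = {"CA-WEST": 0.6}  # e.g. a port blockage announced today

def forecast(region: str, history: list[float]) -> float:
    # The added condition applies automatically to every SKU in the region
    # at the next daily run, instead of expiring like a manual override.
    return baseline_forecast(history) * REGION_ADJUSTMENT.get(region, 1.0)

print(forecast("CA-WEST", [100.0] * 20))   # 60.0
print(forecast("EU-NORTH", [100.0] * 20))  # 100.0
```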

And as a plus side, sometimes—not always, but sometimes—you realize, damn, this insight is part of a broader pattern I had not thought about, and thus this is not a quick fix that I’m making. I am actually improving for good the numerical recipe because I realize, damn, there is a class of stuff happening in the world, and if my recipe can actually take care of that, and I have now an extreme example that magnifies the thing that I had not understood, I will be playing the supply chain game better.

So you see, it’s very interesting, and that’s why it becomes very accretive. Most of the corrections are not accretive. But I would say, again—that’s the experience of Lokad—maybe one out of ten turns out to be very accretive. Yes. And that’s, again, we are back in business in the sense of back in business in the scarcest resources are the people who know how to play the supply chain game, and here we are maximally extracting their most valuable insights to make them permanent in the system.

Conor Doherty: Well, again, this is to try and convert the system into an asset. Again, you keep using the term accretive, but to me, while that term is precise, "a productive asset" is the phrase that I think resonates more strongly with people. The idea that, no, I want this. I am still applying my override, my insight, my expertise. It's just that I'm not doing it in a way where I am holding the hand of the system every single day and dedicating my own time and bandwidth every single day to trying to put together a PO. It's: no, no, how can I make this system a better asset so that I have more free time.

So again, it’s changing it from a routine override to maybe, okay, this is a once-off override, but the compounding effect of applying that to the system is that the asset gets better and better and better because it learns. Yes. And again, this is the system of intelligence that we’ve worried about before, and it learns—again, no Skynet—it learns how because the supply chain practitioner himself is learning. Yeah.

Joannes Vermorel: This is just, again, I’m not going to invoke genius or anything. In what I describe, Lokad has been doing that for 10 years. The system learns because there are smart humans in the process who just learn. And the interesting thing is that, again, this alternative approach that I propose is very parsimonious in the way it consumes those people.

So those scarcest assets, those people who can play the supply chain game right for your company—it doesn't abuse them. It preserves them so that they can improve the system itself, plus all the other things. Because you see, the thing is that if you want to play the supply chain game right, it's not just about having the right forecast. You need to have good relationships with your suppliers, you need to have good relationships with your clients, you need so many other things.

But again, you can only do those other things if you’re not in constant firefighting mode against your own forecasting system.

Conor Doherty: Yes. Well, again, just to provide a little bit more perspective because I want to push on and dig a little bit deeper, but I do want to provide some outside perspective here and also reference something you’ve said before. So, for the sake of evenness, we’re talking about overrides, people discussing, coming to consensus. ASCM’s S&OP guidance would suggest that consensus is “a core step of the decision-making process in any successful company, not a failure mode.” But in your book—and I know you’ve said various versions of this—but in your book, you refer to routine manual corrections as signaling a serious design flaw. That’s page 330.

So my question to you, a very pragmatic one, and one I’m sure people are interested in learning about, is: is there a practical test for differentiating between overrides that are design-flaw-based—they’ve come from a design flaw—and overrides that are, no, no, this is volatility, and I do need to step in?

Joannes Vermorel: Again, this is a game that you cannot win. The vendor is acting against you. The house of the casino always wins. You see, it’s a catch-22 situation. You are facing an adversarial situation where your company—no matter how smart you think you are—you are facing a software vendor who has 100-plus clients. And they have played the game so much more than you. You’re not going to win this game. You’re not.

And you see, it is the sort of situation where the only way to solve it is to accept that there are classes of games that are, you know, not fair. Again, we are going back to that. If you start to say there are classes of overrides that are legit, okay, what will happen is that you will be discussing that with your software vendor. Your software vendor will reverse-engineer that, and you know what, within two years this class of acceptable overrides will benefit from sophisticated features built to "help" you, and the whole thing gets magnified in sophistication.

You see, because you say, oh, you only care about this subclass of overrides. “Let me help you. I will design features to help you.” And those features—damn, they are so good—but they introduce complexity. Complexity means suddenly it takes more time because you have things that are more complex, more powerful, but more complex and thus more people, and we are back into the problem of, you know, the incentive is to sell more seats.

So you see, you will not solve this problem. It is not a problem that can be solved. It's like expecting someone in the 19th century who was a horse trader to suddenly start advocating for automobiles. Not going to happen. Not going to happen. Every single time something comes from the automobile sector, the guy will come up with plenty of good reasons to keep you using your horses.

You have like a deadlock of incentives. Yeah. And how did the situation unfold? Well, the people who were doing horse trading mostly went bankrupt and that's it. They removed themselves from the market and that's it. Unfortunately, we have this deadlock of incentives, and due to opacity and a few other things, I believe that this situation has gone very, very bad specifically in supply chain.

Fine, don’t worry, the market will find its way because the market is not a great educator, it’s a great filter. The market will filter. It will do. And the interesting thing is that the market is an unintelligent filter; it doesn’t need people to understand what’s going on. It will filter nonetheless whether people understand or not. The beauty of markets.

Conor Doherty: Okay, well I’m going to push on because I think there’s a—from your perspective, I’m sure one of the hidden—not really all that hidden, it’s very explicit—costs of manual override is basically keeping people as much as possible in the loop. Now, in your book, you argued that most vendors, not us, but most vendors “deputize users as human co-processors of the system.”

Now, in response to that—I know, because I’ve had this discussion a couple of times this week already—IBP and S&OP practitioners, no problem with that. They would argue the exact opposite that—and I’m paraphrasing here—the process in our experience can’t replace human decision-making, especially for cross-functional trade-offs that just aren’t evident in the data or process. So is there any—you, I know, you’re already saying no—but is there any merit to that position before you address it?

Joannes Vermorel: The only way to assess the merit is to step back because you see here we have a conflict of interest. Because obviously if you're part of the S&OP team, you have a vested interest in the preservation of the S&OP process. So we need to step back and think, okay, we are in the 19th century—that's our analogy for now. Companies need horses to operate, so they buy horses. So that means that inside the company there is a mini department for that—and horses are very expensive—and by the way, it's a slightly horrifying statistic, but in the 19th century the life expectancy of a horse that was used to work in, you know, a mine or something was about four years.

So horses were dying all the time. Interestingly, historians claim that, for example, 19th century New York smelled like dead horses, because there were so many horses dead on the streets at any time that the entire city smelled of dead horses. Okay, back to the anecdote. You have this vendor who's selling horses. You are a large company. So you actually have an entire division of people who are experts in getting good prices and good horses from the vendors, you know, that's their specialty.

And guess what? Do you think that someone who, inside the company, has for his entire life been an expert in getting the best horse at the best price is going to be super enthusiastic about the idea of transitioning toward cars? No. So you see, my answer to that is: the opinion of people who are conflicted should be disregarded, and you need to step back and make up your own mind. And that applies, for example, to Lokad—to what I'm saying.

I am a vendor, I’m trying to sell my stuff, so what I’m saying is that you need—you cannot trust what I’m saying. I have my own incentives, so be skeptical about what we’re saying. Exactly, exactly, maximum skepticism. Maximum skepticism. You have to assume that I am maximally biased.

And then, because we are dealing with evolving technologies and whatnot, the general advice is to pick analogous situations in history, because this meta-game of a new technology entering and disrupting has happened hundreds of times. What I'm saying is like a mini, mini, mini battle in a gigantic war of progress where people and companies were kicked out and replaced by better technologies. This thing has been played literally for 200 years and there have been dozens and dozens of examples.

And the interesting thing—if you study history, and I suggest to the audience to study a little bit of history—is that the patterns are always the same. It's always the same. And I love Douglas Adams—yes, a fiction writer, an incredible writer. I really recommend his Hitchhiker's Guide to the Galaxy. He has such an incredible quote. He says, if you actually discover a new technology before the age of 25, for you it's not new.

It has always been part of the world. It is just the world as it is. If you discover this new technology between 25 and 35, he says this is an incredibly exciting development that captures your interest. And then if the technology emerges after the age of 35, he says, this is heretical and this technology should be banned. And those are the three stages of human development with regard to new technologies and new stuff. It is hilarious, and it is so true. It is so true. It's like instinctive, visceral reactions. And there was also Niels Bohr who said that science progresses one funeral at a time. It's a little bit more morbid, but it's exactly the same idea.

Conor Doherty: Well, another point again implicit in the question, which again is a sort of composite from comments that I received from the audience building up to this discussion—excuse me—the idea that, well, the system can't handle it, or human judgment surpasses the system that we currently have. And it's like, okay, that might be true in this specific context, but the context needs to be evaluated just a little bit. For example, if you're talking about a company that has more than, I don't know, a few hundred million turnover to a billion dollars per year, their IT budget is going to be quite substantial. They can afford the kind of systems of intelligence and technology that we're talking about.

So again, it’s like if you’re spending, I don’t know, hundreds of millions on your ERP, you can afford the kind of technological intervention that we’re talking about today. Yes. So again, every situation is unique. Yes. But also the thing is that it is actually quite cheap. Well, that’s the other—that’s also the sort of thing that is maddening—is that because in terms of paradigm, those companies are stuck with, I would say, ideas that had a time of validity that ended in the early 2000s.

You know, those ideas used to be valid. I would say this paradigm of manual overrides was valid from probably 1975 to 1995. For those 20 years I would say yes. Computers were crappy. Languages and compilers were crappy. There were so many crappy things and the capacity was so limited.

People—humans—had a genuine advantage in just touching the numbers and being done with it. So that was, I would say, the time period for which this approach had validity. It expired, I think it expired in 1995. It became absolute nonsense by 2005, and by 2015 it was crazy. So you see, it's again—is it crazy to use a horse in 1910?

Not yet. Cars are still crappy, unreliable, etc. Is it crazy to use a horse in 1920? Ah, cars start to be quite good. So at this time it starts to be a little bit insane. Is it crazy to use a horse in 1950? Yes. Yes. Absolutely. The cars of 1950 are absolutely better than horses in every single dimension that you can imagine. There is no way back. So you see, the craziness or the damage to your company increases as time passes.

Conor Doherty: Just to stitch together a point from a discussion we already had on chapter two of the book, the history chapter. Yes. We were talking about vendors, the fetish for buzzwords and whatnot. I do want to point out that there has been obviously an aggressive campaign to convince people that, you know, you need to keep these—you need to keep horses, in this analogy that you're using.

Again, skepticism has been missing, is what I would say. Yes. Yes. I mean, again, we are talking of a dozen vendors worldwide that are in the billion-dollar annual revenue range and that are lobbying this whole industry like crazy. So, I mean, Lokad is just like a very, very small guy and we are doing what we can on YouTube and LinkedIn, but we don't have our own trade shows, we don't have massive events. We are, again, very, very small.

And I’m just saying that don’t expect the incumbent, the people who are deadlocked in an outdated technology, to ever see the light and propose the alternative. By the way, again, look at the history. All, for example, for digital photography and Kodak. Kodak internally quasi-invented digital photography. They literally had reports, fairly accurate reports with literally like a 15 years runway, to say digital photography is coming. We need to transition everything because this thing will win eventually. And they even got like a decade and a half ahead the correct timeline.

So not only did they know—and usually knowing that something is coming is relatively easy; knowing exactly the timeline is much harder—but Kodak knew it was coming and they had a very, very good estimate of the timeline, and yet, and yet, they failed. They failed to react. You see, this is again: if you're an incumbent and you're dominant in a certain market, a certain way of doing things, disrupting yourself—most companies fail. Most companies fail. That's just the way it is. I hope for Lokad we will manage, but you know we have to be reasonable: most companies fail.

Conor Doherty: Again, there’s some comments from the audience and I do want to come back to—there’s a concluding question I want to get to, but it ties it together nicely so I’ll save that. But I do want to touch on the idea of accountability because that is, again, having these conversations with people, one of the things they’ll say—even if they, because look, let’s be honest, a lot of our audience kind of already agrees, but I don’t want to just preach to the converted; I do want to push—one of the things they will say is, even if all of that’s true, we accept that there’s maybe a cost of inefficiency, like having people pay for the same decision twice, whatever, but we have accountability.

So for example, if I have a PO, the forecast is generated, it says order 200, Joannes comes in and says, "That should be 2,000," at least when I'm stuck with, I don't know, 1,500 extra units or whatever, I know who to blame. Joannes is accountable for that decision. Yes. But in the system of intelligence that you're advocating, where you take people out of the loop, where is the accountability? Where's the audit trail? Who's responsible for good or bad decisions?

Joannes Vermorel: So again, history—this game has already been played. So let’s have a look at a situation where it has already been played. Late 60s actually, the first business systems emerged, IBM, and you know what? They can do accounting and they have processors to do arithmetic. They can do addition, they can do multiplication. Guess what? Those systems have bugs.

So something that nowadays we don't even—we have forgotten because computers are so good—computers never make any mistakes in arithmetic. I mean, in fact, in reality they do, but with a probability that is so low we don't care. So essentially we take for granted that we have absolutely godlike, perfect machines as far as basic arithmetic is concerned. That's a modern computer. That's a basic expectation: it can do basic arithmetic perfectly. But what about the 60s? It was not yet the case. Computers at the time had problems, bugs, and sometimes, you know, bugs like the multiplication is wrong.

Now the question the accountant at the time was asking—same question. I am doing the accounting for the company. I am responsible to make sure that, you know, everything, this double-entry system is done right. You want to say that I am going to—who is going to be accountable for arithmetic mistakes? You know, the accountant himself was responsible for arithmetic mistakes and it was his value to say, I am the accountant. I guarantee that the calculations are right. And now you want me to say that this company—how is it called—Intel or something, IBM or something, computers, machines, I don’t know, they are going to take this responsibility? What if our accounting—and by the way, it was a real concern—is wrong? Are we going to blame IBM?

Well, guess what? Yes. Yes, that’s exactly what unfolded. People said—long story short—the accountant decided, okay, I’m going to trust that the basic arithmetics are going to be done right, and you know what? I am going to sue the hell out of IBM if it’s not the case. And guess what? IBM got their machines quite right and problem solved.

So, fast forward here, what we’re saying is that it’s not that there is no accountability. It’s just that the accountability is different. The vendor is responsible for essentially the integrity of this programmatic system. Yes. But what Lokad does—the system of intelligence—yes, it is responsible for its integrity. But there is a supply chain scientist, potentially with the help of the brand, who is responsible for the integrity of the numerical recipe.

So you see, fundamentally, the numbers are still—there is still, very much at Lokad—that's why we say the supply chain scientist is a person, and every single one of our clients has a supply chain scientist attached to them who is personally responsible for the quality of the result. And what is the purpose of Lokad? The purpose of Lokad is to provide all the support that every single supply chain scientist at Lokad needs to be able to bear this responsibility.

So you see, yes, the supply chain scientist is ultimately taking the entire responsibility, but it is the responsibility of the company, Lokad, to make this responsibility as manageable as possible. So it's like, you know, it is not like you are on your own and if you fail, it's all on you. No, no. The company is responsible too. Think of it like airplane pilots. Yes. The pilot is fully responsible for making sure that passengers arrive alive at their destination. But at the same time, the aircraft manufacturer is fully responsible for the integrity of the airplane and for making sure that this plane is as good as it can ever be made for the pilot.

You see, it’s a core responsibility. And yes, there are limits to the responsibility. Yes, there are limits, but at least, you know clearly, there is a lot of technology involved, but there is a lot of responsibility involved. It’s not like technology without accountability. There is accountability at every stage. Lokad is the same.

Conor Doherty: Yeah. Again, just to echo that slightly—and excuse me—the idea that, because we are talking about Lokad here, the idea that using the technology—this, at least, our system of intelligence—it will differ depending on who you go with, but just for the sake of that discussion, the responsibility for the integrity of the algorithm, the plane in that analogy, rests with Lokad or with the supply chain scientist, the maintenance, the repair, etc. Okay.

Joannes Vermorel: But, for example, we have a co-responsibility, because what if the supply chain practitioners miss a very important insight? See, we are not specialists of every single business that we serve. So ultimately, there's a relationship with the client, of course. So ultimately, the client bears their own responsibility: what we want is to be incredibly diligent, reliable executors of their insights. Yes. But fundamentally, if someone is serving a very specialized market, like wind turbines for a particular region of Canada and whatnot, you know, tons of specificities—we are not experts in that.

And there is a change of regulation about one of the provinces of Canada that we didn’t know about—I mean, Saskatchewan or something like that—yeah, I mean, and it’s the responsibility of the client to tell us, the vendor, “By the way, beware, we have this thing that is coming. You need to do something. We think it is impacting for your—” And that’s impact. That’s improving the asset. That’s what we’re talking about.

Exactly. So you see, Lokad is responsible but if critical semi-hidden business insights are never communicated to the vendor, I mean, again, we don’t pretend to the client that we’re going to become even better experts of your own business than you already are. Again, the scarcest resource comes back to the supply chain practitioner who knows. That’s why this person is so important.

Conor Doherty: And that comes then full circle. We're going to go on to the audience questions now, but that comes full circle to the point that was made right at the start, which is that we're not making the argument that there don't exist core insights in the heads of practitioners. So again, when we say humans in the loop currently is bad, what we mean is humans in the loop as the loop currently is, is not what we would advocate. Simply, if the loop means an asset that you're trying to improve on a daily basis, such that tomorrow you don't have to correct the mistakes of today, then we're all on the same page on that.

If it is, no, we’re going to have a system where you have to babysit it every single day—yes, we have to have consensus on every single line every single day or anything like that—then no, that’s where we differ.

Joannes Vermorel: And again, just think of it a little bit—another analogy. Think of it as the Toyota approach. You know, you produce cars and you have cars that come out of your factory with defects. So what do you do? Step one, you just have a crew that is there ready to fix the defects. So you give the car to the client, they drive 20 kilometers, there is a defect, they come back, and you have a crew ready on standby to fix the defect in one hour. That’s one way. Do you think that is the way Toyota would do it?

Absolutely not. They think, if a car after 20 kilometers has to come back, we need to fix the freaking factory so that it doesn’t produce cars that have defects that force people to come back after 20 kilometers. You see? So you see it is the same sort of thinking. You need to have the deeper fix. The deeper fix—fixing the numerical recipe—is a deep fix as opposed to the manual override which is a quick fix. The manual override is the exact same thing analogous to teams on standby at the exit of the factory, just to get the vehicles back after 20 kilometers because they faced an issue.

That is—that’s why I say it’s madness. That’s why I say that the correct way is, no, you have to fix the factory, the thing that produces the forecasts in the first place. That’s what needs thinking. Okay. And fixing.

Conor Doherty: All right. Well, thank you. I’ll push on now—some comments and questions. So, this is from Antonio. Hello, Antonio. Comment followed by question. “I agree that manual overrides will be gone by the next cycle of forecasting, i.e. next week.” You were talking about what you do today will expire, the lifecycle. “However, repeated manual overrides could suggest a disconnect between business and leadership expectations, perhaps tied to long-term financial gains and the best estimated demand. In some cases, the best demand forecast might even be at odds with the narrative pushed top-down. In situations like this, how can supply chain analysts like us bridge the disconnect?”

Joannes Vermorel: You need a method that makes the disconnect obvious for everybody. That’s it. You see, what you’re saying is that, “What if my CEO is wrong and he pushes things that are just nonsense?” You know, it’s a polite way to just say that. You’re not going to win a battle, an argument with your CEO. So, forget about that.

The only thing that you can do is essentially to have—again, that's the way you engineer the thing—to tell your CEO, to tell everybody: those are the core assumptions. The cost of money, for example, you know, the interest rate when the banks lend us money, the lead times that we observe, and so on, and so on. It's just facts, facts, facts, facts, and now I can tell you that my system is just a machine that takes the facts and generates the decisions. If you don't like the decision, we disagree on the facts.

The good news is facts are not opinion. Facts exist independently from your opinion, the opinion of the CEO. And the CEO would say, yes, let’s get the facts right. You know, if we have—in fact we have a disagreement because in fact the CEO says, but your interest rate is wrong. It’s wrong.

Say, okay, okay, let’s double check. Let’s call the CFO and see who is right, who is wrong. But you see, you need to objectivize the system so that people can play this game rationally, rationally without having perverse incentives all over the place. Thus that means that you need to have essentially a machine that is what Lokad engineers—that is facts in, decisions out.
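
As an illustration only—the field names and the decision rule below are hypothetical and far simpler than any real recipe—the "facts in, decisions out" idea might look like this in Python:

```python
# Minimal sketch, hypothetical facts and a toy decision rule: the system is a
# machine that takes explicit facts and returns a decision, so any disagreement
# becomes a disagreement about the facts.

from dataclasses import dataclass

@dataclass
class Facts:
    annual_interest_rate: float   # cost of money, e.g. agreed with the CFO
    lead_time_days: float         # observed from supplier history
    expected_daily_demand: float  # estimated from sales history

def reorder_quantity(facts: Facts, stock_on_hand: float) -> float:
    """Toy rule: cover demand over the lead time, net of stock, shrunk
    slightly when carrying stock is expensive."""
    coverage = facts.expected_daily_demand * facts.lead_time_days
    carrying_discount = 1.0 / (1.0 + facts.annual_interest_rate)
    return max(0.0, (coverage - stock_on_hand) * carrying_discount)

# If the CEO dislikes the quantity, the debate moves to which fact is wrong
# (interest rate? lead time? demand?), not to overriding the output by hand.
print(reorder_quantity(Facts(0.08, 30.0, 40.0), stock_on_hand=300.0))
```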

And yes, there is an element of subjectivity in the facts, because what counts as a fact or not is something you need to think about. It's not a given. It's not as if there is something in our universe that is objectively, immutably a fact, and something else that is objectively not a fact. The boundary between fact and non-fact—obviously there are things that are very easy, you know, very, very, very easy, but there are cases where it's difficult, where the boundary is fuzzy, especially business-wise. What counts exactly as a fact can be a little bit fuzzy.

So the only thing that you have to say here in this sort of situation with the CEO is that we need just to clarify this boundary of what counts as a fact and what does not. And then we are back to the accountant who says to the CEO, those are the books that the machine computes based on our accounting rules. I don’t make the rules. If you dislike the fact that the company is losing money, don’t blame the messenger. I’m just, as an accountant, I’m just a messenger of that.

It is obvious to a CEO that he cannot blame the accountant for the losses of the company. And if, again, we upgrade the way the supply chain game is played, those gaps and disconnects will be vastly reduced, just like nowadays there are very, very few battles between the CEO and accounting because they disagree on the numbers. You know, modern accounting systems are sufficiently good and reliable that it's not a political battle. The books are just what they are. It's not really up for debate.

Conor Doherty: Okay. So DMs come through and there were also some saved questions that I do want to pose. I’ll ask this one quickly because you kind of already touched on it, but I just want to read it because it is verbatim and it is interesting. Quote, this is someone who works in retail. Quote: “In our office we have people who will say, ‘Just change it.’ Sometimes it’s a planner, sometimes it’s a manager, sometimes it’s sales themselves. When they get it wrong, and they often do, at least we know who is to blame. How does Lokad’s system handle accountability?” You already touched on that but a shorter version of the previous question.

Joannes Vermorel: Again, it’s exactly the same way. A numerical recipe is attached to a supply chain scientist. This person has a name. We know who engineered the recipe. If the recipe is crap only once a year—you know, once a year there is a few glitches—you would say, ah, we can live with that. You know, it’s quite good overall. If this thing is glitching and crashing every day, and it works for other clients, then maybe there is a problem with the supply chain scientist.

You see, again, the accountability is just the same, just the same. Lokad is not—you see, that's the mental paradigm people get wrong. It is not a Skynet. What I'm proposing is not a Skynet transition. It is not about having a freaking autonomous AI that is accountable to no one running your supply chain. That is not what I'm saying. I'm saying that—

Conor Doherty: You say that that’s not what you’re saying though. That’s the impression people sometimes get, I think.

Joannes Vermorel: And they misunderstand. And by the way, our peers are all too happy to reframe what we’re saying as if Lokad was pushing for that. This is not. On the contrary, we are pushing for something where there is very clear accountability, very clear. And I would argue that in a classic supply chain setup, the accountability of endgame decisions is completely diffused. And same thing with forecasts.

The problem with this collaborative forecast is that everybody is touching the forecast. So at the end you have maximal confusion on who is accountable for what. I very much disagree. The status quo is completely horrible in terms of standards of accountability. This is not a strength of the present-day system or of mainstream supply chain theory. It's one of its main weaknesses.

Conor Doherty: Okay. Excuse me. I'm going to push on. This is a question from friend of the channel, Boris. Good to see you. Just a few parts to this, but I'll take it section by section. So first, Joannes, do you think there is a visibility or legitimacy threshold for economic decision engines and supply chain systems of intelligence after which traditional planning software starts to lose credibility at scale? So, like, a certain number of big companies start using these things, therefore everyone sort of jumps on the bandwagon.

Joannes Vermorel: Again, it will happen. You know, that's the sort of thing. I just say, when people start using cars—when will people get there—you know, it's what I said about, you know, remember Kodak. I say not only could they see the future, they could see the timeline right, which is incredibly hard. Usually it is not difficult to see that a certain future will happen, it's a little bit inevitable, and a lot of people get it. But the genius is to get the timing right.

For example, Steve Jobs with the iPhone. He not only got the future right, he got the iPhone exactly at the right point in time. I can go into the details—some key technologies—he had that in the back of his mind for quite a few years. And in fact, if you read the story of Steve Jobs, it’s very interesting. He had some key technologies and he waited just the right time so the technology would be there and it would snap and it would be a success.

So again, the problem with this question is that what you're asking fundamentally is: supply chains are going to transition—and I know it, that's my intimate conviction—to this new model. Why? Because banks already do it. It's like the old trading system versus quantitative trading. And that's even why I called this approach, at Lokad, Quantitative Supply Chain. It was an analogy with quantitative trading in banks. It will happen. For me it's just a certainty. And the interesting thing is that every year I see more and more companies who, on their own, without any input from Lokad, reinvent exactly what Lokad has invented.

So you see, it is not like Lokad is just incredible geniuses impossible to reproduce. No, no. Some people, thinking completely on their own, who have never heard of Lokad, just go through the same journey as Lokad and reinvent exactly what Lokad has invented. It is a little bit scary for me as a vendor, but it's also very exhilarating because it means that it's an idea whose time has come.

More and more people, even if they have never heard of Lokad, they just reinvent the exact same thing. So at some point the amount of people in the world who will just reinvent—whether they have heard of Lokad or not—they will reinvent the same principle, come to the same conclusions, and act accordingly. It will indeed reach a threshold where, bam, the world transitions. That’s very frequently like that for technologies.

When will it be? I don’t know. By the way, I’m just trying to accelerate the process. That’s why I’m spreading those books. I don’t know, but please trust me, I’m doing whatever I can, the very best. That’s why we’re here actually. We are trying to accelerate, to speedrun this transition. That’s what we’re trying to achieve.

And by the way, the term—if you take chemistry—is percolation. You know, what we’re trying is to have this theory percolate the supply chain community. The audience can look it up. It’s exactly the right physical analogy for what we’re trying to achieve. It’s also how coffee is made. That’s the verb for it: to percolate coffee. A percolator.

Conor Doherty: Excuse me. Some more practical questions—this one literally has "practical" in it—so again, this one's a DM. Okay, it's very short. Okay. "Joannes, practically: where do companies usually see the first payoff when they cut down on overrides?" It's an interesting question actually because it's like saying, where do you see not-waste—where does not wasting money show up?

Joannes Vermorel: Again, when you have a switch of paradigms, it is unfortunately not the right question to ask because you are looking at this through the wrong angles, through the wrong lenses. You’re looking at the problem from the old paradigm angles. So first is to accept that you have a change of paradigms. Again, let’s revisit horses and cars.

We are in 1910. Roads are terrible. So horses are the winning solution because many roads are so impossibly crappy that the cars of the day—which are not, you know, the sort of modern Land Rovers that we have—the cars of the day struggle immensely on those crappy roads. So it is not an obvious win. You know what they’re saying. But suddenly, if you start embracing cars, there are so many things that you can do more efficiently.

So I would say the benefits are extremely vast, but they are also extremely diffuse, and most of the benefits come from things that you would maybe not expect at first. For example, what you gain with this new way—for example with Lokad—is just that you can refresh your forecasts every day completely, all the way to the decisions, so that you have decisions that are always completely up to date. Yes. At scale. At scale.

And even if your forecasting model has not improved at all. So it’s the same forecasting model. You know, it’s not a forecasting—Lokad doesn’t pretend that we have a way to see the future better than anybody. No. But at least we are more up to date, and instead of having like a lag—because you see the problem of the game played with forecast overrides and whatnot is that you add delays. You add weeks and weeks and weeks of delays.

And remember, no matter your forecasting technology, it doesn't matter how you do it: the further into the future you have to forecast, the more inaccurate you are. You see, it's the iron law of forecasting. Forecasting tomorrow is much easier than forecasting next year, which is much easier than forecasting a decade from now. The further you go into the future, the more difficult it becomes to forecast—exponentially so. So whatever you can do to reduce the dependency on long-term forecasts will be a net win.

You see, and again that’s independent of the technology. And what I’m saying is that Lokad—one of the key unlocks—is just: just refresh every single day, and you get a boost that is much bigger than what you would get by a marginally more accurate forecast.
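
A back-of-the-envelope sketch of that point, with an assumed toy error model whose numbers are purely illustrative, not measurements:

```python
# Minimal sketch, assumed error model: forecast uncertainty compounds with the
# horizon, so re-deciding daily on a short effective horizon beats deciding
# monthly on a long one, even with the exact same forecasting model.

def forecast_error(horizon_days: int, daily_error: float = 0.02) -> float:
    """Toy 'iron law': the chance of being off grows with the horizon."""
    return 1.0 - (1.0 - daily_error) ** horizon_days

LEAD_TIME = 10  # days before a decision takes effect (illustrative)

# Monthly refresh: decisions rely on forecasts up to lead time + ~30 days out.
monthly = forecast_error(LEAD_TIME + 30)
# Daily refresh: decisions only rely on forecasts up to the lead time.
daily = forecast_error(LEAD_TIME)

print(f"effective uncertainty, monthly refresh: {monthly:.1%}")
print(f"effective uncertainty, daily refresh:   {daily:.1%}")
```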

Conor Doherty: There was a quick follow-up to that which is, “I’ll be honest, our lead times are pretty much always wrong. Suppliers miss dates.” Yeah. Yes. “How much of that can you handle?” All of it. Okay. Again, I don’t want this to turn into an infomercial. I’m just—I’m literally reading what’s on the screen, just for the record.

Joannes Vermorel: Again, that’s the contract. The problem: you have the wrong pattern. You are thinking, I’m going to get a forecast and I’m going to get it right. This is not the way Lokad operates. We have a probabilistic mindset. We say we don’t know about the future. It’s just irreducible, the uncertainty. So the future demand—ah, very fuzzy, very uncertain. Future lead times—very fuzzy, very uncertain. But uncertainty doesn’t mean lack of information.

For example, you tell me, "Our suppliers are so erratic." Yes, but what are we talking about? If we talk of, for example, getting a new airplane, Airbus has a seven-year backlog. So we are talking of a seven-year lead time. Yes. If we are talking about fresh food, people say, oh, it's so erratic. You realize that for strawberries, sometimes they deliver the next day and sometimes in two days. For strawberries. Okay. Okay. So there is uncertainty. Yes. But we are not even in the same universe.

In one way, a big uncertainty is tomorrow or the next day—that’s strawberry universe. The other way is maybe if we’re lucky it’s going to be seven years, and if we’re not lucky it’s going to be nine years. You see? Yes, you have uncertainty, but still it’s not exactly—you still know something about the future. The fact that the future is very uncertain doesn’t mean that it’s completely uncertain. Another example: will oil, the price of oil, go negative?

People had the mistaken assumption that it could never go negative. It can go negative, but people like Nassim Taleb correctly predicted that it would not stay negative for long. So he correctly predicted that yes it could go negative and he correctly predicted that it would not stay indefinitely negative. So you see. So you have stuff that is structural about the market.

And you know that oil can sometimes, in very strange situations (it did happen in the past), end up having a negative price on the market. But you also know that oil is fairly valuable, that oil is consumed, and that the world is not going to operate for any sizable duration of time with negative oil prices. So you know that if it happens, it will be a fringe event and not a lasting condition of the market. Again, there is uncertainty, but it's not like you know nothing. And that's exactly what Lokad leverages: there is tons of uncertainty, but uncertainty doesn't mean lack of knowledge. It just means imperfect knowledge, and we need a system that works with imperfect knowledge. That's it.
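
Again for readers of the written transcript, here is a minimal illustrative sketch of what "imperfect knowledge" can look like in practice. The numbers are hypothetical and this is not Lokad's actual numerical recipe: a supplier's erratic lead times are summarized as an empirical probability distribution, and the decision is taken against a quantile of that distribution rather than against a single point estimate.

```python
# Illustrative sketch only (hypothetical numbers, not Lokad's recipe):
# encode an "uncertain but not unknown" lead time as an empirical
# probability distribution and decide against a quantile of it,
# instead of betting on a single point estimate.
from collections import Counter

# Hypothetical lead times (in days) observed on past receipts from one supplier.
observed = [2, 3, 3, 3, 4, 4, 5, 5, 7, 12]

counts = Counter(observed)
total = len(observed)
distribution = {days: c / total for days, c in sorted(counts.items())}

def quantile(dist: dict, q: float) -> int:
    """Smallest lead time whose cumulative probability reaches q."""
    acc = 0.0
    for days, p in sorted(dist.items()):
        acc += p
        if acc >= q - 1e-9:  # tiny tolerance for floating-point accumulation
            return days
    return max(dist)

print("mean lead time:", sum(observed) / total)        # 4.8 days
print("90% quantile  :", quantile(distribution, 0.90))  # 7 days
# Ordering against the 90% quantile covers most of the lead-time risk;
# the mean alone would leave the tail (the 12-day receipt) uncovered.
```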

Conor Doherty: Thank you, and I was just responding to those messages; hope that worked. All right, last thought here, this one actually is quite topical, and then I have my closing question. This one is again from Boris. Excuse me. "Assuming that adoption of the kind of technology that we're discussing is still at an early stage in supply chain, which it is, how long do you expect it will take to move from early adopters to broad mainstream usage across the industry? Make a prediction, Joannes, you're smart with that sort of thing."

Joannes Vermorel: That's again a tricky question, because it's again a question of timeline. You know, we are back to the question of timeline, and that's the essence of it: a lot of people manage to see the future right, but will they get the timeline right? The timeline means knowing when it starts and how fast the transition happens. Again, my take is that it will probably be a slow transition for some verticals, but for others it will happen super fast.

So I think, you know, there is this quote that says, "The future is already here; it's just not evenly distributed." So here I think we have companies that operate essentially with monopoly protections. France is a specialist in that: we have 57% of the economy that is managed by the French state. Those companies face no competitive pressure whatsoever, so they can literally take two decades to upgrade anything. They're fine. The market pressure doesn't apply to them.

So, and by the way, India had the same sort of nonsense; a lot of countries have this sort of nonsense. So what can we expect? Here I will just give one anecdote. It comes from General de Gaulle. He discovered in a review, I think it was in the 60s, that the French army was still keeping cats. And why did they have cats? It was because of Azincourt, you know, a battle that happened some 600 years ago. At Azincourt we lost against the British in part, among many reasons, because rats had eaten the strings of the bows.

And so the army decided that it was a necessity to keep cats around so that the rats would not eat the strings of the bows. Fast forward to the second half of the 20th century, and General de Gaulle realized that the French army still had cats around, despite the fact that since essentially the Napoleonic wars we had been fighting with muskets. So these things were 200 years obsolete, but they were still there. Whatever.

So you see, I would say the transition will take a long time. I think historians will laugh at the fact that some industries were 200 years late to the party, just like, you know, General de Gaulle—“we had cat masters.”

Conor Doherty: Okay.

Joannes Vermorel: So I think we will have some situations that will be hilariously bad. I know that in India some very backward administrations were still using typewriters as recently as five years ago; I just read about it. Again, these are anecdotes. So it will be slow, but that doesn't mean that it will not be super fast for some verticals.

Which verticals? I would say probably e-commerce, very fast; retail, very fast; and probably domains like aviation and oil and gas, people who have a very strong engineering mindset, because they will get it, they will act on it, and their markets are very competitive. And then the second wave will probably be people who are a little more protected from the technology but who are full of very smart people. That would be, for example, luxury. You see, luxury is not super, super tech dependent.

But luxury, because it pays very well, is full of very smart people. Yes. I talked to some of them this week. They will not be on the front line, but they will be just behind, because in general they have very smart people, because they pay very nicely, and thus those people will react quickly even if they don't face, I would say, an immediate competitive pressure.

You see, because if you’re selling champagne, you’re not threatened by OpenAI. You know, champagne will be sold. But if you’re selling champagne, it turns out that your margins are very nice and so you can afford to have very smart people, and those very smart people will do smart things in general.

Conor Doherty: So the advantages are still there, but the pressures to pursue those advantages are not necessarily the same.

Joannes Vermorel: Exactly. And then it will go on and on and on through the markets, with, you know, essentially the state-run monopolies at the very end of the queue; they will upgrade 200 years from now, when people have even forgotten what the point of those forecast-overriding practices was. Just like General de Gaulle wasn't even readily aware of why his own army had cats. It was a mystery: "Explain to me why we even keep cats," and a lot of people didn't even know why they were still keeping cats. You had to resort to a historian to get the answer.

Conor Doherty: All right. Well, we've covered history. Yes. We've also been talking for almost 70 minutes, and it's been lovely. It's good to be back. Yes. To draw things to a close and ask for your concluding thought, I want to reference something you said before, because I want to make this very practical for people; again, the topic is the hidden cost of manual overrides. There was a comment you made, I think last week, in one of your posts on LinkedIn, where the question you asked the audience was: where are you paying for the same decision twice? When it comes to overrides, I really like that phrase, where are you paying for the same decision twice, because for me that's a really nice way to conceptualize the hidden cost of overrides.

Ultimately, you’re paying for the same decision, the same action that you take, at least twice. If you have 10 people overriding, it’s more than twice, in fact. So it scales with every touch point.

Joannes Vermorel: And again, the pain rises exponentially. For now, the cost of not doing this is like the cost of not using cars in 1910: everything is not yet fully ready for cars, so the cost of not using them is kind of bearable, because your competitors who are using cars still face those crappy roads and those crappy cars.

But every day that passes, the gap magnifies. Yeah. And then very quickly it reaches a point where the gap is so large that you can't even recover, and then your company has disappeared. And that's what happened. So what I'm saying is that this is a situation where the cost starts low but increases fast. How fast, I'm not exactly sure, but what I can say is that the moment the world starts transitioning, the companies that haven't made this transition will most likely go bankrupt. It's very obvious.

You know, it's just like you can resist using computers in your company, just stick to typewriters, but if you resist introducing computers for too long, there comes a point in time where the market sanctions you with bankruptcy. And it happened again and again and again. Again, markets are filters. If there is one thing that the audience must remember, it is this: the market never educates, the market filters, and it does that without any coordination, without people even being aware of what is happening. The market filters irrespective of the will and intent of the participants. It's beautiful. A little bit scary as an entrepreneur, you know; for me, it's a little bit scary, but this is the free market at play.

Conor Doherty: All right. Well, Joannes, thank you very much. We’re out of questions. We’re definitely out of time, as always. Thank you very much for joining me. Good to have you back in the studio. And to you, thank you all for attending, for registering, for your questions, for the conversations that we had in anticipation of this discussion. It’s always good to get some perspective from the audience so that I can filter it through to Joannes. And you know what I’m going to say: if you want to continue the conversation, reach out to Joannes and me on LinkedIn. You’re already here. You can see our names. Click on them. Connect with us. We’re always happy to talk. And with that, we’ll see you next time for Breakdown. Please do get back to work.