00:00:00 Introduction and clarification of the topic
00:02:55 Companies revisiting old approaches with AI
00:04:28 Failures of smart engineers in supply chain
00:05:44 Lokad’s automated website translation with LLMs
00:09:15 The four key proofs for failures
00:12:24 Why RFPs are dysfunctional
00:21:28 Why time series are dysfunctional
00:32:47 Why safety stocks are dysfunctional
00:50:04 Why service levels are dysfunctional
01:09:59 Audience questions
01:32:15 Closing thoughts
Summary
In a recent LokadTV episode, Conor Doherty and Joannes Vermorel discussed the inherent flaws in mainstream supply chain management and the over-reliance on AI to fix them. Vermorel criticized longstanding practices like Requests for Proposals, time series forecasting, safety stock formulas, and service levels, arguing they are outdated and economically unsound. He emphasized that AI cannot rectify these deep-seated issues, as it is not yet at human-level intelligence. Vermorel suggested that practical, experience-based adjustments by practitioners often compensate for these flawed methods. The conversation concluded with a Q&A session, highlighting the challenge of removing entrenched processes in large companies.
Extended Summary
In a recent episode of LokadTV, Conor Doherty, Director of Communication at Lokad, engaged in a thought-provoking discussion with Joannes Vermorel, CEO and founder of Lokad, about the pitfalls of AI initiatives in supply chain management. The conversation, which took place in Lokad’s new studio, revolved around Vermorel’s assertion that mainstream supply chain approaches, particularly those involving AI, are fundamentally flawed and likely to fail.
Vermorel began by critiquing the longstanding practices in supply chain management, which he believes have remained stagnant since the late 1970s. He argued that simply adding AI to these outdated methods is not a solution but rather an exercise in futility. Vermorel emphasized that the failures of past supply chain initiatives, even those led by highly intelligent engineers, should serve as a warning against the over-reliance on AI.
Conor Doherty challenged Vermorel by pointing out that many view AI as a panacea for supply chain issues. Vermorel responded by highlighting the limitations of AI, using the example of ChatGPT. He explained that if highly intelligent engineers have failed to solve these problems, it is unrealistic to expect AI, which is not yet at human-level intelligence, to succeed. He stressed that AI can reduce costs and improve efficiency in areas where solutions are already known, but it cannot solve fundamentally flawed problems.
The discussion then delved into the specifics of why Vermorel believes current supply chain practices are misguided. He identified four key areas: Requests for Proposals (RFPs), time series forecasting, safety stock formulas, and service levels. Vermorel argued that RFPs, particularly for enterprise software, are dysfunctional because they assume a level of knowledge and specificity that is unrealistic. He likened the process to writing a detailed specification for a smartphone without understanding its complexities, leading to a selection process that often disqualifies the best vendors.
Time series forecasting, according to Vermorel, is another flawed practice. He explained that time series data can be misleading because it fails to capture critical nuances, such as the difference between having one major client versus many smaller clients. This lack of granularity can lead to poor decision-making and increased risk.
Safety stock formulas and service levels were also criticized for being non-economic and overly simplistic. Vermorel argued that these metrics do not consider the broader economic context and often lead to suboptimal decisions. He suggested that a more holistic approach, considering the entire system and its economic impact, would be more effective.
Conor Doherty raised the point that many companies still achieve significant success using these flawed methods. Vermorel acknowledged this but attributed it to the practical, experience-based adjustments made by practitioners on the ground, rather than the theoretical models taught in supply chain management. He argued that these practitioners often rely on spreadsheets and manual overrides to correct the deficiencies of the established methods.
The conversation concluded with a Q&A session where audience questions were addressed. Vermorel reiterated that the main obstacle to change in large companies is the difficulty of removing entrenched processes. He emphasized that adding new technologies, like AI, is easier than eliminating outdated practices, even when the latter would lead to better outcomes.
In summary, Vermorel’s perspective is that the current mainstream supply chain practices are fundamentally flawed and that AI, while useful in certain contexts, cannot fix these deep-seated issues. He advocates for a more economically sound approach that considers the entire system and its complexities, rather than relying on simplistic and outdated metrics.
Full Transcript
Conor Doherty: Welcome to LokadTV, broadcasting today live from our frankly lovely new studio. We’re closing 2024 with an inoffensive and light-hearted topic based on his discussion at SCT Tech. Joannes Vermorel, to my immediate left, will explain his perspective on why AI initiatives in supply chain are probably doomed to crash and fail. Feel free to submit your questions in the live stream at any point, and we’ll get to those a little bit later. While you’re here, subscribe to the YouTube channel and follow us on LinkedIn.
And one last piece of housekeeping before we start talking about how much smarter we are than everyone else. It would be rude not to acknowledge the effort of so many to make the studio you see before you so nice. Everything from the screens behind me to the microphones in front of Joannes and me is a result of a lot of work at Lokad, particularly by Maxime Larrieu behind the camera over there and Baptiste Grison. So thank you both very much for your efforts. And with that, Joannes, I ask you, why are people so stupid?
Joannes Vermorel: Generally speaking, I think it’s the curse of the human species, myself included. But actually, with this playful title, I just wanted to bring attention to the fact that what I typically refer to as the mainstream supply chain approach has been largely dysfunctional for the last four decades. It has been pretty much a dead end in terms of technologies and practices. What companies are doing nowadays has barely changed conceptually since the late 70s. It’s the same numerical recipes, the same ideas, and it’s not working too well.
Now, the idea that you can just take things as they are, drop a little bit of magic AI powder on top, and suddenly those problems will go away, I think this is insanity or, in the title, stupidity. Again, I don’t think that people were stupid at the end of the 70s to try that. I just say that after four decades of consecutive failures, never learning from your past mistakes is where stupidity starts. When I see companies that revisit their supply chain while keeping the same approaches and processes, just with generative AI added on top, I don’t need to wait and see how things will unfold. I already know that it’s just not going to work. It’s just going to be a big waste of time, energy, and money.
Conor Doherty: But a lot of people would, in fact, view AI as something of a silver bullet when it comes to supply chain initiatives. Like whatever’s broken or flawed or based on poor assumptions will be remedied by the insertion of Gen-AI, for example. So you’re saying fundamentally that that is a misguided approach?
Joannes Vermorel: Absolutely. Let’s pause for a second. Let’s imagine that ChatGPT was as intelligent as an engineer from MIT. Excellent, we have artificial general intelligence now. It turns out that a lot of competitors of Lokad for the last four decades have done exactly that. They take engineers from MIT, give them big supply chain projects, and the ambition is to eliminate spreadsheets, automate decisions. They are very smart, and you give them budget and time, and yet they have been failing.
Those failures are not exceptional. Pretty much any company that I know of that is above, let’s say, one billion dollars of turnover and older than, let’s say, 20 years has probably three or four failed supply chain initiatives under their belt. Initiatives intended to eliminate spreadsheets by introducing smarter, much more integrated numerical recipes, and they failed. So now the question is, if you failed using very intelligent engineers, why would you think that something of inferior intelligence, because let’s be honest, ChatGPT is not yet human-level intelligence, is going to succeed?
Automating intelligence, the benefit is cost. For example, at Lokad, we have robotized the translation of our website. Now, if you look at the Lokad.com website, it’s available in many languages. For a decade, we did that with professional translators. It is now done automatically with large language models. Excellent. What we have saved is a matter of cost, but fundamentally, it was a problem that we already knew how to solve manually with people. AI did not solve an unsolvable problem, which was translation. It just let us do that cheaper and faster, which is great.
But now if we go back to the initial problem, which is supply chain predictive optimization: if all your previous attempts have failed while you had very smart engineers at your disposal, why do you think that something less ingenious, equipped with slightly fancier instruments, would really make a difference?
Conor Doherty: What you just touched on there leads to this question: when you use the term stupidity, I just want to unpack that a little bit. I know it was deliberately provocative, but even still, when you talk about companies making decisions based on faulty assumptions, and we’ll get into specifics, but when you say things like companies repeatedly making mistakes, that’s one form of error. Maybe you could classify that generously under stupidity. There is also an alternative, which is ignorance. Ignorance is neutral.
Stupidity, dumb, imbecile—these terms originally come from the psychiatric literature and refer to cognitive impairment. They denote a very specific meaning. Ignorance is neutral. You and I have IQs of 180 on a bad day, but we’re both ignorant of many things. I know nothing about botany, I know nothing about how shoelaces are made, but I’m not stupid. I don’t lack the neural infrastructure to learn these things; I just don’t have the time or access to the information. So, to the question, you have companies making bad decisions resulting in terrible or suboptimal outcomes, and then you have companies that don’t actually know of the existence of alternative paradigms. Do you see that as two fair representations of the problem, or do you see it just as people being stupid and making mistakes?
Joannes Vermorel: Yes, it is a fair representation of the problem, which brings us to the case of what exactly we are looking at. When we look at the specifics, we can decide whether we are talking about stupidity or ignorance. My proposition for today is that when we look at the specifics, it is so obvious that claiming it is ignorance is a stretch.
Conor Doherty: Let’s actually get into the specifics. You have four key proofs, or four ways to demonstrate what you see as the problem of either natural stupidity or natural ignorance in terms of corporate decision-making. They are RFPs, time series forecasting, safety stock formulas, and service levels. We’ll get into each one systematically, but top-level view, what is it about those four concepts that you think demonstrate your position?
Joannes Vermorel: I picked four, but there could be 20. These are at least four major ingredients of mainstream supply chain theory and practice, ingredients that you can find in probably 90% of large companies. For smaller companies, it varies a lot, but those practices tend to be quite uniform among larger companies. Because they are so widespread, we can look at those practices and ask the question: does this thing even make sense? Do I need an MIT PhD to realize that it’s complete nonsense or not?
If within a minute you can realize that it’s complete nonsense by just careful examination, we are definitively in the camp of stupidity. If the only way to recognize that you are in error is to do a very fancy, sophisticated experiment that requires a lot of funding and time, then this is much more an error in the category of ignorance.
Conor Doherty: As I said, let’s take them systematically. So the first proof in your argument is the existence of RFPs. Now, I presume that’s a catch-all for requests for proposals, requests for quotes, requests for information, etc. Is it all of those?
Joannes Vermorel: Yes, and again, specifically for enterprise software dedicated to supply chain optimization. I’m not discussing whether an RFP is the proper way to source bulk paper or some kind of super obvious commodity. The context is supply chain, yes. And more specifically, if you want to buy barcode printers for your supply chain, this is not what I’m discussing either. I’m specifically discussing whatever you want to source that is going to address your decision-making processes. That’s what I mean by supply chain. I don’t mean logistics, I don’t mean hiring truck drivers. I mean the decision-making processes that govern the flow. So all the fine print of what you buy, what you produce, at which price you sell, where you put your inventory, all of that.
Conor Doherty: Okay, well then I throw it immediately right back to you. What’s wrong with using the RFP process to source a vendor?
Joannes Vermorel: RFPs are completely dysfunctional. If you want to get a feel for the audience of what an RFP looks like, just imagine if you had to write in a Word document all the things that you expect from your smartphone. It is such nonsense. You don’t know. It has a bazillion features. Most of your smartphone works because of a lot of things that you don’t know about. Just listing all those features is an enormous amount of work, and if you were to list what you think your smartphone does, most likely you will get a lot of stuff wrong.
Just imagine you have hundreds of line items that you need to cover. What are the odds that by producing those hundreds of pages of requirements for your smartphone, you end up with a document that disqualifies both Samsung and Apple? Most likely you will.
Enterprise software is extremely complex, and this complexity mainly reflects the problem you want to solve. Supply chain optimization is itself very complex and quite complicated, so you cannot expect a super simple answer. You’re not purchasing iron by the ton or crude oil. You’re purchasing something very sophisticated, and that means you have no vendors that are substitutes for one another. There is no one-to-one matching between what vendor X is offering compared to vendor Y.
The problem with RFPs is it assumes that you already know exactly your solution, that you can have a complete specification, and then you want to channel supposedly a large number of vendors into your list of requirements. Software just doesn’t work like that. Producing a good piece of software takes about a decade, give or take. No vendor is going to radically adapt their technology for your RFP. You are channeling everybody through hundreds of pages of nonsense.
The process makes so little sense that usually when we get RFPs, we end up with something like 400 to 600 questions, and those questions are full of spelling mistakes. Very frequently, even the name of the client company itself is misspelled in the document because people did not care about the questions themselves. The work has been delegated to interns, to consultants, to whoever. You produce a huge amount of paperwork, and nobody even knows what half of the questions mean because they are so badly phrased. Most of the questions are not even questions but requirements in disguise.
Then the vendor replies with dozens, possibly hundreds of pages of answers that nobody reads. There is a committee that goes in stages for that, and the idea that you will have a rational decision that comes out of this completely irrational process is just mind-boggling. There is nothing in real life where you, as an individual, would engage in such an insane process. Why do you think that suddenly, just because you are operating for a large company, what would otherwise appear completely insane in your daily life would suddenly make sense just because it’s the practice of a large corporation? It doesn’t.
Conor Doherty: Well, again, a couple of points to tease apart there because there’s a lot. First of all, is your criticism about… Oh, sorry, let me go back. I’ve seen some of the RFPs you’re talking about. I’ve seen some of the examples like, “Do you still have a fax machine? Do you store your fax reports in fireproof cabinets?” I mean, I’ve seen these things. Of course, that’s utterly nonsensical. That’s an RFP in its current state. Are you saying that in a vacuum, divorced from any poor execution, just in general, the concept of RFPs to try and find a piece of software is just an utterly insane thing to do? And if the answer to that is yes, please explain what the alternative would be.
Joannes Vermorel: No, the idea of doing market research is not insane. Obviously, if you want to pick a vendor, you have to do some market research. The idea that you have to operate through the established practices of RFIs and RFPs is nonsense. That’s my point. My point is that those practices are deeply flawed, deeply, deeply flawed. When you have a process that is completely dysfunctional, then improvisation is much better.
If you are doing something that is not working, that is so terrible, stop doing it, and pretty much anything else will be better, anything that is not even more bureaucratic. My take is that those large companies would be better served by just an informal process, and that’s it. If you are willing to entertain the idea of having a superior version of the process, then there is also an alternative path. It’s discussed in one of my lectures on adversarial market research, where I outline a better way to do it. But even in the absence of knowledge of this better way, just removing this nonsensical process would already be an improvement.
Having a super bureaucratic process is not a good thing. It is a terrible thing. It slows everything down, dilutes the responsibility of everybody, and adversely selects vendors. Imagine, again, we go back to Apple. Do you really think that Apple, if you do an RFP for them, will actually pay attention to you? Will they modify their precious iPhone to please your corporate requirements? No, they won’t. So what you’re effectively doing is having the good vendors opt out on their own from your market research, which is complete nonsense. That’s the opposite of what you want.
My take is that when you have something like cancer, remove the cancer and do not ask yourself the question, “What do I put in place of the cancer?” If you’ve removed the cancer, you’ve already done something good. It is an improvement. Now, we can then discuss what could be even better, what to put in place, but the first stage is to acknowledge that when you remove cancer, you are improving the situation.
Unfortunately, and that’s where I come to the bureaucratic stupidity, the thinking is that the only alternative to a bureaucratic nightmare is another sort of bureaucratic nightmare. This is complete nonsense. In 15 years of business, I have never seen any RFP that was not deeply, deeply dysfunctional. It’s only variations between the circles of hell. Some RFPs are like the fifth circle of hell, some others are like the ninth circle of hell. It’s just variations in terms of nightmarish intensity, but otherwise, it is uniformly super, super bad.
Conor Doherty: That was Thomas Sowell and Dante Alighieri in the space of 60 seconds. It was very good. Well, actually, that transitions from the first point, which is about RFPs and the criticism of RFPs and RFQs, etc. That’s sort of how you might select an AI vendor.
Joannes Vermorel: Exactly.
Conor Doherty: If I can just finish the question because I’m transitioning a bit. The second point, though, is once you have selected an AI vendor, this is where we move on to the second point, which is time series forecasting, which you cite as the second proof of why your AI initiative is going to fail. Now, this is once you’ve already selected a vendor. What’s the problem with time series?
Joannes Vermorel: So once you’ve selected… First, thanks to your RFP, you’re most likely going to select a very bad vendor. That’s a given. You have a process that makes no sense whatsoever, so most likely you will get one of those worst vendors that specialize in ticking every box of those RFPs, no matter the amount of nonsense. You would already be in a position where failure is almost a given even if the vendor were not too dysfunctional, but on top of that you’ve selected a dysfunctional vendor in the first place. Which brings me to time series.
Time series are like the Alpha and Omega of the modern mainstream supply chain perspective. What is a time series? It’s just a series of points according to a given period. That’s going to be one value per day, one value per week, or month. When I say time series perspective, it means that you look at everything through your sales or your flow per day, per week, per month aggregated. Everything kind of fits those time series.
Obviously, with those time series, what you want, or at least according to the mainstream supply chain theory, what you should be wanting is time series forecasts, which is the extension of those time series into the future. If you have your sales data up to today, you want to have the forecast, which is just those time series extended into the future. So you have the amount of sales tomorrow, the day after tomorrow, etc.
Conor Doherty: What’s wrong with being provided with one actionable data point to plan for, for example, demand next week will be 10 units? That sounds great.
Joannes Vermorel: The main issue is that your supply chain cannot be meaningfully represented with time series. And what does that mean?
Well, let’s start with a super basic situation. You have a product that is selling steadily at 1,000 units a day. It has been selling at 1,000 units a day for, let’s say, the last three years. Super nice. Okay, what is the future looking like? Now, I’m going to look at two different situations that have the exact same history. Situation number one: you have a thousand distinct clients, and they are once in a while ordering one product. In aggregate, those 1,000 clients give you 1,000 units a day. Some clients are leaving, some clients are arriving, but it’s very stable. So, that is generating the time series. Now, what does that tell you? It tells you that you have a very steady demand that seems fairly robust. A thousand clients is not millions, but it’s not zero, so that looks good.
Now, the second situation is you have 1,000 units a day, but from one client. Yes, this client has been very steady, ordering 1,000 units a day for the last few years, but it’s one client. Now, what are the odds that demand could drop tomorrow to zero and stay at zero forever? Obviously, from the first perspective, where you have a thousand clients, I would not say it’s impossible, but it is very remote. Even if you had a catastrophic brand-damaging event, most clients would not even discover that. Even if you had a massive case of fraud, you would still have hundreds of clients that do not hear about it for months. So, the odds that all those clients, in perfect coordination, would just stop purchasing from you on the same day are not zero, but it’s a very, very low probability. We are talking one in a million, probably. It is rare.
Now, the opposite, if we have just one client, then it just takes a single manager to decide to pick a different supplier, and bam, you go to zero. If you say that you are going to lose this loyal client once in a decade or so, we are talking about a 0.1% chance. It’s not one in a million; it is several orders of magnitude more. This is still improbable, but compared to the first one, it is something that will most likely happen in a few years. Given enough time, something like a decade, it is almost guaranteed to happen. Here, I’m just describing two very basic situations that have the exact same time series representation. That is the crux of the problem: time series are simplistic. You can have several situations that are completely different and yet have the exact same time series.
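To make the contrast concrete, here is a minimal Python sketch of the two situations. The per-day churn probability and the ten-year horizon are assumptions chosen purely for illustration, not figures from the episode; the point is that two demand histories with the exact same time series carry radically different tail risk.

```python
# Two businesses, both selling 1,000 units a day for years: identical time series.
# Assumption (illustrative only): each client independently stops buying on any
# given day with probability 1/3650, i.e. roughly once per decade.
DAYS = 10 * 365
p_leave = 1 / 3650

def prob_collapse(n_clients: int) -> float:
    """Probability that demand drops to zero (all clients gone on the same day)
    at least once over a ten-year horizon."""
    p_all_gone_one_day = p_leave ** n_clients
    return 1 - (1 - p_all_gone_one_day) ** DAYS

print(f"1 client    : {prob_collapse(1):.2%}")     # roughly 63% over ten years
print(f"1000 clients: {prob_collapse(1000):.2e}")  # astronomically small (underflows to zero)
```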
Conor Doherty: And that matters. Why?
Joannes Vermorel: Because your decisions are very different. If you have a thousand clients, you can be very conservative with inventory. You can say, for example, “Oh, we are going to have many months of inventory in stock because it’s okay. If we lose some clients, we will adjust production so that we don’t run into a large excess of stock. Even if we lose clients, we will still have time to liquidate the inventory.” On the contrary, if you have only one client, that means if this client stops purchasing, your stock becomes dead stock overnight. All that you have left is a guaranteed inventory write-off for everything that you have in stock.
So, you see, in terms of supply chain decisions, you have two very different situations that require very different decisions. That’s why I say time series are insane. The hypothesis is that if you frame everything as time series, which is exactly what the mainstream supply chain is doing, then you can make reasonable decisions. What I say is no, you can’t. You can’t because time series do not let you capture some basic nuggets about your activity. You’re just blind. It doesn’t matter if you have more time series. Again, we go back to this one client versus 1,000 clients. It doesn’t matter if you have more time series; you are still stuck with the fact that it’s a bad representation of your data. It’s a super simplistic representation of your data.
Conor Doherty: Sorry, just to make sure to unpack for anyone who might not know what you’re driving at there, from a risk management perspective, you have to have different approaches in terms of financial allocation because your exposure is different.
Joannes Vermorel: It’s very different. Again, if we look at perishable items in a store, time series will let you represent your stock level over time. So, how many units do you have in stock in your store for, let’s say, yogurts? But the reality is that your products are perishable, so they have expiration dates. Let’s consider again, you have 10 units in stock. That’s a time series perspective. The day before, you had 11 units, whatever. You have your stock level ongoing. That’s a time series representation. Now you’re thinking, “I have 10 units in stock. Is it good or not good? Is it enough or not?”
Let me look at two situations. Situation A: the 10 yogurts that you have in stock will expire a month from now. That’s good. Someone who walks into the store will find yogurts with a month of shelf life. It’s nice for yogurts. Now, situation B: the 10 yogurts expire tomorrow. This is very bad. Your clients are going to dislike fetching yogurts that expire tomorrow. Maybe one client will buy one just for the consumption of the next day, but any mother doing groceries for a family and wanting to organize things for the week is not going to buy yogurts that are going to expire tomorrow.
So, under the same representation, 10 units today, which is a stock level, you are missing a very important piece of information, which is the composition of the expiration dates. If you have a software system completely engineered around this idea of time series, this information will always be ignored by the system because the system cannot even see that. It is not part of the time series paradigm.
Conor Doherty: And again, just to be very explicit for anyone listening, saying, “Okay, I hear all that, I understand what you’re saying, I follow the examples. How does that influence AI? How does AI fit into that picture?” Even if you’re using time series or probabilistic forecasting.
Joannes Vermorel: If you have a paradigm where the key information is lost, that’s the case with time series, it doesn’t matter if the person looking at the time series is an AI or a very smart engineer or whoever. The key information is already lost. If you look at your sales data through the lens of time series, you cannot see this one versus many customers. You cannot see the expiration dates. There are plenty of things that you just do not see. If you do not see, whether it’s an AI, a smart engineer, or a program applying some rules, the central information that you would need has already been lost. It doesn’t matter how much technology you pile on top of this paradigm.
Conor Doherty: All right, well, moving forward just a little bit, we’ve covered the first two ways: RFPs and time series. The third and fourth can possibly be lumped together as metrics, which are safety stocks and service levels. Discussing those separately or together, what is your objection to these? Because these are quite common. Most companies keep pretty strict safety stock and service level policies.
Joannes Vermorel: The problem with safety stock, for the audience, is you assume that you have a time series demand forecast, and you assume that your demand is normally distributed, that your lead times are normally distributed, and then you pick your service level. That will give you a target stock quantity to keep on hand, and that’s called the safety stock. That’s what safety stock really is.
Technically, you have the working stock, which is the average demand, and then the safety stock is the extra component on top of the average demand. But that’s a technicality. Overall, if you sum the working stock plus the safety stock, that would give you the target stock quantity that you want to keep.
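For reference, a minimal sketch of the textbook safety stock calculation being described, under its usual assumptions of normally distributed demand and lead time. The demand and lead time numbers below are placeholders, not figures from the episode.

```python
from math import sqrt
from statistics import NormalDist

def safety_stock(service_level: float,
                 mean_demand: float, std_demand: float,
                 mean_lead_time: float, std_lead_time: float) -> float:
    """Textbook formula: z-score of the service level times the combined
    standard deviation of demand over the lead time (both assumed normal)."""
    z = NormalDist().inv_cdf(service_level)
    sigma = sqrt(mean_lead_time * std_demand ** 2
                 + (mean_demand ** 2) * std_lead_time ** 2)
    return z * sigma

mean_d, std_d = 100.0, 30.0   # units per day (placeholder values)
mean_lt, std_lt = 7.0, 2.0    # days (placeholder values)
ss = safety_stock(0.95, mean_d, std_d, mean_lt, std_lt)
working_stock = mean_d * mean_lt
print(f"working stock: {working_stock:.0f}, safety stock: {ss:.0f}, "
      f"target stock: {working_stock + ss:.0f}")
```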
What is the problem with that? The problem is that it is the wrong way to look at inventory management. The goal of the company is to make profits. Safety stock is a non-economic perspective on those decisions. What does that mean? It means that it’s something that is not even making an attempt at optimizing the profits. The problem is that we have something that is not even attempting to optimize the profits. Why do you think this thing is going to be good profit-wise?
How do you actually optimize to make a profit? Well, it’s very simple. You look at, let’s say, a simple situation, a store. Let’s pick the first unit of inventory that will maximize your returns. I pick this one and put it in the store. That’s the thing that gives me the highest return. I pick the first unit that does that, and then I have to repeat the process with a second unit that maximizes the returns. Because it is a store, most likely the second unit that I will pick will not be the same product as the first unit.
The point is, I want to spread out my extra units to cover more demand. If I tell you that you can only order a first unit, you pick one unit. Now, if I say you can pick another second unit, most likely you want to take something else because, at a minimum, you want to increase your coverage in terms of demand for the store. If I tell you that you can pick a third unit, you will again pick something slightly different.
The point that I’m making is that the safety stock perspective adopts a perspective that is completely non-economic. It looks at a product in a store and just, again, for the audience, a mini market would have like 5,000 distinct products on the shelves. It looks at one product in isolation and then you decide in isolation whether you want more or less. I’m saying this is nonsense.
Again, just let’s have a look. If you have to do it manually, you are in a grocery store. You would not think in isolation if you need more or less of something. It’s a tradeoff. You have limited shelf space, you have a limited amount of money, so you would think, “Do I have enough of this? Should I reorder enough of this product, or is there anything else that I should be reordering first?” That’s how you think in terms of return on investment. That’s how you can think in terms of an economic perspective.
What I say is that safety stock is a non-economic perspective. It is a mathematically interesting perspective, at least from an educational perspective, maybe for applied math students just to give them a small exercise or something. But if we have to get to a real-world supply chain, and again, I’m taking a very simple situation like a grocery store, which is about the simplest thing that you can think of, we see that it’s a non-economic perspective. So we have a problem, Houston. This thing is just not even attempting to improve the bottom line of my company. This is just wrong.
The alternative that I’ve described is fairly straightforward. It is about picking the things that give me the highest returns. I pick the first unit and then the second unit, etc. We can go into the technicalities on exactly how we do that, but those are technicalities. My criticism with safety stock is that it cannot possibly be a reasonable approach because it is a non-economic approach. In practice, you very frequently end up with nonsensical situations. For example, you compute according to your safety stocks all the things that you should be putting in your store, and it doesn’t fit.
What you see is that’s where you end up with insanity. You end up with your safety stock telling you that all those products need all those units, and because everything is done in isolation, you have 5,000 products, and for every single product, you will get a quantity. When you do the sum of all those quantities, it does not fit.
If we go back to your AI, what is your AI supposed to do? Again, your paradigm says you compute your safety stock. Your AI can maybe help you compute the safety stock more precisely. I’m not even sure exactly how that would help. But the reality is that you have a paradigm that is broken by design. Your AI, no matter how it computes your safety stock, will still end up with those weird paradoxes. What does it even mean to improve when you have a non-economic perspective? Your AI cannot conjure meaning out of something that doesn’t have an economic meaning.
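As a contrast, here is a rough sketch of the unit-by-unit economic prioritization Joannes describes as the alternative: always buy next the single unit, across all products, that carries the highest expected return. The margins and sell-through probabilities below are invented for illustration; the ranking logic is the point.

```python
import heapq

products = {
    # product: (unit margin, probability that the 1st, 2nd, 3rd... unit sells)
    "yogurt":  (0.50, [0.95, 0.80, 0.55, 0.30]),
    "mustard": (1.20, [0.60, 0.25, 0.10, 0.05]),
    "bread":   (0.40, [0.99, 0.90, 0.70, 0.40]),
}

def prioritized_purchases(budget_units: int):
    """Greedy prioritization: repeatedly buy the one unit with the highest expected return."""
    heap = []
    for name, (margin, sell_probs) in products.items():
        # expected return of the FIRST unit of each product
        heapq.heappush(heap, (-margin * sell_probs[0], name, 0))
    plan = []
    while heap and len(plan) < budget_units:
        neg_return, name, rank = heapq.heappop(heap)
        plan.append((name, rank + 1, -neg_return))
        margin, sell_probs = products[name]
        if rank + 1 < len(sell_probs):
            # the NEXT unit of the same product has a lower (diminishing) expected return
            heapq.heappush(heap, (-margin * sell_probs[rank + 1], name, rank + 1))
    return plan

for name, nth_unit, expected_return in prioritized_purchases(6):
    print(f"buy unit #{nth_unit} of {name:8s} expected return {expected_return:.2f}")
```

The output interleaves products: the second unit of one product often beats the first unit of another, which is exactly the kind of tradeoff a per-SKU safety stock cannot express.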
Conor Doherty: Before we cover service levels, I do want to pick up on one point that you made there. You described safety stocks as a non-economic perspective. I understood that. You also talked about using SKUs in isolation, and that’s flawed. Well, then the opposite is presumably to look at things in combination. Could you sketch that out a bit more, that point about isolation versus combination?
Joannes Vermorel: Yes, I mean, again, supply chain is a system. That means you cannot disconnect the parts without changing their nature. A product sold on a shelf in a grocery store is not the same thing if I’m selling this product in isolation. People, when they go to a grocery store, expect a range of products, not just one product. And that’s the same thing for pretty much any supply chain that is non-trivial. A real-world supply chain is going to be like that. If you’re producing cars, you have to bring together all those parts to have functional vehicles at the end of the day. You cannot remove the wheels and say this is a car. A car minus the wheels is not a car; it’s just something else.
Fundamentally, you have systems where you have many distinct types of physical goods, and they only make sense when they are put together. It doesn’t mean, obviously, in a car, if you remove the wheels, then the car doesn’t work at all. In a store, you can decide that maybe you don’t want to have mustard on the shelf. Maybe clients are okay with you not having mustard, or on the contrary, you need to have three distinct types of mustard.
There is obviously a lot of subtlety depending on what you’re looking at. It’s not something that is black and white. But fundamentally, when you start selling mustard in a grocery store, it only makes sense with regard to what you’re selling that goes along with it. So what I’m saying is that when you adopt a perspective that puts those things in isolation, you’re missing the point. You’re missing the point of what makes the store attractive. You’re missing the point of the dynamics that go on.
People come into your grocery store and they purchase not one article. Some clients may walk in and purchase one item, but most of them will have a basket and many items. So what I say is that when you go for this safety stock perspective, you adopt a very strange, super simplistic mathematical perspective that puts your SKUs, your products, in strict isolation from each other. Even considering the simplest sort of supply chain that you can think of, like a grocery store, it already doesn’t make any sense. So why would you think that it’s going to make more sense in anything that is more complicated, like aviation MRO or something else?
Conor Doherty: Lokad has a specific term for that, the basket perspective. I think we actually released a flash card on LinkedIn a couple of weeks ago or maybe a month ago describing that. As you said, people don’t typically walk into a supermarket and buy just one thing. They will buy with a list in mind, and the absence of one thing can lead to losses. If people come in to buy 10 items and the 11th item they wanted is not there, and it is a critical item, you don’t just lose the sale of the 11th item. If the person leaves because of the absence of that 11th critical item, you lose the totality of the sales in that basket. So it’s the basket perspective. There’s a relationship between all of these things.
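A small illustrative calculation of the basket perspective just described, with made-up prices and an assumed walkout probability: once the rest of the basket is at risk, the expected loss from a single stockout is far larger than the missing item alone.

```python
# Illustrative only: item-level view of a stockout versus the basket perspective.
basket = {"bread": 1.5, "milk": 1.2, "eggs": 3.0, "coffee": 6.5}
missing_item = "coffee"
p_walkout = 0.3  # assumed chance the client abandons the whole basket

item_view_loss = basket[missing_item]
basket_view_loss = (p_walkout * sum(basket.values())
                    + (1 - p_walkout) * basket[missing_item])

print(f"item-level view of the loss   : {item_view_loss:.2f}")
print(f"basket perspective of the loss: {basket_view_loss:.2f}")
```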
Joannes Vermorel: Yes, and the thing is that if we go back to safety stock and AI, once you have adopted your safety stock perspective, it doesn’t matter whether your AI is super smart or super dumb, cheap or expensive, or whatnot. It’s already stuck in a corner that is a bad place where there will be no solution whatsoever. That’s why I say natural stupidity always trumps artificial intelligence. It doesn’t matter the sophistication of the technology, its accessibility, its maintainability. All of that is made completely irrelevant by the fact that you’ve already framed the problem in ways that are nonsensical.
Conor Doherty: I would agree with you. I do agree with you on that, but what I would say is I think that’s a really good example of the distinction I mentioned earlier between natural stupidity and ignorance. What we just described there is a real phenomenon, but it is very abstract. It requires a degree of understanding about the relationship between things that are not immediately clear.
Joannes Vermorel: I disagree. Whenever you have a discussion with someone who is completely uneducated running a store, they will know that it’s not magic. We’re not talking about super advanced math. Just go to any shopkeeper who has been doing that for a week, and they will understand that assortment is something that matters. You cannot think of the right amount of stock for a product in complete isolation of all the rest.
In fact, it is a sort of very elaborate absurdity that takes a university professor to propagate. It is absurd, and the only way to successfully promote this sort of idea is to be in an environment where you’re completely protected from the real-life consequences of this very bad idea. If you were to manage a store, you would not think like that. You can do a test: just discuss with whoever in your neighborhood is running any kind of store. If they think like that, they don’t. If you talk to the person who is managing the inventory, who is passing the replenishment order, like Mom and Pop’s shops, they will obviously think in terms of the whole.
Conor Doherty: That’s actually a good point. There is a distinction to be drawn, and I’d like to get your opinion on it: the difference between enormous multi-billion dollar conglomerates with incredibly large supply chains, placing, let’s say, hundreds of thousands of supply chain decisions per day, and mom-and-pop stores where it’s Joannes’s store and Joannes is taking the money out of his own pocket to buy these things each day.
It reminds me of something Peter Cotton said when we spoke to him a year and a half ago. He said that you make very different decisions when it’s your money on the table. The way you think about the problem is very different when you have to take the money out of your pocket. So, I’m just curious, why do you see very large companies making bad decisions, but when you gave the example of just go next door to a store.
Joannes Vermorel: That’s where the insanity lies. Large companies do not make those bad decisions because, despite what they say, they do not really follow safety stocks; the people they employ do not. That’s where it becomes insane. What is the actual landscape? The landscape is you have university professors that say you have to do safety stocks. You have supply chain textbooks that say you have to do safety stocks. You have supply chain AI-driven vendors who say we have AI-driven safety stocks. Great. Then you have companies that have safety stock-driven systems, or sometimes they are called buffers or whatever. There are various flavors.
At the end of the day, you have supply chain clerks called demand and supply planners, category managers, inventory managers—titles vary—who are using their spreadsheets to do something completely different. Usually, the typical reaction that I get is when I discuss with those people, they tell me, “Oh yes, the safety stocks, it’s part of our plan to use them. Next year, when we have enough maturity, we will use them for real. But for now, we had so many problems, we are doing something completely different. With my spreadsheets, I’m doing things differently. I know it’s a mess, but it kind of works. With more training, I will be able to use safety stocks one day.”
That’s insane because the reality is that whatever this person is doing is actually something that makes sense. This alternative recipe is just the thing that makes sense, and safety stocks are just the charade, the ambient charade, that is not working. It has not been working since at least 1979, as identified by Russell Ackoff. This is the reason why spreadsheets can never go away under those settings.
Whenever you say we are going to replace all those messy spreadsheets with software automation, it fails. It fails because safety stock is a bad idea. It doesn’t matter if you have AI-powered safety stock; it’s still a bad idea. It’s an idea that is so bad that it doesn’t work. Large companies try, fail, and go back to the spreadsheets. People go back to the attitude of, “I’m doing something a little bit my own way. When I get more training, I will use safety stocks, but for now, I need something that actually works.”
Conor Doherty: On that note, you have explained at length how safety stocks are flawed. I presume quite a bit of the same criticism applies to service levels. They’re not exactly the same, but in terms of decision-making processes, what is the policy that you’re enacting to arrive at a decision? Explain what your problem is with service levels, please.
Joannes Vermorel: My problem with service levels is that service level is a super defective proxy for quality of service. In fact, it has almost nothing to do with quality of service. What you want is to serve your customers well. That’s a given when you operate a supply chain.
Now, let’s consider a basic fashion retail store. What does it mean to have high service levels? If you say service level is a good proxy for quality of service, then high quality of service should translate into high service levels.
If you have a store selling a collection for your fashion brand, what does it mean to have high service levels? It effectively means that you still have every product, at least a few units, on the shelves till the very end of your collection. If you have high service levels, it means that your store is still full of stuff by the very end of your collection. How do you put in your store the next collection?
You need to make room by letting the old collection go, which means accepting that for those products, service levels will go to zero. Clients can still be very satisfied despite the fact that you have zero service level for many products. As some products are phased out, other products kick in, and your clients are still very happy. There is no correlation whatsoever between quality of service, which only exists in the eyes of the client, and what you measure with your numerical recipe, which is service level.
If service levels are a super bad proxy for quality of service, why do you think that an AI that is supposed to drive your service levels is going to do things that make sense for your company? Just like my criticism for safety stock, this is not an economic perspective. Here, you have a concept, service level, that is not a quality of service perspective. You give an instrument to your AI, so your AI has to deal with this instrument, service level, but it turns out that this instrument is completely inadequate to answer the problem, which is quality of service.
Conor Doherty: You’ve used a couple of nice phrases, one of which was “service levels are a super defective proxy for quality of service” and “quality of service only exists in the eyes of customers.” But then that leads to a two-part question. One, what is a good proxy? And two, if quality of service only exists in the eyes of customers, then how are companies supposed to actually know if they have good quality of service?
Joannes Vermorel: Those are very good questions. Let’s first start by looking at proxies. Let’s do some thought experiments. That’s a way to weed out the super bad ones. We don’t even need to do an actual experiment with an actual store; we can just do it as a thought experiment. That’s very cheap. So, the first thing is that we have to agree that if we take a store with the same products on the shelves, nothing has changed. Whatever we think is quality of service is not changing. If I look at the same store, same products, same time, and I don’t change anything, then whatever I think is the quality of service is not supposed to change.
Let’s revisit service level. Many companies measure service level as the percentage of products that are not out of stock. If 97% of your products are not out of stock, you would have a 97% service level. There are other ways to look at service level through stockouts, such as the one used in safety stock optimization, which is a slightly different perspective. But this report is the way many companies operate, so that’s the definition I’m going to use.
Now, let’s imagine conceptually that we have decided that the assortment of the store is doubled. So, we had a fashion store with, let’s say, 3,000 distinct articles. Now, we say that this store is supposed to have 6,000 articles, but in the store, we still have the exact same 3,000 articles. Conceptually, in the computer system driving the store, we have just declared the assortment to be twice as large with more variants, more colors, more sizes.
Did we change anything from the perspective of the clients? Obviously, no. It’s still the same store, same pants on the shelves, same colors, same sizes. Nothing has changed. But in the computer system, we have doubled the range of the eligible assortment. By doing that, we have divided the service level as measured by your computer system by two. We were at, let’s say, 97% service level, now we are at something like 48%, and we haven’t changed anything in the store.
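The arithmetic of the thought experiment, spelled out with the numbers used above (3,000 declared articles at a 97% service level, so roughly 2,910 articles in stock):

```python
# Nothing changes in the store; only the assortment declared in the computer
# system is doubled.
in_stock = 2910                       # articles physically available

declared_assortment_before = 3000
declared_assortment_after = 6000      # same store, same shelves, bigger "eligible" list

print(f"before: {in_stock / declared_assortment_before:.1%}")  # 97.0%
print(f"after : {in_stock / declared_assortment_after:.1%}")   # 48.5%
```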
So, that’s why I say through thought experiments, if you have a proxy that, when you tweak your computer settings while not changing anything in the store, you can bring arbitrarily large changes to your numbers, then your proxy is complete nonsense. Whatever you want as a proxy for quality of service should obviously not depend on technical details within your computer systems. It would be insane if a physicist says, “What is the weight of this bottle?” and the answer depends on whether the computer system is configured in Russian or French. That’s just insane. The answer is obviously completely independent. Or imagine if the weight depends on whether it’s a Linux machine or a Windows machine. Insane. So, you’re looking at characteristics that should be completely independent from your computer system.
What I’ve demonstrated with service level is that by toying with the assortment, you can have wide variations in the service level. This is a demonstration of how insane this measure actually is. My take is that if we have to go to the quality of service, we are back to the idea that if you have something that is fundamentally insane, you should operate without it. Even if you don’t have an alternative, it’s like having a tumor; remove the tumor, and you will be better off without it. Don’t think yet about what you should put instead of the tumor.
Can we have actual high-quality measurements for the quality of service? Yes, we can. This is a completely different track of discussion, and I would prefer not to go into this area. But you see my point. You cannot overcome natural stupidity with artificial intelligence. No matter how sophisticated your techniques, if your premise is very bad, it won’t solve that. If you start with a broken concept, a broken paradigm, it doesn’t matter how much instrumentation you bring on top; your paradigm remains broken.
Conor Doherty: Yes, okay, we can accept that. But the immediate response to that is, when you say these ideas are stupid and the paradigms are broken and they’re not going to lead to better decisions, the obvious response is a CEO who says, “What are you talking about? I did 10 billion in turnover last year using safety stocks, using service levels, using RFPs, using time series forecasting.” While there is no upper bound to how many things can be true simultaneously, even if they look as if they conflict, you surely understand that for certain people, hearing “you’re stupid for doing these things” or “you’re ignorant” or “these are bad ideas,” they will often just point to the bottom line and say, “But look, I’m doing really, really well. What are you talking about?”
Joannes Vermorel: Let’s restart from the beginning. Fashion stores. We have clients, and we’ve had discussions with prospects who became clients over the years. They were telling us they optimize service levels. That’s what they say, and if you look at the process, that’s what is written in the process. But when you start looking at what practitioners are doing, they’re not doing that. We are back to safety stock. It turns out that in stores, again take a fashion store, when the next collection is coming, they suddenly decide that they are not going to reorder nearly as much. They intentionally let the service levels drop quite enormously. Then, when it’s finally time for the new collection, you have a short sale period, and suddenly you have enough space to bring the new collection in.
So, we are in a situation where companies, especially top management, say they use service levels, but the reality is they don’t. People on the ground are doing things that are different. That’s why, once more, when you try to automate, it fails. When you try to automate, you are actually trying to enforce this dysfunctional idea upon your supply chain, and it conflicts with reality, and thus it fails. People go back to spreadsheets.
The interesting thing is that there is an enormous amount of cognitive dissonance in the modern supply chain world. Some of the major tenets, like time series, safety stocks, and service levels, are completely broken. People in practice do things that are completely different, using their spreadsheets. Instead of taking service levels as something to be enforced, they just take them as an indicator and allow themselves a lot of leeway.
If we transform the question to “Is it fundamentally bad to have service levels as an indicator somewhere?” I would say no. It’s just one descriptive statistic among many other descriptive statistics. In this area, you can have plenty of descriptive statistics. They are neither good nor bad; they’re just more or less organized and give you more or less insight on what is happening. But supply chain theory tells you something very different.
They don’t say the service level is an element of descriptive statistics; they tell you it’s your target, and you should produce decisions that match this target. What I’m saying is that people in large companies are almost invariably not doing that, and they’re right. Just like safety stocks, if you ask them, they would say, “Oh yes, we have our targets of service level. We need more maturity, and one day we will do it. But for now, we need something that works.”
We are back to the sort of insane position where practitioners are aware that they are doing something else, and they think about this as something that they will do when they grow up, when they have more maturity, possibly when they have some AI to support them. But it’s not going to happen because the concept is broken. As a piece of descriptive statistics, it is okay. As a piece of policy making for your company, it is completely defective.
Conor Doherty: Well, let me frame this. If the argument is, and you can correct me where I’m wrong, that you have companies with these policies and these metrics, and practitioners are just disregarding them, yet some of these companies are doing really, really well, are you saying that they’re doing really, really well by blind luck and the instinct of practitioners putting their fingers in the air, guessing, and happening to guess correctly?
Joannes Vermorel: No, I’m just saying that many of those problems, you know, as long as you don’t use a completely defective approach, you can have crude solutions that still work for you. You see, the amount of skills it takes to properly manage a local grocery store does not require a PhD from Stanford. You can do that with much less. You can even incrementally discover what works and what doesn’t work.
So what I’m saying is that those companies can enjoy success, obviously not thanks to the supply chain theory. They have people with a little bit of experience who have uncovered some numerical recipes that just kind of work. They work decently enough. The proof that this theory is not working is that all those large companies have tried to automate the processes many times, like pretty much once every five years for the last three decades, and it failed every single time. People went back to spreadsheets every single time.
Why do people go back to the spreadsheet? The safety stock formula is super straightforward. Steering inventory decisions to match service level targets is super straightforward in terms of coding. It is a piece of cake; we’re talking about 50 lines of code in total, maybe less. So if it were working, it would already be deployed, and the work of all those people would already be automated.
My argument is that it’s not, it’s nowhere near being automated because those paradigms are broken and thus they cannot be automated as such. What those spreadsheets used by supply chain practitioners contain are alternative methods that are usually relatively simple, that happen to work, but they are incompatible conceptually with both safety stock and service levels.
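To illustrate how small the textbook logic really is, here is a sketch of a service-level-driven, order-up-to policy applied per SKU in isolation. The SKU data is invented; the brevity of the logic is the point, namely that if this recipe worked, automating it would be trivial.

```python
from math import sqrt
from statistics import NormalDist

def reorder_quantity(on_hand: int, on_order: int,
                     mean_demand: float, std_demand: float,
                     lead_time: float, service_level: float) -> int:
    """Classic order-up-to logic driven by a service level target, one SKU at a time."""
    z = NormalDist().inv_cdf(service_level)
    target = mean_demand * lead_time + z * std_demand * sqrt(lead_time)
    return max(0, round(target - on_hand - on_order))

skus = [
    # (sku, on hand, on order, mean daily demand, std of daily demand) -- placeholder data
    ("A-001", 120, 0, 20.0, 6.0),
    ("B-002", 15, 30, 5.0, 4.0),
]
for sku, on_hand, on_order, mu, sigma in skus:
    qty = reorder_quantity(on_hand, on_order, mu, sigma, lead_time=7, service_level=0.95)
    print(sku, "reorder", qty)
```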
Conor Doherty: Well then, what practical strategies do you think that supply chain practitioners can now use to try and make more economically sound supply chain decisions?
Joannes Vermorel: So you see, if we try to go back to AI, the thing is that you have to give up on this illusion that the concepts that you know, that have been taught at school or with some association for supply chain, those concepts are just dysfunctional. If you try to bring sophisticated instruments, maybe generative AI or deep learning or blockchain or whatever you can think of, to that, it’s just not going to work.
So the first step is to recognize that you have a paradigmatic problem. It’s a big word to just say we have this theory that is all wrong. It turned out that what we were doing pretty much instinctively is kind of the better way. Now, if you want to do it really the fancy way, you can try to formalize this economic instinct, which is just don’t do something that is super damaging and costly for the company. That’s just the more formal way of saying the same thing.
Then maybe once you have the right perspective, you can bring the fancy technologies, and that’s pretty much what Lokad is doing. But the bottom line is that it starts by properly framing the problem with a perspective that makes sense. As long as you’re stuck in a dysfunctional slash stupid perspective, being a virtuoso in technology is irrelevant. That’s the sad part. That’s why I can say with relative assurance that those AI vendors will fail. It doesn’t matter if they’re talented or not, it doesn’t matter if their tech is very good or very bad, if it’s cheap or outrageously expensive. All of that is completely inconsequential. It will not work because the premises on which they work are broken.
Conor Doherty: All right, well Joannes, thank you. I don’t have any more questions, but I will transition now to some of the questions from the audience. Thank you very much. So, in no particular order, referring to the four proofs, your four ways: RFPs, time series, safety stocks, and service levels. If these practices serve companies so poorly, then what, in your opinion, prevents management teams from simply discarding them?
Joannes Vermorel: Changing anything in large companies is difficult, but there is a class of changes that is even more difficult. As a rule of thumb, I have seen that in any company, no matter the size, removing anything is, let’s say, two orders of magnitude, so 100 times more difficult than adding stuff. Adding a new process is easy, adding a new position is easy, adding a new piece of software is easy.
To remove anything is very difficult, particularly in France. But it’s true everywhere. We can joke about France having the Banque de France, an institution dedicated to the management of a currency that has not existed since 1992. We have an entire institution dedicated to managing a currency that has not existed for 30 years. And by the way, it has something like 14,000 employees in Paris. But you see, what happens on the grand scale in governmental settings also happens on a smaller scale in large companies. Bureaucracies have a tendency to grow on their own; that’s Parkinson’s law.
So, the question is, why does management not remove things that don’t work? The thing is that people are already doing something different. The official corporate policy is that everybody is using safety stocks. The reality is that there are so many manual overrides driven by spreadsheets that, effectively, the company is using something completely different. That’s the state of affairs. The charade is still being played that the company is safety stock driven, as if safety stock were still an important feature of the company’s supply chain. It is not. But ultimately, management would say, what do I have to gain by making it official that safety stocks are no more? Ultimately, it doesn’t change anything because people are already not using them.
And it’s the same sort of thing with service levels. Once you have service level reporting, it doesn’t really make any sense. But the upside of removing it in the very short term is limited. In the long term, the upsides are enormous, because it paves the way for doing something that actually makes much more sense. But in the short term, there are limited benefits. Again, adding stuff is much easier.
If we go back to AI, that also explains why there is so much eagerness to embrace AI technologies. It’s purely additive. We are going to add another class of stuff to the organization, and that’s very nice and easy, as opposed to saying we are going to remove a class of stuff that is just in the way, in order to make the company more efficient, more profitable, and better at serving its customers. It’s much more difficult for a manager to say, I’m just going to remove people and things are going to work better.
Just imagine what happened with Elon Musk at Twitter when he said, I just fired 80% of the staff, and Twitter, now X, is more fluid than ever. It has more users than ever, and overall they have added tons of features that the previous team, which was five times as large, could not deliver during the previous decades. That reflects the power of subtracting things, but it is extremely difficult. It is very, very difficult. So I would say those things are not moving because removing anything is extremely difficult, even when it’s critically important.
Conor Doherty: Thank you. Next question, it’s very well written. Considering your historically blunt dismissal of human override, do you consider that to be an example of natural stupidity?
Joannes Vermorel: Human overrides. I mean, it depends. If we are overriding a numerical recipe that is completely nonsensical, it is good. Where things get even crazier is when you end up in these situations where your numerical recipes are nonsensical.
Conor Doherty: When you say numerical recipes?
Joannes Vermorel: It’s whatever computes your supply chain decisions, like how much you should order, how much you should produce, where you should allocate the stock, and whatnot.
So you have numerical recipes that are nonsensical, and thus it is absolutely normal to manually override those insane outputs, the decisions. And what happens now is that you end up with many people in the organization who spend their entire days overriding decisions. As far as I’m concerned, this is necessary, because otherwise the company would just run into a wall because of those completely nonsensical decisions.
Now, what happens is that bureaucracies always expand. That’s Parkinson’s law. Bureaucracies expand. If you have people who spend all their days manually overriding numerical decisions, you will gradually end up with people who spend their entire days overriding numerical artifacts. So what is an artifact? An artifact is just something that exists in your system, like a service level, like a forecast, a daily forecast, a monthly forecast, a budget, or whatever.
Something that you can toy with. Such a number has no tangible effect on your business. It could have a negative effect if there are decisions derived from this artifact, maybe. But very frequently, the artifacts have no bearing on the decisions. So just think of it as toying with KPIs and whatnot. It is going to be inconsequential, except maybe in the eyes of management, because you have a number that looks better.
But again, bureaucracy expands. So you started with a situation where you had people manually overriding decisions, which was necessary. And now bureaucracy expands, and you have a lot of people overriding artifacts, numerical artifacts, stuff that doesn’t matter. That’s going to be people toying with ABC classes, people toying with service levels, people toying with coefficients for safety stocks, people toying with seasonality coefficients, etc. The list is endless.
And what I’m saying is that yes, those numerical overrides are completely insane and useless. And by the way, the approach of Lokad, and that’s why people mention that I’m very dismissive, is that if you have a numerical recipe that is sane, there should not be any need for manual overrides. If you need to manually override its results, it’s because your numerical recipe is insane. I’m talking about decisions here: if a decision is insane, you need to fix the numerical recipe and keep fixing it until there is not a single line left that is insane.
As long as your numerical recipe is producing insane decisions, you need to keep iterating to fix it, no exception. And that’s why, by the way, at Lokad we are generally very dismissive of those manual overrides. Overrides of decisions just reflect that you have a bad numerical recipe. And overrides of numerical artifacts just reflect bureaucratic busywork that is completely pointless in the first place and that could be eliminated entirely without changing anything for the company.
Conor Doherty: Yeah, it’s treating the symptoms and not the cause.
Joannes Vermorel: Essentially, yes, exactly. And also, again, it is acting in the interest of bureaucracies. Again, that’s Parkinson’s law: bureaucracies tend to grow. So if you multiply the number of clerks you have doing those manual overrides by a factor of 10, you will have 10 times as many updates of those values. It will not make your supply chain any better.
Conor Doherty: Well, good enough for me. Thank you. Next question. It’s two parts. How have ERP systems made the problem worse and why can’t they handle probabilistic forecasts? You only implicitly touched on probabilistic forecasts earlier, but feel free to expand.
Joannes Vermorel: So ERPs have made the problem worse, I would say, thanks to market researchers, mostly by making the situation very confusing. First, there should be no P in ERP. It’s really enterprise resource management; there is no planning involved. What you have is a transactional system. It is just there to deal with the transactional flow. It’s pretty much the electronic counterpart of your physical flow. And that is good. It gives you an electronic representation of what is physically happening in your supply chain. That is good.
Now, the problem is planning. The transactional part is what I call a system of records. With planning, we are suddenly entering the territory of systems of intelligence, of decision-making. So why have ERPs made the situation worse? Because vendors realized very quickly, by the end of the ’90s, that systems of records, also known as CRUD apps (Create, Read, Update, Delete), were already becoming commoditized. They were already commoditized 20 years ago.
Nowadays, it’s even more insanely commoditized. And by the way, if you want a real application of generative AI as a productivity tool, it is superb for writing code for CRUD apps. With ChatGPT, you can literally write ERP-like apps super fast, because those things are simple. It’s a lot of boilerplate; you have tons of it. It’s incredibly repetitive. It’s not fancy engineering.
So for those sorts of things, as a productivity tool, AI works incredibly well for ERM, you know, enterprise resource management. Now, back to this situation that has been confused. What you expect from a system of intelligence, from decision-making, is completely different from what you expect from a system of records. One illustration: how many milliseconds can you afford to keep your system busy doing something? If it’s a system of records, the answer is less than a millisecond. Whatever you do, it should be over in a millisecond.
Why? Because your system, your ERM, let’s say, relies on a centralized database, and this is a shared resource for everybody and every single process in your company. Everything converges on this one database. If you freeze that database for a millisecond, that means everything else is going to be delayed by a millisecond. You would say, “Oh, a millisecond is nothing.” Yes, but now you have 500 people doing that. Suddenly it’s not one millisecond, it’s 500 milliseconds of delay, and that starts to become noticeable.
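A back-of-the-envelope sketch of that arithmetic, with purely illustrative numbers (the per-user request rate is an assumption, not a figure from the discussion):

```python
# Illustrative contention arithmetic: if every request briefly serializes
# on the shared database, the individual stalls add up for everybody.
users = 500
stall_per_request_ms = 1       # each request freezes the shared core for ~1 ms
requests_per_user_per_sec = 1  # assumption: one transaction per user per second

serialized_ms_per_second = users * requests_per_user_per_sec * stall_per_request_ms
print(f"{serialized_ms_per_second} ms of serialized work per second of wall clock")
# 500 ms: half of every second is already spent on these stalls, so every
# other transaction in the company starts to queue behind them.
```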
Now, what if a few of those requests freeze your relational core for longer? I’m simplifying for a second. Then suddenly you end up with a system that is very, very slow. Suddenly, scanning a barcode may mean the system takes several seconds to acknowledge what you just did. And this is why a lot of companies complain, “Oh, my ERP system is so slow.” The answer is, invariably, that it is slow because you put stuff in this system that you shouldn’t have.
The ERM, you know, enterprise resource management, should only deal with stuff that can be computed in sub-millisecond time, so super, super simple things. If you do something that is not extremely simple, it means you’re going to take resources that freeze your system for a measurable amount of time. And if you have enough people doing that, and guess what, we’re talking about large companies, so many, many processes and many people, your system is going to become incredibly slow. That’s exactly why ERPs nowadays are still exactly as slow as they were 20 years ago, although in terms of raw processing power we have computers that are at least a thousand times better. So why is it still as slow? Because an equilibrium appears.
If something is slowing the ERP so much that it takes multiple seconds for other people to get the system to respond, then the IT department is just going to shut that thing down and prevent it. And you see that happen. They act as the police of ERP consumption. If someone is excessive, IT is going to step in at some point and prevent this person or this piece of software from creating so many problems for everybody else. So there is this balance, and it converges to an equilibrium: slow but tolerable. Which is why most ERPs are super slow, but not so slow that it would be insufferable. Because if you go into insufferable territory, IT steps in and just kills the thing.
So, back to systems of intelligence. By contrast, if you think about how you should do a store replenishment, you are going to look at years of sales history. You want to look at what happens with thousands, possibly tens of thousands, of clients. It’s obviously something that is going to manipulate a lot of data, and obviously something where you want to invest a little more than a millisecond of calculation. Calculation is cheap.
The problem is that if you have an ERM, your resources are shared with the whole company. So what you want is a system of intelligence that sits outside the ERM, and then this thing can take as much time as it is worth investing to do those fancy calculations. So, if we go back to the initial question, systems of records need to deal with things that are transactional, with very simple sorts of rules.
Probabilistic forecasts are the archetype of the stuff that you don’t want to have in your system of records. As soon as we say probabilistic forecasting, we are discussing distributions of probabilities. Those objects, memory-wise, are fat. It’s going to take a lot of space to hold all those probabilities. You can be very smart in various ways, but let’s state the obvious: compared to the raw data you have, it introduces a lot of overhead. You’re macro-expanding your data to represent all those probabilities.
So, fundamentally, you have something that by design might be very powerful, yes, but pretty much by design it’s not going to be real-time. If you go into fancy probabilistic assessments, you are not in the territory of real-time calculation. You want something where you can allocate gigabytes of memory and spend, let’s be crazy, seconds of calculation. That’s okay. Most supply chain decisions can afford a few seconds of delay, but your ERP cannot.
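To give a rough sense of why those objects are “fat”, here is a back-of-the-envelope comparison; the network size and the 0-to-100-unit demand buckets are illustrative assumptions, not figures from the discussion:

```python
# Rough memory comparison: point forecasts vs explicit probability histograms.
# Illustrative assumptions: 50,000 SKUs x 300 stores, demand distribution kept
# as one probability per quantity from 0 to 100 units, stored as float64.
skus, stores = 50_000, 300
buckets = 101                  # probabilities for demand of 0..100 units
bytes_per_float = 8

point_forecast_bytes = skus * stores * bytes_per_float
histogram_bytes = skus * stores * buckets * bytes_per_float

print(f"point forecasts:        {point_forecast_bytes / 1e9:.2f} GB")
print(f"probability histograms: {histogram_bytes / 1e9:.2f} GB")
# ~0.12 GB vs ~12 GB: roughly two orders of magnitude of overhead before any
# compression tricks -- fine for a batch system, hopeless inside the ERP.
```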
Conor Doherty: Well, again, just to follow up on that point about systems of intelligence and not needing real-time calculations, depending on what it is you’re trying to calculate. Just to give an order of magnitude here, if you take the example of an inventory replenishment order for a store or a client, let’s say 300 stores and, for round numbers, 50,000 SKUs, you would be talking about 10 hours, 12 hours, like overnight processing, to arrive at these decisions, as opposed to the system of records which would just be…
Joannes Vermorel: Yes, but you want to keep your calculation under a certain bound. At Lokad, what we typically do is 60 minutes, but for a completely different reason. So yes, in theory, you could have a calculation that takes 10 hours. In practice, it’s a very bad idea, because if your calculation crashes in the middle and you have to restart it, you are creating operational problems.
So you want to keep your calculation sufficiently short so that when you need to redo it, there is still plenty of time. The second reason, which is even more important, is that you’re not going to get this calculation right initially. As I said, as long as a numerical recipe produces insane results, you need to keep modifying and updating it until it generates no insane decisions whatsoever, which means a lot of iteration.
If you have something where the calculation completes in under 60 minutes, it means that an engineer can do maybe five or six iterations a day. If you have something that takes 10 hours, it means one iteration a day. You really want something where an engineer can iterate many times a day. And frequently at Lokad, when we are in design mode, when we craft a new numerical recipe, we try to keep the calculation under a few minutes so that we can have literally dozens of iterations per day.
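The arithmetic behind those iteration counts roughly works out as follows; the rework time per iteration is an assumed figure, not something stated in the discussion:

```python
# Rough iteration arithmetic: each cycle = one run of the recipe plus the time
# spent inspecting results and reworking it (rework time is an assumption).
working_minutes = 10 * 60      # roughly a 10-hour working day
rework_minutes = 45            # assumed inspection/fixing time per iteration

for run_minutes in (5, 60, 600):   # a few minutes, one hour, ten hours
    iterations = working_minutes // (run_minutes + rework_minutes)
    print(f"{run_minutes:>3}-minute runs -> ~{iterations} iterations per day")
# 5 min -> ~12/day, 60 min -> ~5/day, 600 min -> 0 full cycles within the
# working day, i.e. at best one per day with the run pushed overnight.
```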
Conor Doherty: There are examples though, to switch from, let’s say, retail to something like aerospace, where you would want the decisions to be generated in a handful of minutes rather than an hour. Sixty minutes could be financially catastrophic. So again, it’s not to say that the fastest we can do is 60 minutes; it rather depends on the context of the vertical.
Joannes Vermorel: Absolutely. But you have to appreciate that between one millisecond, which should be your performance target inside an ERP, and a minute, we are talking about almost five orders of magnitude. That is very different. It is literally more than 10,000 times more, which means that you can do things very, very differently.
If you want to operate under a millisecond, it is very, very difficult. Plenty of things are just not possible. Even the speed of light is quite slow. If you are talking about things that operate under a millisecond, light only travels about 300 kilometers in that time. That may sound like a lot, but if you think in terms of a round trip, it means that within one millisecond you cannot reach anything further than about 150 kilometers away.
So, you see, it’s the sort of speed where suddenly any network communication is out of the picture. If you want to stick to sub-millisecond performance, you cannot afford any kind of network communication. Even loading things from a spinning disk is out of the picture: a disk that rotates, a magnetic disk, has a latency of something like 10 milliseconds. So even loading something from disk is out of the picture.
With an SSD, you know, a solid-state drive, you can do it, but even there you won’t be able to do many accesses; maybe a few. So what I’m saying is that there is an enormous difference between what you can do in a millisecond and what you can do in a minute. In terms of computer design, it’s completely different. If you have a minute, you can make a lot of network calls, do a lot of fancy calculations, and load a lot of data. It is vastly easier to engineer.
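As a rough sketch of those latency budgets, using typical order-of-magnitude figures rather than measured values:

```python
# Order-of-magnitude latency budget (typical figures, not measurements):
# what fits into 1 ms versus 1 minute shapes the whole system design.
SPEED_OF_LIGHT_KM_PER_MS = 300                  # light travels ~300 km per ms
print(f"max round-trip reach in 1 ms: ~{SPEED_OF_LIGHT_KM_PER_MS // 2} km")

latency_us = {                                  # rough per-access latencies
    "spinning-disk seek": 10_000,               # ~10 ms
    "SSD random read": 100,                     # ~0.1 ms
}
for budget_ms in (1, 60_000):                   # a 1 ms budget vs a 1-minute budget
    budget_us = budget_ms * 1_000
    print(f"\nbudget: {budget_ms} ms")
    for operation, cost_us in latency_us.items():
        print(f"  {operation:<20} ~{budget_us // cost_us:>7} sequential accesses")
```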
Conor Doherty: Well, Joannes, thank you. There are no further questions. Thank you very much for your time. It’s been about an hour and a half, so I’ll give you a minute for a closing thought. Anything you want to say before we leave?
Joannes Vermorel: No, I would just wish a lot of mental fortitude to all the people who are engaged in AI initiatives for their supply chain, because, well, those initiatives will fail. I’m very sorry. I’m very sorry, guys. It just so happens. Don’t take it personally. To those people, I think you can take comfort in the fact that your skills are irrelevant here. And by the way, the skills of your vendor are also irrelevant at this point. So it doesn’t matter if you’re good or bad. That way, you don’t have to think too poorly of yourself when facing the failure. Don’t take it too personally. The failure was guaranteed. It was doomed from the start.
Conor Doherty: Yes, okay. Well, on that cheery and festive note, Joannes, thank you very much for your time and thank you all for watching. We’ll see you in 2025.