00:00:00 Introduction to the interview
00:02:09 Quality vs cost in supply chain decisions
00:05:16 iPhone as a quality example
00:08:36 Decision-making and optionality in supply chain
00:11:47 KPIs for assessing supply chain performance
00:14:27 Service levels as performance measure
00:17:24 Importance of relevant quality in product design
00:20:42 Complexity beyond human mind in supply chain
00:24:11 Impact of AI and automation on supply chain
00:27:59 Use of large language models at Lokad
00:31:21 Speed of modern computers and AI cost
00:34:34 Sourcing analysis and AI’s impact on cost
00:38:17 Cost trade-offs in supply chain
00:41:38 Deciding number of competitors to monitor
00:45:10 Comparing software sophistication to headcount
00:48:26 Investing in understanding supply chain drivers
00:51:43 Market situation cannot be extended into future
00:54:23 Profit as a measure of decision quality
00:58:12 Complexity of supply chains due to digitalization
01:00:52 Amazon’s success and growth strategies
01:03:24 Encouragement against fearing supply chain complexity
01:06:01 Being approximately correct is better than exactly wrong

Summary

In a dialogue with Lokad’s Head of Communication, Conor Doherty, Lokad CEO Joannes Vermorel discusses the quality-cost ratio in supply chain management. Vermorel emphasizes that quality refers to decision-making, not product attributes, and that client-perceived quality may not align with optimal supply decisions due to cost. He criticizes traditional KPIs, arguing they don’t reflect genuine quality. Vermorel also discusses the role of Large Language Models (LLMs) in supply chain management, noting they can lead to smarter decisions but can inflate IT budgets. He suggests the quality-cost dilemma is a meta game, requiring software engineering to solve supply chain problems and assess trade-offs.

Extended Summary

In a thought-provoking conversation between Conor Doherty, Head of Communication at Lokad, and Joannes Vermorel, CEO and founder of Lokad, the duo delves into the intricate concept of the quality-cost ratio in supply chain management. Vermorel elucidates that quality in the context of supply chain refers to the quality of decisions rather than the physical attributes of the products. He emphasizes that the highest quality of service from a client’s perspective may not necessarily align with the highest quality in terms of supply decision for a business due to the associated costs.

Vermorel further elaborates that while investing more resources, people, and software can lead to better decisions, these decisions should not be confused with the quality perceived by the client. He acknowledges the subjectivity of measuring the quality of decisions, contrasting it with the more straightforward assessment of physical products. However, he argues that the perceived quality of a product often goes beyond its physical attributes, using the iPhone and its app Marketplace as an example.

In Vermorel’s view, supply chain management is a mastery of optionality, where the quality of decisions can be elusive. He suggests that some metrics, such as decisiveness, can be measured objectively. He criticizes the use of KPIs like service levels and demand forecast accuracy to assess supply chain performance, arguing that these don’t reflect quality in a genuine sense. He describes these KPIs as numerical artifacts that may not correlate with the quality and success of a supply chain.

Vermorel also discusses the role of Large Language Models (LLMs) in supply chain management. He explains that while LLMs are expensive, they can result in smarter, higher quality decisions. He warns that companies are spending large amounts on these systems, which can significantly inflate their IT budgets. He suggests that there are instances where it would be more cost-effective to use a cheaper version of LLM.

Vermorel believes that with LLMs, companies can engineer the quality of their decisions and manage the cost trade-off. He notes that this is a concept not often discussed in mainstream supply chain management. He explains that modern supply chains are executed by software, which can be engineered. He notes that there are easy metrics to measure the cost of running software, such as time, memory, and disk consumption.

Vermorel argues that the quality-cost dilemma is about engineering software to solve supply chain problems and assess trade-offs. The focus should be on creating software that can determine what better quality means for clients. He emphasizes that the quality-cost dilemma is a meta game that companies need to play to engineer superior supply chains. He likens it to a chess game that can only be won through software.

In conclusion, Vermorel advises identifying the decisions in the supply chain and assessing what quality means in a broad sense. He suggests identifying 20 dimensions to the supply chain as a more comprehensive approach than simplistic frameworks. This conversation serves as a reminder of the complexity and nuance involved in supply chain management, and the need for a more sophisticated approach to decision-making and quality assessment.

Full Transcript

Conor Doherty: Welcome to Lokad. Every business decision reflects a careful balance between quality, how good a thing is, and how much it costs. But does this quality-cost ratio extend to supply chain? Here to discuss is Lokad founder, Joannes Vermorel.

So Joannes, the quality-cost ratio, much like scarcity, I’m sure people have a general idea of what other people mean when they use it. But could you give a brief overview of what that means and then connect it explicitly to the topic of supply chain and why it’s important?

Joannes Vermorel: Quality is an attribute that you would apply much more to the physical product itself. For example, an iPhone is a high-quality smartphone. You can have a very cheap car, a very high-quality expensive car, and everything in between. That’s probably the easiest way to approach that. But when we go to supply chain, it turns out that supply chain is not engineering the products, it’s not producing them. So when we think in terms of quality from a supply chain perspective, it’s not exactly the physical attributes of the products that are of interest.

Although this may be a supply chain problem to some extent, and we may revisit that later, fundamentally it is not, just because it's not the responsibility of supply chain.

If we include that in supply chain, then supply chain becomes so vast that it's kind of meaningless, because supply chain becomes almost indistinguishable from the whole business. So, for the sake of clarity, let's say that the quality, meaning the physical attributes of the products, is not exactly a supply chain matter. That's not what I mean by quality in a supply chain context.

By quality in a supply chain context, I refer to the quality of the decisions. The first gotcha is probably that the quality of the decisions is not the quality of the service as perceived by the client. For example, never facing a stock out would be the highest level of immediate service in very simple retail settings. This is the highest quality of service from the client perspective. But is it the highest quality in terms of supply decision for your business? Absolutely not, because it’s going to be unsustainably costly for your business. So the quality of the decisions, that’s what I mean by the quality versus cost.

In order to get a better decision, we can invest more resources, more people, more software, eventually invest in research and development to improve the whole thing. Those are all elements on the cost side, and then we get decisions that have higher quality, but higher quality from a supply chain perspective, which, again, should not be confused with the quality perceived by the client.

Again, supply chain is all about trade-offs. So a very high-quality decision is a decision that really carefully balances all those trade-offs. As I say in my series of lectures, supply chain is the mastery of optionality. So if we say super high quality, what we mean is a very successful execution, a high level of mastery of this game of making supply chain decisions over and over.

Conor Doherty: To immediately follow back on that, you give the example of the iPhone. If you talk about the quality of that and quality assurance and all the protocols that go into a quality cost assessment, you can point to the physical properties of an iPhone and say, here’s the quality of the chip, here’s the quality of the storage. I can measure that and I can say well that’s high quality. But when you talk quality of decisions, you’re traversing into very subjective territory.

Joannes Vermorel: On the surface level, yes, when you have physical products it's more straightforward to assess quality. But only on the surface level. If we go back to the iPhone, which is actually quite a good example: in the first year of the iPhone, the sales were not that great. If I remember correctly, it was decent for Apple, especially since Apple at the time was very much struggling, but it was relatively modest.

The iPhone exploded a few years later, after the introduction of the App Marketplace. That's when Apple decided that they were going to have this marketplace with literally one-click install for apps, where you could just click on an app, pay $1, and then you have an app that is perfectly compatible and super easy to install on your smartphone. And that's where the popularity and the perceived quality of the iPhone exploded. If we look at the physical attributes, yes, it was a very nice device, but the reality is that as a phone it was actually quite crappy in terms of just being able to place phone calls. It was not such a great phone at doing mission number one, which was to receive and make calls with this device.

But the perceived quality exploded when they introduced the App Store because suddenly this smartphone format made complete sense. You had indeed a mini computer, and you could do so much, and then the perceived quality is that it was just not a phone anymore, it was a smartphone. But people do not remember that it didn't become something until about a year afterward. So the point that I'm making is that for quality, yes, there are some very basic, very fundamental aspects: better materials, better tolerance to stress and fatigue so that the thing doesn't break down or degrade. It should be light. In general, anything that you want to move, if it's lighter, it's typically better, etc.

So yes, there are some very fundamental aspects, but also, when we go to physical products, quality is frequently more than meets the eye. There can be all the expectations that you can do more with the product, that you have an ecosystem, that you have all sorts of accessories that play with it, or even if it's just decorative, that it will look good in many situations. You know, an object that is very decorative and will look like a very nice piece in your apartment, even across very different styles of apartment, that might be some sort of super intangible quality, but it's still there to some extent.

Conor Doherty: But very little of that translates to supply chain decisions.

Joannes Vermorel: Quality, yes. For physical products, at the very least, you have a lot of simple backup metrics that are very direct, but still, you have this depth that is difficult. And then when we go to supply chain, supply chain is a mastery of optionality, so it's literally about observing options and, at some point, deciding among those options that you've actively cultivated that you're going to pick one, and it's going to be your decision.

Okay, that’s super abstract, so the quality becomes something that is very elusive. Although even if we are talking about something that is fairly elusive, like the quality of decisions, there are still a few metrics that are not that difficult. For example, in the military, they have this saying that the worst plan is no plan, and that there is nothing worse on the battlefield than a general who is just indecisive. Indecisiveness is almost always wrong. That means that even if your decision is to do nothing and wait for the enemy to make a mistake, that's very different from being indecisive and not doing anything.

It’s, no, I have decided that the best decision that we have is we wait until the timing is right, and that’s very, very different from, I’m indecisive, I don’t know what to do, so I’m just panicking, and I do nothing. You know, that’s a completely different mental state, and I would argue that although the decision is kind of the same, you don’t do anything, the quality of one decision, like we wait purposefully, intentionally, knowing that we know what we’re waiting for, versus we’re just indecisive and it’s in a semi-state of panic, it’s very, very different decisions in terms of quality.

So even, you know, decisiveness for example is something that you can measure in a way that is relatively straightforward. In supply chain settings, it would be: are you able to make your decision promptly, or does it take forever for no good reason at all? That would be one metric, and measuring the time it takes to get to a decision is something that can be done objectively. So, to some extent, you have some easy metrics, but I would say they're not very, very good. Unlike the situation on the battlefield, in supply chain there are rarely things that are super urgent in the next seconds or minutes. So, it's not actually that clear.

Although, obviously, if it takes four months to make any kind of decision, then you're probably very bad. But yes, you have those elements that are more difficult, it's more abstract, and also it is incredibly open-ended. There is no clear limit whatsoever to what you could look at to assess the quality of those decisions.

Conor Doherty: The analogy of the general on the battlefield: any decision is better than complete indecisiveness. Well, I mean, there are already KPIs that companies will use to assess the quality, or the performance, let's just say performance for now and then we can talk about quality, of the supply chain. So, for example, service levels for any given reference, or in some cases how accurate a demand forecast is. There are KPIs to tell you, well, that was 50% accurate, 60% accurate. Are you saying that that's better than nothing?

Joannes Vermorel: Not quite. First, because those KPIs don’t really reflect quality in a genuine, deep sense. They are just numerical artifacts. Most of those KPIs are just numerical artifacts.

Conor Doherty: What do you mean when you say numerical artifacts?

Joannes Vermorel: I mean numbers that are defined according to some straightforward mathematical definition. But why would this mathematical definition have any correlation with quality in a genuine sense?

Conor Doherty: You mean they’re just numbers on a page?

Joannes Vermorel: Yes, they are just numbers. And not just any kind of numbers. Numbers that are typically derived from textbooks or formulas. For example, if I say mean square error for the forecast accuracy, this is a very popular metric. That’s a metric that you will find in many textbooks, mathematical textbooks. Why do you have this metric? Well, you have this metric in textbooks because you have a lot of theorems, that’s the norm two, that’s the mean square metric, you have norm one, norm two, etc., and you have a lot of theorems, mathematical theorems, statistical theorems that are associated with this metric.

The problem is that it is an inward-looking perspective. You have the mathematical world where people have said, “Why are you interested in this norm two?” The answer is because I have so many theorems that are associated with this norm two. I can play with that, I can elaborate many abstract constructions and do a lot of things mathematically speaking. Fine, that makes it an interesting mathematical object, just like prime numbers for example. Prime numbers are fascinating mathematical constructs. They are very real in a mathematical sense as well. But that alone doesn’t prove that there is any kind of correlation to the sort of quality and success that your supply chain will enjoy.

Conor Doherty: Tie that nice analogy then to service levels.

Joannes Vermorel: Service level, why should it have any kind of correlation? Yes, to some extent, if you have a 0% service level, you don't sell anything, so it looks kind of bad. If you have a 100% service level, it is also bad because it means that you always have inventory write-offs. Because if you do not allow yourself to ever have a stockout, that means that you can't ever liquidate anything. So, the extremes are pretty bad. But in between, anything goes. You know that the optimal is not at the extreme but somewhere in between; exactly where, my guess is as good as yours. It's just very fuzzy.

I'm very suspicious when people give me a percentage as supposedly a measure of performance or quality. Where is the reasoning? It's just something that falls out of the sky. You give me a formula, and unless you have a very strong argument to back it up, I have no reason to believe that this numerical artifact is appropriate. It's just something random that you pulled out of a mathematical or statistical textbook.
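To make the point about the extremes concrete, here is a minimal, purely illustrative sketch in Python, with made-up cost parameters rather than anything discussed in the interview or taken from Lokad: stockout costs fall as the service level rises while inventory and write-off costs climb steeply near 100%, so the total cost bottoms out strictly between the extremes, and exactly where it bottoms out depends entirely on the assumptions.

```python
# Purely illustrative toy model (hypothetical parameters, not a Lokad recipe):
# why neither 0% nor 100% service level is optimal, and why the "right" level
# shifts as soon as the cost assumptions change.
import numpy as np

service_levels = np.linspace(0.01, 0.99, 99)

stockout_penalty = 100.0  # assumed cost per unit of unmet demand
holding_cost = 20.0       # assumed cost of carrying / writing off inventory

# Crude stand-ins: lost sales shrink as the service level rises, while the
# inventory needed to reach a given service level blows up near 100%
# (the usual safety-stock effect under uncertain demand).
lost_sales_cost = stockout_penalty * (1.0 - service_levels)
inventory_cost = holding_cost * service_levels / (1.0 - service_levels)
total_cost = lost_sales_cost + inventory_cost

best = service_levels[np.argmin(total_cost)]
print(f"Toy optimum near {best:.0%}; change the two cost parameters and it moves.")
```

Under these assumptions the minimum lands around 55%, but the only robust conclusion is the one made in the conversation: the optimum is interior and highly sensitive to numbers that are themselves debatable.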

The interesting thing about this sort of KPI mindset is that when it comes to quality, if you look at, let's say, textbooks about design and manufacturing, there is a lot of nuanced discussion going on. For example, take The Testament of a Furniture Dealer from the founder of Ikea, which is a beautiful document, super short, I really recommend it to the audience. One of the points, I forget if it's 11 or 14, there are like 20 points, and one of the points of the Ikea founder is, "Do not be misled by some easy metrics about quality for the products."

For example, he says: if you want to have a nice surface, keep in mind that only the surface that people can touch and see matters. And he's talking about furniture. He said, "Do not pride yourself on, for example, having a high-quality surface that is going to be super durable, super smooth, super good-looking over the years if people can't see it and can't touch it." In the text he was referring to something like the back of the furniture, or something that is under the table, something that you will never really appreciate. So, he was saying that when you invest in quality, make sure that it's something that is truly relevant and not some kind of abstract measure of quality like, "I have high-quality materials or high-quality surfaces everywhere," including the surfaces that do not matter in the eyes of the client.

And why? He was saying, in this short point, that if you do that, then customers end up paying for qualities that they will not enjoy. And from his perspective, that was bad. Every single cent that they pay should be for quality that they will enjoy. That was very nuanced. So I would say that in design and manufacturing, you have this sort of very nuanced discussion about quality versus cost and exactly how you approach it.

But if we go to the supply chain world, with its decision-making processes, this discussion is absent, completely absent. In mainstream supply chain theory, and I would even say mainstream business theory, you know, MBA-style business studies, it is very much absent when it comes to quality of decisions. People have reasoning that is very, very binary. I've never seen, for example, a textbook about supply chain, or even general business studies, discuss in depth the spectrum of investment and resources that you can spend to make your decisions a little bit better, until the diminishing returns undermine your efforts and the cost exceeds what you gain from making the decision better.

And again, if you think of this question of chasing the right service level, for example, companies would just say, "We have those targets of service level." But what about the investment that you make, both capex and opex, into chasing the right service level? Again, the service level is not a great KPI, but just to keep it simple for the audience, I will refer to it as something they're at least familiar with. So, even if you pick a service level and you say, "This is my target, this is what should be the best trade-off for my supply chain and my company," what is the quality of this assessment? You came up with a number, let's say 95%, but is it the best number you could come up with, and should you invest more to refine this number further or not, and why?

And it's typically never discussed in supply chain textbooks. People just give you a recipe and say, "You apply that," and then you have two situations: are you compliant with your process or non-compliant? And that is it. The spectrum of refining what compliance even means, what quality means, what direction you should be looking at, and what the journey should even be, is just absent.

Conor Doherty: I don’t disagree with that, but there are two points. The first one would be, isn’t it possible or reasonable to say that the existence of these sort of binary acceptable/not acceptable, good/bad demarcations that lack finesse, aren’t they a result of the fact that what you’re describing is a level of complexity that goes beyond the human mind? And that’s why there are these very crude characteristics like, yeah, go, no go, good, bad. And I agree, it lacks sophistication, but it’s a result not of stupidity, just how exactly can you balance, how can you spin millions and millions of plates?

Joannes Vermorel: I would agree that it's the result of the human mind, but not necessarily in the way people would see it. You're dealing with humans, and humans are incredibly complicated. It's not that the human mind has some limitation, it's that you're dealing with a cohort of people who are individually incredibly complex. So indeed, you have to resort to very simple criteria, not because your mind is limited, but because you're dealing with people who are so incredibly complex that you hit diminishing returns super fast, just because it is incredibly difficult.

Let's say you have like a hundred demand and supply planners. It's an immense amount of effort to achieve those very high levels of finesse and whatnot. So, I agree with the statement that it's too much, but not so much because there is some limit in our understanding. We can do a lot. It's just that you're dealing with people who are so incredibly complex and nuanced, and because they are complex and have a will of their own, most attempts at being hyper-rational are just going to backfire on you. That's the curse: if you try to incentivize people, they will game it and respond badly. Usually keeping it super simple is a safe bet.

Another topic is that supply chain management in the 21st century should not be operated through people at all levels. Yes, at the top we have people, but the execution layer should be mechanized entirely.

Conor Doherty: My next question then: the absence of this perspective that we're outlining from traditional textbooks is surely a consequence of the fact that we now live in an age where we have machine learning, AI, and automation on a scale and to a degree of granularity that didn't exist five decades ago, or even two decades ago. So, your response?

Joannes Vermorel: That’s where it gets very interesting. If you say that the decisions are going to be engineered, that’s going to be a machine that generates those smart decisions. Yes, it’s artificial intelligence. Not general artificial intelligence, but it is artificial. If it is repetitively making a series of good decisions, we can agree that at least it has a modicum of intelligence. It is intelligence in its own, albeit limited, way. It’s certainly not dumb.

Large Language Models (LLMs) demonstrate in ways that are very straightforward and illustrative what I mean by quality versus cost. If you've played with, let's say, ChatGPT-3.5 versus the paid version that is GPT-4, you will see that if you pay more, you will get something that is smarter. You have a spectrum of intelligence with these LLMs, from small models that are cheap and fast to bigger models that are slower, more costly, and indeed much higher quality.

You can experience this for yourself in ways that are very direct. You can interact with a sort of dialogue and try to get this LLM to solve a problem for you. You can try that with GPT-2, GPT-3.5 and GPT-4 and you will most likely be baffled by the level of intelligence that you get. It’s very granular. There are things that will work and that will confuse smaller models, and then when you ramp up to bigger, smarter models, you get discussions that have more depth, where the answers are more nuanced, where it captures the intent of your question better, and so on.

You can see for yourself what it means to have higher quality decisions. You ask a question to the LLM, it gives you an answer. That’s the sort of quality that is at stake here. Even if this perception of quality, for example, a very good plain text answer to a question, is very elusive, you can get the vibe of this spectrum of quality literally in minutes. It doesn’t require you to have a PhD in machine learning. You can play with GPT-3.5 for 10 minutes, play 10 minutes with GPT-4, and you will get it. You will see this sort of extra quality that there is in every single answer that the system produces.

This sort of nuance has existed at Lokad for almost a decade. But we didn't have any way to demonstrate it, because supply chain decisions are frequently very abstract and somewhat opaque; they relate to a supply chain you might not be familiar with, and it's not exactly something you can point at and touch. It's not the sort of thing that is easy to demonstrate, and even if you can demonstrate it, people are not going to perceive this nuance between lower quality and higher quality, just because they are lacking information and context and whatnot.

But LLMs were a breakthrough in that suddenly, you could just play with that and see, “Oh yes, for this amount of money, I get a decision that is that much better and it’s kind of obvious.” And then you realize that you have some situations that do not call for smarter decisions. It’s not always better. You have some stuff where just getting the answer faster is good enough. This is the right trade-off. And you realize that in terms of intelligence, higher quality is not always just better. You have a trade-off at some point. Fast is better, actually, rather than smarter.

Conor Doherty: How does, because I know that we now use LLMs as well as part of our offering, how exactly does the inclusion of LLMs change the quality-cost ratio, which is already quite abstract? Now we’re adding another layer of abstraction, which is LLMs, but pragmatically or practically speaking, in an example preferably, how does this influence the quality-cost ratio from the supply chain perspective?

Joannes Vermorel: It does change, because LLMs are so expensive for now. I mean, the audience may not realize it, but LLMs, they're great, but they're costly. As a rule of thumb, processing one kilobyte of data with an LLM costs you something like a million times more than doing pretty much any other sort of calculation with the same kilobyte of data. So, literally, LLMs are orders of magnitude, and I don't mean one or two, I mean six or seven orders of magnitude, more expensive per kilobyte of data processed, and slower as well, compared to any other sort of calculation that you have.

Conor Doherty: You’re talking quantitative or qualitative, or both?

Joannes Vermorel: Just factual metrics: how much time and how much money it costs you to process a kilobyte. The fact that you can see the text being streamed in front of you, people think it's cool, but as a computer scientist, I think, "Wow, that's so 1950." You know, computers are now so fast that normally you can display, I would say, thousands of lines in milliseconds. When you have a well-designed webpage, it will display an endless wall of text in milliseconds, and you will not see the text being printed one character at a time. Why? Because it's so fast that it is below the perception threshold.

If you go back to the 1960s, you could see, in very old movies, the text being printed a character at a time. In the James Bond movies of the 60s and whatnot, you will see the sort of ancient terminals where the text is displayed one line at a time. And why is that? Because those computers at the time were so sluggish that you could see it. Nowadays, you click a webpage and bam, it displays; and usually when a page is slow, it's because you're loading the equivalent of tens or hundreds of thousands of pages of text in that webpage, which doesn't make any sense, whatever, bad software engineering. But bottom line, text should be instant. I mean, it cannot be truly instant in the physical sense, but walls of text should appear so fast that it's way beyond the perception threshold of the human mind. The fact that you can see the thing being displayed just tells you that this thing is impossibly slow.

So, back to your question of why it should have an impact: well, LLMs are very costly, so you have to pay attention. OpenAI has such a crazy valuation because, you know, investors are not idiots. They see companies throwing money like crazy at those systems, millions of dollars. Companies say, "Oh, we want to save money on IT," and bam, OpenAI comes along, and there are companies that say, "We are proud to spend a million dollars a month on those LLMs." Congratulations, you've just exploded your IT budget. Maybe you have a solid reason to do that, but let's make no mistake, it is costly.

And there are even situations at Lokad where we say, "No, at this point, it's actually cheaper to have an expensive white-collar employee in Paris do it." So, again, I think it matters in this way precisely because LLMs are expensive: you have to pay attention, and you will see that if you're doing it right, there are places where you need to say, "No, we are not going to do that because it's just too costly," or "We need to fall back on some cheaper version of an LLM that is not as intelligent, because if we do it with the most expensive stuff we can get from the market, it's going to be too expensive."

Conor Doherty: You can correct the details where I'm wrong, but I remember one of the conversations we had that kind of inspired the idea for this recording. You gave the example: imagine we're a company, we place purchase orders, and our supplier is a bit unreliable. We suspect they're unreliable, we suspect there might be a better option on the market, we could source from somewhere else, but there's only three of us on staff. Am I going to dedicate someone's time to doing an analysis to find other potential suppliers? So, you could use an LLM to perform an automatic sourcing analysis each time you place a purchase order, and then review that at your leisure, and that would be significantly cheaper than dedicating your mental bandwidth and your time and your effort, etc. And then that scales. So, again, that's what I'm driving at in terms of how that impacts the quality-cost ratio.

Joannes Vermorel: It does scale, but it's not free. If you decide, for example, to do that at every purchase order, what will be the cost? Just to do one sourcing operation, you are going to scan web pages with LLMs; you're going to scan maybe up to a megabyte of text, because you will analyze the web. It's not cheap. You may end up having to compose emails, and maybe have the LLMs, with a few scripts, send an email, process the answer, and do some back and forth just like a human does.

Yes, you may have something that is completely automated, but you may realize that every single time you trigger a sourcing survey like that, it costs you something like $5 worth of LLM. Yes, it's cheap; it is certainly cheaper than having one person spend two days on the case, but it's not exactly free. If you're willing to spend $5 every single time you place a purchase order, you may realize at the end of the year that you've spent a lot of money. Maybe you don't want to do that always.
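As a quick back-of-the-envelope check of that "it adds up" point, here is a minimal sketch using the $5-per-survey figure from the discussion and purely hypothetical purchase-order volumes:

```python
# Back-of-the-envelope: annual bill for triggering an LLM sourcing survey on
# every purchase order, at the ~$5-per-survey figure mentioned above.
# The purchase-order volumes are hypothetical.
cost_per_survey_usd = 5.0

for purchase_orders_per_week in (50, 500, 5_000):
    annual_cost_usd = cost_per_survey_usd * purchase_orders_per_week * 52
    print(f"{purchase_orders_per_week:>5} POs/week -> ${annual_cost_usd:>12,.0f} per year")
```

At a few thousand purchase orders per week, the bill passes the million-dollar-a-year mark, which is exactly the kind of trade-off Vermorel suggests engineering rather than ignoring.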

Conor Doherty: But that’s an extreme. Again, that’s another extreme. There will be a sweet spot.

Joannes Vermorel: Exactly, that's where we have this spectrum. In the mainstream perception, when it is about people, you just think adequate or inadequate. You go for a process and just enforce it. You may be aware that there is a spectrum, but you address it in a very crude way, such as: every supplier is revisited once a year, period. That's your process, keep it simple. Instead of trying to have a nuanced approach to the spectrum, you just have the hammer of the process, and you resolve it in a very binary way. But with LLMs, you can engineer your spectrum. You can say, well, I can go from re-sourcing my options at every order, that's one extreme, to doing it just once a year, that's the other extreme. Anything in between is acceptable, and you can work with that.

It's interesting because you can engineer the quality of your decisions, the quality of your optionality. And then you have a real cost trade-off to deal with. That's something that the mainstream supply chain perspective does not even remotely discuss. I've never seen any supply chain textbook that says how you're going to engineer the process that generates the decisions so that, per dollar invested, you get the biggest bang for your buck at the decision-generation level.

Conor Doherty: How exactly do you delimit or identify the gradations along that spectrum from that’s an extreme to that’s slightly less extreme, all the way back down to the far end which is completely unacceptable? Is it possible to quantitatively identify the steps between each of these modules?

Joannes Vermorel: To some extent, it is. From our perspective, a modern supply chain is executed by software. This decision layer is a machine; it is a complex piece of software with a set of numerical recipes. You can reason about the limits of that. Even so, the people dealing with the engineering of it are still human, so there is a limit to the recursion: at some point you have to decide how many supply chain scientists you want, and that's a judgment call. But at least the base layer that generates the decisions is a machine, and it can be engineered.

If it’s a physical product, you have some easy metrics. If it’s a piece of software that you have to run, you also have a series of easy metrics, especially on the cost side. How much time it takes, how much memory it consumes, how much disk it consumes, etc. So you have all of that and then you can actually see when you have a whole spectrum of things that are optional, that you may run or not run. For example, you can decide that you’re going to do your pricing analysis with competitive intelligence. So you will get the data of your competitor. But how many competitors are we talking about? Scanning the web is not free, it costs money.

For those who are familiar with the scraping business, web scraping, getting the websites of your peers, there is a substantial cost associated with that. If you want to rescan every page of your competitors daily, the cost is non-trivial, especially if your competitors have tens of thousands of products on display. So how many competitors are we talking about? Do you want to scan your number one competitor, your top three competitors, or your top 20 competitors? The cost is pretty much linear: your costs will grow pretty much linearly with the number of websites you want to monitor. But the information that you get obviously comes with diminishing returns.

Your competitors also monitor their competitors. So if you monitor, let’s say, your top three competitors, it turns out those competitors also kind of monitor their top three or five competitors. That may include companies that you do not monitor. So in the end, it’s everybody monitoring kind of everybody. If you look at the graph of who is monitoring who, you will see that it’s a very highly connected graph where pretty much everybody is monitoring a bit of everybody. The big guys are going to monitor other big guys plus maybe one smaller competitor. Smaller competitors are going to monitor a few smaller competitors and just one big guy, just for the sake of it.

Although it is difficult, it is not impossible. You can get this sense that there are diminishing returns. When you run your algorithms with or without a data set, you can get a sense of whether it really improves the results or merely nudges them. For example, say you want to do a pricing optimization, and you realize that up to three competitors, each one really changes the prices quantitatively; adding one extra competitor really changes the price that you get at the end.

If I trust my numerical recipe, and adding this third competitor to the mix changes my prices on average by, let's say, 0.75%, less than 1% but still 0.75%, okay, that's not insignificant. But then you add the fourth competitor and you see that it's a 0.1% change. I don't know whether this 0.1% is super critical to my business, but it is an upper bound on the amount of profit this thing can generate. In the best case, it's 0.1% of margin, if the price was nudged in exactly the right direction every time; but still, that puts an upper bound on the impact it could have. And here you can say, well, it feels really super small, so I would say: diminishing returns, I do not do that, because the cost of this fourth competitor may not be worth it.

So, you see, there are typically ways to approach that. And we can see this sort of nuance and gradation; it is typically something that emerges when you start using numerical recipes that are more sophisticated, more intelligent, that can do more. If we want to use a human analogy, it's like when you start using software that is very sophisticated and smart, it starts looking a bit like your headcount: how many people do I want to tackle this problem? Except that you have much more direct control, and if you want to scale up or down, you don't have to fire people or care about their ego. This allows you to engineer the whole thing, as opposed to dealing with people, who typically respond negatively to you trying to engineer their day-to-day process.
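A minimal sketch of the diminishing-returns reasoning above: the scanning cost grows roughly linearly with the number of monitored competitors, while the marginal impact on prices, and hence an upper bound on the extra margin, shrinks quickly. The revenue and scraping costs below are hypothetical; the uplift percentages simply echo the 0.75% and 0.1% figures used in the conversation.

```python
# Illustrative only: weigh the roughly linear cost of scraping one more
# competitor against an upper bound on the margin its data could possibly add.
# Revenue and scraping cost are hypothetical; the uplift fractions echo the
# 0.75% / 0.1% figures used in the conversation as the marginal price impact.
annual_revenue_eur = 20_000_000            # hypothetical retailer revenue
scraping_cost_per_competitor_eur = 50_000  # hypothetical yearly cost per monitored site

# Upper bound on extra margin attributable to the nth competitor's data,
# as a fraction of revenue, shrinking as n grows (diminishing returns).
marginal_uplift = {2: 0.0150, 3: 0.0075, 4: 0.0010, 5: 0.0002}

for n, uplift in marginal_uplift.items():
    upper_bound_gain_eur = annual_revenue_eur * uplift
    verdict = "may be worth it" if upper_bound_gain_eur > scraping_cost_per_competitor_eur else "skip"
    print(f"competitor #{n}: gain <= {upper_bound_gain_eur:>9,.0f} EUR "
          f"vs cost {scraping_cost_per_competitor_eur:,.0f} EUR -> {verdict}")
```

Under these made-up numbers the third competitor clears the bar while the fourth does not; the point is not the specific threshold but that the comparison itself can be engineered instead of being settled by a one-size-fits-all rule.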

Conor Doherty: It occurs to me that in the classical perspective, we didn’t have the quality-cost dilemma because supply chain was governed by very simple heuristics. Now you’re saying that with the advancement of technology, we can actually quantify the quality-cost ratio of supply chain decisions to whatever degree of precision we want. This leads to the next question: when you break apart those two concepts, quality and cost, using software or AI to evaluate supply chain decisions, cost is understandable. But is quality still a subjective sentiment? Or are you talking about the return on investment for that cost?

Joannes Vermorel: First, and I will jump to your question, but I would point out to the audience that most of the trade-offs presented in supply chain, let's say the typical triangle of cash, cost, and service, those sorts of things, this is what I really like about this quality-cost dilemma: it really elevates the topic. If we take this triangle, I think there are many more dimensions than that. It's not a triangle, it's an n-dimensional dilemma where you have hundreds of things pulling in all directions, all sorts of constraints, drivers, and whatnot. So it's literally trade-offs all over the place, with a hundred dimensions and more. That's what it looks like.

And the interesting thing about the cost-quality dilemma, in this game of playing supply chain, is that it elevates the topic. It is not about those trade-offs. It is about the meta problem of how you engineer a piece of software that solves this problem, because it is through this quality that you will do this kind of assessment of all those trade-offs. So when we say this quality-versus-cost trade-off in the mastery of optionality, what we're saying is: do we invest in figuring out those 100 drivers and constraints that frame the supply chain game you're playing? That's a very meta perspective. Instead of thinking, "Do I have the right service level?", you're thinking, "Do I have software that is going to figure out what better quality means for my clients?" That's very meta. That's what we are talking about.

Back to your question, and I got distracted a little bit with your question. Sorry.

Conor Doherty: Do you measure quality purely in terms of subjective sentiment or is it driven by financial return?

Joannes Vermorel: I would say, the theory would be it should be purely quantitative returns. But, and that’s where in practice, it’s going to be purely subjective. So that’s very strange because you say, “Oh, you just told me it is in theory completely quantitative, but in practice, it’s going to be completely subjective. No contradiction there?” So the reality is, yes, from a very theoretical perspective, what you want is to engineer the long-term profitability of your company. So it is quantitative, kind of in essence.

Now, the problem is that when you look far ahead, all your quantitative indicators completely lose relevance. So my personal take is that if you believe you can take the numbers that you have today, project them a decade into the future, and have those numbers tell you something of value, I would say that you're deluded. It is an illusion. And I say that in my professional capacity as a number cruncher. At Lokad, we crunch numbers for a living, and we have been doing that for a decade and a half. Numbers become completely meaningless when you project them a decade into the future.

Why is that? It’s because supply chains are competitive. It is a game that is played against super intelligent entities. When I say super intelligent, it’s that your competitors, they are more than the sum of their parts. They are made of many employees, so the aggregation of that is that you’re facing an entity that is super intelligent in the sense of more intelligent than any human on earth. If you’re playing against a company like Apple, it’s a collection of experts, of very smart people, and the net result is that those people are going to do things that are going to surprise you. They are going to outcompete you in so many ways, plus you have new entrants, new rivals, and whatnot. So the bottom line is that you cannot take the situation of the market right now for granted and extend that 10 years into the future. That’s a very big mistake.

Conor Doherty: Just to drop a pin there, I want to make sure I, as well as anyone watching, understands that. Are you basically making the same argument about the quality-cost ratio as you are about forecasting in general? Like there’s a limited time horizon in terms of its validity?

Joannes Vermorel: Yes, exactly. And again, validity, statistical validity, you can have that. You could make an argument that the consumption of fresh milk, to take a very basic product, will be such and such 10 years from now in the French market, and be fairly accurate, because you have such a track record. Where I disagree is that you can base your business strategy on that. Why? Because a decade from now, what makes a brand of fresh milk appealing may be completely different from what it is now. Maybe there will be new labels, new standards of expectation for what a truly high-quality organic product actually means. This is a game that is played very aggressively. So maybe the consumption of milk will still be pretty much the same, but you might or might not be playing a fundamentally different game, just because the sort of branding you will need to do, the sort of packaging, will come with very subtle differences that make all the difference.

Yes, I expect that 10 years from now, it will still be mostly white bottles. Okay, but that's missing the point: you can have so many little nuances that make all the difference in terms of capturing market share, making a profit, and whatnot. So, you see, it is not a given. And if you look at even very successful companies, let's say the Coca-Cola Company, they have been consistently reinventing themselves in terms of image and branding; there is both continuity and reinvention every decade. So this is not just playing the same game. And it is quite impressive when you look at companies like Coca-Cola: they have been successfully reinventing themselves for basically 100 years. It's very impressive.

And that is what I'm saying: if we go back to the initial question, at the fundamental level, yes, you're chasing profit. And yes, in the future, the quality of your decisions will be assessed in, let's say, hard-earned euros or dollars. So ultimately, it will be quantitative, purely quantitative. If you're very successful, it will show in monetary terms. But because those KPIs cease to be relevant when you project yourself far into the future, at the end of the day it is almost entirely judgment calls, and thus qualitative. Just because, generally speaking, they yield better results.

And I say that, again, as a professional data cruncher. That's something I say to Lokad clients: do not let your strategy for 10 years ahead be dictated by the numbers that you see right now. It is misleading. It is a mistake. The market will evolve in a way that will make those numbers irrelevant. Even if the projected numbers happen to be true, like the future consumption of milk, there will be other things that make those numbers irrelevant, just because your competitors will find ways to outcompete you in surprising ways. That's what the competition is doing at a broader level.

Conor Doherty: If I could summarize our discussion in a question: if the true quality-cost dilemma in supply chain is as complex and expensive to resolve as you've described, and has such a limited time horizon, much like forecasting itself, then why exactly should people shift away from the very convenient, very discrete, very understandable simple metrics of good and bad service levels? Why do it, despite or in light of everything you've just described? What's the incentive?

Joannes Vermorel: The incentive is similar to becoming a master at chess. It's very difficult, very expensive, very time-consuming, but you do it to win. The thing is that, again, you have competitors, and the supply chains of today have grown incredibly complex, so the potential to make them better is also kind of huge. That's the interesting thing: supply chains have grown enormously in complexity over the last five decades, and I attribute that to digitalization. Companies have ERPs, they have WMS, they have e-commerce platforms.

So, they have gained the possibility to execute super complex supply chains, and they do. And when I talk to clients, there are very few who tell me, you know what, we want to go back to something simpler: fewer products, longer lead times, for example producing only after we have received the purchase orders, made to order.

There are very few companies that say, you know what, we want to go back to made to order because it made everything so much simpler. No, that's not exactly the direction we're heading. So, the bottom line is that supply chains, through digitalization, and make no mistake, digitalization is ancient, it's something that happened three decades ago, have become massively more complex.

And the capacity to really optimize this game that has grown massively in complexity, it's like five-dimensional chess or whatever, has not progressed nearly as fast. That's the interesting thing. If I go back to this anecdote: both my parents started at Procter and Gamble more than four decades ago.

And, at the time, they had literally something like 200 products for the whole company and the French market. So, it was a game that was very simple to play. And this has grown, let's say, at least two orders of magnitude in terms of complexity, if not three. And it is still the same sort of naive recipes and whatnot.

So there is a big potential, and yes, it is very difficult, I agree, it is challenging, I agree, but if you don't do it, somebody will do it for you. And people observe Amazon: oh, Amazon is such a massive company, it is so profitable, and it is still growing.

And people say, yeah, but you know what, when I observe Amazon growing so large and so fast, my reaction is also that there is an entire class of competitors that fail at challenging Amazon, that fail at embracing this super aggressive way of engineering a supply chain.

And the sort of things that I've described here are the sort of games that have been played for more than a decade at Amazon. And yes, it's an absolute giant that is way past its economies of scale. I mean, the game Amazon is playing nowadays: Amazon is so large that what people don't really realize is that it operates with a massive disadvantage.

They have a massive handicap. Think of it like a huge handicap, as if you were playing golf against somebody who has a blindfold over their eyes. They're so large that they have this absolutely massive handicap, and they still grow, and they still outcompete lots of businesses.

And I see that, to a large extent, as a reflection of the fact that many, many companies have failed to improve their supply chain game. I mean, just look at it: Amazon is now managing something like 300 million listed products. That's literally almost two orders of magnitude more than pretty much any other giant company. That is very, very impressive.

So again, my concluding take would be that this dilemma of quality of decision versus investment is the sort of meta game that is being played, and it sits above the usual traditional dilemmas and trilemmas, you know, cash versus cost versus service and whatnot.

This is the meta game that is being played, and I would say that if you don't start playing this sort of meta game, well, you're just going to lose the game, because you don't realize what it takes to really engineer a superior form of supply chain for your company. You're stuck trying to address the game itself, but now it's a meta game.

Just like if you want to really win at chess nowadays, you can only win through software. It has been two decades since a machine beat the world chess champion. So now, if you want to play chess in order to win, in absolute terms, it is software all the way.

It is only about a team engineering a piece of software versus another team engineering a piece of software. If you think that you can win at chess through your direct action, you've lost. It is now a battle that is only waged between teams engineering pieces of software.

People would say, oh, you've lost something, it's not as interesting. I would say, as far as I'm concerned, it is fascinating. Seeing those teams of engineers come up with better ideas and different ways to craft the software, that's fascinating. To be honest, I've never been super interested in chess itself.

I've always been much more interested in engineering the software that plays chess. And my take is that even if you're kind of scared because you've transitioned from playing chess to this sort of meta game of "what do I do to have this piece of software," overall, it makes the game much more interesting.

You know, don't fear it; overall, it is much more interesting, much more satisfying. And supply chains are so complex that you don't have to be afraid if you're not a programming wizard. The problem is so vast that you have ample areas where you will be able to develop your skills and find your path in this journey.

Conor Doherty: As a concluding thought, we often say “seek progress, not perfection”. So, as an actionable next step, if someone was trying to pivot from the classic binary perspective of good or bad service level to moving in the direction of Amazon, what is a simple next step?

Joannes Vermorel: First, identify the decisions that are taking place in your supply chain. Spend genuine time assessing, in a very broad sense, what quality means in your supply chain. I'm always surprised when people say, oh, improvement of supply chain, it's better service levels. No, it's not. Or it's about the cost. No, it's not. That's just a fraction of it. And just think, for example, of having this trilemma of cost versus service versus quality.

And again, that's defining quality of service in a very specific way, like service level. The challenge I would pose to the audience is: find 20 dimensions to your supply chain. You should be able to find 20; it's not that hard. You will see, when you really think hard, that there are at least 20 dimensions that are pulling.

I mean drivers, constraints, considerations that are all different. And don't be seduced by those super simplistic frameworks that promise to solve your supply chain with a trilemma. Instead of having a dilemma, two things that pull in different directions, or a trilemma, identify those 20 dimensions, brainstorm.

And then you will start to comprehend that it is a very complex game being played, and that it deserves an answer that embraces that. And again, approximately correct is better than exactly wrong. Yes, your answer might be kind of crude, but at least it is a lot more all-encompassing, as opposed to saying, I have this optimal model that is just looking at two dimensions out of 20.

And that gives you an illusion of optimality, because it is optimality but in an incredibly narrow, simplistic way that doesn't even come close to acknowledging what it means to have the high-quality supply chain execution that your company really needs.

Conor Doherty: All right, Joannes, I have no further questions. Thank you very much for your time. And thank you very much for watching. We’ll see you next time.