00:00:00 Introduction to computational resources in supply chain
00:02:21 Importance of computational resources in supply chain
00:07:04 Mechanical sympathy in supply chain context
00:09:38 Decision making with computing hardware
00:12:42 Illusion of expertise without depth
00:13:59 Modern supply chain dependence on computing
00:18:32 Hardware speed impacts on decisions
00:21:40 Software inefficiencies increase costs
00:24:42 Properties and limitations of transactional databases
00:27:59 Rising cloud costs from inefficiency
00:30:09 Simpler and cheaper software recipes
00:32:40 Extreme waste in computing resources
00:36:14 Hardware advancements vs software lag
00:40:48 Importance of vendor selection knowledge
00:45:15 Theoretical vs practical knowledge
00:50:00 Orders of magnitude in computer efficiency
00:54:33 Performance considerations in inventory replenishment
00:56:18 Iterative process for result quality
00:58:50 Disruption requires reengineering
01:00:18 Next steps for practitioners
01:02:17 Paying for vendor inefficiencies
01:05:04 Financial impact of decisions
01:07:16 Competitors’ lack of understanding
01:08:40 Closing remarks
Summary
In a recent LokadTV episode, Conor Doherty, Head of Communication at Lokad, conversed with Joannes Vermorel, CEO of Lokad, about the critical role of computational resources in supply chain optimization. Vermorel emphasized the necessity of understanding both hardware and software to make informed supply chain decisions. He likened this foundational knowledge to basic geographical awareness, essential for preventing problems and ensuring effective decision-making. Vermorel highlighted that while computers are tools for mechanizing decisions, a grasp of their capabilities and limitations is crucial. This understanding extends to programming paradigms, ensuring practitioners can optimize resources and drive better outcomes.
Extended Summary
In a recent episode of LokadTV, Conor Doherty, Head of Communication at Lokad, engaged in a thought-provoking discussion with Joannes Vermorel, CEO and founder of Lokad, a French software company specializing in predictive supply chain optimization. The conversation delved into the intricate world of computational resources within the supply chain, a topic that extends far beyond the mere use of computers. It requires a nuanced understanding of how these machines operate optimally, a concept Vermorel refers to as “mechanical sympathy.”
Doherty opened the discussion by highlighting the broad scope of computational resources, encompassing both hardware and software. He sought a working definition from Vermorel, who explained that computational resources include all classes of hardware that constitute a modern computer. This classification, though somewhat arbitrary, has evolved over the past 70 years, resulting in distinct categories like CPUs and memory, each serving specific purposes in the computational ecosystem.
Vermorel emphasized the importance of these resources in the context of supply chain management. He argued that if we accept the premise that supply chain decisions are best made with the aid of computers, then understanding the hardware that facilitates these computations becomes crucial. This understanding is not just about knowing the physical components but also about grasping the broader classes of devices and their computational capabilities.
Doherty then sought to distill this information for supply chain practitioners, asking how they should integrate this knowledge into their daily operations. Vermorel clarified that computers are not inherently good at making decisions; they are simply the best tools available for mechanizing decision-making processes. This mechanization, which has driven progress for centuries, is now extending to white-collar jobs through the use of computers.
Vermorel likened the foundational knowledge of computational resources to basic geographical knowledge. Just as knowing the locations of countries on a map is considered essential, understanding the basics of computing hardware is foundational for supply chain practitioners. This knowledge helps prevent a range of potential problems and ensures that decisions are made with a clear understanding of the underlying computational infrastructure.
Doherty further probed the depth of this foundational knowledge, asking whether it involved knowing simple things like the location of a USB port or more complex concepts like the workings of an SSD drive. Vermorel responded that it is more about understanding the abstractions and stable classes of concerns that have persisted in computing for decades. These include memory, storage, bandwidth, arithmetic calculation, and input/output processes.
The conversation then shifted to how this foundational knowledge translates into better decision-making. Vermorel explained that without a basic understanding of the hardware, decision-making processes can seem like magic, making it difficult to assess whether a method is suitable for the available hardware. He used the analogy of selecting a car to illustrate this point. Just as choosing a car requires understanding its intended use, selecting computing resources requires knowledge of their capabilities and limitations.
Vermorel also touched on the importance of programming paradigms and how they fit into the decision-making process. He noted that while specific use cases might not always be apparent, having a foundational understanding of concepts like static analysis, array programming, and version control is crucial. This knowledge helps practitioners avoid “bumbling in the dark” and ensures that they can make informed decisions about the computational tools they use.
In conclusion, Vermorel stressed that modern supply chain practices are heavily dependent on computing hardware. Even companies that consider themselves low-tech rely on computers extensively, whether for complex algorithms or simple tools like Excel. Therefore, having a foundational understanding of computational resources is not just beneficial but essential for effective supply chain management. This knowledge enables practitioners to make informed decisions, optimize their computational resources, and ultimately drive better outcomes for their organizations.
Full Transcript
Conor Doherty: Welcome back to LokadTV. Today, Joannes and I will be discussing computational resources in supply chain. As you will hear, this is much more than simply knowing how to use a computer. Rather, it requires a decent understanding of how it operates best. This is called mechanical sympathy, and as we’ll discuss, good mechanical sympathy can translate to better computational resource use and ultimately better decisions. Now, as always, if you like what you hear, consider subscribing to our YouTube channel and following us on LinkedIn. And with that, I give you today’s conversation.
So, Joannes, computational resources in supply chain, that’s a very large concept. It covers both hardware and software. So, for the purposes of today’s conversation, and bearing in mind it’s a supply chain audience, what’s a good working definition of computational resources?
Joannes Vermorel: Computational resources is a general term that covers all the classes of hardware that make up a modern computer. Nowadays, the separation between these classes is a little bit arbitrary, but only a little bit. There is nothing in nature that says there is a class of things we should call CPUs (central processing units) and another class of devices that we should call memory and whatnot. It is a co-evolution of computer design and the role of the market that has shaped certain niches to have companies that turn out to have really competitive devices for specific purposes. That’s how there was this evolution. Now, 70 years after the introduction of computers, we have very clear classes of computing devices that don’t do everything end-to-end. They are like components in the calculation.
Now, why does it matter to have that? When I refer to computational resources, I refer broadly to the hardware but also implicitly to the class of devices and what they give you to perform computations. Why does it matter for supply chain? Because if we take supply chain as a decision-making exercise and if we proceed with the act of faith that these calculations will be better done with computers, then this is literally the physical layer that will carry those computations. This act of faith is only a modest one after all. Computers are relatively proven to be quite capable nowadays. But nonetheless, it starts from this vision that all these decisions, those millions of decisions that a sizable supply chain needs to make, will ultimately be done with a computer one way or another.
Thus, if we start thinking about that, then we should start paying a little bit of attention to this hardware layer. The situation has become a lot more complex over the last four decades. Computers are still progressing but in ways that are a lot more complex and not so intuitive compared to what was happening until the end of the 90s.
Conor Doherty: Okay, well, again to summarize that, computers are good at decisions. But how does a supply chain practitioner listening to this fit into today’s conversation? What’s the take-home or the top-level view for them?
Joannes Vermorel: First, I would say computers are not especially good at decisions. They are just the tools that we have, and right now we don’t have any viable option to mechanize the decision-making processes. This is kind of an act of faith. Why do we want to mechanize? Because mechanization has been driving progress for the last two, maybe even three centuries. In the 20th century, it was the mechanization of blue-collar jobs with productivity improvements that were absolutely staggering, like 100-fold. Now, in the 21st century, we are seeing the exact same thing but for white-collar jobs, and this is happening thanks to computers. We could think of a parallel universe where it would be happening with other things, but for now, the best shot we have is computers.
Now, why does it matter? I would say we have to treat computational resources and computing hardware as part of foundational knowledge. When was the last time it was useful to you to know where Canada sits on the world map? When was the last time it was useful to know that Russia doesn’t have any border with Brazil? These are the sorts of things where it is not very clear to you on a day-to-day basis that, for example, having basic knowledge of world geography is of any practical use. Yet, if you were to ask the vast majority of people in this audience, they would say it is important. What would you think of someone who could not place either China, Canada, or Russia on a world map? That would sound very strange, and you probably would not trust that person with many roles in your organization.
So, you can think of it as a little bit of trivia to some extent, but it is also foundational knowledge. If you know nothing about it, that will create problems. What sort of problems? It depends very much on the specifics of the situation, the company, and the vertical. But you can expect a whole series of problems. I believe that knowledge about computing hardware and computational resources is very much in this class of foundational knowledge that supply chain practitioners should be aware of. They should have quite a mechanical sympathy, a term taken from Formula One, about these things.
Conor Doherty: Well, I like the analogy that you use, and I’m going to try and use that to tease this point apart. If you say foundational knowledge is knowing that Brazil and Russia don’t share a border, that’s one granularity of geographical knowledge. Another is knowing how many capitals South Africa has. These are qualitatively different layers or granularities of geographical awareness. To take that differential and apply it to hardware or computational resources, when you say basic knowledge, are you talking about knowing where the USB port is for my mouse, or are you talking about knowing how an SSD drive works? What’s the order of magnitude of knowledge here?
Joannes Vermorel: I’m talking more about the abstractions. There is an endless trivia about computing hardware. It’s not about knowing every single device and its price points. If you’re a geek, you can enjoy reading about that, and I do. But fundamentally, it is more about those very big, very established classes of resources. This is a little bit architecture-dependent, but these architectures have been very stable for at least five decades, so you can expect that to continue.
What are we talking about? We are talking about things like memory, volatile memory, persistent storage, bandwidth, arithmetic calculation, input and output (I/O), throughput, latency. All these sorts of things have been concerns and have had classes of concerns that have been stable for many decades. That’s what I mean by having this basic knowledge to see what the classes of concerns are and the corresponding computing hardware. How does it all fit together to do anything with a modern computer?
If we step back in terms of layers, you ultimately want to have your decision-making processes computed thanks to this computing hardware. If you have no knowledge whatsoever about what is happening at the hardware level, it’s completely magic. What are the odds that you can even comprehend if a method is a proper fit for the hardware that you have or not? I’m not talking about super granular detailed understanding, just a super basic understanding of whether it will even work at all.
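To make those classes of concerns concrete, here is a minimal sketch with rough, commonly cited order-of-magnitude latency figures for each layer of the hardware hierarchy. The numbers are illustrative approximations only and vary with the specific machine; the point is the spread between the layers, not the exact values.

```python
# Rough, order-of-magnitude latencies for the main classes of computing
# resources mentioned above. These are illustrative approximations only;
# actual values depend on the specific hardware.
APPROX_LATENCY_NS = {
    "L1 cache reference":                   1,
    "Main memory (DRAM) reference":         100,
    "SSD random read":                      100_000,      # ~0.1 ms
    "Spinning disk random access":          10_000_000,   # ~10 ms
    "Network round trip, same datacenter":  500_000,      # ~0.5 ms
    "Network round trip, intercontinental": 100_000_000,  # ~100 ms
}

def show_hierarchy() -> None:
    """Print each latency relative to an L1 cache reference."""
    base = APPROX_LATENCY_NS["L1 cache reference"]
    for name, ns in APPROX_LATENCY_NS.items():
        print(f"{name:<40} ~{ns:>12,} ns  ({ns // base:,}x L1)")

show_hierarchy()
```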
Conor Doherty: For example, you use the phrase… Sorry, let me go back one step. You named a few programming paradigms, I think from one of your lectures. You talked about programming paradigms: static analysis, array programming, differentiable programming, version control, persistence, all of these concepts. My question is, how do those fit together into making the better decisions that you’re talking about?
Joannes Vermorel: This is foundational knowledge, so do not expect very specific use cases from me, just like basic geography. When was the last time you absolutely needed to know that? Probably never. It is ambient. The problem is that if you have layers of foundational knowledge that are missing, you’re bumbling in the dark. You don’t even see that you’re in the dark. You don’t even realize that there is so much that you don’t comprehend. That’s really my point.
Let’s step back. You want to generate those decisions with a computer. That means you’re going to select vendors, probably quite a few. You will buy or rent computing resources from the cloud. You can entirely delegate the thing to your IT, but why would IT be super good at picking hardware for something they don’t know anything about? For example, if I tell you, “Dear IT, please pick me the best car,” unspecified. Okay, fine. So, IT says, “Okay, then I’ll get you a Formula One.” And you say, “Well, but in fact, I want to drive in dunes along the beach.” Then the Formula One turns out to be a completely crappy vehicle because it’s absolutely not designed to drive in the sand.
If all you tell me is to take something good, they will take something fundamentally good, like a Formula One. Is it a good car? Yes, it is a good car for a specific use. But if you say, “I want a car where I can park my family of eight,” that’s going to be a very different definition of what is good. We have this illusion that when it comes to IT, computing hardware, and computing stuff in general, this is a question of specialists. Just like picking a car, I’m not a car specialist, so I will just tell the car department to pick me a good car and deal with it. Those people have so many options of what good even means that they pick something at random. Then you can complain on the receiving end, “Oh, but the cost of this Formula One is extravagant. I can’t even put a second person in the car, and where I want to drive, which is in the sand, it’s not even going to do 10 meters before the wheels lose traction due to the low clearance.” If it was a car, people would agree that it would be ludicrous.
But when we talk about computer stuff, in most companies, people find it completely acceptable to be uninterested in the matter. Although, again, I go back to the practice of supply chain. A modern practice of supply chain is extremely dependent on this computing hardware. Supply chains were digitalized decades ago, and even companies that think they are low-tech rely on computers enormously, even if it’s just for Excel.
If you depend on those tools on a daily basis, you depend on them in a very elaborate way. For example, I depend on the availability of water, but I don’t need to know anything about water supply. That is correct because water as a product is extremely simple. It’s chemically simple, and when you say tap water, you expect 99.99% H2O plus a tiny bit of minerals and a little bit of chlorine for sanitary reasons, and that’s it.
And the temperature should be something like between 10 and 20 degrees, and that’s it. So it is something that is extremely simple. That’s why you can live with the abstraction layer of “I get tap water and it’s good to drink.” I can afford to not know anything about what is upstream. But the problem, and that’s where I get to the point of computing resources, is that computing resources are many-dimensional. You know, it is not something simple like water. It is much more like a car. There are so many different types of cars, so many different ways that you could define a good car.
If I say, “What is good water?” you know, except if you’re doing very, very specific experiments, you know, industrial processing that requires ultra-pure water and whatnot, for virtually all the situations that you will encounter in life, basic tap water is just what you need. So you don’t need to know anything about that because, again, you’re dealing with a product that is extremely simple. But if you’re dealing with a product that is many-dimensional, like a car, then you need to know one thing or two about the car if you want to buy that.
So again, if we go to supply chain practitioners, well, it turns out that you are extremely dependent on computing resources to do tons of things. Those things will become even more prevalent in the future. What makes you think that you can be completely ignorant of the physical layer of that?
Conor Doherty: Well, so there’s a few points there, one of which is supply chains are obviously very complex. You’re trying to solve many, many things, and that depends on the context. For example, maybe you want the car to drive in the desert, you want to drive it on hills, you want to drive it in the city. These are all different contexts, but there are still shared properties in terms of what at least we think companies ought to be trying to do with their computational resources. So can you please expand a bit on that?
Joannes Vermorel: Yes, I mean, here the thing is that, okay, let’s say you want, as a baseline, to analyze your transactional history. That would be something. Okay, so that means this data needs to be stored. So where will it be stored? On what sort of hardware? What will be the characteristics of this hardware? If you want to store the data and then access the data, does it have any impact? The answer is yes, it does. Just to give you a very simple idea, let’s consider that you want to store this data on a spinning disk.
It doesn’t matter if it’s your own spinning disk or something that you rent from a cloud computing platform. If the data is stored on a disk that is spinning, it means that when you want to access a piece of the data, on average the disk will have to spin for half a rotation so that you can reach the right area. You know, that’s just because the data can be anywhere on the disk.
Okay, fine. What are the consequences of that? Well, the consequence comes down to how fast the disk can spin. A disk will typically spin at something like 7,000 rotations per minute, and if it’s a very fancy drive, it will maybe go to 11,000 or 12,000 rotations per minute, but that’s it. So that means that, you know, in terms of latencies, you should expect something like 20 milliseconds to access any given piece of the data.
So you would say, “Well, 20 milliseconds looks short.” But is it? Because 20 milliseconds means that, if you have to jump across your disk, you can only access 50 different pieces of data per second. If you have to jump around, 50 per second is not that much. If you have millions, tens of millions of records to retrieve, you see that very quickly this thing escalates into crazy, crazy delays. Now you could say, “Okay, but my disk can store terabytes of data.”
Yes, but if retrieving the data takes days, due to the fact that you have to jump around the disk so much, it is not very, very good. So maybe I can, you know, take many more disks with smaller capacity, and I will get more throughput for those jumps. Or maybe I can even, you know, use another class of storage entirely and go for SSDs, solid-state drives, which provide much, much better latencies for those random accesses.
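As a minimal sketch of the arithmetic above (the rotation speed and the average seek time are illustrative assumptions, not figures from the episode):

```python
# Back-of-the-envelope estimate of random-access throughput on a spinning disk.
RPM = 7_200                       # assumed rotation speed of the disk
SEEK_MS = 9.0                     # assumed average head seek time, in ms

rotation_ms = 60_000 / RPM                  # one full rotation (~8.3 ms)
rotational_latency_ms = rotation_ms / 2     # on average, half a rotation (~4.2 ms)
access_ms = SEEK_MS + rotational_latency_ms

reads_per_second = 1_000 / access_ms
print(f"~{access_ms:.1f} ms per random access => ~{reads_per_second:.0f} random reads/s")

# Fetching 10 million scattered records at that rate takes on the order of
# a day or more, which is the escalation described above.
records = 10_000_000
hours = records / reads_per_second / 3_600
print(f"Fetching {records:,} scattered records: ~{hours:.0f} hours")
```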
But you see, that’s the sort of thing where, again, if you have no knowledge whatsoever about computing resources and the classes of computing hardware that provide those resources, those sorts of questions would never even occur in your sort of thinking. And can it hurt you? You know, again, that’s the question. What you don’t know, can it hurt you? I would say yes, because again, you’re going to buy a lot of those things either directly or indirectly.
You will buy them directly if your IT department just goes and buys cloud computing resources, but you will also buy those things indirectly if you pick a supply chain software vendor. Because you see, if you pick a vendor, you’re picking a specific way to consume those computing resources to get the result that you want. And here my message is that if you think that the average software vendor has any competency on the case, you’re completely deluded.
Obviously this is an opinionated take, but I would say the vast, vast majority of my competitors, Lokad’s competitors, when you look at their management and their interests, generally speaking have zero interest in, zero mechanical sympathy for, computing hardware. And as a consequence, it should not be too surprising that their software is horrendously inefficient. Why is that? Well, if you don’t pay any attention to your hardware, why would you think that at the end of the day the software you build on top is going to make very good use of this hardware?
You know, again, that would be just like picking a Formula One irrespective of the road you want to drive, and then you wonder why on the beach this is such a crappy vehicle. You know, surprise, surprise, that’s what happens when you don’t pay any attention to the computing hardware.
So again, in a perfect world you could be trusting, you know, consultants, software vendors, and those people would have made all the right decisions for you. But it turns out that because the vast majority of supply chain practitioners are completely ignorant, software vendors can also afford to be completely ignorant. Why shouldn’t they be, after all, if clients can’t tell the difference at the time they purchase the software or the solution? It doesn’t matter, until they are hit by the consequences of this ignorance.
Conor Doherty: Well, okay, so first of all, nothing wrong with having an opinionated take, that’s just what we do here. But when companies purchase software from a vendor, I think you said they are consuming resources to get what they want. Ultimately, let’s be practical here, we’re talking about making decisions. So you’ve given a bit of theory there, but can you be a bit more concrete for people who are curious? How does better computational resource usage, as you’re describing, influence or translate to decisions, choices taken in the real world?
Joannes Vermorel: So when you have decisions, you have many, many ways to craft the numerical recipes that will ultimately generate those decisions. The thing is that the way you consume your computing resources can be fantastically inefficient, and let me give an example. Say you start using a relational database, a transactional database, an SQL database, same thing (and “transactional” here has nothing to do with money). If you use a transactional database and you want to carry out analytical recipes, numerical recipes, just do some kind of number crunching, you will pay an overhead tax of probably a factor of 100, two orders of magnitude at least, if not three.
And why is that? It’s because this software layer, the transaction layer, gives you some very interesting properties, but they have nothing to do with analytical calculation. They give you essentially the four properties known as ACID: atomicity, consistency, isolation, durability. Those things are very nice for transactional processes. They guarantee things like, for example, if you want to declare that a supplier has been paid, you can never end up with a situation where the money has been sent, the order to the bank, but the invoice of the supplier has not been cleared just because, for example, the computer system just crashed halfway through the operation.
So you could, in theory, end up with a situation where you have already emitted the wire transfer order but did not record the fact that this invoice of a supplier has been settled. So the next time you restart the system, you will issue a second payment and effectively pay the supplier twice. That’s the sort of thing that you can get with a transaction layer. It is very, very important for transactional stuff, where there is essentially an account that is incremented and another account, in the sense of accounting, that is decremented. You want those things to be happening at the same time logically so that you never get those things out of sync.
Fine, but if you use this sort of software paradigm to build your analytical resources, you are insanely inefficient. And by the way, surprise, surprise, this is exactly what 99% of my competitors are doing. What does that mean in terms of decision-making? Well, if the way you use computational resources starts by having an overhead of a factor 100, it means that you are restricted to very, very simplistic numerical recipes. Just because as soon as you have a modicum of complexity, you’re completely out of budget in terms of computer resources. That means that the prices get really extravagant super fast.
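As a toy illustration of that overhead (not a benchmark of any particular vendor or product; SQLite merely stands in for a transactional engine, and the row counts are arbitrary), the same aggregation can be timed record-by-record through a transactional SQL layer and as a single bulk array operation:

```python
# Same aggregation, done the "transactional" way (one query per record) and
# as one vectorized pass over a flat array. The gap is typically several
# orders of magnitude, which is the overhead tax described above.
import sqlite3
import time
import numpy as np

N = 1_000_000
values = np.random.rand(N)

# Load the data into an in-memory SQLite table (stand-in for a transactional DB).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demand (sku INTEGER PRIMARY KEY, qty REAL)")
conn.executemany("INSERT INTO demand VALUES (?, ?)",
                 ((i, float(v)) for i, v in enumerate(values)))
conn.commit()

# Analytical recipe done record by record through the SQL layer (slow on purpose).
start = time.perf_counter()
total_sql = 0.0
for i in range(N):
    (qty,) = conn.execute("SELECT qty FROM demand WHERE sku = ?", (i,)).fetchone()
    total_sql += qty
t_sql = time.perf_counter() - start

# Same recipe as a single vectorized pass.
start = time.perf_counter()
total_np = float(values.sum())
t_np = time.perf_counter() - start

print(f"row-by-row SQL: {t_sql:.2f} s, vectorized: {t_np:.5f} s, "
      f"ratio ~{t_sql / t_np:,.0f}x")
```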
You see, this is not an anecdotal point. If you do not put your computing resources budget under control, you can end up with crazy levels of spend. Just to give a price point, take many of my peers, not competitors, peers that would be software-as-a-service companies dealing with heavy analytical workloads, and look at their S-1. The S-1 is a document that you need to publish when you want to go public in the US. It’s very interesting because it’s pretty much a report to the investors, to the future investors. There you can look at the decomposition of the spending over the last three, four years.
Most software companies that are analytical like Lokad, not necessarily supply chain, it can be anything, you know, fraud detection, processing system logs, whatever, were typically directing half of their spending toward cloud computing resources. So the amount of spending is very, very significant. Despite having on the payroll extremely expensive engineers and an extremely expensive salesforce, they still manage to send half of their spending toward cloud computing providers. So you see, the idea that the cost of computing resources is negligible is complete nonsense for most software vendors that are of the analytical class like Lokad.
Not merely for systems of intelligence, but even for systems of reports or systems of records, those spendings can be very, very significant. When I say that if you’re inefficient you spend 100 times more, you can see that if you’re already spending half of your revenues on computing resources, spending 100 times more is just not on the table. It’s not even remotely possible. So that means that in order to stay in budget, what do you do? Well, you just increase the price. That’s what they do, but even there, you have limits. You can double, maybe quadruple your price, but you can’t multiply your prices 100-fold.
So what most software vendors do is just go for simpler and cheaper recipes, even if they are extremely simplistic and to the point that they do a disservice to their clients. The reality is that they can’t afford, as a vendor, something that could be less dysfunctional because it would be way too costly. And why can’t they afford that? Because they are absolutely insanely wasteful with their computing resources.
Conor Doherty: Well, again, it occurs to me that when you explain how you think computational resources ought to be allocated, it’s in pursuit of fundamentally better decisions. In your mind, that’s the problem that ought to be solved. But not all companies apply that same paradigm. For example, you might be a company that prioritizes something like pursuing service level or pursuing forecast accuracy, and that’s the goal, that’s the pie in the sky that you’re going for. How would computational resource allocation differ in that situation? And feel free to comment.
Joannes Vermorel: So, okay, you set a goal for yourself. Here, I’m not challenging that part. When I say better decisions, I mean according to whatever metric, whatever goal you set for yourself. So it doesn’t matter. If you want better service level, fine. This is your goal. Now you have set a goal for yourself, and now you have processing power at your disposal, computing resources that you can use to get decisions that will be better according to whatever goals you set for yourself. Fine.
Now let’s clarify what is the ambient paradigm for pretty much all my competitors. The ambient paradigm is you will have engineers that will start working on something, and then whenever this something is compatible with the best hardware that money can buy, they stop working and they start selling the thing to the clients. So what does it look like? It means that, okay, I want to do inventory replenishment for a retail network. So I have, let’s say, 20 million SKUs. Fine. First, I try various things, it doesn’t work, so I fall back to, let’s say, safety stock analysis, which is exceedingly trivial in terms of computing resources.
And then, even though my system is very inefficient, with extremely expensive computing hardware I can make it work. And then I stop and I sell that to the client. So what was the thinking in this paradigm? It was actually, I think, the dominant paradigm in the software industry until the late ’90s of the 20th century. This paradigm was pretty much that computing hardware is progressing exponentially. So the idea would be: you get the best hardware that money can buy, and as soon as you have something that is kind of working within those limitations, even if the costs are insane, even if you’re not really making good use of your computing resources, it doesn’t matter.
Why? Because you have an exponential progression of the computing hardware on all metrics. That was what people refer to as Moore’s Law, but in fact, there were so many other laws for everything. All the computing resources were progressing, all the metrics were progressing, and that was very much one of the sort of ideas that made Microsoft extremely successful again in the ’90s. The idea is that if it works, it doesn’t matter how terrible the performance is because five years from now, the computing hardware will have progressed so much that those computing resources will be trivialized.
This was working until the end of the ’90s. Since essentially the year 2000, we have had entire classes of metrics that have not improved. For example, the latency between CPU and memory has pretty much not changed for the last two decades. Because we are now limited by the speed of light, it’s not going to change in the foreseeable future.
Another element is, again, the speed of light. Packets over the Internet of a long distance now travel at roughly two-thirds of the speed of light, so there is not that much leeway to improve the speed of packets over the Internet because we are already very, very close to the speed of light. We can have more bandwidth, so we can push a lot more packets, no problem, but in terms of raw speed of the packets themselves, we are now very close to the limits of physics, at least physics as we know it.
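A quick sanity check on that physical floor (the distance and the fiber propagation speed are rough assumptions):

```python
# Lower bound on round-trip time imposed by the speed of light in optical fiber.
SPEED_OF_LIGHT_KM_S = 299_792
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3     # roughly 2/3 of c in fiber

def min_round_trip_ms(distance_km: float) -> float:
    """Physical floor for a round trip over a straight fiber path."""
    return 2 * distance_km / FIBER_SPEED_KM_S * 1_000

# Roughly the Paris - New York great-circle distance.
print(f"Paris-New York floor: ~{min_round_trip_ms(5_840):.0f} ms round trip")
# No hardware generation will go below this floor; only bandwidth keeps growing.
```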
So, again, this paradigm was very prevalent in the software industry at the end of the ’90s: just do something that works, sell it, and don’t worry about performance, because worrying about it is a fool’s errand. The hardware industry will improve everything so much that all those performance concerns will be made irrelevant within just a few years. That was the mindset.
Interestingly enough, we still have very nice progress from the computing hardware, but the progress has become very subtle. You have still exponential progressions but among very specific lines, not all metrics, some metrics. The interesting thing is that most of the B2C software industry has been paying a lot of attention to that. For example, the video game industry is paying a huge amount of attention to these sorts of details. But when it comes to enterprise software, they are still living, 99% of them, in the late ’90s where they don’t pay any attention and they just operate as if five years from now the progress of computing hardware would have made the cost of their system trivial. It is not the case.
In fact, because the amount of data managed by companies keeps increasing, we end up in a situation where, for most software vendors, the cost of keeping your systems running increases year after year faster than the price of computing hardware decreases. So year after year, you end up spending even more to keep functionally the same level of quality in your decision-making processes, or in the support for your decision-making processes if it’s not completely automated.
Conor Doherty: Well, hardware progress or computing progress has become subtle, I think was the term you used. It’s subtle, it’s no longer exponential jumps.
Joannes Vermorel: It’s progress along specific directions, not along all dimensions. In the ’90s, the interesting thing is that everything was improving on all fronts. There was not a single metric that would not improve. Nowadays, you have many metrics that have not been moving for literally a decade.
If you look at, for example, the amount of heat that you can dissipate from a computer, your computer needs to get rid of the heat. You can have copper wiring, you can have fans, you can do various things to just extract heat from inside the computer so that those things do not overheat. But we have already reached the limit of what is feasible with air. There are limits that have been reached two decades ago. You can use water to make it a little bit more performant. If you want to go super fancy, you can go for liquid nitrogen. It is quite impractical but it is possible for nice benchmarks, etc.
So, we have hit the limits. We don’t have any magical materials that will let us evacuate twice as much heat. I mean, we could use maybe diamond. Diamond is a fantastic heat conductor, but the idea of having kilograms of diamonds to evacuate heat is still a long way off. Even that will only give us a modest boost compared to copper, which is already an excellent conductor.
Conor Doherty: Well, that actually demonstrates my point a little bit more. So, to finish the thought, if…
Okay, actually I’ll take the example you just gave. So, you were talking about the difference between copper wire and diamonds as heat conductors. To squeeze a little bit more performance out of the heat escaping properties of a computer, that’s going to require quite a nuanced and specialized understanding of computer engineering. So then, to come back to the main topic, how does increasing your ambient foundational knowledge translate to greater supply chain performance when the margins are as thin as you’re describing?
Joannes Vermorel: No, I think, again, that’s the thing with foundational knowledge: it clarifies the picture of everything. Is your IT buying the right sort of stuff, even directionally? Do you have any idea about that? Can you even discuss the matter with your IT? If you cannot, why should you even expect that what they’re buying remotely makes sense?
Again, go back to the Formula 1 to go on the beach. It doesn’t make any sense, but that’s exactly the sort of thing that happens when people have no knowledge whatsoever of what is at stake. When you want to pick a vendor, can you even have an intelligent discussion about the way they are consuming the computing resources to give you either better decisions or better support for your decisions? Are they consuming those resources in ways that are appropriate with regard to the computing hardware that we have? Does the architecture make sense or absolutely not?
Again, if you think in terms of a car, you have so many things that you intuitively know. Aerodynamics, for example. If you were to look at a car that would be massively violating the laws of aerodynamics, you would think, “Okay, this thing is going to have such immense drag in terms of air, the consumption is going to be horrible.” There is no alternative. So, you see, it is the sort of thing that, again, just due to the fact that you have foundational knowledge, it is instinctive. You don’t necessarily need to be told when you see a car that is very low that the aerodynamics are going to be good and that most likely this car can drive faster.
That’s the thing. You don’t have to even think about what are the fluid dynamics at stake and whatnot. It is intuitive. That’s the sort of thing that if we are looking at better decisions, can you spot intuitively things that are wildly dysfunctional? My point is that due to the fact that 99.9% of the clients or supply chain practitioners are entirely blind to the question, if there are a few geeks around, you are like a tiny, tiny minority. If you are blind, then that means that, again, the response from the ecosystem, the software vendors, the solution vendors, the consultants, is that they don’t need to pay attention. Their clients are not paying attention. Why would they?
If you were living in a country where gasoline is free, why would automakers pay any attention to the consumption of their cars? For them, if gasoline is free, it means that it’s mostly an irrelevant concern for the customers. If the customers don’t pay attention, the vendors of cars don’t pay attention. If we go to software vendors, if customers are clueless and don’t pay attention, why should enterprise vendors pay attention? The answer is, well, they don’t. Effectively, they do not pay attention.
That’s why nowadays people are always surprised when they look at enterprise software. You click, you want to just see a report, you just want to do something minimal, and it will take seconds. In terms of snappiness, enterprise software on average is very poor. It is very slow. If you compare, let’s say, to web search, you want to do a web search on Google. Within something like 50 milliseconds, maybe 100 milliseconds, Google is able to scan the web and give you a digest of stuff that matches your query. This is extremely fast and snappy.
Conversely, you just want to do something super basic like, “I want to check the state of this SKU,” and it takes seconds. The interesting part is that it takes seconds on the computing hardware that we have in 2024. It was already taking a second 20 years ago despite having hardware that was 100 times less capable. What happened? Well, what happened was the extra computing hardware, the extra capabilities of this hardware, have just been wasted through inefficient software.
Conor Doherty: Thank you. When you talk about foundational knowledge, just as I was listening, it occurred to me that there’s a bit of a… you can split what you’re describing there, and I want to get your thoughts on this. To take an analogy from earlier, you said if you take a car and you go into the desert, is that the best vehicle for the desert? It occurs to me in terms of foundational knowledge, there’s both theoretical and practical.
So you can have a theoretical foundational understanding of how an internal combustion engine works, like that’s how the car goes. That is a theoretical understanding. There’s also a practical foundational knowledge, which is, well, if that tire goes flat, do I have the skills and the basic knowledge to change that tire? If the engine doesn’t start, can I do a basic repair? And if you’re going to drive in the desert where you’ll be alone, you will fundamentally require both a theoretical and at least a basic practical foundational knowledge.
So to come back to the topic at hand, you’ve described in some depth the theoretical foundational knowledge that people should have. In terms of practical foundational knowledge, is there any skills that you think everyone should have in this space?
Joannes Vermorel: Yes, again, on the practical side, it would be to have some idea of the sort of price points we’re talking about. You know, what is the cost of a terabyte of storage? What is the cost, roughly, of a terabyte of memory? What is the cost of a CPU that gives you 2 GHz of computational throughput? Again, if you can guess a number that is not off by an order of magnitude, that’s already very good. That’s the thing with computer stuff: usually, if people had to guess, their guesses would be off by many orders of magnitude.
Again, if I ask you the weight of a car and I show you the car, your guess will maybe be 50% off. You know, you say okay, one ton and a half, and it turns out it’s an electric car and it’s two tons plus something. Okay, but you were still within the same order of magnitude. You don’t see a car and say it’s 20 kilograms or 500 tons. But the thing is that when you ask people the cost of a terabyte of persistent storage, the cheapest that you can find, some people would tell you prices ranging from, I don’t know, 10,000 down to 2, and nobody would have any clue about that.
And the same thing if I ask you: what is the cost of a chip that can do on the order of 100 billion arithmetic operations per second? People say, I don’t know, 100,000 euros, or maybe $50. Again, that’s why I say the practical knowledge is to have some idea of the price points. It doesn’t have to be precise. If you have the right order of magnitude, you’re already in the ballpark of making things that make sense.
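As a sketch of what such ballpark figures might look like (these specific numbers are rough assumptions that drift over time; the public price lists of the cloud providers are the authoritative source, and being right within an order of magnitude is the goal):

```python
# Illustrative order-of-magnitude price points only; always check current
# public price lists. The point is guessing within an order of magnitude.
APPROX_PRICE_USD = {
    "HDD storage, per TB (purchase)":          20,      # tens of dollars
    "SSD storage, per TB (purchase)":          70,      # tens of dollars
    "Server DRAM, per TB (purchase)":          3_000,   # thousands of dollars
    "Cloud object storage, per TB per month":  20,      # tens of dollars
    "General-purpose cloud VM, per core-hour": 0.04,    # a few cents
}

# Example: keeping 10 TB of transactional history in cloud object storage
# for a year, under the assumptions above.
tb, months = 10, 12
yearly = tb * months * APPROX_PRICE_USD["Cloud object storage, per TB per month"]
print(f"~${yearly:,.0f} per year for {tb} TB of cold history in object storage")
```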
That’s what is very weird with computing resources: you have literally 15 orders of magnitude. I think it is quite unique. There is no other field that I know of where the orders of magnitude are so incredibly spread out. 15 orders of magnitude means that on one hand we are talking units, you know, one addition, one multiplication, that would be a unit of computation. And on the other hand, we are talking essentially billions of billions. That’s really quite a spectrum of orders of magnitude.
And that’s difficult for the mind to comprehend. That’s also why, when I say that mistakes can be made in terms of wasting computing resources, the car analogy is misleading: even the worst car is only going to be like 10 times worse in terms of consumption than the best. Let’s say I want to do 100 kilometers. If I have a super efficient car, it’s going to take five liters of gas to do those 100 kilometers.
If I go for a super heavy SUV, inefficient and whatnot, it’s going to be, let’s say, 50 liters. A factor of 10. With computers, it is not like that. It would be as if the most efficient option consumed 5 centiliters per 100 kilometers and the least efficient consumed five cubic meters for those same 100 kilometers. The orders of magnitude are just wild. So again, in terms of practicality, it is about having a few ideas of the price points, and also a few ideas of what is moving and what is not moving.
Conor Doherty: What do you mean, what is moving and what is not moving?
Joannes Vermorel: For example, GPUs. In terms of trends, GPUs have been progressing like crazy over the last five years, and they are still expected to progress like crazy for the next five. So this is a class of computing resources that is swiftly improving. They’re improving in terms of number of operations per second, and they’re also improving in terms of memory. So that’s very, very good. CPUs are on a similar track. They’re improving maybe not as fast, but they’re still improving very fast in terms of number of cores and the size of the L3 cache, which is the memory that lives inside the CPU.
It is still improving fast. By contrast, if we look at DRAM, which is what is used for the main memory of the computer, the volatile memory (so if you shut down your computer, you lose it), that has barely been moving for the last decade. There are very few manufacturers, something like four factories in the world. So this is a market where you should not expect things to really change in terms of price drops; you should not expect too much change in the short term. So I would say, in practical terms, it’s having some orders of magnitude, knowing the price points a little, knowing a little what you can expect, and also just the intuition of whether going for, let’s say, professional-grade hardware will give you anything very different from consumer-grade.
Depending on what you’re looking at, sometimes the best you can get is something that is just consumer grade hardware and the best that you can buy as a company is just marginally better. You see, in some other areas, it’s not the case. In some other areas, what as a company you can buy is many orders of magnitude better than what is typically considered as suitable for the consumer market. Again, that’s the sort of thing where having this knowledge really helps you to navigate the landscape, pick the right vendor, or rather eliminate the incompetent ones. You know, that sort of thing that if you start paying attention, you will be able to just filter out the incompetence, you know, both from consultants, from vendors, and also for in-house projects as well. You know, that matters.
Conor Doherty: Well, you touched on cost there a number of times, but you spoke mostly in terms of the direct cost of actually physically procuring hardware. In terms of the direct and then indirect cost of not being more savvy with one’s computational resources, what are the orders of magnitude we’re talking about here? And for that, assume a large retail store, excuse me, a large retail company, large retail network, omnichannel, paint whatever picture you want, but what’s a reasonable order of magnitude in terms of here’s what you stand to lose by not being more savvy with your computational resources?
Joannes Vermorel: Well, I would say, as a conservative estimate, over 90% of supply chain optimization initiatives fail, and these are pretty much all software initiatives. A big percentage of that 90% fails in large part, although it’s not the only reason, due to abysmal performance. Now, when we think about performance, we have to think of it from different perspectives. People would think, okay, I need to do my inventory replenishment every day, so the calculation for everything needs to be doable in less than 24 hours.
Okay, that’s a given. You know, if you want to run the computation every day and you cannot complete the computation in 24 hours, you’re kind of screwed. So that’s the upper limit of how much time you can dedicate. Now, you would think that if you buy twice as many computing resources, you can do it twice as fast. Well, not necessarily, because it really depends on the software architecture that you adopt. There are many architectures and design patterns that don’t lend themselves to what is technically called “scale-out.” So there are many approaches where, if you throw more computing hardware at the problem, you just do not get a speedup.
Conor Doherty: Diminishing returns essentially.
Joannes Vermorel: Yeah, and sometimes no return at all. You can literally throw more resources and have exactly zero speed up. It really depends on the way you have architected your software to take advantage of those extra computing resources. Now, let’s move forward. Now you have something that can compute your replenishment in, let’s say, four hours, and you say, “Great, for your retail network, it only takes four hours.” So you have 24 hours in the day, so it looks like you will run the computation during the night. Is it satisfying?
Well, my answer is no, absolutely not. Why is that? Because here you’re just looking at the thing once in production. You don’t take into account that your numerical recipe will need to be tuned, and you will need to iterate many times over to converge to the final one that is satisfying. This experimental process is going to be much more costly.
So for many reasons. First, when you do your experiments, you don’t start by being very cost-effective. You will only be cost-effective once you’ve identified something that, in terms of quality of decision, gives you satisfying results. So it is very normal to expect that when you’re just prototyping new recipes, you will be a lot less efficient. The efficiency will come afterwards when you start to really optimize the consumption of computing resources.
So that’s one thing. But the other thing is that you have to pay for people to just wait until the calculation is done so that they can look at the results, make their assessment, and carry out the next iteration. And here the big problem, if I go back to the supply chain software initiatives that fail, is that this process can become dramatically slow. I mean, we were discussing four hours. Let’s say the experiments are going to be twice as slow because you’re in experimental settings that are not quite as optimized.
So that means eight hours. That means that you can only do one experiment a day. That means that if you need to do 500 iterations, this thing is just going to take two years. It’s going to be so slow that you’re going to start having other problems, such as the people doing the experiments will start to rotate to another job. The engineers you’re working with are not with you for 40 years. So at some point, you will have people that have started to work on the project, and then they quit, and you have to bring new people, and they don’t naturally remember all the experiments that have been done.
So you see, this sort of problem creates so many issues. Even if you do converge to a satisfying numerical recipe, you’re just one disruption away, you know, lockdowns or whatever, and you need to reengineer the recipe again. If your iterative process is incredibly sluggish, you will systematically fail to cope with all the disruptions. By the time you have finally engineered the solution for the disruption, you already moved on to something else. So you need to have something that is extremely performant so that your iterations are very swift.
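The arithmetic behind that sluggishness is simple enough to sketch (the hours per run and the number of iterations are the illustrative figures used above):

```python
# Back-of-the-envelope arithmetic for the iteration argument above.
WORKING_DAYS_PER_YEAR = 250
iterations_needed = 500          # experiments needed to converge on a good recipe

# At roughly 8 hours per run, a team realistically completes 1 experiment per working day.
experiments_per_day_slow = 1
years = iterations_needed / experiments_per_day_slow / WORKING_DAYS_PER_YEAR
print(f"At {experiments_per_day_slow} experiment/working day: ~{years:.0f} years to converge")

# If each run were 10x faster (under an hour), several iterations per day become feasible.
experiments_per_day_fast = 5
months = iterations_needed / experiments_per_day_fast / (WORKING_DAYS_PER_YEAR / 12)
print(f"At {experiments_per_day_fast} experiments/working day: ~{months:.0f} months to converge")
```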
And that also gives another twist, which is that on the performance side, when you go from one iteration to the next, you can cheat. Because maybe from one experiment to the next, you’re going to redo most of the very same calculations. So do you need to re-spend all those resources if, in fact, you’re just running almost the exact same numerical recipe with a few tiny divergences? Maybe, if you have a smart approach, you can recycle most of what you’ve already computed so that you can iterate much faster without redoing everything every single time.
But again, that will only work if you can sort of understand: where are my computing resources going, what am I wasting, what am I doing twice or ten times in a row that I should only be doing once.
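A minimal sketch of that recycling idea, assuming the recipe can be split into deterministic stages (the stage names and the in-memory cache are hypothetical; a real pipeline would persist fingerprints and results to disk or blob storage):

```python
# Re-run a stage of the numerical recipe only when its inputs actually changed
# between iterations; otherwise reuse the previously computed result.
import functools
import hashlib
import json

_CACHE: dict = {}   # in practice this would live on disk or in blob storage

def _fingerprint(*args) -> str:
    """Stable fingerprint of a stage's inputs."""
    payload = json.dumps(args, sort_keys=True, default=str).encode()
    return hashlib.sha256(payload).hexdigest()

def cached_stage(fn):
    """Memoize a deterministic stage keyed on its inputs."""
    @functools.wraps(fn)
    def wrapper(*args):
        key = fn.__name__ + ":" + _fingerprint(*args)
        if key not in _CACHE:
            _CACHE[key] = fn(*args)   # the expensive part runs once per distinct input
        return _CACHE[key]
    return wrapper

@cached_stage
def prepare_history(raw_sales):                  # unchanged across most experiments
    return sorted(raw_sales)

@cached_stage
def score_decisions(history, recipe_params):     # only this re-runs when params change
    return sum(history) * recipe_params["service_level_weight"]

history = prepare_history([3, 1, 2])
print(score_decisions(history, {"service_level_weight": 0.90}))
print(score_decisions(history, {"service_level_weight": 0.95}))   # prepared history reused
```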
Conor Doherty: Okay, well, Joannes, people listening today, and I’m sure they’re all saying, “I agree with this man. I trust him. He’s trustworthy.” What are the next immediate steps for, let’s say, the average supply chain practitioner and C-level people who are listening to you and think, “Okay, I want to be more savvy about these things”?
Joannes Vermorel: I would say start reading introductory materials about how computers work. There are plenty of books that will tell you how computers work and start trying to get some understanding, for example, of what is being sold by cloud computing providers. You know, you can look it up. All the prices are public. So you can go to Microsoft Azure and see, “Okay, what is the price point for storage, for CPUs, for virtual machines, for bandwidth, and whatnot?” Again, that’s a few hours. You can have, I would say, elementary books. I mean, there are even books that are intended for high school or middle school, and it’s okay. You know, it’s the idea of trying to get this knowledge.
And then whenever the topic of a technological evolution arises, ask the vendor, ask the consultant, “Okay, what is your take on computational resources? We want to have the very best decisions according to whatever metric and goals you set for yourself.” Engage the discussion on how those raw computing resources that I’m buying on one hand translate into the very best decisions. If the people you are about to buy a lot of stuff from have no clue about that, then you should run away. You know, that’s my takeaway. Use that as a litmus test to detect so-called experts that should not even be experts in the first place, that are absolutely, I would say, incompetent.
Because ultimately, if you go with those sorts of vendors, you will end up paying the price of their inefficiencies. And the price will be twofold. One, you will pay a lot more for computing hardware, to the point that it’s extravagant. As a rule of thumb, when we sell a subscription, Lokad is typically way below what our competitors charge just for the computing resources, not even taking into account the manpower, the engineering manpower to set up and maintain the solution. Lokad’s subscription tends to be below their hardware cost alone.
So that’s one point. But then you will have something that is even bigger, which is your vendor will corner you into super simplistic recipes, trying to convince you that it’s the best that science has to offer and whatnot, while in fact, it is just a reflection of their inability to properly leverage the computing resources. That’s why you end up with uber simplistic things like safety stocks that are still prevalent. I mean, vendors deep down, they know it’s complete crap. But the problem is that they are so inefficient with their use of computing resources that for them, it will be wildly impractical to even consider things that would be better.
So you see, a twofold cost: the direct cost, which is extravagant spending on computing resources that doesn’t make any sense, and then the second-order cost, which is that you are cornered into simplistic recipes, where ultimately you, as a practitioner, will be forced to step in with your spreadsheets to manually fix all the insanity that comes out of those systems. Why? Because the system is using overly simplistic recipes that pretty much delegate all the subtleties to you, since the recipe does not deal with any kind of sophistication.
Conor Doherty: There was actually a rule of thumb, or rather a question, that you used earlier. You said, “If you don’t know how much a terabyte of cloud storage costs, or something like that, if you don’t even have the order of magnitude, that’s basically a problem.” And it’s interesting when you say that, because most people in their personal lives, even people listening, if they walked into a cafe to order a coffee and were told, “All right, that’s €45,” they would balk. They would be shocked and say, “Okay, well, that’s not right. I don’t know how much you personally should be charging me, but €45 is a bit steep.” And they would presumably leave and go somewhere else.
Even if it’s in a touristic area, you’d still say, “45, no, that’s incorrect.” But nothing ultimately hinges on that. I mean, you’re not going to ruin yourself, presumably not ruin yourself financially. You’re not going to lose your job because of that.
However, the same sort of instinct, survival instinct, or just overall savviness might be entirely absent when it comes to, “Okay, I have to make very expensive decisions over what kind of computational resources or software the company is going to use.” The direct and indirect costs of that, short-term and long-term, they have no idea. Like, “Oh, it costs 555,000 per second of computing, whatever.” Again, that actually might be right. I don’t know, probably not. But the point being, if you can’t answer those questions, I would agree there’s an enormous hole in your own professional knowledge that you should seek to fill.
Joannes Vermorel: Yes, again, it is maybe a little bit demanding for supply chain practitioners, but what is at stake? Big budget lines. I mean, large companies are willingly spending millions and sometimes tens of millions of euros or dollars a year on those systems. And I’m always completely baffled that you can spend that much and nobody, vendor included, has any idea about those basic questions.
Again, that would be like buying a building because that’s something very expensive. So it’s like buying a building and you have an architect that has no clue what concrete actually is. You would say, “You know what, I’m not sure. Maybe the building is made out of cardboard or concrete or maybe wood or maybe marshmallows. You know, I don’t care, just put paint on it and it all looks the same.”
You know, again, I think computer stuff and software, you know, the software industry is very special in this regard. There is tons of money involved and there is this implicit agreement that having everybody completely ignorant about that is just fine. And this is, for me, very intriguing as an industry.
And I’ve been talking with most of my competitors, and when I say my competitors, I mean management teams. I’m always baffled when nobody has any clue about this sort of mechanical sympathy where you have some basic understanding of what goes into that.
Again, that could be like, think of it as a Formula 1 driver that says, “You know what, I have four wheels. What is happening between the pedal and the wheels? You know, it’s magic, magic. You know, there’s stuff. It does make a loud noise. I know it’s loud, but other than that, there is just stuff.” You know, it’s my vision of the car is stuff. That’s the level of granularity that would be, you know, people would think this is insane. You should know a lot more if you expect to make good use of the car.
In the same way, I think supply chain practitioners are using tons of digital tools every day, just like a Formula 1 driver is using a Formula 1. So they need to understand a little bit, to have this mechanical sympathy for what is going on and how this stuff works, so that they can make informed decisions. At least so that people don’t sell them stuff that is completely nonsensical and ends up failing for reasons that are entirely preventable.
Conor Doherty: I could not have put it better myself. Thank you, and thank you very much for your time. I don’t have any more questions, and thank you very much for watching. We’ll see you next time.