00:00:00 Introduction to the interview
00:01:41 Ian Wright’s early career and founding Logistics Sciences
00:05:33 The concept of optimality in supply chain
00:10:06 Optimization, uncertainty, and real-world disruptions
00:18:18 Traditional optimization limits and pandemic impact
00:25:27 Lokad’s response and adapting supply chains
00:32:45 Challenges of deterministic models and tradeoffs
00:41:09 Service levels, financial models, and sanity checks
00:50:48 Human expertise, heuristics, and iterative modeling
00:58:39 The cost of human intervention in supply chains
01:06:24 Strategy as engineering and decision automation
01:14:06 Walmart’s decentralized model and breaking silos
01:21:39 Feedback loops and continuous supply chain improvement
01:29:18 Achieving optimality and navigating vendor hype
01:35:42 Final thoughts on supply chain technology trends

Summary

In a recent LokadTV interview, Conor Doherty hosted Ian Wright, founder of Logistics Sciences, and Joannes Vermorel, CEO of Lokad, to discuss the notion that there are no optimal decisions in supply chain management. They challenged traditional views on efficiency, highlighting the complexities and uncertainties that defy textbook ideals. Ian and Joannes emphasized that different stakeholders have varying definitions of optimality, and practical solutions must align with business realities. They discussed the limitations of traditional optimization methods and the importance of human judgment in strategic decision-making. The conversation underscored the need for models that handle uncertainty and focus on true economic outcomes.

Extended Summary

In a recent episode of LokadTV, Conor Doherty, Communication Director at Lokad, hosted an insightful discussion with Ian Wright, founder of Logistics Sciences, and Joannes Vermorel, CEO and founder of Lokad. The conversation revolved around the provocative idea that there are no optimal decisions in supply chain management, a concept that challenges traditional views on efficiency and decision-making.

Conor Doherty opened the discussion by highlighting the common belief in optimal decisions as the epitome of efficiency, where resources are perfectly allocated, costs minimized, and profits maximized. However, he noted that such textbook ideals often crumble when faced with real-world complexities. Ian Wright, with over 40 years of experience in supply chain and logistics, shared his journey from academia to the oil industry, and eventually to founding Logistics Sciences. His career has been marked by a focus on problem-solving within logistics and operations research, emphasizing the practical application of planning and execution.

Joannes Vermorel echoed Ian’s sentiments, pointing out that while the intentions behind operations research post-World War II were correct, the field has faced similar challenges to those experienced by artificial intelligence, with periods of inflated expectations followed by disappointment. He noted that many methods from operations research failed to deliver actionable benefits for companies.

The conversation then delved into Ian’s paper, “Why There Is No Such Thing as an Optimal Solution in Supply Chain Planning and Logistics Network Optimization.” Ian explained that different stakeholders have varying definitions of optimality, often leading to conflicting ideas. Practitioners focus on the mathematical aspects, while business leaders are more concerned with practical, implementable solutions. He emphasized that models and tools are just facets of a broader solution that must make sense for the business.

Joannes expanded on this by discussing the limitations of traditional optimization methods, which often lack the ability to incorporate the dimension of time and handle uncertainty. He highlighted the importance of quantitative improvements in business optimization, contrasting it with the more static, mathematical perspective of traditional operations research.

The discussion also touched on the role of uncertainty in supply chain decision-making. Ian described various sources of uncertainty, from predictable variations to Black Swan events and unknown unknowns. He stressed the need for models that can handle these uncertainties and provide contingent solutions.

Joannes shared Lokad’s approach during the COVID-19 lockdowns, where they managed supply chain decisions for clients whose white-collar workers were on leave. By injecting a massive dose of uncertainty into their models, Lokad was able to make more prudent decisions, demonstrating the effectiveness of their optimization systems.

The conversation then shifted to the role of trade-offs in decision-making. Ian emphasized that trade-offs often boil down to financial considerations, balancing costs against service levels and other factors. Joannes argued that many companies focus on optimizing percentages rather than true economic outcomes, leading to suboptimal decisions.

Both Ian and Joannes agreed on the importance of human involvement in strategic decision-making. While automation and optimization tools can handle many tasks, human intuition and judgment remain crucial, especially in areas where mechanistic input is insufficient.

In conclusion, the interview highlighted the complexities and challenges of supply chain optimization, emphasizing the need for practical, implementable solutions that account for uncertainty and involve human judgment. Both Ian and Joannes provided valuable insights into how companies can navigate these challenges, stressing the importance of aligning models with real-world operations and focusing on true economic outcomes.

Full Transcript

Conor Doherty: Welcome back to LokadTV. An optimal decision is often viewed as the pinnacle of efficiency, a situation where resources are perfectly allocated, costs are reduced, and profits maximized. Now, this sounds great in a textbook or in a classroom, but often such ideas falter upon contact with the real world. Today’s guest, Ian Wright, is going to talk to us about this very quest for optimality. Ian is the founder of Logistics Sciences and has over 40 years of supply chain experience.

As always, if you like what you hear, please subscribe to the YouTube channel and follow us on LinkedIn. And with that, I give you today’s conversation with Ian Wright.

All right, great. Well, Ian, thanks very much for joining us. For people who might not be familiar with you, I mean, I introduced you earlier, but for anyone who’s not familiar with your work, could you give a brief introduction, please?

Ian Wright: Well, I think you mentioned I’ve been around for 40 years. I’ve actually been around for a lot longer than that, but my career spans 40 years. Academically, my background started with an interest in economics and geography, which I brought together in studying what was at the time just known as transportation, or transport where I come from. That was basically economics, geography, and business, and it sparked a great interest in problem-solving, specifically in what is now called logistics and operations research. So I then went on to do operations research, still focusing very much on transportation problems, logistics problems, and what we all now know as supply chain. That was some 40-plus years ago.

And then, moving on to actually having to earn a living, I entered the oil industry as a management scientist working for Castrol. I was almost thrown in at the deep end, as it were, because I got straight into some very high-level strategic planning projects. I wrote a number of preventative maintenance systems for the company’s distribution operations, and I got to know planning software from a network standpoint and from a fleet-planning standpoint. Then I moved on to join the company that provided those systems, which at the time was one guy, so there were two of us, and I helped him build it. Then I moved over to the States with a client of the company and got involved in GIS, using it to visualize what we were doing on the planning side. So that was an early introduction, back in the early 80s, to the GIS and visualization that are prevalent today.

From there, I got into third-party logistics, initially through a software development project. I’d been aware of 3PL in the UK throughout my career, but in the early 90s it was still quite new in the States, and they were only just developing the idea of putting solutions together to sell to customers. Those solutions being: where should we put your warehouse, how should we operate your transportation assets. That was a great application of my background, but importantly for me, it was a great learning lesson in planning for implementation and execution and not walking away from it, in being part of having to operate the solution that you put together, which I think is a good lesson for everybody involved in what we do.

Eventually, I moved out of the actual planning. I put a couple of solutions groups together and ran them, and then took on more and more responsibility in the organizations I worked for. But eventually, having gone through a period of working in consulting, which I didn’t enjoy too much, I decided I should form a consulting firm, Logistics Sciences. And if you want to know what Logistics Sciences is, it’s basically me trying to get back to what I enjoy doing, which is problem-solving, particularly focused on supply chain and logistics issues, and using what limited knowledge and tools I have to actually help people solve problems in that sphere. So I don’t know if that helps you know where I’m coming from. I have no idea where I’m going to, but…

Conor Doherty: Well, thank you, Ian. And actually, Joannes, I’m sure a lot of that resonates with you. I mean, the idea of problem-solving and reconsidering the problem of supply chain decision-making, that’s something that resonates strongly with you, no?

Joannes Vermorel: Yes, in terms of intentions, the intentions laid out by operations research after the Second World War were very correct: let’s try to engineer those management methods into something that is numerically sound and improvable. That was one of the intents that was correct and is still very relevant today. The interesting thing is that people very frequently talk about the various winters that artificial intelligence went through, with inflated hopes followed by disappointment when it wasn’t working. I believe that operations research went through similar phases, and certain waves of methods known at the time just didn’t manage to turn into real, actionable benefits for companies.

Conor Doherty: Well, actually, that transitions nicely to the topic of today’s conversation, which was inspired by seeing Ian’s work on LinkedIn. You actually publish a lot of papers, Ian. I have one of them here in front of me, upon which I have made notes; hopefully the camera can catch that. So I have read it, we’ve all read it. The paper in particular, the one that sparked this conversation, is “Why There Is No Such Thing as an Optimal Solution in Supply Chain Planning and Logistics Network Optimization.” Now, it’s about 13 pages. For anyone who hasn’t read it, an executive-level summary, please.

Ian Wright: It’s basically trying to get across the notion that different people have different ideas about what optimality is. And generally, what I find is that they’re not opposing so much as conflicting ideas, in the sense that the practitioner’s idea of optimality is often much more focused on what he’s doing in the tool or with the technique being employed. And quite often, which comes back to something Joannes was saying, it’s focused on the mathematics, whereas the person who is the victim, or the recipient, of the optimization is the business guy.

I’m assuming we can focus on business and the private sector, although there’s obviously a great deal more that we can do around supply chain. But the business guy is not at all concerned, or shouldn’t be at all concerned, about the mathematics or the methodology or the tool or the model. When I work with my own clients on projects, I focus on trying to make sure they understand that the tools we employ and the models we build are only a small facet of driving them to a solution they can use in making a decision and implementing something. So the basic premise of the article was to get across the idea that the model is not the important piece; it’s the solution. And there are so many more components, so many more facets, to a solution that makes sense for the business.

Conor Doherty: Just on that, and Joannes, I’ll come to you in just a minute, but the way you framed that when explaining it to your own clients, you’re trying to, and I wrote this down, essentially make sure people understand. And on that point, I think a key word to clear up immediately is “optimality”; you made the distinction between the practitioner and the mathematician, and certain language can mean slightly different things depending on where it’s being used. Joannes and I recently did a discussion on heuristics, and a heuristic in a mathematical sense versus an economic sense may be slightly different. So when you talk about pursuing an optimal decision or presenting optimality, what do you mean exactly, please?

Ian Wright: Generally, I think of optimality not in the sense of a mathematician, because to my mind that’s a wonderful notion to focus on if you’re living in the world of mathematics. What we have to live with is the best solution in the circumstances that prevail. So what’s really going on in the world? We need to find out, and then we need to present a solution that is the best we can come up with in these circumstances, one that will alleviate or mitigate most of the issues that we find. That’s the solution we’re seeking, that we want to present.

Conor Doherty: Thank you, Ian. So again, being the best available does not mean perfect in absolute terms. Joannes, anything you want to add to that, or do you agree?

Joannes Vermorel: Yes, to bounce on the characterization of the optimization perspective in mathematics being beautiful, I agree. It is something extremely simple; I can summarize it for the audience. It’s the idea that you take a function that is going to score what you want, and part of the inputs of this function are your variables: what you can decide, what can vary according to your will. Those go into the input, and the function gives you the score. Fundamentally, optimization is seeking the one combination of inputs, the formalization of your decision, that extremizes the result: minimize if you’re trying to diminish your costs, or maximize if you want to maximize your returns.

And the interesting thing is that this simple problem comes with a nice, clean mathematical characterization. You can then say all sorts of interesting things about your inputs and your outputs, about how the function behaves, about what classes of algorithms exist to seek a solution, and about whether, in mathematical terms, under those assumptions, your method is the best it can be, etc. And by the way, this field of research is still named OR; it used to stand for operations research, but nowadays it is really just mathematical optimization. Researchers are not even concerned anymore with whether they are talking about a business problem or not. Their concern is the development of solvers, a class of software designed to perform those optimizations in a mathematical sense.
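The scoring-function picture Joannes describes can be sketched in a few lines of Python. All the numbers below are invented for illustration, and the brute-force search stands in for a real solver; only the framing (variables in, score out, extremum sought) is what the transcript describes:

```python
# A toy "solver": enumerate every candidate decision, score each one with
# a cost function, and keep the input that extremizes (here, minimizes) it.

def total_cost(order_qty, unit_cost=2.0, holding_cost=0.1, demand=80):
    """Score one decision: ordering `order_qty` units against a known demand."""
    purchase = order_qty * unit_cost
    leftover = max(order_qty - demand, 0) * holding_cost  # overstock cost
    lost_sales = max(demand - order_qty, 0) * 5.0         # stockout penalty
    return purchase + leftover + lost_sales

# The "optimization" is the search for the extremizing input combination.
best_qty = min(range(0, 201), key=total_cost)
print(best_qty)  # → 80: with these invented numbers, match demand exactly
```

Real solvers replace the brute-force `min` with far more sophisticated search, but the mathematical framing is exactly this.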

When we think of optimization in mathematics, I think it is the most crystal-clear understanding of what optimization is. But being crystal clear doesn’t mean it’s the most relevant; it just means it’s the purest, as in crystal purity. It doesn’t mean it’s the applicable tool for all situations. When we think of optimization in a business context, what we mean is that we want to improve things, but with a quantitative edge. That’s the difference.

Because I can also improve a business by, for example, having a better culture where people are more dedicated, but it’s almost impossible to quantify anything about that. So when we say optimization, what we mean is to improve with quantitative instruments, and ideally with quantitative results as well. Going back to optimization as you understand it, I would describe it as mostly a process of quantitative improvement. That would be the business perspective of optimization.

Ian Wright: I fully agree with Joannes. One of the things we have to understand, also related to what is optimal, is that there are dimensions involved in the problems we’re looking at, and quite often those dimensions are ignored or left out. Perhaps the most basic of all is the dimension of time.

That has a massive bearing on what you can do with the model, the technique, or the technology, and on what you have to do in operation in the real world and what you’re able to do in those circumstances. And it changes the nature of what you can consider optimal.

Conor Doherty: Well, actually, that’s a perfect phrase: what you’re able to do. And that transitions to what I think is, and I know for Joannes certainly is, a key element of any discussion about optimality or decision-making: the nature of uncertainty when trying to make those decisions.

So in your paper, you talk about uncertainty and the real complexity that exists in supply chain. So could you comment a bit further on the sources of uncertainty that actually influence one’s pursuit of optimality in whichever way one wants to optimize?

Ian Wright: There are lots of flavors of uncertainty, even to the extent that there are flavors you can’t taste because you don’t know they exist. There’s what most people focus on as uncertainty, which to my mind is simply a reflection of the dynamic nature of supply chain operations. They’re just dynamic, so there is uncertainty related to those dynamics, and that is open to quantitative and probabilistic analysis, which I know you guys are very much focused on.

But then you move beyond that to certain areas of uncertainty which move more into the area of risk. There are small risks and extremely large risks, and that’s mirrored by the fact that you go beyond a predictable, or probabilistically predictable, context to the point where you’re actually talking about, as I discussed in the paper, Black Swan events.

You move from the scale of the small-world model, which is predictable and has elements you can predict from data you can acquire quite easily. You then move on to Black Swan events, which can happen, but the ability to predict them is much more remote; in fact, certain Black Swan events you simply can’t predict, you just know that they can happen. And then, even more catastrophically, in lots of circumstances you have what I term in the paper, borrowing a phrase, the unknown unknowns.

The phrase is associated with Donald Rumsfeld, though it wasn’t actually his; someone before him coined it, and he pinched the idea just like I did. That leads you on to the question of how far we really have to go. The unknown unknowns we can’t allow for, and even Black Swan events we can’t necessarily allow for in general operational terms and in planning, but the predictable, that which is based on probability, we can and should allow for.

And what I would also say is that you can move into a different dimension of operation, which I think I talked about in the modeling, where you don’t just look at a solution; you look at a solution comprised of many contingent elements that can be switched to and executed as required. But the focus is to stay as close as possible to what you have termed optimal in your preferred solution.

Conor Doherty: Well, to piggyback on the not-actually-Donald-Rumsfeld quote, other sources of uncertainty that people think are known knowns would be, as you said in the paper, stable demand and predictable supply chains. Joannes, are these known knowns, known unknowns, or unknown unknowns?

Joannes Vermorel: Yeah, I think this typology is nice, but if we go back to the basic instruments we have to do those quantitative analyses, the sort of things developed as part of operations research, the time dimension was absent. The first reason it is absent is super mundane: adding it increases the dimensionality of your problems, and those methods behave very poorly as problems get more complex. They are not very scalable, at least not in the way we refer to scalable solutions nowadays, especially in light of recent developments on the deep learning front.

So the first problem is that we had this super basic problem of scalability, hence no time dimension. And once we start considering the time dimension, the future is not perfectly known, thus we have to deal with variability of some kind. Here that would just be known unknowns; it is a very mild case of uncertainty. It is very much expected that lead times will vary, that demand will vary, etc. Those cases are relatively easy.

And then we enter the territory of what is called stochastic optimization, because suddenly your decision might reveal itself as good or bad depending on future circumstances that you do not control. There are alternative futures where this decision looks good, but there are certainly possible futures where it will reveal itself, in time, as a poor decision. So those are the very mundane problems we have before jumping into the unknown unknowns and all those wild varieties of uncertainty; we have more basic problems still, and that’s where I think this idea of facets is very interesting.

We just do not really know how we should score anything. It’s not obvious. When we say we want to optimize profits, there is an indefinite number of ways to count profits. Should we include the second-order effects, the third-order effects? What do I mean by second-order effects? You give a discount of 10% now, the customer expects the next time they walk into the store to get a similar discount again. This is a second-order effect. You just gave a discount, but it cost you more because you inspired the expectation. So, again, that should be scored.

And then if you do that, your competitor might aggressively decide to compete even more on price, or they might eventually give up on competing altogether, leaving you alone, or at least with fewer competitors. So, you see, all of these are very mundane aspects of the question: what am I quantifying, exactly? These are difficult. I think another facet not really addressed in the classic optimization literature is that it treats problems as if they were well understood right from the start.
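The second-order effect Joannes mentions can be put in back-of-the-envelope numbers. Everything here is invented, including the probability that the customer now expects another deal; the point is only that the induced expectation should be scored alongside the face value of the discount:

```python
# Toy scoring of a discount, with and without the second-order effect
# of teaching the customer to expect the same discount next time.

discount_value = 10.0   # a 10% discount on a 100 unit price
p_expects_again = 0.5   # assumed chance the customer now waits for the next deal

naive_cost = discount_value                         # what the discount "costs" on paper
second_order = p_expects_again * discount_value     # margin expected to be given up next visit
true_cost = naive_cost + second_order

print(true_cost)  # → 15.0: half again as costly as the face value suggests
```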

Conor Doherty: Ian, in your paper, you mentioned a lot of concrete examples of companies that have succeeded or failed in addressing the kinds of uncertainty that we just talked about, be it lead times, erratic demand patterns, whatever. Could you share some more details of these case studies, please?

Ian Wright: Yeah, so a lot of the projects that I work on are more strategic. Some are tactical. I generally don’t really work any longer in the realm of planning for execution. So, most of the examples that I would think of in this regard relate to companies that fail to plan tactically or strategically by not addressing these issues around predictability or lack of predictability.

Just recently, within the last few years, there was an event that I think nobody would say they had predicted. Certainly, I believe no planning systems in any company could have conceived of and incorporated elements of planning that would account for the impact of the pandemic: what happened to stocks, the implications of inventory drawdown, the sudden decrease in demand, and so forth. So many widely spread implications. The classic example is around semiconductors.

My experience was twofold. So many companies coming out of the pandemic, in food manufacturing, and not just pharmaceuticals but medical appliances, the healthcare logistics sector as a whole, suddenly realized that they had to plan for something they hadn’t anticipated. They were working against their intrinsic in-house systems that manage the business and the supply chain, because those systems no longer provided data capable of forming the basis of models to understand what they should do next.

So, I worked on a lot of projects for food manufacturers who were trying to catch up with the immense explosion in demand in places that they didn’t have capacity, and they needed to understand very quickly where that capacity should be placed and why it should be placed there. There were so many fundamental problems with trying to work out how you go about doing that because it was very akin to saying, how do you build a supply chain for a product that doesn’t exist today? How do you plan for that? And then the whole notion of how you then go on to execute is the next stage.

Conor Doherty: Ian, that’s a nice transition then to Joannes. I mean, this is very much your metier, executing solutions to situations that are rife with uncertainty. Any examples of successes or failures with companies when it comes to the kinds of uncertainty we’re discussing?

Joannes Vermorel: Yes, I think, you know, if we go back to the year of lockdowns, 2020, 2021, the interesting thing is that Lokad had, I would say, very nice operational successes, but I think precisely because we were doing optimization.

Let me describe what most companies are doing nowadays through essentially an ocean of spreadsheets. They are not optimizing anything, neither in the mathematical sense nor in the way that we just described. What they are essentially doing is largely reproducing what has been done before. They are pretty much pattern matching their own previous decisions. They are not even really following the demand or anything; they’re just largely reproducing what they have done before, which means that the budget is sliced and diced pretty much the same way it was done last year, that safety stocks are again adjusted minimally compared to what was done last year, etc. So, everything is done incrementally versus the status quo. There is no optimization taking place. We are just mirroring the status quo, steering it a little bit but not quantitatively, a little bit in the direction that seems appropriate.

It kind of works, but that’s the thing: there is no optimization process taking place, which means that if you widely change your operating conditions, you don’t have any mechanism in place to reflect those new conditions. Let me repeat, all your spreadsheets, all your processes in place are designed to replicate what you’ve done before. In contrast, at Lokad, we were having optimization systems in place. What happened when we had unprecedented situations? We pretty much manually injected a massive dose of uncertainty into our models.

We didn’t know what would happen. Demand normally behaves like what we call the shotgun effect: you see a cone of possible demand futures spreading out ahead of you. Well, in a situation like the lockdowns, you just increase the angle of the shotgun so that the future becomes very fuzzy. Same thing for your delays, same thing for your prices; you assume that you suddenly know a lot less about the future. And if you assume that you suddenly know a lot less, you can rerun your optimization logic, that is, stochastic optimization, to get decisions that are more prudent with regard to the risks you face.

You slightly take into account the worst that can happen in terms of delays, prices, demand, etc., and you make your decisions a lot more conservative with regard to those risks, which have quantitatively exploded. My takeaway is that it works. It does work very nicely, but the answer is to have more optimization, not less; although not the static, nothing-is-moving sort of optimization of operations research.
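The "widen the shotgun, rerun the optimization" move can be sketched as a tiny stochastic optimization. The numbers and cost structure below are invented for illustration, not Lokad's actual models; the mechanism is simply that each candidate decision is scored against many sampled futures, and widening the demand distribution shifts the winning decision:

```python
import random

random.seed(42)

def expected_cost(order_qty, demand_mean=100, demand_sigma=10, n=2000):
    """Average the cost of one decision over many sampled demand futures."""
    total = 0.0
    for _ in range(n):
        demand = max(0.0, random.gauss(demand_mean, demand_sigma))
        total += 0.5 * max(order_qty - demand, 0)  # holding cost on leftovers
        total += 3.0 * max(demand - order_qty, 0)  # heavier stockout penalty
    return total / n

def best_order(sigma):
    """Stochastic optimization: pick the decision with the best expected score."""
    return min(range(0, 301), key=lambda q: expected_cost(q, demand_sigma=sigma))

calm = best_order(10)    # ordinary uncertainty
stormy = best_order(40)  # lockdown-style: we suddenly know much less
print(calm, stormy)      # the wider the uncertainty, the larger the buffer
```

Note that "more prudent" depends on the cost asymmetry: with stockouts penalized more heavily, as assumed here, wider uncertainty pushes the buffer up; with the opposite asymmetry (say, perishables), the same rerun would shrink it.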

The second thing, an extra facet that I think was almost never discussed during the age of operations research, roughly 1950 to 1980, is the quality of your instrumentation. How fast can you move from one instance of your model to the next? That’s a really practical, operational thing.

Ian Wright: I think there were practical issues related to that as well, because the technology wasn’t sufficient. There was a lack of data because the technology around it wasn’t sufficient either. But certainly, the technology to enable more rapid execution of planning was just not there. I can tell you that from having watched optimization models run for 24 hours, as opposed to today, when I work with guys and think, “Well, it hasn’t finished, it’s been five minutes already, what should I do?” So, I don’t mean to interrupt you, Joannes, but I think a lot of that was because we have much better technology today.

Joannes Vermorel: I completely agree, and that’s a separate concern, but they are really practical concerns. If you have an optimization technology but rerunning takes 24 hours and you need 20 iterations to converge to something that is relatively satisfying with regard to the new state of your supply chain, it’s never going to happen. People just revert to spreadsheets. There is just no time to go through all those hoops. You go back to your spreadsheets that may not deliver you this sort of optimization, but they will give you an answer at least within a reasonable time frame.

I think that was also the sort of thing where Lokad did well in this period. We had optimization, but we had optimization tools that were sufficiently agile so that they could be tested repeatedly dozens of times per day until we had something that was actually working. Otherwise, our clients would have just given up on the sort of services that Lokad was offering at the time.

Ian Wright: Interesting, because I’ve always struggled with what I call snapshot optimization. Supply chain planning and network models have always been snapshot integer programming; solvers are all snapshots. Given this whole issue of timing, I’ve always struggled with how we could take the benefits of simulation-type approaches, which are able to incorporate the dimension of time a little better, and somehow meld the two.

For example, there’s a simulation company founded in Russia that came up with combining simulation and optimization. I thought that was great at the time. Unfortunately, I’m not very familiar with their implementation of the optimization side of things because they’re a simulation company. The time issue is one thing. The other issue in determining a solution probabilistically is also technological, and one we are better able to face today: the amount of data, the scope of the data, that you can incorporate in deriving the solution.

A lot of things are outside the realm of the corporation or the company or the division that you’re optimizing for and not accounted for when you’ve got a new product or when you’re coming into a brand new world out of a pandemic. The only thing you can rely on quite often is data that has nothing to do with the history of your prior operations. You have to look to a much broader scope of data so that when you’re coming up with probabilities, for example, you need to incorporate exogenous variables in addition to all the traditional variables related to the activity you’re trying to continue.

Joannes Vermorel: Conceptually, yes, although I slightly disagree on this one. The thing is that data beyond transactional data is very costly for companies. Acquiring competitive intelligence data, just scraping the prices of your competitors, is kind of okay, that’s not too costly; but if you go beyond that, it very quickly becomes very complicated.

Our approach is that usually, first, you need to have models where you look at your data in a way that is more informative. An example of that would be you launch a new product, you don’t have any sales history, so the traditional time series perspective says you have nothing. But if you give up on the time series perspective and embrace an alternative vision, you might see that your product launches have a hit-or-miss sort of pattern and that the successes you can expect behave according to some distribution, and the failures as well. So yes, you can use your historical data to say things about the product.

Again, your launches follow patterns. If a no-name studio launches a movie, the odds that this no-name studio will produce a movie that does 1 billion in movie theaters are super low. But if it’s Disney or Warner Brothers, then the odds are maybe something like 5%.

So, first, using the transaction data that companies have, you can usually tell a lot more than what people think because they are entrenched in the time series perspective. There are other ways.
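
The “hit or miss” reading of launch history that Joannes describes can be sketched in a few lines. This is an illustrative toy, not Lokad’s method: the threshold, the sales figures, and the bootstrap scheme are all invented. Past launches are split into hits and misses, and a new launch is forecast as a draw from that two-regime distribution rather than as a time series.

```python
import random

# Historical first-year unit sales for past product launches (invented data).
past_launches = [12, 8, 950, 15, 1100, 9, 20, 870, 11, 14]

HIT_THRESHOLD = 500  # units; an assumed cutoff separating "hits" from "misses"

def launch_prior(history, threshold):
    """Split past launches into hits and misses; return the empirical
    hit probability plus the two conditional outcome pools."""
    hits = [x for x in history if x >= threshold]
    misses = [x for x in history if x < threshold]
    return len(hits) / len(history), hits, misses

def sample_new_launch(history, threshold, rng=random):
    """Draw one plausible outcome for a brand-new product: first sample
    hit-or-miss, then bootstrap from the matching pool."""
    p_hit, hits, misses = launch_prior(history, threshold)
    pool = hits if rng.random() < p_hit else misses
    return rng.choice(pool)

p_hit, hits, misses = launch_prior(past_launches, HIT_THRESHOLD)
samples = [sample_new_launch(past_launches, HIT_THRESHOLD) for _ in range(1000)]
```

Even with zero sales history for the new product itself, the history of prior launches yields a full predictive distribution instead of “nothing”.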

The second thing is that if you admit that you just don’t know, let’s realize that the humans who are going to take those decisions don’t have a secret source of information either. There is no crystal ball inside the human brain that lets you peek into the future, especially when we’re talking about supply chains with tens of thousands of products that you only know by the fact that they exist. Many supply and demand planners would not even know exactly what their company is selling or producing.

So, back to that, I would say, first, we have our transaction data that can be exploited in more ways than meets the eye as soon as you give up on this time series perspective. But then you also have the fact that this extra information is very difficult to get. So maybe what we should instead accept is to have a lot of uncertainty.

The traditional tools do not even accept dealing with uncertainty at all. When I say traditional tools, I mean all the solvers that deliver mathematical optimization on the market. All the established solvers that I know are just deterministic solvers; they cannot deal with uncertainty. We just received on this channel one pioneer, Meinolf Sellmann of InsightOpt, who is trying to establish his prototype stochastic optimizer, Seeker. But that’s really one of a kind, and pretty much the only attempt I know of to pursue this from a commercial perspective.

So back to the case at hand, my take is that if you do not have any instrument to deal with uncertainty under any shape or form, the idea that you will just deal with this situation by just inflating uncertainty and letting it be is not even thinkable. But if you do have those instruments, then it becomes the very natural thing. You try something unprecedented, uncertainty is through the roof, and your optimizer just lets you act accordingly.
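
The contrast between a deterministic solver and one that embraces uncertainty can be made concrete with a toy single-item ordering problem. All the numbers are invented, and the scenario-based “stochastic optimizer” here is a brute-force stand-in, not any commercial solver:

```python
holding_cost = 1.0   # cost per unit of unsold stock (invented)
stockout_cost = 3.0  # cost per unit of unmet demand (invented)

# Equiprobable demand scenarios with a long tail: uncertainty is high.
scenarios = [0, 10, 20, 30, 400]

def expected_cost(order_qty, demands):
    """Average overstock-plus-stockout cost across the demand scenarios."""
    total = sum(holding_cost * max(order_qty - d, 0)
                + stockout_cost * max(d - order_qty, 0) for d in demands)
    return total / len(demands)

# A deterministic solver only sees the point forecast (the mean).
point_forecast = sum(scenarios) / len(scenarios)
deterministic_order = round(point_forecast)

# A stochastic optimizer minimizes expected cost over all scenarios.
stochastic_order = min(range(0, 401), key=lambda q: expected_cost(q, scenarios))
```

The long tail inflates the mean, so the deterministic answer over-orders; the scenario-based answer hedges and ends up with a strictly lower expected cost.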

Ian Wright: I think where we are getting out of alignment here is that there’s a difference in focus between planning strategically and planning closer to execution, where options diminish dramatically. I’m coming from a predominantly strategic planning sphere. When you say, for example, that a lot of this additional-scope data for a new product is expensive, that may be, but there are lots and lots of different kinds of data that you can deploy in modeling before you get to optimization.

You can model the correlation between lots of different exogenous aspects of economic data and demographic data related to the kind of product and market that you want to serve that product to. That’s where I’m coming from, Joannes, when I talk about adding more data elements. I’m talking about looking at correlation with what is reasonably accessible data generally related to demographics and market penetration.

Another aspect of this, which I think is ultimately what we should always think about as technology providers and practitioners in this field, is that businesses are ultimately about finance. A major element of what we have to do in planning is to boil it down to cost and the minimization of cost, depending upon the circumstances. Cost data, to my mind, has been inadequately employed in, for example, network models for supply chain optimization. People have been happy to accept assumptions around cost as they put costs into models, as opposed to going out and actually finding much more concrete expectations around cost, which is very doable. With the technology we now have, that is much riper for focus: understanding what we can do with the data we bring in to capture more of the context we’re working in.

Conor Doherty: It’s a perfect point to push forward a little bit, because once you have all the data, you eventually have to arrive at a decision. Something that you talk about in the paper as well is the role of trade-offs in making those decisions. Once you have your model and all the data, you are still presented with a number of possible decisions, genuine decision optionality. How do trade-offs fit into the pursuit of the optimal decision?

Ian Wright: I’ll make one point quickly. You never have all the data. You have the data that you’ve got, obviously, but it’s always flawed. So you have to work with what you’ve got. I’m a cynic at heart, you can tell, right? As far as trade-offs, there are the obvious trade-offs in supply chain. Your trade-off is basically financials. Do I want to spend the money to provide the service and the product that my customer wants? I want to provide the product in the way that the customer wants me to provide it, and that means I have to spend money to do it. How far down that path am I willing to go?

The trade-off is inventory versus transportation cost, for example, as a basic one. But there are trade-offs related to how many contingencies do I put in place to mitigate risk? How many potential operational paths do I create for my business so that I can execute a probabilistic plan which comes up with something that isn’t my normal path of execution? A trade-off is, do I look at short-term implications around the models that I pursue and the plans that I put in place, or do I involve the long-term, which can quite often mean a financial trade-off because I’m investing now for something that’s not going to happen until a later period?

Trade-offs, to me, are kind of a euphemism for “I’ve got to get the money right.” How do I balance all those things? I’m not sure if I’m answering your question, Conor, but it boils down to this: what am I willing to balance in my model, given that I know I’m limited in the way I can scope it? What am I willing to balance to get that dollar sign or that euro sign in the right place?
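
The inventory-versus-transportation trade-off Ian mentions can be expressed as a tiny cost model (all figures invented): shipping more often raises freight cost but lowers average cycle stock, and the balance point minimizes the total.

```python
annual_demand = 12_000      # units per year (invented)
cost_per_shipment = 400.0   # fixed freight cost per shipment (invented)
holding_cost = 2.5          # cost per unit held per year (invented)

def total_cost(shipments_per_year):
    """Freight plus average cycle-stock holding cost for a given frequency."""
    lot_size = annual_demand / shipments_per_year
    transport = shipments_per_year * cost_per_shipment
    inventory = (lot_size / 2) * holding_cost  # average on-hand stock
    return transport + inventory

# Scan shipment frequencies: more shipments = more freight, less stock.
best_frequency = min(range(1, 365), key=total_cost)
```

This is the classic lot-sizing balance; the point of the discussion is that real trade-offs pile many more such terms, risk mitigations, and horizons onto the same dollar scale.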

Conor Doherty: Thank you, Ian. And Joannes, I’ll come to you now, because again I’m basically teeing you up for something I know you like to talk about. Ian made the point that, at base, what people are explicitly trying to optimize for is in fact cost or finances. But a lot of the time when we talk about decision-making in supply chain, companies are trying to optimize things like service levels. I think you’ve made the point before that while people believe they’re optimizing cost, what they actually chase are numerical artifacts. So the question is: when people focus on those traditional goals in supply chain, are they in fact optimizing for cost, or are they looking in the wrong direction?

Joannes Vermorel: If we look at the dominant practices of supply chain nowadays, on PowerPoint slides they would say that they focus on what is economically viable. In practice, they do not. It’s percentages all the way: service levels, inventory turns, and whatnot. Those things are correlated to your bottom line, but only loosely.

To assume that your profitability is correlated in any shape or form to your service levels is just insane. It does not work; it’s a very simplistic take. People know intuitively that they cannot convince anybody by saying they want to optimize percentages. So in the slides they will say, “We optimize those dollars,” but in practice, in their software systems, they have rules that are not aligned in any shape or form with those dollar modelizations. All the ones that I’ve seen in the wild, putting Lokad aside, were strictly non-financial, non-economic perspectives.

Now, if we get to an economic perspective where we start to have those dollars, I completely agree on the front that it is very difficult to get it right. It is difficult, and in fact, you have plenty of horror stories very frequently told in Hollywood movies where the finance guy is the bad guy who is doing incredibly stupid short-term thinking at the expense of something that would be a little bit further away into the future.

The financial perspective has a bad rap, and indeed, the sort of perspective that operations research emphasized 40 years ago was a very simplistic take. They were really going for a very small number of basic variables: cost of stock, cost of this, cost of that. And bam, job’s done, let the magic operate, and the optimal solution pops out of the model.

At Lokad, we noticed that and realized that we had a real problem, which is how do we get to the knowledge of whether our scoring function, our economic scoring function, the one that is counting the dollars, is telling an approximate version of the truth that is good enough. It is a very difficult question, and what we discovered was a methodology documented in my series of supply chain lectures called experimental optimization.

The way you know that your economic model is correct is when it generates sane decisions. It’s very strange. In the end, people were thinking you need to have the correct scoring metric so that it gives you the optimal decisions. What we do is pretty much the opposite. We generate the decisions, and then out of those generated decisions that have been extremized according to this metric, we look at whether they’re sane or not.

When we see obviously dysfunctional decisions that are blatantly insane, very frequently we come back to the economic modelization and realize something is wrong, something we missed. So we have this very iterative process where we pick our dollars, we optimize, we get decisions, some of them are insane, we revise the way we count the dollars, and rinse and repeat.

With many iterations, we finally converge on something where nobody has any doubts anymore. That’s what we call the zero-insanity principle. We want to converge to a setup where the system does not generate any lines that are patently insane right out of the box. That is the point we at Lokad believe must be reached before going to production.
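
The iterative loop described, optimize, spot insane decisions, revise the dollar model, repeat, can be sketched as follows. This is a schematic toy, not Lokad’s implementation; the sanity rule, the scoring functions, and the SKUs are placeholders:

```python
def optimize(score, candidates):
    """Pick, per SKU, the candidate decision maximizing the economic score."""
    return {sku: max(options, key=lambda q: score(sku, q))
            for sku, options in candidates.items()}

def sanity_checks(decisions):
    """Flag decisions a practitioner would call absurd (placeholder rule)."""
    return {sku: q for sku, q in decisions.items() if q < 0 or q > 10_000}

def experimental_optimization(score, candidates, revise, max_rounds=10):
    """Optimize, inspect for insanity, revise the dollar model, repeat."""
    for _ in range(max_rounds):
        decisions = optimize(score, candidates)
        insane = sanity_checks(decisions)
        if not insane:  # zero-insanity principle: ready for production
            return decisions
        score = revise(score, insane)  # fix the model, not the outputs
    raise RuntimeError("economic model still generates insane decisions")

# Toy walkthrough: a buggy score with no holding cost says "more is better";
# the revision reintroduces the missing cost term.
candidates = {"sku-1": range(0, 20_001, 100), "sku-2": range(0, 20_001, 100)}
buggy_score = lambda sku, q: 1.0 * q
revise = lambda score, insane: (lambda sku, q: 1.0 * q - 0.0025 * q * q)
final = experimental_optimization(buggy_score, candidates, revise)
```

Note the inversion the speakers describe: the scoring function is the output of the loop, not its input.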

But you see, the point is that we completely reverse the sort of perspective that operations research had. Instead of saying that the scoring function is a given, it is something that we are going to discover through an incremental process. It’s very strange because that goes very much against, at least for French people, this Cartesian perspective of bottom-up thinking and applying principles and just unrolling them. It is a much more empirical sort of process.

Ian Wright: I have to confess, and I apologize for this, but I have to confess my relative ignorance of Lokad. But I’m very intrigued by your definition of sanity in the context that you’re talking about. What constitutes a sane decision?

Joannes Vermorel: Ian, to give an example that I gave in my series of lectures, I’ll start with an analogy and then we’ll go back to supply chain. There are classes of problems where if you want to solve the general problem, it is impossibly hard, but particular instances are very easy.

An example of that would be, let’s say I give you a movie to watch and tell you this is about a Roman gladiator or whatever, and I ask you to spot if there are things that are completely out of context with regard to the historical time period, like a plane in the background. There is a famous movie where they are fighting in the arena and there is a plane in the sky in the background.

If I ask you to find a general algorithm to tell me all the things that can go wrong in a movie that do not reflect the age or time period, it is a completely daunting task. You would need an encyclopedia of all the stuff that was not invented, even the terms, the mood, the attitude, the sort of thinking. It is just an impossibly complicated problem. But in practice, if you put an intern watching the tape, they will tell you, “Oh, there is a plane here, it’s bad.” I can’t give you the list of all the things that are bad, but I can spot this piece of insanity.

Supply chain systems are very much like that. It is very difficult to give you a general rule to establish exactly what counts as insane or not. That’s a problem of general intelligence, not something you can just condense into a simple algorithm. But it turns out that people are actually pretty good at spotting those problems.

An example would be, you have a series of stockouts in your historical data, they are not properly factored in, and suddenly your estimation of the future demand drops to zero because you had stockouts, so you didn’t sell, and your model stupidly forecasts zero. Then you end up suggesting zero replenishment as a good policy. It says, “What is your target stock level? Zero, because we observed very little demand, so let’s keep it at zero.”

If you start thinking about that, yes, my forecast is going to be 100% accurate because I’m forecasting zero, I’m replenishing zero, and all is well. But no, it is not well. This problem is called an inventory freeze. This is a piece of insanity, and you have plenty of situations like that where when you look at decisions, you can identify things that are dysfunctional, where numbers are implausibly high or low, or things just don’t make sense.
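
The inventory-freeze mechanism just described is easy to reproduce with a toy example (invented data): stockout days record zero sales, a naive average treats those zeros as demand, and the estimate collapses toward zero, while excluding the censored days avoids the freeze.

```python
# Daily sales history as (units_sold, was_in_stock) pairs; stockout days
# necessarily record zero sales even though demand existed.
history = [(10, True), (12, True), (0, False), (0, False),
           (11, True), (0, False), (9, True), (0, False)]

# Naive estimate: average raw sales, censored zeros included.
naive_forecast = sum(units for units, _ in history) / len(history)

# Corrected estimate: average sales over in-stock days only.
in_stock_sales = [units for units, in_stock in history if in_stock]
corrected_forecast = sum(in_stock_sales) / len(in_stock_sales)

print(naive_forecast, corrected_forecast)  # 5.25 vs 10.5
```

Feed the naive 5.25 (or, after a long freeze, 0) into a replenishment policy and the system happily keeps the shelf empty, with a perfectly “accurate” zero forecast.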

An example we had historically at Lokad, for one of our first aviation clients, we started to look at inventory replenishment and suggested buying some parts. The client came back to us and said, “Oh no, we are not going to buy those parts. Those parts will go on a Boeing 747, and 10 years from now there won’t be any Boeing 747s flying above Europe. Those parts have a life expectancy of four decades, so if we buy them now, we are only going to use them for 10 years, and then those aircraft will be gone.”

That was something obvious that we forgot to take into account: the usefulness of a part cannot exceed the lifetime of the aircraft it serves. Depending on the vertical, reality will give you an endless stream of such manifestations of insanity hitting you in the face. Although I cannot give you a general rule or an algorithm to detect them, in practice it works very nicely because people can spot those things.

Ian Wright: We are now violently on the same page, strangely enough, because I know we want to discuss some things coming up. My major premise in my career, in terms of having worked with all this technology and thrusting technology into the victim’s company, has always been you cannot exclude the human. You have to account for and utilize the human in the process of deploying and using the technology.

Because right now, and for my foreseeable future, we don’t have technology that can replace many of the aspects of the human that you’re talking about, in terms of recognition of the absurd, for example, or recognition of the insane. It just doesn’t exist yet. The only way it will come to exist is by somehow trying to incorporate aspects of the human in the process. Today, it is just not feasible.

Joannes Vermorel: Yes, I would agree with you. There are two angles from which I would like to respond to your comments. First, sometimes insane decisions can only be known to be insane after the fact. You have to make the mistake to realize that something unexpected happened and it was bad. More than a human, what you need is information coming back from the world; you need real-world feedback. So it’s not purely a matter of high-level intelligence. Even if we had an artificial intelligence just as intelligent as a human, there are limits. To some extent, the only way you know the world is by giving yourself some leeway to experiment. That would be the first angle.

The second one is on the role of people. The way my peers have engineered systems is that they use humans as co-processors. Your system generates decisions, numbers, allocations of resources, and whatnot. Then you have all those lines that are insane, and you expect to have an army of clerks to manually step in and fix all of that. For the audience, all the systems that have alerts and exceptions are doing just that. Alerts and exceptions are just another way to say we have human co-processors who are going to process the stuff that my system does not process.

My problem with that is that people are quite expensive. That is the real cost. The way I see it, it’s not a very good use of their time, because those human co-processors endlessly cycle through the same nonsense of the same alerts and exceptions.

That’s why at Lokad we look at it in a completely different way. We say that whenever a piece of nonsense is detected, like an alert or an exception, someone at Lokad, the supply chain scientist, needs to step in and adjust the implementation of whatever is doing the predictive optimization so that this problem does not happen again. No exceptions. Every single piece of nonsense that is reported is assessed: is it an actual piece of nonsense, or a very clever optimization? If it is indeed nonsense, then the optimization logic itself must be fixed. You do not want the same employee reporting the same problem again the next day.

Ian Wright: I think we’re still on the same page, certainly on the same chapter. I’m coming more from a strategic and tactical scope, where I’m not worried about going out and looking at a room full of Big Brother people on computer screens correcting things. I’m talking about what’s necessary in the deployment of operations in a strategic or tactical sense. It means involving the experienced stakeholder to maintain the sanity in the direction you’re taking and the solutions you’re driving.

When it comes to the whole idea of where I think you are framing your argument, Joannes, as we move forward with the kind of technology you are developing and have developed, and with the general move towards more capability in AI terms, the ability for a system to self-correct in an event management context is going to become more feasible. We will move away from the expensive room of human computer operators. But it isn’t today, so you have to work within the constraints of the capabilities you have at the time.

Conor Doherty: If I may, just because it sounds as if Ian, you were commenting more on the role of the human in the strategic sense, and Joannes, you seem to be commenting more about the decision-making in the mundane day-to-day sense. Are these non-overlapping magisteria?

Joannes Vermorel: That’s because, you see, my perspective, and maybe it’s a little strange, is that if we are going into the realm of strategic consideration, then your focus on operating a supply chain should be very much on how do I engineer the machinery that generates the proper decisions. People think there are strategic decisions, tactical decisions, and whatever. My take is that you have decisions that are repeatable. Some are repeated every day, some every hour, some every month, some once a year. When it comes to mechanization, you want to mechanize everything that is reasonably repeated frequently enough. You let yourself deal with the other ones in a completely ad-hoc fashion.

The strategy, if you start thinking along this approach, is not so much about deciding something at a certain level and then letting other layers of your organization do their thing at other scopes. It is more that the strategic vision is about this: what do I do so that, from the engineering culture of my company, emerge the mechanized decision-making processes that really improve my bottom line? That’s a completely different way to think about strategy.

Ian Wright: Completely in agreement with you. The way I’ve often viewed it is the role of the architect in designing the concept of a building, and then it’s handed over to the engineer who says how it will be put together, and then to construction who actually put it together, and then to people who work in and maintain the building. At all those levels, the architect shouldn’t be putting something together that can’t be engineered, built, or maintained. That’s my high-level analogy of the process we’re involved in.

In supply chain, though, it’s a little different because you might create a strategy today, but you have to do the same again next year. The issue with supply chain is it’s dynamic and adaptive. We have to respond to the changing world and its needs. You repeat your strategy process, but you have to do it in a feasible manner that’s pragmatic and will allow you to implement an operable solution.

Joannes Vermorel: Just to give you some perspective, during the lockdowns in 2020 and 2021, we had quite a series of clients, more than a dozen, whose white-collar workers went away for 14 months. Lokad was left alone taking all the supply chain decisions for companies where the blue-collar workforce was still operating. The white-collar workforce was on government-subsidized leave. They were paid, but European governments also stipulated that these people must not work, even from home, otherwise the subsidies would not be paid. So, they were effectively on leave.

For a dozen clients, we operated over a billion euros’ worth of inventory entirely on our own for 14 months. That represented over a thousand employees in aggregate. And that really raises the question of what those supposedly strategic supply chain processes are delivering.

When I look at most S&OP meetings, you will have lengthy discussions to decide how much budget to allocate for purchasing across various departments. All of that can be replaced by a formula. If we disagree with the formula because it gives insane results, then we fix the formula. But we don’t need a meeting of 12 directors, with all the expense that entails, to get to this budget calculation. It can be automated.
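
The kind of formula Joannes alludes to might look like the following sketch: a purchasing budget split across departments in proportion to their expected return per dollar. The figures and the proportional rule are illustrative assumptions, not an actual Lokad recipe.

```python
total_budget = 1_000_000.0  # annual purchasing budget (invented)

# Expected gross margin generated per dollar of purchasing, per department
# (invented figures; a real model would use diminishing marginal returns).
return_per_dollar = {"electronics": 1.8, "apparel": 1.3, "grocery": 1.1}

def allocate(budget, returns):
    """Split the budget in proportion to each department's return per dollar."""
    weight = sum(returns.values())
    return {dept: budget * r / weight for dept, r in returns.items()}

allocation = allocate(total_budget, return_per_dollar)
```

When the output looks insane, the remedy is to fix the formula’s inputs or shape, not to reconvene the 12 directors.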

In terms of strategy, the question would be, how do I make sure that the engineering that goes into this formula that allocates my top-level resources is done in a way that is aligned with the interests of my company? That’s a very interesting problem and yes, this should capture the interest of the management who want to think strategically. The idea of cherry-picking a few decisions and saying, “I am going to be involved in that,” is not really adding much value.

In a lot of companies, what is happening in those supposedly strategic meetings is a lot of time being wasted. Yes, they do generate decisions, but with a productivity that is absolutely abysmal. I think we had a previous guest discussing S&OP, and he was telling me they were usually ending up with like four decisions per hour.

Conor Doherty: That was Eric Wilson, yes, in an S&OP process.

Joannes Vermorel: Yes, and I was thinking, okay, we just have hundreds of thousands of decisions to come up with, and now we have a pace of four decisions an hour. It is obvious that when you have this sort of situation, the operations are always going to be well ahead of your plans.

By the time you come up with your decisions, they are completely obsolete, and people have done something else because they could not wait so long for those decisions. We end up in this sort of situation where it’s more like a masquerade. People take strategic decisions for stuff that already happened like two years before.

Conor Doherty: Well, that interests me. Just to tee you up for the follow-up, because I know in the paper you talked about more decentralized supply chain decision-making, and you gave the example of Walmart.

You can describe it better than I can.

Ian Wright: Doing that properly means you decentralize the decision, but the decision-making still takes place in a context that has been designed effectively and properly, so that you’re not moving far away from the corporate central strategy. There’s almost an escalator of strategy down to operations.

In that case, we’re talking about the decentralization of what I would term more tactical decisions. But it all comes back to your point, Joannes; I don’t disagree with you at all. What we’re talking about is people not only working in silos within organizations, but also planning and functioning in silos. The supply chain guys go off and do their strategic supply chain plan, then they think about the transportation plan, and then the warehouse plan.

All these plans are interdependent and unfortunately quite often executed independently. We can’t basically come up with an optimal strategic supply chain solution unless we incorporate a network plan, a transportation plan, and an inventory plan in an operating model.

The whole situation of Lokad operating without the white-collar folks in the building is, to me, a great example of having an operating model that lets you sustain operations, despite disruptions, without moving too far from the plan you set six months earlier. You have brought together the right people and the right processes, and you’ve got the technology and programs in place to support that execution.

That matters to me much more than this whole idea of getting optimality right. You can have an optimal plan, but you need to be able to execute it and maintain it as closely as you possibly can. You need that operating model in place, and I go beyond the traditional people, process, and technology. That is really your strategic corporate plan, and all these other strategic plans around supply chain have to work within its context. If you don’t match the operating model that you have with the plans that you’re coming up with, that is going to be a recipe for disaster.

Conor Doherty: Ian, if I can summarize that in a quote, you said earlier that you cannot exclude humans. So, Joannes, do you agree that you cannot exclude the human, particularly in the strategic decision-making that Ian’s talking about? Is that something that could be absorbed into an automated framework that you’ve already applied to the more mundane day-to-day running of business?

Joannes Vermorel: On the question of whether we have artificial general intelligence, we don’t. We are getting closer, admittedly. LLMs exhibit sparks of general intelligence, but just sparks. So Lokad certainly has no claim of having software so sophisticated that it can bypass the need for the human mind. In fact, at the core of our practice, we have what we call supply chain scientists, engineers coding the numerical recipes. That’s a very human thing that we are not yet delegating to machines.

Although algorithms can help us code faster, with autocomplete and whatnot, the real question is this: when you have human intelligences, are they put to tasks where their general intelligence really adds value, as opposed to acting like pattern matchers doing something that could be mechanized?

My counterargument would be that many companies, especially those operating supply chains, are not making very good use of the white-collars they have. They are still pretty much in the mindset of having hordes of corporate clerks who are going through a process, and compliance to the process is their goal.

I see a lot of those companies operating supply chains treating the bulk of their white-collars exactly like they treat their blue-collars. There is a process, and adherence to the process is defined as excellence.

For blue-collars, that’s clear, that’s what you want. But if we go into the territory of white-collars, that becomes very strange because information is orders of magnitude easier to mechanize than the real world.

Dealing with physical stuff, for example, if you want to have a robot that will be able to weld in all situations, that’s extremely difficult. Just moving a hand, holding a tool, supporting something heavy, and being in an environment with dust or contaminants, we are talking about extremely advanced robotics just to be able to do something that someone could do within a few months of training.

Now, if we go into this world of information, you know, on paper, the constraints are not nearly as demanding. We can move gigabytes of data around with no problem. People who are doing those white-collar jobs are already working with computer systems. All the information they get is through a computer, and all the information they produce is already entered into a computer. So, we have a framework that is already entirely digital.

What I’m saying is that companies are using most of their white-collar workers as co-processors. The computer’s processor does what it can with the software we have, and then someone in the middle fills the gaps. But are we really using the intelligence of those people? My argument is no. If there is a question of strategic importance, it is to make sure that all the white-collar workers are contributing to things where only general intelligence can deliver. If something does not need general intelligence, it should be mechanized.

Ian Wright: I agree. Your focus on the mechanistic approach is what defines what belongs to automation and what you need the human for. The moment where the human really supplies value, and as you say, Joannes, this probably isn’t deployed correctly, is in the intuitive areas where you cannot mechanistically provide input. For example, realizing that an aircraft will be obsolete in 10 years and asking why we would buy the part at all. That’s something you can’t mechanistically construct.

Where you need the human is where they need to provide an organic kind of input to a problem or a situation, whether it’s event management or supply chain operational management. You can have diagnostic mechanisms relatively easily. One area that is still ripe for work is in the employment of feedback loops that help generate proactive solutions within a mechanistic context. This includes the accumulation of information from a wider variety of data origins into that proactive mechanistic operational management. But you can’t beat the intuitive side of things. There’s an emergent aspect in what a human brings to a context where they’re trying to look at a problem or, more importantly, anticipate a problem.

Joannes Vermorel: I very much agree. Here, I would blame the time series perspective. The mainstream practice of supply chains nowadays is all around time series. But if you look at companies that are very good at what they do, they are very good at doing something intelligent with the feedback they get, like Amazon. Amazon is very smartly using the feedback from its customers to sort out most of their supply chain and logistic issues systematically.

If a delivery guy is routinely flagged for losing parcels, Amazon will stop using this provider and switch to another. If a vendor causes problems, they will kick the vendor. They make reasonable use of the feedback data they collect. They need humans to imagine what sort of feedback they can collect and engineers to craft the numerical recipes that decide when to kick a vendor or notify a logistic provider.

They likely do smart optimization, such as noticing that a transporter is reliable under certain conditions but not others, and using this transporter only under those settings. This requires a vision about what is relevant data, not just time series about demand. It requires an engineering mindset to provide deep fixes for problems, not just firefighting emergencies. Most companies go from one emergency to another, consuming all their bandwidth and preventing improvement. Amazon, on the other hand, engineers deep fixes for any situation they encounter, eliminating classes of problems and moving on to the next.
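The feedback-driven rule described here, dropping a logistics provider once its failure rate crosses a threshold, conditioned on the setting in which it operates, can be sketched as a small numerical recipe. The function name, data shape, and thresholds below are illustrative assumptions for the sake of the example, not Amazon's actual logic.

```python
from collections import defaultdict

def flag_carriers(deliveries, max_loss_rate=0.02, min_samples=50):
    """Flag (carrier, lane) pairs whose parcel-loss rate exceeds a threshold.

    deliveries: iterable of (carrier, lane, lost) tuples, with lost a bool.
    Returns the set of (carrier, lane) pairs to stop using.
    The thresholds are illustrative, not real operational parameters.
    """
    totals = defaultdict(int)   # shipments observed per (carrier, lane)
    losses = defaultdict(int)   # lost parcels per (carrier, lane)
    for carrier, lane, lost in deliveries:
        key = (carrier, lane)
        totals[key] += 1
        if lost:
            losses[key] += 1
    # Only flag pairs with enough observations to make the rate meaningful.
    return {
        key
        for key, n in totals.items()
        if n >= min_samples and losses[key] / n > max_loss_rate
    }
```

Note that the rule conditions on the lane: a carrier stays in use where it is reliable and is suspended only where it is not, which is exactly the per-condition optimization described above.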

Ian Wright: Unfortunately, that comes back to finances. If you’ve got the deep pockets to have the kind of thought process you’re alluding to, that’s one thing. But most supply chain managers don’t work in an environment where they’re flush with cash to address problems in that fashion. They’re playing catch-up, putting out fires, and in a vicious circle.

If you get an opportunity as a practitioner to work on a strategic project, don’t put the model first. Understand the supply chain manager’s world as it exists today, then think as though you’re Amazon and work out how that supply chain manager’s world could work so they aren’t playing catch-up. Unfortunately, most supply chain managers pursue strategic projects in the same manner as their day-to-day work, which is just another fire to put out. People from both sides don’t approach it right, but it could be approached differently by thinking differently about the role.

Conor Doherty: Gentlemen, I’m mindful of time, so I want to come back to you, Ian, and ask about practical optimality. As a means of steering us towards a conclusion, what are the practical steps people can take in the pursuit of optimality?

Ian Wright: Again, I’m coming at it from the strategic end, not being the guy on the shop floor trying to get the product in the hands of the customer. What you have to do in looking at optimality is think about it with the view of how that execution will actually take place. Make sure you’re focused on bringing a feasible, operable solution to the table, one that fits the way the company operates today.

If you have the capability and the freedom, come up with a solution that achieves optimality in a context that can be executed optimally. Understand the stakeholders’ true objectives, the sponsors’ true objectives, and the company’s true objectives, not just their observed or espoused objectives. To the extent they’re willing to listen, try to produce a solution in that vein. At all times, make sure you’re working with humans, not just the model.

Conor Doherty: Thank you. Joannes, anything to add to that?

Joannes Vermorel: No, I think it’s a good point. From a software vendor perspective, I would say when it comes to optimality, do not trust software vendors too much. Yes, obviously, except us. In particular, take into account that there are classes of software, like systems of record and systems of report, that do not deal in decisions and thus cannot address optimization at all.

Systems of record, like ERP, CRM, and WMS, and systems of report, like business intelligence, are frequently advertised as delivering optimized decisions. By design, these classes of software do not even touch the problem; they don’t optimize in the first place. So my message would be: do not try to find your path to optimality in your next ERP upgrade. By definition, an ERP is a system of record. It does not deal with decisions, and it cares even less about whether those decisions are optimal in any shape or form.

Conor Doherty: I’ll make sure to include that really nice little article, well, a short article is what I meant. In it, you talk about systems of record, systems of report, and systems of intelligence. But it is customary here to give the final word to the guest. So, if there’s anything else you want to mention, or something we didn’t cover, you can close uninterrupted.

Ian Wright: Yeah, I like that: from a software vendor, do not trust software vendors. I really like that because, over 40 years, one of the things that has bugged me is the extent of the hype I’ve witnessed around technology. The whole idea of a supply chain has itself, for the longest time, been to my mind a type of hype. And I’ve actually written on this, Conor, which won’t surprise you. But I think what we have to do is learn to live in a world where you know how to work your way through the hype, work your way through the weeds, and understand what really works. That’s the key: what is real.

Conor Doherty: Well, on that note, I shall say I have no more questions. Joannes, thank you for your time. Ian, thank you very much for joining us.

Ian Wright: Thank you, guys. It was a privilege to be invited, and I really look forward to learning more about Lokad, and to finding out whether I’m insane or not. That’s the key.

Joannes Vermorel: Yes, one of the keys. We’ll send you a diagnostic.

Conor Doherty: Thank you, and thank you for watching. We’ll see you next time.