00:00:00 Introduction of the interview
00:00:42 Meinolf Sellmann’s career and InsideOpt’s decision-making
00:03:47 Disruptions and over-optimism in optimization
00:06:18 Vermorel’s stochastic optimization discovery and influences
00:08:10 E-commerce fulfillment and supply chain forecasting
00:09:56 ‘Predict then optimize’ approach and consequences
00:11:41 Operational results improvement and business costs
00:14:08 Unpredictability and chaos in supply chain
00:16:16 Appeal of forecasting and rational decision making
00:18:43 Rational decision making and overbooking game
00:21:55 Supermarket product example and supply availability
00:24:27 Stochastic optimization and seasonal sales variability
00:28:53 Price changes impact and joint posterior distribution
00:30:39 Problem-solving heuristics and tackling complexity
00:33:10 Challenges with perishable goods and posterior distribution
00:36:01 Reasoning difficulties and creating solution awareness
00:38:40 Coffee Roastery problem and production planning
00:42:20 Business modelization and complex variables reality
00:45:34 Ignored concerns in optimization and silver bullet seeking
00:49:00 CEO advice and understanding business processes
00:51:58 Warehouse capacity and supplier delivery uncertainty
00:54:38 Perception of service level and briefing exercise
00:57:33 Airlines’ financial losses and technology adoption
01:00:10 AI-based search benefits and hardware compatibility
01:03:05 Convexity in optimization and usefulness over proof
01:06:06 Convergence of machine learning with optimization techniques
01:09:34 Runtime features and broadening search horizon
01:12:22 Micro adjustments and warehouse operation risks
01:16:09 Finding good compromise and insurance against uncertainty
01:19:11 Expected profit rise with stochastic optimization
01:22:23 Aerospace industry example
01:24:30 Accepting good decisions and damage control
01:25:19 Supply chain efficiency
01:26:22 Client feedback and importance of technology
01:26:56 End of interview

About the guest

Dr. Meinolf Sellmann is founder and CTO at InsideOpt, a US-based startup that produces general-purpose software for automating decision-making under uncertainty. He is the former Director for Network Optimization at Shopify, Lab Director for the Machine Learning and Knowledge Representation Labs at General Electric’s Global Research Center, Senior Manager for Cognitive Computing at IBM Research, and Assistant Professor for Computer Science at Brown University. Meinolf architected systems like the trade-settlement system of the ECB, which handles over 1 trillion Euros per night, has published over 80 articles in international conferences and journals, holds six patents, and has won over 22 first prizes at international programming competitions.


In a recent LokadTV interview, Conor Doherty, Joannes Vermorel, and guest Meinolf Sellmann discussed the role of stochastic optimization in supply chain management. They highlighted the importance of considering variability and uncertainty in decision-making processes. Traditional deterministic methods often fall short in real-world scenarios, leading to over-optimistic optimization plans. Both Vermorel and Sellmann criticized the “predict then optimize” approach, suggesting that companies can achieve better results by accounting for forecast variability during optimization. They emphasized the need for executable plans and measurable effectiveness in any optimization model.

Extended Summary

In a recent interview hosted by Conor Doherty, Head of Communication at Lokad, Dr. Meinolf Sellmann, CTO at InsideOpt, and Joannes Vermorel, CEO of Lokad, discussed the complexities of decision-making under uncertainty in supply chain management. The conversation revolved around the concept of stochastic optimization, a method that takes into account the inherent variability and unpredictability in supply chain processes.

Dr. Sellmann, an award-winning computer scientist and AI researcher, began by sharing his professional journey through IBM, GE, Shopify, and now InsideOpt. He highlighted how machine learning has increasingly become a part of his work, and how traditional optimization methods, which are deterministic, often fall short in real-world scenarios. He emphasized that decision-making under uncertainty is a necessary aspect of supply chain management, and this is the focus at InsideOpt.

Using the airline industry as an example, Dr. Sellmann illustrated the challenges of optimization under uncertainty. He explained that while optimization plans may look great on paper, they often fail in practice due to unforeseen circumstances. This leads to the realization that optimization suffers from over-optimism.

Vermorel agreed with Dr. Sellmann’s perspective, sharing his own experience discovering the concept of stochastic optimization. He noted how the idea of uncertainty is often missing in traditional optimization literature. Vermorel also discussed the idea of mastering the future to remove uncertainty, a concept that has been appealing for nearly a century. He mentioned the Soviet Union’s attempt to forecast and price 30 million products five years ahead, which was a failure. Despite this, the idea still appeals to academics and certain types of management due to its top-down approach.

Dr. Sellmann criticized the traditional “predict then optimize” approach, where one department makes a forecast and another uses that forecast for optimization. He argued that this approach ignores the variability in the forecast and suggested that companies can achieve significantly better operational results by taking the variability in the forecast into account during optimization.

Vermorel used the example of airline overbooking to illustrate the nonlinearity of certain problems, where slight deviations can quickly escalate into significant issues. Dr. Sellmann used the example of a supermarket selling butter and sunscreen kits to illustrate the importance of variability in demand. He argued that it’s crucial to have the entire supply available at the right time, especially for seasonal products like sunscreen.

The conversation also touched upon the disconnect between common sense and the use of software in supply chain management, the importance of forecasting potential scenarios for all products, and the complexities of production planning. Dr. Sellmann explained that while perfect accuracy would be ideal, it’s not possible due to inherent uncertainties in forecasting. Instead, the next best thing is to learn how forecasts err and use that information to make better decisions.

In conclusion, the interview highlighted the importance of stochastic optimization in supply chain management. Both Dr. Sellmann and Vermorel emphasized the need to account for variability and uncertainty in forecasts when making decisions, and the importance of not oversimplifying models. They suggested that any optimization model can be thought of as a simulation of what would happen under certain conditions, and it’s crucial to ensure that the plan is executable and its effectiveness can be measured.

Full Transcript

Conor Doherty: Welcome back. Uncertainty and stochasticity are the very nature of supply chain. Today’s guest, Dr. Meinolf Sellmann, is no stranger to this. He’s an award-winning computer scientist, a decorated AI researcher, and he’s the CTO at InsideOpt. Today, he’s going to talk to Joannes and me about decision-making under uncertainty. Meinolf, you’re very welcome to Lokad.

Meinolf Sellmann: Thank you so much, Conor, and very nice meeting you, Joannes. I look forward to the discussion.

Conor Doherty: Well, good and thank you very much for joining us. Apologies for the brief intro. I do like to get straight to the guest, but the consequence of that is I don’t do justice to the background of who we’re talking to. So, could you please first of all forgive me and then fill in some of the blanks in terms of your background?

Meinolf Sellmann: Sure. I think you’ve covered the gist. I’m an optimization person at heart. That’s kind of what drove my diploma thesis. The German system is very similar to the French one. My diploma thesis consisted of building a mixed integer programming solver for a computer algebra system. So, even since my early student days, I’ve been on this decision-making side, exploring how we can use computers to arrive at better decisions.

I was a postdoc at Cornell, a professor at Brown, then a senior manager at IBM, a technology director at GE, then a director at Shopify, and now CTO at InsideOpt. Through this journey, you can see that more and more machine learning has crept into the mix.

Traditional optimization is deterministic. You have complete knowledge of everything that is going on, and you’re just trying to find the best course of action. The moment you actually get in touch with practice, you realize that isn’t so. You need to bring in more and more technology that allows you to make decisions under uncertainty, and that is really what excites us here at InsideOpt.

Conor Doherty: Thank you. Again, you mentioned a lot of big names there in terms of your background at IBM, General Electric, and Shopify. Without violating any potential NDAs, what details or experiences might have most greatly influenced your perspective on forecasting and decision making now that you’re at InsideOpt?

Meinolf Sellmann: Look at an industry like the airline industry. Traditionally, super high spend on optimization. It’s probably one of the industries that has invested the earliest, and also the most over the decades, in optimization technology. And then look at how much fun it is to run an airline. They get awesome plans, right? They get crew plans. They need to decide which pilot is on what plane, which flight attendant on what plane, what planes to use for which legs. They also need to decide what kind of legs to offer, what direct flights to offer and how to combine them into routes, and they need to do revenue management. For all of these operational decisions, they’re using optimization, and on paper, those plans look fantastic. These plans usually come, if not with provable optimality, then with some performance guarantee.

But then if you operate an airline, you know that you’re losing your shirt on the day of operation, because then things are slightly different. The weather isn’t what you expected it to be, the air traffic controllers in France feel like they are not being paid enough, the gate is full, some equipment breaks. All of those things can go wrong. And if you’ve ever flown, you know that the airline’s motto is “If today is screwed, then today is screwed. Let’s make sure tomorrow isn’t screwed.” And that’s how they treat you. They don’t care whether you’re going to get where you need to go today; they want to be back on plan tomorrow, because if they carry a bad situation into tomorrow, then tomorrow will be screwed also.

What does it tell you? It tells you that optimization suffers from over-optimism that everything will go according to plan. And that’s what we want to change.

Conor Doherty: Thank you. Joannes, does that align with your take?

Joannes Vermorel: Absolutely. For me, it was very intriguing because I only discovered the very notion of stochastic optimization relatively late. In my 20s, I was very familiar with regular optimization, you know, convex optimization; I read entire books on these sorts of things. The sort of classic optimization that starts with linear algebra and things like the simplex algorithm is literally taught not in high school but just afterward.

And then, as a student, I studied operations research for a few years; that’s the traditional name given to the topic. And again, you can go through literally hundreds of pages of cases where you have factories, planes, all sorts of allocations of assets, machines, people, and whatnot. And yet, at no point do they ever discuss the elephant in the room, which is that stuff may go wrong. You just have a modelization of the situation that might be incorrect, and then everything you optimize ends up exceedingly fragile.

The moment I realized how deep the rabbit hole went was reading the book “Antifragile” by Nassim Nicholas Taleb. That was quite a while ago, but then I realized there was a missing paradigm that was really ubiquitous, and I started to get very interested in this sort of optimization. For me, the most surprising thing is how absent it is from entire bodies of literature. This idea of uncertainty, of not knowing your loss function perfectly, is literally a missing dimension, and it’s harder to see what you don’t see. It’s not that the field is wrong; it’s that an entire dimension is absent from a very large, very extensive, very old field of study.

Conor Doherty: Well, actually, if I can follow up on that. When you mentioned the idea of missing paradigms and things that are entirely absent, it juxtaposes nicely with one of the reasons why we actually reached out to Meinolf. Your perspective on what we might call mainstream or traditional planning, forecasting, inventory policies typically falls into a sort of “forecast first, make decisions second” approach, which is paradigmatically quite different to what I think everyone in the room would advocate. So first, I’ll throw it to you, Meinolf. Could you sketch out the differences there, the traditional approach, and then the missing paradigms that you and Joannes see?

Meinolf Sellmann: Yes, so as you can imagine, if you run a fulfillment system, you have an e-commerce store and you need to put somewhere in your warehouse the products that you’re hoping people will buy. The inherent problem you run into is that you don’t know how much will be bought where. So you need to formulate an expectation, let’s put it that way. You need to make a forecast, or a prediction more generally. And there are people who do that for you, and this is typically your machine learning department. These guys know everything about, “Oh, there are missing values over here.” Say you had stockouts at some point, which means that you don’t really know how much would have been sold because you were out of stock. They deal with outliers, with missing values, with noise, and everything that’s uncertain, and out of this, they make a forecast, a prediction.

And then you have that second department where, as Joannes was saying quite correctly, the folks don’t generally deal with uncertainty. They go, “Oh, great prediction. Let’s shove that into my optimization model as if it were given by the Oracle of Delphi.” It’s as if you had perfect knowledge of the future. You just take these numbers and say, “Oh, my demand for sunscreen next week is 20 tubes, so let’s put them on the shelf,” without looking at any variability that is there.

This approach is called “predict then optimize”, and it persists partly because you have two different departments with very different skill sets. It would be very difficult for you to say, “Oh, the machine learners now need to learn everything about optimization,” or that the optimizers need to learn more about machine learning. This is typically what companies shy away from. That’s one reason why that separation exists.

The flip side, though, is that if you’re handing uncertainty from one department to the other, it doesn’t go away. Just by ignoring it, you’re actually leaving a lot of money on the table. And that is the second reason why people don’t look into this more closely: it sounds like the machine learners did their job. After they build a model, they test their machinery through something called, for example, cross-validation. You go into known data and you say, “Hey, if I only had this snippet of data and I had to make a prediction for the other part of the data, how well would that have done?” Like this, you can convince yourself that you’re going to get very good predictions out of the machine learning department.
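
As an editorial aside, the holdout check Sellmann describes can be sketched in a few lines. The model (a plain mean forecast) and the numbers are invented for illustration:

```python
# Simplest form of the validation Sellmann describes: fit on one "snippet"
# of known data, then score the prediction on the part held back.
def mean_forecast(history):
    return sum(history) / len(history)

sales = [18, 22, 19, 25, 21, 20, 23, 17]   # known past demand (hypothetical)
train, holdout = sales[:6], sales[6:]      # pretend we only had the first part

forecast = mean_forecast(train)
errors = [abs(forecast - actual) for actual in holdout]
mae = sum(errors) / len(errors)            # "how well would that have done?"
print(round(forecast, 2), round(mae, 2))
```

Cross-validation proper repeats this with every snippet taking a turn as the holdout, then averages the scores.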

And they do that, and you check it, and you go, “Oh, this is awesome. They have good predictions.” And then the optimizers come back and they say, “Either I have a performance bound, or hey, I have a provably optimal solution over here.” So if you’re running a company, you wouldn’t expect there to be any room for improvement by making these departments work better together. And in reality, you can very easily get 15%, 20%, 25% better operational results if you actually take the variability in the forecast into account when you’re doing the optimization. But people don’t see that.

So partly, it’s structural that this “predict then optimize” persists so much: you don’t want to mix skills. The other part is that you don’t see how much you’re actually leaving on the table by not intermingling these things more closely. Because it sounds like, “Hey, awesome forecast, provable optimality, great. The rest is just the cost of doing business.” And it’s not. This is, I think, what Joannes and I are here to tell the audience today. This is not the cost of doing business.

Conor Doherty: Well Joannes, is that the cost of doing business? Is Meinolf correct?

Joannes Vermorel: Yes, and also I think there is another dimension. The idea of mastering, conquering the future so that you remove the uncertainty entirely, it has been for almost a century a very seductive idea. The Soviet Union collapsed, but the idea of making a five-year plan and having everything orchestrated didn’t die with the Soviet Union. At some point, I think they had something like 30 million products that they had to price and forecast five years ahead. It was a complete failure on a pragmatic level.

The appeal of this idea didn’t die with the Soviet Union. It still has appeal, especially for academics. The idea that you could frame the future of the world in a way where you have your forecast and that’s going to be the truth, and then it’s just a matter of orchestration. It also resonates with certain types of management because it has this very top-down approach.

It has this appeal of simplicity. Obviously, this is a fallacy because you’re not in control. Your clients have their own agendas; they can decide other things. Your supplier tries to do their best, but sometimes their best is still not very good. Plus, you have shocks. Sometimes it’s something very dramatic like a war, sometimes it’s something very dumb like a ship stuck in the Suez Canal, and all your imports are delayed due to a dumb event. But whatever the cause, the future is messy.

It’s very hard to rationalize this sort of chaos. It’s harder to reason about. Reasoning about the perfect future is simple. That would be the sort of feedback we were getting in the early years of Lokad. “Mr. Vermorel, just give us accurate forecasts. Stick to a 3% error and be done with it.” And obviously, if we had been able to deliver that, then there would not have been any true upside into combining the forecasting and the optimization.

But here we are, 15 years down the road. Even if Lokad is doing very well forecasting-wise, for most businesses, 3% inaccuracy is just ludicrous. We are not even close to that. It’s never going to be close to that at the SKU level.

Meinolf Sellmann: Yes, it sounds harsh to compare industrial practice with the Soviet Union, but I saw an ad for an MIP solver the other day where they said, “Using our MIP solver, this airline now optimized its five-year plan.” And I think I left a comment that Khrushchev would be so proud. It is true, it has a lot of appeal to say, “I can forecast the future, AI is awesome, and then I optimize for it and now I’m good.”

Joannes Vermorel: I think that the appeal of the ideology is strong. People would dismiss that: “Oh no, I’m pro-market, I’m not communist.” But they miss what made this sort of ideology so appealing. Even in academia, you will frequently find proponents of those visions. The idea of being in control of your future is very appealing. The idea of being able to apply some kind of scientific method top-down and rationalize everything with a big plan, fully rational from top to bottom. On paper, it looks like modern management. It turns out to be modern mismanagement instead, but I can feel the appeal and the appearance of rationality.

But it comes with side effects that are iatrogenic, things that are unintended but fundamentally undermine those plans. You end up with supposedly optimal decisions that turn out to be impossibly fragile, where the slightest deviation just blows up in your face in ways that are quite surprising.

Meinolf Sellmann: This is probably the most common fallacy. People think, “Maybe I can’t forecast the future perfectly, but even if there are slight deviations, my decisions will likely more or less be the same.” That is exactly what isn’t true. This kind of continuous change that you would expect just isn’t there in practice. That is why, even though it sounds so rational to make a forecast and then base a decision on it, it’s actually the most irrational thing that you can do. You should expect that you don’t have access to all the information that you should be having.

Actually, the rational approach is to do what Lokad does, and what we build our software for at InsideOpt: to take the variability that you have to expect in your forecast into account when you’re making your decisions.

Joannes Vermorel: Yes, and just one example for the audience. If you want to play the game of overbooking in airlines, it’s fine. There are always a few passengers who don’t show up, so you can sell a few more tickets than the seats you have in the aircraft. But the problem is that at some point you’re really short of seats. You had only 200 seats, and you sold 220 thinking that there would be 20 people who don’t show up, but actually 205 people showed up. So you have five people who, no matter what you do, won’t fit in the aircraft. Yes, you can give them compensation and play all sorts of games, but at the end of the day, you have five people who will have a terrible quality of service for the flight they purchased from you.

So it’s a very nonlinear thing. For the first few seats, yes, you can overbook the aircraft, but then there is a limit, and hitting the limit is brutal, especially for those people if they had something really important to attend to. It is absolutely not a gentle linear problem where it’s just a little bit more of the same. No, there is a cut-off, and then it becomes a real problem, real fast.

Conor Doherty: To follow up and stitch together a couple of thoughts, because you both said really interesting things that lead to the next point. Joannes, your example there of overbooking, and Meinolf, your example of measuring demand. Like, I sold 20 units of skin cream last month. Well, you did, but you had a stockout, so you don’t actually know what demand would have been.

When you think rationally about the problem, it naturally leads you to stochastic optimization, to embracing that uncertainty. There is no perfect answer, and I think you have a phrase in your YouTube lectures (I’m going to butcher it now), something like “a good solution now is better than the perfect one too late,” or something to that effect.

Meinolf Sellmann: Yes, that’s a different point, when the time you need to find a good answer plays into the quality of the answer itself. You definitely need that too. But to your point, why does the variability matter? Let’s explain that with an example. Say you’re running a supermarket in Paris and you have different products that you put on the shelves. There’s butter and there are sunscreen kits. Two very different products. If you have a forecast that you’re going to sell 300 of those kits in the next 30 days, should you go and say it’s 10 per day? No. With the butter, you can do that, because butter has a constant demand: the forecast sits around its average all the time and deviates a little bit left and right. But with the sunscreen, it’s more like: right now the weather is bad, the weather is bad, the weather is bad, and then comes this one weekend where the sun is out and everybody prepares and buys sunscreen for basically the whole summer. If you don’t have the whole supply available in the supermarket at that time, you just missed it. It’s not like, because you only had 10 of those kits lying around today, you’re going to make up for the other 290 tomorrow. No, starting Monday, you’re not going to sell any of those anymore.

And that is the difference. The expected value can be the same, but it matters a lot whether the variability is closely distributed around that expected value or whether there is a huge discrepancy where it’s either nothing or one big value. If you don’t take that into account when making your decisions, you’re just missing it, and you’re leaving a lot of money on the table if you treat products that way. I hope that exemplifies what we’re talking about here. Expected values are expected values, but what you need to know is which scenarios you actually need to look at. And that is what stochastic optimization does. It looks at different potential futures and tries to find a compromise decision today.

So, for the things you need to decide today, where you can’t wait to see what the future looks like, it tries to find a good starting position so that you can act very well once the future is revealed. That is what stochastic optimization is, and that, in my mind, is what every human does every day. But we forget to do it as soon as we use a computer for these tasks.
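
As an editorial aside, this compromise over potential futures can be sketched as a tiny scenario-based stock optimization, contrasted with stocking the point forecast. The prices and demand scenarios are invented for illustration:

```python
# Two products with the same 30-day point forecast (300 units) but very
# different futures. Optimizing the stock level against a handful of
# equally likely demand scenarios picks a different answer for each.
def expected_profit(stock, scenarios, price=20.0, cost=5.0):
    # Unsold stock is a sunk cost; unmet demand is simply lost.
    return sum(price * min(stock, d) - cost * stock
               for d in scenarios) / len(scenarios)

def best_stock(scenarios, grid=range(0, 601, 10)):
    # Brute-force search over candidate stock levels.
    return max(grid, key=lambda s: expected_profit(s, scenarios))

butter    = [280, 290, 300, 310, 320]  # demand clusters around the mean
sunscreen = [0, 600]                   # all-or-nothing seasonal demand

print(best_stock(butter))     # close to the point forecast
print(best_stock(sunscreen))  # the entire seasonal supply, as Sellmann argues
```

For butter, stocking roughly the forecast is nearly optimal; for sunscreen, the scenario view says to have the whole supply available for the one sunny weekend, even though both products share the same expected value.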

Conor Doherty: Thank you, Meinolf. Joannes, how does that align with your understanding of stochastic optimization?

Joannes Vermorel: Yes, that’s the case of having, as Meinolf was mentioning, a pattern to the sunscreen that is very seasonal, but where the start of the season varies with the weather from one year to the next. It’s very classic. There are tons of products that fall into this category. Another sort of product, to take a similar retail example, would be in the do-it-yourself (DIY) store, where people buy four or eight units at once, because they’re things like light switches, and people don’t want four or eight light switches in their apartment that all look different. So when they buy, they want four or eight that are all the same at the same point in time.

If you think that having three on the shelf means you’re not having a stockout, this is wrong. Because actually, the person enters the store, says “I want four,” there are only three, so they go somewhere else where they can find four units that happen to be the same. So the lumpiness of the demand really matters, and that’s the sort of situation where the fine structure of the erraticity suddenly matters more than anything averaged out over a long period of time.

And indeed, that is what the person running the store instinctively knows. For those light switches, having one is useless. I need to have either a box of them, all the same, or it’s better not to have them in the first place, because people would not even bother having a look at a standalone product. A standalone hammer is okay, because people don’t come and buy four identical hammers, but light switches are not. And that’s not rocket-science mathematics when you think of it.
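
As an editorial aside, the light-switch point can be sketched in a few lines: unit-level availability says the shelf is stocked, while basket-level demand says every customer walks away. The arrivals and pack sizes below are invented for illustration:

```python
# Customers buy light switches in packs of 4 or 8; the shelf holds 3.
# Unit-level availability looks fine, but no basket can be served.
basket_sizes = [4, 8, 4, 4, 8, 4]      # what successive customers want
stock = 3                              # units physically on the shelf

served = sum(1 for b in basket_sizes if b <= stock)
unit_view_in_stock = stock > 0         # naive view: not a stockout
basket_fill_rate = served / len(basket_sizes)

print(unit_view_in_stock)              # True: the shelf is not empty
print(basket_fill_rate)                # 0.0: every customer leaves empty-handed
```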

I think you’re completely correct. I’ve witnessed the same thing. People, especially supply chain practitioners, know that in their guts. They don’t need the math. But as soon as they enter the realm of enterprise software, suddenly a moving average and a little bit of exponential smoothing is supposed to cover the case. And they say, don’t worry, if a moving average is not enough, we have ABC classes on top to refine the thing. And I’m thinking, that’s still not helping. And I agree, there is this disconnect where, when we go into this realm of software, people leave their common sense at the door, thinking it’s the machine, it’s too complicated. So obviously, if they are doing exponential smoothing (there is “exponential” in the term), it has to be scientific and advanced, right?

Meinolf Sellmann: We like to decompose problems. That’s why I like that term “lumpiness” that you mentioned. It’s delightfully non-technical. But it even goes across products. If you run a supermarket and you increase your milk prices, all of a sudden fewer people come to your store because they don’t get their staples there anymore. Suddenly, the kayaks, or whatever else you’re selling there seasonally, are also not being sold anymore, because you just have much less traffic in the store. So really what you need is some joint posterior distribution where you’re forecasting potential scenarios of how everything flows.

Conor Doherty: That description there, specifically of the staples and then the interrelationship, sounds remarkably similar to the basket perspective that we talk about. You go in, you might want to buy a hammer; they don’t have a hammer, so you leave. Or you go in wanting to buy many things, a full shopping list; the milk is missing, so you go somewhere else where you can get the milk as well as everything else. Thus the penalty for not having the milk is not isolated to the lost sale of the milk; it’s everything, because having the milk would have meant selling the butter, the bread, the jam, the ice cream, the bacon, whatever. But now this comes into the next question, which is for Meinolf. To defend, with the principle of charity, anyone who might not agree: people like heuristics. When you were talking about decomposing problems, people like heuristics. So the idea of an ABC class, exponential smoothing, these are things that are easier to understand, rules of thumb. Stochastic optimization is more complex than that, to be fair. No?

Meinolf Sellmann: Well, I guess what’s fair to say is that we didn’t use to have the tools to tackle this neatly while keeping that separation of concerns between machine learning and optimization. You don’t want to have to retrain everybody on your team in order to do this stuff. So that objection would have been fair up until, I don’t know, maybe five years ago. But with today’s technology, I wouldn’t say it is more complex for the people who have to stand up those solutions anymore.

Conor Doherty: Well then, the follow-up, and I will come to Joannes specifically on this question: earlier we were talking about accuracy. How does accuracy, traditionally considered the absolute benchmark KPI for any forecast, figure into stochastic optimization? Or is it just another heuristic that falls by the wayside when you shift to thinking about decisions?

Meinolf Sellmann: Yes, so obviously, if somebody could tell you what the lottery numbers next Saturday were going to be, that would be awesome. The problem is you have to make forecasts, and they come with inherent uncertainty. You don’t know everything about the world, so you can’t know what’s going to happen. If you’re a store and you have to decide which sushi dishes to put in there, perishable goods, then everything Joannes mentioned before about overselling seats on an airline applies. If you’re not selling the sushi, you have to throw it away. That means the whole cost of producing, transporting, pricing, and putting it there is just lost if you don’t sell it. So you don’t want to overstock: the margin is relatively low compared to the cost you lose when a perishable product goes unsold.

Do you know whether there are these five young mothers who decided, “Great, we can eat sushi again,” are going to have a party, and are raiding your store? You don’t. You just don’t know that they would show up and buy 40 of those sushi dishes all of a sudden. You cannot possibly know these things, so there is uncertainty there. If you had a perfect forecast, it would be awesome. Seeing that you cannot have it, you do the next best thing, which is to learn how your forecasts err. That is what we capture with a posterior distribution. So we ask: if my expected value is, say, 50 dishes, does that mean most days it’s 50, sometimes 48, sometimes 42? Or does 50 dishes mean it’s either 25 or 75? Big difference. The accuracy is the same, the expected value is 50, but the scenarios you need to look at, and the decisions you derive from them about what to put on your shelf, are very, very different. So accuracy is a bit misleading. It would be awesome if you could get 100%. If you cannot, you need to do the next best thing, which is to forecast and assess how you are erring.
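The point about two forecasts sharing the same expected value but calling for different decisions can be made concrete with a tiny scenario-based (newsvendor-style) sketch. The price, unit cost, and demand scenarios below are invented for illustration, not taken from the conversation:

```python
import statistics

def expected_profit(stock, scenarios, price=10.0, cost=7.0):
    """Average profit over demand scenarios; unsold perishable units are discarded."""
    return statistics.mean(price * min(stock, d) - cost * stock for d in scenarios)

def best_stock(scenarios):
    """Stocking level with the highest expected profit across the scenarios."""
    return max(range(101), key=lambda s: expected_profit(s, scenarios))

tight = [48, 50, 50, 50, 52]   # expected value 50, small forecast errors
wide  = [25, 75, 25, 75, 50]   # expected value 50, large forecast errors

print(best_stock(tight), best_stock(wide))   # 50 vs 25: same mean, different decision
```

Both scenario sets average exactly 50 dishes, yet the profitable stocking decision differs sharply, because the downside of unsold perishables dominates when the spread is wide.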

Joannes Vermorel: Yes, and just to jump back on the comment of Meinolf on the complexity or the perceived complexity, my take on that is that very frequently when approaching a situation, the instinct is to start with a solution. It’s very difficult to conceive a problem until you have the solution. That’s very strange. You know, again, the Cartesian thinking would be let’s consider this problem and investigate the solution, but that’s absolutely not how people, even myself, operate. It’s more like I have this array of solutions that I can imagine and from that, I can reconstruct a problem that I can solve. You know, it’s typically the other way around.

So you start with a solution or the array of solutions you’re willing to entertain and then based on that, you will pick the problem that you think you can solve. Because there are plenty of problems that would be fantastic but you just can’t solve them. You know, having flying cars, I don’t know how to make an anti-gravity engine, so I don’t even spend time considering what would be the best design for a flying car because it’s so remotely far from being able to do that that it’s not even a problem that is worth my time.

So back to that: when we go to uncertainty in the forecast and then crunching this uncertainty on the optimization side — that’s stochastic optimization — I think the element, and I’m with you on the progress of technology, is that it requires technological ingredients, concepts, paradigms, a few things. They’re not inherently super hard, but if you live in a world where those things are completely absent, it’s very hard to imagine them out of nothing. Fundamentally they are not very difficult, but they are very strange.

People nowadays take it for granted that they can have a call with someone that is on the other side of the world and it’s a given. Tell that to a person 200 years ago and they would think it’s complete magic. You know, that the idea that you could do such a thing was just inconceivable. So can people do it nowadays? Yeah, quite easily. But again, they know the solution so thinking about the problem is much easier.

So back to that, I believe the challenge is that until you have the solution, it’s very hard to reason about the problem. And to segue into the sort of product you’re building at InsideOpt with Seeker: if all you have are optimization tools that do not deal with any kind of uncertainty, then the only optimization problems you’re willing to consider are, by the design of your tool, the ones where you have eliminated uncertainty in a paradigmatic manner.

This is my silver bullet, so I need a problem that fits. And this is it. So that’s where I see the biggest challenge is sometimes just creating the awareness of the existence of the class of solutions so that people can even think of the class of problems. I know I’m very meta here.

Conor Doherty: Well, actually, to follow up because it’s a perfect segue, perhaps unintentionally, but I’ll give you credit nonetheless. When you talk about first principles, starting with the problem and then going to the solution, in one of your lectures, you talk about the Coffee Roastery problem. Could you sketch that out please? Because again, it’s a very vivid, very pleasant example. Define what the problem is and then explain how stochastically we might solve that or using stochastic optimization we might solve that problem.

Meinolf Sellmann: It’s actually a very classical problem in optimization, called production planning. If you’ve ever taken a course on the standard tool you were just referring to — the supposed silver bullet for everything in optimization, mixed integer programming — you probably encountered an example of production planning.

So what’s production planning? You have limited resources to build the products you want to make, and an expected profit that comes with each unit of each product. But these products share production capabilities: for example, machines that can make multiple different products, the packaging lines, the roasters in the case of coffee. Sometimes they share ingredients — different kinds of coffee might use the same kind of beans, since it’s usually a mixture of beans that goes in.

So the question becomes, what am I going to produce on what production capability, and at what time? This is something that needs to be done every day in order to produce coffee. There needs to be somebody there who says, let’s roast these raw beans over here, let’s store them in that silo over there because you can’t use them right away. They need to cool down before you package them.

Then you take them out again. You also need to decide when you’re going to take out what from which silo, and then you move it to the packaging lines, which have limited capacity. So far so good.

If you knew exactly how long it takes to roast the coffee, life would be much easier. If you knew exactly how long it takes for the beans to cool down, life would also be much easier. The problem is you only have estimates for both. Depending on how dry the beans are when they arrive at the roasting factory, the roast takes less or more time to get the beans to perfection. And that of course screws up everything, because you cannot just leave the roaster alone.

If the roaster isn’t roasting anything for longer than 10 minutes or so, you have to shut it off, and if you shut it off it takes half an hour to get it back operational. So you’re suddenly facing these nonlinearities where you go, well okay, the roaster just switched itself off, and now good luck trying to roast anything for the next half hour.

Now you might say, what harm does it do if I start roasting the next batch earlier? Well, you don’t know where to put that output, because the silos are full and the packaging can’t keep up. So in order to make room for the next product coming out of the roaster, you need to free up that capacity, but that means creating stress on another part of the system.

And now you sit there and you need to come up with a plan. You find that roasters actually stand still for some duration of time, plain and simple, because there is nowhere to store the finished but unpackaged product. And that, of course, is an enormous strain on the operational costs of such a business.
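To show why the uncertain roast time matters, here is a minimal Monte Carlo sketch of just the roaster-idle nonlinearity described above. The roast-time range and the cost accounting are assumptions for illustration; only the "more than 10 minutes idle, then a 30-minute restart" rule comes from the example:

```python
import random

def plan_cost(gap_minutes, n_scenarios=10_000, seed=42):
    """Expected minutes lost per roast for a planned gap between two roasts.
    The plan assumes a 30-minute roast, but the actual duration is uncertain
    (uniform 20-40 minutes, an invented range). A roaster idle for more than
    10 minutes shuts off and needs a 30-minute restart."""
    rng = random.Random(seed)
    lost = 0.0
    for _ in range(n_scenarios):
        roast = rng.uniform(20, 40)         # actual duration of the first roast
        idle = 30 + gap_minutes - roast     # plan scheduled the next start off the estimate
        if idle < 0:
            lost += -idle                   # next batch waits for the busy roaster
        elif idle > 10:
            lost += 30                      # roaster shut off: 30-minute restart
    return lost / n_scenarios

# A tight plan risks waiting, a loose plan risks shutdowns — the cost is
# nonlinear in the planned gap, which is what breaks naive deterministic plans.
for gap in (0, 5, 10, 15):
    print(gap, round(plan_cost(gap), 1))
```

Under these assumptions a plan built on the deterministic 30-minute estimate looks fine on paper, yet its realized cost climbs steeply as the planned gap grows, purely because of the shutdown threshold.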

Joannes Vermorel: I think it shows that you should beware of simplistic modelization. The degree of detail in the business is very high, and that calls for something else. Most models published in the literature, and most of what you get in courses, give you a direct, more or less sophisticated solution to a nicely behaved problem.

So you have a problem with a nice structure, because as a teacher — I’ve taught at university — you don’t want to spend two hours just writing out all the variables. You present the problem so that it has ten variables at most and three equations at most, and then you’re done with it and move on.

That is misleading, because reality comes with a lot of detail. So what does that leave you? It leaves you with the fact that being handed a solution, a model, is not sufficient, because you never know your situation exactly. You will try to model it, then you will discover something, and then you will revise your model.

Maybe you will say, okay, this aspect is just too complex to modelize, I give up on it — but I need to reintroduce this other variable that I had ignored, because it was a mistake to ignore it; it really does impact my operation. And yet in the typical academic perspective, the model is a given: there will be a proof, there will be canonical forms, and so on.

And that’s exactly what you get with mixed integer programming, where there is a series of problems that can be solved readily with canonical forms. But the reality is that when you’re dealing with an actual supply chain, you have an ever-changing problem: you learn from applying whatever tool you have to the problem, and you revise.

And suddenly you realize that what matters is something more abstract: something that lets you swiftly and efficiently switch from one instance of the problem to the next, and keep doing that. Which implies all sorts of things. It needs to be computationally fast, it needs to be versatile so that you can express all sorts of diverse solutions, and it should be convenient to plug into the rest of your application landscape.

So it comes with a lot of extra concerns that, if you look at the typical mathematical optimization literature, are not even listed. You can go from the first page to the last page of the book, and at no point do they discuss that, by the way, this method is super slow and impractical, or that this approach is so rigid that at the smallest revision of the model you would have to throw it away entirely and restart.

Or that this approach is so error-prone that yes, in theory, you could do it that way if you’re NASA and have super bright engineers and a decade to approach the problem — but in practice, if you’re in a hurry, it will never fly. So there are plenty of meta-concerns that are very, very important. And I believe this relates to the sort of things you’re doing with Seeker and how you’re even thinking about this class of problems.

Meinolf Sellmann: Yes. And coming back to what you were saying before, that we’re always looking for a silver bullet here: you’re absolutely correct, Joannes. When people model the business — and that’s probably what the audience is interested in, how to get better, more tangible results for the business — there is an approximation involved.

You are, to some extent, forced to make an approximation of real life in the computer and simulate it. In one way or another, you can think of any optimization model as a simulation: what would happen if I do this? And then you compute what would happen. First, is it still a plan I can execute — which we call feasible in the terminology of optimization?

So, does it obey all the side constraints? Is it executable? But then, secondly, how good is it actually? That is measured by what is called the objective function — how we measure the KPI that we’re trying to optimize. The point now is: if the only tool you know is a hammer, you will at some point find yourself putting nails above your windows to hang your curtains.

And it’s a very, very bad idea. This is kind of what the MIP folks are doing when they’re dealing with things like supply chain, and with optimization under uncertainty in general. They’re using a tool that is made for deterministic optimization — and for that, it is absolutely fabulous — but it forces you to make approximations. First, on the side of determinism versus uncertainty (I should say “uncertainty” rather than “non-determinism,” because non-determinism has another meaning in that context).

And second, on the side of linearizing everything: there are so many relationships in a business that are just not linear. Then the question becomes, can you somehow approximate that? Can you fit some piecewise-linear function to your nonlinear function? Can you binarize things? Is that what’s going on?
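As a concrete illustration of the linearization workaround just mentioned, here is a minimal sketch of approximating a nonlinear cost with line segments. The quadratic cost function and the breakpoints are invented for illustration:

```python
def piecewise_linear(breakpoints, f):
    """Build a segment-by-segment approximation of a nonlinear cost f —
    the kind of workaround a linear formulation forces on you."""
    pts = [(x, f(x)) for x in breakpoints]
    def approx(x):
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)       # interpolate along the segment
                return y0 + t * (y1 - y0)
        raise ValueError("x outside the modeled range")
    return approx

true_cost = lambda q: q ** 2                   # toy nonlinear production cost
approx = piecewise_linear([0, 5, 10], true_cost)
print(true_cost(3), approx(3))                 # 9 vs 15.0 — the error the model inherits
```

With coarse breakpoints, the optimizer reasons about a cost of 15 where the real cost is 9: every decision downstream inherits that approximation error.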

Now, to make that more tangible to the audience, if you are running a business, say you’re the CEO of a company, sure, I mean you can just go to Lokad and say, “Hey, we’re gonna buy it from you,” and they will take care of you. But say you have another company that does this for you, or you have your own department that does it, what should you do in order to get better operations?

So, you might now be intrigued and go like, “Oh, there is this whole other thing and you know, 20% better operational cost sounded awesome. How do I get that?” The first thing you need to do is you need to ask, “What is our process? Is this a predict and optimize process?” But then secondly also, “What liberties did we take in modeling the real system that we’re actually dealing with? Where are we approximating?”

And what you should do, for both questions, in order to see the discrepancy, is say: “Okay, look, your optimization model had an objective function. It favored this solution over that one because the objective function, which is supposed to approximate the real KPI, was better for it. Now tell me: for the optimal solution, what is my expected KPI?”

Then track that, and compare it to the solutions and results you’re actually getting in your system. If you’re minimizing cost, track costs and look at the difference between what the MIP thought the costs were going to be and what your real costs actually are. If you’re maximizing profit, track that. The point is: track it, and look at the difference between what was used to make the decision and what then actually materializes.

And there are two sources for any discrepancy. The first one is that you were forced to fit your system into a modeling framework that was too restrictive. That’s a bad thing. Or the other one is you just completely ignored that there’s uncertainty and that’s where the discrepancy is coming from. If you see more than a 5% discrepancy over there, you need to talk to one of us.
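The tracking discipline described here amounts to a one-line computation. A minimal sketch, with invented numbers standing in for the solver's projections and the booked costs:

```python
def discrepancy(predicted, realized):
    """Relative gap between the optimizer's projected KPI and the realized one."""
    return abs(sum(realized) - sum(predicted)) / sum(predicted)

predicted_costs = [100.0, 120.0, 95.0, 110.0]   # objective values the solver reported
realized_costs  = [118.0, 141.0, 117.0, 129.0]  # costs actually booked afterwards
print(f"{discrepancy(predicted_costs, realized_costs):.1%}")   # 18.8% — well above 5%
```

In this invented example the realized costs run 18.8% above the model's projections — by the rule of thumb above, far past the 5% threshold where the model or its treatment of uncertainty needs revisiting.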

Joannes Vermorel: I would add to your recipe — and I agree with the thought process you just outlined — something that I typically recommend even prior to this process, at the very start. Because beyond the problem of the wrong tools, there is the problem of having the wrong concepts, the wrong ideas.

Just to give you an example, the notion of feasibility that is found in the classical optimization literature. People would say, “Oh, there is a solution that is feasible or unfeasible.” Okay, let’s have a concrete example. Is it something that is really black and white like that?

Just to give an example: we are in a warehouse, and we routinely place orders with suppliers. The warehouse has a finite capacity for receptions: on any given day, let’s say it can take in a maximum of 1,000 units. Beyond that, stuff just accumulates in front of the warehouse because people cannot bring the boxes in.

The problem is, let’s say you are placing purchase orders with overseas suppliers. You don’t exactly control the delivery dates. You know that if you arrange things well, on average you should stay within your constraints. But you may still be unlucky: one order is postponed, another arrives a little faster, and bam, you end up with one Monday where 2,000 units arrive on the same day — even though those orders were placed a month in advance.

So here you see a situation where, for every decision you take, there is a probability that you will end up in an infeasible situation. It’s not entirely under your control. That’s what I mean by wrong concepts: it’s very dangerous to analyze the situation with concepts that are outdated, too rigid, or inappropriate, because then you cannot even get into the intellectual frame that lets you appreciate what the better tool will actually bring you.
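The intuition that feasibility becomes probabilistic can be checked with a small Monte Carlo sketch. All the numbers below (ten orders of 200 units, a five-day week, the 1,000-unit intake cap) are illustrative stand-ins for the warehouse example:

```python
import random

def p_overflow(n_orders=10, units=200, capacity=1_000, n_weeks=50_000, seed=7):
    """Chance that some weekday's arrivals exceed the warehouse's intake
    capacity, when orders placed a month ago each land on a random weekday —
    a crude stand-in for uncertain overseas delivery dates."""
    rng = random.Random(seed)
    bad_weeks = 0
    for _ in range(n_weeks):
        arrivals = [0] * 5                      # Monday..Friday
        for _ in range(n_orders):
            arrivals[rng.randrange(5)] += units
        if max(arrivals) > capacity:
            bad_weeks += 1
    return bad_weeks / n_weeks

# On average only 400 units/day arrive, comfortably under the 1,000-unit cap.
print(p_overflow())
```

Even though the average intake is well under the cap, roughly 3% of simulated weeks see a day that blows past it — a plan can be "feasible on average" and still fail with meaningful probability, which is exactly why a binary feasible/infeasible concept misleads.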

So you’re stuck with feasible versus infeasible. Once you understand that feasibility is not entirely up to you, obviously there is stuff that will collide. If you order from suppliers in the same area, on the same day, in the same quantities, through the same ports, the probability that everything arrives at your door on the exact same day is pretty high. So you spread things out — but even then, some risk remains.

And this happens in many, many settings. That’s one example. So we see that what is taken for granted — feasibility, a feasible solution, an infeasible solution — is not exactly that way. The concept is a little bit off.

Another example is service level. People think in terms of service level, but is it really what customers perceive? That’s typically where I steer the discussion toward quality of service. And quality of service may include the possibility of substitutes, cannibalization, or even people’s willingness to postpone. Suddenly we end up with something that is very different.

And when you go straight at the problem with concepts from a world where uncertainty doesn’t exist, where your optimizer is always classic — that is, non-stochastic — then my take is that the sort of path you propose will most likely feel incomprehensible. That’s why I typically encourage my own prospects to pause, do a proper briefing exercise, and start looking at the world through different lenses — to take the time to consider those things at the gut-feeling level before jumping into technicalities that can be very distracting, because they are a little technical, plus it’s software, plus it’s a little math, and so on. And that can be a huge distraction, especially if you have competing vendors throwing tons of nonsense into the mix, like: “Oh, you want to do all of that? You know what, I have the answer for you: LLMs. You had this uncertainty, but we have LLMs, Large Language Models. You will not believe how good those forecasts will be. And optimization with LLMs? It’s so good.”

Is it, though? At least at Lokad, when we are talking to prospects, the problem is that we are never talking to the prospect alone. There are half a dozen other vendors also pitching their stuff, and frequently they are pitching a lot of nonsense. So the prospects are just overwhelmed with all those shiny things and outlandish claims.

Conor Doherty: Well, it occurs to me that you both basically just sketched out one of the key problems with getting people onside with stochastic optimization, or any other black-boxed technology — it could be probabilistic forecasting or anything involving maths, basically: there is a bar to entry, to a degree. So, Meinolf, to come back to you: in your experience — because you’re very good at teaching — how exactly do you make people comfortable embracing the kind of uncertainty we’ve talked about for an hour?

Meinolf Sellmann: There might be a cognitive disconnect with the audience right now, where they go, “Okay, so you told us that the airlines are losing their shirt on the day of operation, but they have been adopters of this technology for decades. Why on earth aren’t they using the stuff that Lokad and InsideOpt are talking about?” And the reason is exactly what you’re alluding to.

If you wanted to do stochastic optimization for something like an airline, you would be increasing the size of the problem to a point where you’re just not able to solve it anymore. The folks working on optimization in the airline industry are very savvy, and they know, of course, about stochastic optimization in its traditional form — but that form was always based on MIP.

I don’t want to get too technical here, but essentially, when you’re looking for something, there are two ways you can look. One is to say, “I found something that’s already good; let’s see if I can make it better.” The other is to say, “I’m never going to find anything over here.” MIP works by saying, “I don’t have to look over here, it cannot be here,” and then looks someplace else.

Now, if your search space is vast, and the mechanism that tells you “you don’t have to look here” doesn’t work very well — it’s kind of, “it doesn’t look like it, but I can’t rule it out either” — then you keep looking everywhere, and it becomes much more effective to go and look where things already look pleasing, let’s put it that way.

So, if you’re trying to do stochastic optimization with mixed integer programming, which works by saying, “I know there can’t be anything there,” your so-called dual bounds will never be good enough to cut the search space down to a point where you can actually afford to do the search. And that’s where people have been stuck for 20-25 years.

And now, there is this new way of doing things, which is essentially AI-based search, which says, “Look, I don’t care whether I’m going to get some kind of quality bound for the solution I’m going to get, but instead, I assure you, I’m going to spend all of my time trying to find the best solution I possibly can in the time that I have to do the job.” It’s very pragmatic and practical, and that now exists.
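The "spend all of my time improving what I have" style of search can be sketched in a few lines. This is a generic hill-climbing toy under invented assumptions (integer variables in 0..10, a made-up objective), not the actual algorithm inside Seeker or Lokad's tooling:

```python
import random

def local_search(objective, n_vars, steps=5_000, seed=1):
    """Bound-free search in the spirit described above: keep a current
    solution, try a random single-variable change, revert it if it's worse."""
    rng = random.Random(seed)
    x = [rng.randint(0, 10) for _ in range(n_vars)]
    best = objective(x)
    for _ in range(steps):
        i = rng.randrange(n_vars)
        old = x[i]
        x[i] = rng.randint(0, 10)      # propose a random change to one variable
        score = objective(x)
        if score <= best:
            best = score               # keep the improving (or equal) move
        else:
            x[i] = old                 # revert the worsening move
    return x, best

# Toy objective standing in for a real model: distance to a hidden target plan.
target = [3, 7, 1, 9, 5]
solution, best = local_search(lambda x: sum((a - b) ** 2 for a, b in zip(x, target)), 5)
print(solution, best)
```

No part of this search ever proves anything about regions it has not visited — it simply keeps improving the incumbent solution for as long as it has time, which is the pragmatic bargain described above.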

In that framework, you suddenly liberate yourself also of all the other shackles that you had to deal with beforehand, such as that you have to linearize and binarize everything. All of those things are gone. You can do non-differentiable, non-convex optimization with a tool like InsideOpt Seeker, and modeling these problems is actually not so much a problem anymore.

There are a couple of other benefits you get from it. Parallelizing mixed integer programming — the branch-and-bound approach — is very difficult, and the speed-ups you get are limited: you’re lucky to get a five-fold speed-up on a reasonably large machine. AI-based search, by contrast, really benefits from throwing 40-100 cores at a problem.

And so, that goes with the development of hardware as well, that this might actually be the better technology to use. But the bottom line is, by using a different way of searching these vast spaces, you’re enabling the users much more comfortably to model the real system as opposed to some crude approximation thereof.

And at the same time, handle things like multi-objective optimization — it’s rarely only one KPI that matters, there are multiple. Handle things like, “I mostly want this rule to be obeyed, but it’s okay if it’s disobeyed every now and then” — that is, it’s okay if there’s a scenario where that constraint doesn’t hold. You can model that very easily.

And of course, you can do stochastic optimization, not just in the sense that you’re optimizing expected returns, but you can even actively optimize, constrain, minimize the risk that you’re running that comes with your solutions. And that is the paradigm shift. This is what I think drives Lokad and InsideOpt, to say, “Hey, look, there is a completely new paradigm that we can follow, which allows us to do all of these things which were just unheard of for the past three decades.”

Conor Doherty: Joannes, same question.

Joannes Vermorel: Thank you. Yes, and I would also like to point out that in the early 2000s, when I started my PhD — which I never finished — the beliefs of the machine learning community and the optimization community about the fundamental problems of optimization turned out to be completely wrong.

During my PhD, the belief was in the curse of dimensionality: if you have a super high-dimensional problem, you can’t optimize it. And now, with deep learning, we are dealing with models that have billions or even trillions of parameters. So apparently, yes, we can optimize high-dimensional problems, no problem.

Then they were thinking: if it’s not convex, you can’t do anything. It turns out that no, you can actually do a lot of things even if it’s not convex. Indeed, we have no proof whatsoever, but if you have something that is quite good and useful according to other criteria, it doesn’t really matter that you can’t prove it’s optimal — as long as you have other ways to reason about the solution. You can say: I can’t reason about optimality, but I can still establish that it’s an excellent solution even without the mathematical proof.

And there was a whole series of such things. For a long time, when I looked at stochastic optimization, the only approach people proposed was what you mentioned about augmenting the dimension: “Okay, you’re going to spell out a thousand scenarios and write those thousand scenarios as just one big problem that you optimize at once.”

It’s just macro expansion: you take your problem and expand it into a thousand instances, which gives you a problem that is a thousand times larger. Then you say, “Okay, now I am back to square one; I can just optimize that.” But the old branch-and-bound paradigms already had terrible scalability.

So if your step one is to macro-expand your problem by a factor of 1,000, it’s going to be dramatically slow. And what surprised the deep learning community so much, I believe, was the incredible efficiency of stochastic gradient descent, where you just observe situations and nudge the parameters a little bit every time you observe something.
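The "nudge the parameters a little per observation" idea can be shown on the smallest possible stochastic problem: minimizing the expected squared error against a stream of demand samples. The demand values and the step-size schedule below are invented for illustration:

```python
import random

def sgd_fit(observations, steps=10_000, seed=3):
    """Minimize E[(x - demand)^2] by stochastic gradient descent: look at one
    sampled demand at a time and nudge x toward it — no scenario expansion."""
    rng = random.Random(seed)
    x = 0.0
    for t in range(steps):
        d = rng.choice(observations)
        grad = 2 * (x - d)            # gradient of (x - d)^2 at this one sample
        x -= grad / (2 * (t + 1))     # decaying step size
    return x

demands = [42, 48, 50, 50, 60]        # sample mean: 50
print(sgd_fit(demands))               # lands very close to 50
```

Note there is no macro expansion: the optimizer never sees more than one scenario at a time, yet it converges to the minimizer of the expected loss (the mean of the demand samples).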

And there were plenty of insights. And the interesting thing that I’ve seen during the last two decades is that machine learning and optimization have been progressing side by side, mostly by destroying prior beliefs. That was a very interesting process.

Most of the deep learning breakthroughs came through better optimization tools — better use of linear algebra and GPUs, special kinds of computer hardware. And mathematical optimization, more and more, is borrowing from machine learning: techniques where you don’t want to search randomly.

There are places where you say, “I can’t prove anything, but this whole neighborhood looks plain bad, so I need to look elsewhere.” And there are other considerations, such as: “I’ve already spent a lot of time searching this area, so even if it’s generally a good area, maybe I should be looking somewhere else.”

Those are optimization techniques, but very much machine learning-oriented in their thinking. And my take is that maybe 20 years from now there might even be one merged domain — machine learning optimization — where you don’t really differentiate the two.

It is one of the things I have had on my radar for two decades, and every year that passes I see this gradual convergence. It’s very intriguing, because I feel there are concepts that are still missing.

Meinolf Sellmann: Yes, and I’ll home in on one thing you said there. Machine learning is awesome when you have repeated games. It’s like counting cards in blackjack: you can’t guarantee that you’re going to win — there’s still a chance the forecast will be off — but if you play that game repeatedly, you suddenly have a big edge.

And that is why I said before: trace your operational results — your profit, your cost, whatever you’re optimizing — for some period of time. Because on any one day, the solution that you ran can be off. It’s a bit like somebody saying, “Pay me a dollar to play: I’ll roll this die, and if it shows a four, you lose; but if it’s anything but a four, I’ll give you a million dollars.” You pay the dollar, roll the die, and it’s a four. It was still the right decision to play, because if you repeatedly accept games like that — where the expected value is so high and the possible loss is very bearable — you gain a real advantage. And that is the name of the game when you use machine learning in optimization. This is exactly the paradigm of AI-based search; we call it hyperreactive search. I don’t know what you call it at Lokad, but it’s exactly this idea.
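The arithmetic behind "it was still the right decision" is worth spelling out. A minimal sketch of the die game as described (the 1,000-round repetition count is an arbitrary choice):

```python
import random

# Pay $1 to play; roll a die; a four loses, any other face wins $1,000,000.
p_win = 5 / 6
ev = p_win * 1_000_000 - 1            # expected value per game, entry fee included
print(ev)                             # about $833,332 per play

# Play it 1,000 times: individual losses happen, the strategy still wins big.
rng = random.Random(0)
wealth = 0
for _ in range(1_000):
    wealth -= 1                       # entry fee, paid win or lose
    if rng.randrange(6) != 3:         # faces 0-5; index 3 stands for the four
        wealth += 1_000_000
print(wealth > 0)
```

A single unlucky roll says nothing against the strategy; it is the repeated play that turns the positive expected value into a near-certain gain, which is exactly the repeated-games argument above.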

This is what drives InsideOpt Seeker; it’s what the solver will do for you once you know what your model is and what problems you’re solving. Every day you have these operational problems to decide: what am I going to roast where today? How much inventory am I going to relocate today, and to where? And you have those instances recurring over many weeks and production days.

You then go and ask the solver: “Look at your strategies for how you’re searching this space. Could you have found better solutions if you had searched differently?” And it will look exactly at runtime features like the one you mentioned, Joannes — “how long has it been since I looked someplace else? It seems I’ve thoroughly searched this whole area; let’s see if I can do something else.”

And others like it. Those runtime features then influence other decisions, such as how many things I’m willing to alter at the same time, or whether I should investigate more deeply right where I am. If I’ve only recently arrived in some part of the search space, it might be a very good idea to be very greedy — to say, “any improving move, I’m just going to take right now in order to find a good solution in that space.”

But after you’ve been in there for a while, you go, “Well, I need to broaden my horizon a little, because I might be stuck in something that’s only locally optimal; globally, I could have set other variables much better and done something better overall.” And that’s the paradigm shift right now. It moves away from “can you quickly detect there’s nothing here?” to “can I learn how to search better?” And that’s the revolution.

Joannes Vermorel: To jump on AI-based search: yes, absolutely. And especially with the sort of problems that Lokad is solving for our clients, most supply chains can be approached greedily to a large extent, not all the way through, but extensively. And there are some kind of Darwinian reasons for that. If you had supply chain situations that were really antithetical to a greedy approach, they have been purged, eliminated already, because businesses didn’t have the luxury of super fancy optimization tools.

So they needed that, and very frequently it was literally a design consideration: “Can I set up my supply chain and my processes so that I can move step by step in the right direction and still be okay?” That was typically a guiding principle at the design level. And then indeed, when you get into the fine print, you realize that yes, you can be stuck in some bad places even if directionally you’re still good.

So typically, yes, Lokad would rely extensively on the greedy perspective, even going all the way with gradients when you have them. And then, indeed, do the local search once you’re in the final phase, where you want to make the micro adjustments and maybe get a little bit more resilient. So if you can make an adjustment that doesn’t cost you much but gives you a lot more leeway in operations, you take it. Just to make it more concrete for the audience:

Let’s say, for example, you operate a warehouse. You think there is a 0.1% chance that you run out of packaging cardboard to ship your stuff. It may seem like a low-probability event, but on the other hand, it feels very dumb to shut down the warehouse once every few years just because you’re short on cardboard that is super cheap. So you would say, “Okay, it costs so little, let’s have a couple of months of extra cardboard.”

Because the boxes are folded, they take practically no space at all, and they’re super cheap. That’s the sort of thing where a little bit of extra optimization helps. People would say, “Oh, we have a three-day lead time for this cardboard, and we already have a month in stock, that’s enough.” And then you run the simulation and say, “You know what? You still have this 0.1% risk. It’s pretty dumb. You should have three months.”
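A simulation of this kind can be sketched very simply. All the numbers below (the 1% disruption chance, the 5-to-60-day disrupted lead time, the demand fluctuation) are invented for illustration; only the three-day lead time and the one-month-versus-three-months comparison come from the example itself:

```python
import random

def stockout_probability(buffer_days: float, trials: int = 100_000, seed: int = 7) -> float:
    """Estimate P(usage during the replenishment lead time exceeds the buffer).

    Assumed model (illustrative only): lead time is normally 3 days, but with
    a 1% chance the supplier is disrupted and delivery takes 5 to 60 days;
    daily usage fluctuates +/-50% around a mean of 1.0 'days of stock'.
    """
    rng = random.Random(seed)
    stockouts = 0
    for _ in range(trials):
        lead = 3 if rng.random() > 0.01 else rng.uniform(5, 60)
        used = sum(rng.uniform(0.5, 1.5) for _ in range(int(lead)))
        if used > buffer_days:
            stockouts += 1
    return stockouts / trials

for buffer in (30, 90):  # one month vs three months of cardboard
    p = stockout_probability(buffer)
    print(f"{buffer} days of buffer -> ~{p:.4%} stockout risk per cycle")
```

Under these assumed numbers, one month of stock still leaves a residual risk in the fraction-of-a-percent range, while three months drives it to essentially zero, which is exactly the kind of cheap insurance being described.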

And people say, “Okay, it’s very cheap, but three months feels like really a lot.” But again: it’s super cheap, it doesn’t take that much space, so why risk it? That’s the sort of thing that is slightly counterintuitive. You realize that this event happens only once every few years, but then you have a lot of things that happen once every few years.

And that’s where a good optimization lets you cover those things that are so infrequent that, for the human mind, it feels like they happened in another life. People rotate; they rarely stay two decades in the same job. So something that happens once every three years, the manager of the warehouse has probably never seen it, and most of the people on the teams don’t even remember seeing it.

So there is a limit to what you can perceive when something is below the perception threshold because it’s too infrequent. And yet these events add up. There are so many different things that, when you put them together, you realize: it’s 0.1% here, plus another small risk there, plus another. You add dozens and dozens of them, and at the end of the day, you end up with a situation where every month one of those problems occurs that could have been prevented if you were really taking the risk into account.
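The compounding of many individually negligible risks is easy to verify. Assuming, purely for illustration, 50 independent failure modes at 0.1% per day each:

```python
# Many individually negligible risks compound into frequent trouble.
# Illustrative numbers: 50 independent failure modes, each with a
# 0.1% chance of striking on any given day.
p_single = 0.001
n_risks = 50

p_quiet_day = (1 - p_single) ** n_risks   # no incident of any kind today
p_any_incident = 1 - p_quiet_day
p_quiet_month = p_quiet_day ** 30         # 30 consecutive quiet days

print(f"P(at least one incident today):      {p_any_incident:.2%}")
print(f"P(at least one incident this month): {1 - p_quiet_month:.2%}")
```

Each risk is invisible on its own, yet together they produce an incident most months, matching the "every month there is one of those problems" observation above.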

But it’s a little bit counterintuitive, because it means a little more spending in all sorts of places. Why the extra? Well, the extra is because, infrequently but almost certainly, you run into trouble if you don’t.

Meinolf Sellman: Yes, and that is actually the trap that you fall into when you have a provably optimal solution. It sounds great: “Look, this is my provably optimal solution, and I have good forecasts.” But if the forecasts are slightly off, that provably optimal solution, because it squeezed the last penny out of the model, is extremely brittle. Right and left of that forecast, the performance goes down and is abysmal.

And you want technology that allows you to say, “Look, yes, your expected profit is 80 cents less, but now your risk of having to close down the warehouse is reduced by 75%. Good deal, right?” It’s a good deal to make. And these are exactly the kind of trades that you want the technology to find for you, because it’s very, very difficult to say, “Okay, let me constrain one and optimize the other,” because then you run into another trade-off trap.

You want to be able to say, “Look, I have all of these concerns. Try to find a good compromise. Find me the cheapest insurance against such-and-such an event.” And that maybe closes the loop with how we started: how hard it can be to wrap your head around decision-making under uncertainty.

But this is in essence what it is. The misconception is that if you had a solution that is optimal for one predicted future, it would probably also work reasonably well for futures that are just slight deviations thereof. And that is just not true. You need to actively look for a great compromise operational plan that works across a big probability mass of futures that can actually hit. And you want it in such a way that it trades off your risk and your expected returns in a reasonable fashion.
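A minimal sketch of what "a compromise plan against a probability mass of futures" means in practice. This is a toy newsvendor-style setup with invented prices and an invented risk penalty, not a description of Seeker or Lokad; the point is only that the plan is scored against sampled futures rather than one forecast:

```python
import random

def profit(order_qty: int, demand: int) -> float:
    """Toy newsvendor economics (invented numbers): buy at 4, sell at 10, salvage at 1."""
    sold = min(order_qty, demand)
    return 10 * sold + 1 * (order_qty - sold) - 4 * order_qty

# Sample many plausible futures instead of committing to one forecast of 100 units.
rng = random.Random(3)
futures = [max(0, int(rng.gauss(100, 30))) for _ in range(2000)]

def score(order_qty: int) -> float:
    """Expected profit, penalized by the average of the worst 5% of outcomes."""
    profits = sorted(profit(order_qty, d) for d in futures)
    tail = profits[: len(profits) // 20]
    return sum(profits) / len(profits) + 0.5 * (sum(tail) / len(tail))

compromise_plan = max(range(0, 201), key=score)
print(f"risk-adjusted order quantity: {compromise_plan}")
```

The plan that is "optimal" for the single predicted future of 100 units is rarely the plan that scores best across the whole distribution once tail risk is priced in; that gap is the fragility being described.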

Conor Doherty: Correct me where I’m wrong, but the ultimate goal of stochastic optimization would then be to find the optimal trade-off, the decision that balances all the constraints and all the trade-offs that you have to make. It’s not the perfect decision, but it would be the best compromise across all the separate concerns, yes?

Meinolf Sellman: Correct. We could state that mathematically, but I don’t want to lead people down that path. The point is this: had you known exactly what was going to happen, there most often is a better solution that you could have run. But in the absence of knowing the future perfectly, and I mean perfectly, not just 99.9% perfectly, you need to run a compromise that is essentially good across the board for all the different things that could happen.

And that is exactly what stochastic optimization does for you. Thereby, it removes fragility. We might want to say it makes your operations robust, but “robust optimization” is its own technical term, so we can’t actually use that phrase. But that’s what’s meant, right? You want to remove brittleness, you want to remove fragility in your operations, and have very reliable, continued, repeated results. That is what stochastic optimization will give you. And at the same time, your expected profits will actually rise beyond what you thought was possible.

Because if you just go by cross-validation performance and then provable optimality, you’re completely missing the point. What you think is the cost of not knowing the future perfectly is really the cost of assuming, in the optimization, that you knew the future perfectly. That is what makes the solution brittle: you’re assuming the forecast was 100% correct. This is how traditional optimization technology functions, and you need to jettison it and start working with modern technology in order to harvest those savings, easily 20% of the operational cost that you can take out of your operations.

Conor Doherty: Well, thank you. I think we’re kind of winding down. Joannes, I’ll give you a last comment and then throw it over to Meinolf to close. Anything you want to add there?

Joannes Vermorel: I mean, yeah, the intriguing thing is what the best supply chain actually looks like. The best risk-adjusted decisions are the ones where the company keeps humming gently, where no absolutely critical, disastrous decision gets made that just blows everything up.

And that’s where people would expect the most brilliant supply chain plan ever to be identifying this one product that has been completely ignored by the market and saying, “You know what, we have to go all in on this super-niche product,” and bam, selling one million units while nobody was even paying attention. That is magic. Maybe there are Steve Jobs-type entrepreneurs who can do that, but it’s just almost impossible.

So the idea that you can seize the future, identify the golden nugget, the bitcoin-like opportunity, go all in on it and make a fortune is ludicrous. What excellent supply chain management looks like, I would say, is that it hums gently. You have risk-aware decisions, so that when it’s bad, it’s only moderately bad. When it’s good, which is most of the time, it’s very good, so it’s very solidly profitable. When it’s bad, the damage is limited and it’s not horrible.

And when you revisit a decision, when you go back in time and look at it, yes, if I had known, I would have done things differently. But if you do the honest exercise of putting yourself back in what you knew at the time, you say, yes, it was a reasonable take at the time. And don’t let hindsight pollute your judgment about that, because that’s very bad.

I knew, for example, some of our clients in aerospace, they don’t do it anymore, but after every single AOG (aircraft on ground) incident, where a part is missing and the aircraft cannot fly, they would do a whole postmortem investigation. But the reality is, when you have 300,000 SKUs that you need to keep in stock to keep the aircraft flying all the time, with some parts at half a million dollars and above per unit, it’s kind of okay not to have all of them readily available all the time.

What we found was that these AOGs were exactly as expected according to the risk structure of their stocks. So there was no point in doing any kind of investigation. And that would be my concluding thought: probably the most difficult selling point for stochastic optimization is that it’s fairly unimpressive. It is just something that hums gently. The problems are much less severe, and the successes are slightly less extreme but much more frequent.
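The "exactly as expected" point is a simple expected-value calculation. The 300,000 SKUs figure is from the anecdote above; the per-SKU incident rate and weekly horizon are invented for illustration:

```python
import math

# Illustrative: 300,000 SKUs, each with a tiny chance of causing an AOG
# (aircraft on ground) incident in a given week.
n_skus = 300_000
p_miss = 1e-5  # assumed per-SKU weekly probability of causing an AOG

expected_aogs = n_skus * p_miss  # expected incidents per week
# Approximating incident counts as Poisson, the chance of a quiet week is tiny:
p_at_least_one = 1 - math.exp(-expected_aogs)
print(f"expected AOG incidents per week: {expected_aogs:.1f}")
print(f"P(at least one incident this week): {p_at_least_one:.1%}")
```

With numbers in this range, some incidents every week are the statistically normal outcome of the stocking policy, so a postmortem for each one investigates noise, not a failure.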

But again, what do you remember? Do you remember a football team that wins consistently, 60-70% of its matches, over the last 30 years? Or do you remember the one team that keeps losing all its matches but then, in a row, wins 10 matches against the most prestigious teams? Obviously, you would remember this absolutely extreme streak of successes and say, “Oh, this was incredible.” And you would completely forget the boring track record that is just excellent on average, because it’s just the average, so you don’t remember it.

You see, that’s my sort of vibe. And I think it’s part of accepting that what you will get out of stochastic optimization is gentle, unimpressive decisions that turn out to be quite good on average. When they are bad, they are only slightly bad; you’re not going to lose your shirt. There is a lot of damage control going on.

And that’s the funny thing: at Lokad, when we discuss with our clients after being in production for a few years, they would actually have little to say. The best kind of compliment of that sort is, “You know what, you’re so uneventful that we are deprioritizing the supply chain in our list of concerns.” It’s like having access to running water: it’s uneventful, so you don’t have to pay much attention, it just works. And that’s okay. Obviously, supply chains are not as uneventful as the water supply, not quite yet, but there is this kind of vibe.

Conor Doherty: Well, thank you Joannes. Meinolf, as is custom, we give the closing word to the guest. So, the floor is yours, and then we’ll close, please.

Meinolf Sellman: Yes, well, thanks again for having me, Conor and Joannes. Just to second what Joannes was saying: we frequently find that our operational teams are surprised and their clients are not surprised. And that is exactly what you want. The operational teams are surprised that suddenly things can run so smoothly, where before, once a week, you had a day of hell, and all of a sudden two months go by and you go, “This just works,” and no craziness, nothing.

But what’s more important is that their clients are no longer surprised by suddenly being without a service or something like that. That is what your business is there for, and that’s why you should be using this kind of technology to run your operations: you don’t want to badly surprise your clients. And then you can drink a very boring margarita on a nice island and enjoy your average returns, which come with very low variance.

Conor Doherty: Well, gentlemen, I have no further questions. Joannes, thank you very much for your time. Meinolf, an absolute pleasure, and thank you for joining us. And thank you all for watching. We’ll hopefully see you next time.