00:00:00 Book introduction: decision-making under uncertainty
00:05:55 Motivation: failures of static models at Toyota
00:11:50 Using narrative to teach decision concepts
00:17:45 Mistake: averaging out uncertainty
00:23:40 Plans fail; need scalable decision processes
00:29:35 Prefer numerical recipes over perfect plans
00:35:30 Meet stakeholders where they are; iterate
00:41:25 Use simulators to test policies safely
00:47:20 Framing problems requires stoic clarity
00:53:15 Forecast accuracy isn’t the ultimate objective
00:59:10 Forecasts are inputs; allocation decisions matter
01:05:05 Prioritize economic value over forecast precision
01:11:00 Plan for correlated tail risks
01:16:55 Human action drives supply-chain risk
01:22:50 Design resilient systems for catastrophic disruptions
01:28:45 Adopt by co-creation; assign decision ownership
Summary
The Decision Factory argues there are no perfect solutions—only trade-offs—and rebukes the belief that better plans or forecasts replace judgment under uncertainty. Instead of brittle central plans, firms need repeatable, testable policies that use probabilistic inputs, feedback, and economics to manage constraints and tail risks. Simulators stress-test policies; Pareto trade-off explorers reveal costs of reducing extremes. Adoption requires co-creation with the field, clear ownership of constraints, and precise incentives and metrics (dollars, not misleading percentages) to build resilient supply chains.
Extended Summary
There are no solutions, only trade-offs. The Decision Factory is, at bottom, a rebuke to the grand illusion that better plans and sharper forecasts can substitute for judgment in a world of uncertainty. The authors—drawing on scars from Toyota and large logistics operations—show what should be obvious but isn’t: the future is not a set of facts waiting to be discovered; it is a stream of human decisions beyond your control.
Much of modern supply-chain practice is a relic of central planning—Gosplan with dashboards—where a single “optimal” plan is imagined to govern millions of local realities. In the real world, the plan is obsolete before breakfast. The workable alternative is not clairvoyance, but policies—what one guest aptly called “numerical recipes”—that make good decisions repeatedly under uncertainty, using feedback, constraints, and economics rather than oracle-like forecasts.
The obsession with forecast accuracy illustrates the problem. You can predict 1,000 mud flaps next month with perfection and still be unable to install more than 800 because of regulatory or physical limits. Accuracy at the input does not equate to wisdom at the decision. Worse, companies pour most of their analytical talent into shaving errors at the margins while ignoring the tonnage of constraints that round those grams into irrelevance—labor, storage, truckloads, regulations, contracts.
Risk is not primarily in the mean; it lives in the tails. Businesses die there. People multiply tiny probabilities as if disasters were independent coin flips. In reality, bad things cluster: strikes, late shipments, angry customers, and managerial panic arrive together. You don’t engineer that kind of risk away; you hedge and design for resilience. Simulators here are wind tunnels, not crystal balls. They don’t predict exact arrivals; they test whether your policy’s wings shear off when the gusts hit.
The book’s practical shift is from plans to testable policies, from point forecasts to probabilistic inputs, from static models to experimentation. A Pareto “explorer” makes trade-offs visible—spend 3 percent more on average to compress tail losses by 20 percent—and lets operators own the choice. Ownership matters because their phones ring when trucks don’t. Adoption, therefore, is not “change management” imposed from above, but co-creation: build with the field, embed their heuristics, and assign clear ownership of constraints so someone can be called to unlock them.
Language itself is a stumbling block. Words like “risk,” “uncertainty,” and “online learning” mean different things to different people. Precision in terms—and in incentives—matters. So does the metric that rules behavior. Dollars, not percentages. Companies proclaim profit as the goal and then govern units with percentage KPIs that insulate decisions from consequences. That schizophrenia is costly.
As product variety explodes and customization proliferates, the old top-down model strains. Field learning ceases to be optional. The hard lesson, delivered here with stories rather than slogans, is the constrained vision: think in economics, test in reality, design for tails, and stop worshiping plans that cannot survive contact with the world.
Full Transcript
Conor Doherty: This is Supply Chain Breakdown, and today Joannes and I will be breaking down The Decision Factory, the great, I love it, new book by Adam Chans Jr. and John Elam. Now, they are both notable decision science practitioners, but more important than that, they’re friends of the channel and returning guests. John, Adam, great to see you both again.
John Elam: Glad to be back.
Adam Chans Jr.: Yep.
John Elam: Yeah, thanks for having us back.
Conor Doherty: So it’s our pleasure. So it’s a large panel today. Okay, we have a remote, we have in-studio. So I will be supervising what happens. I’ll monitor the live chat. For people watching, if you have any questions, submit them and I’ll pose them to the panel a little bit later.
Now, gentlemen, before I ask questions: I'm Irish and, as such, I am more comfortable being sarcastic than sincere. But you both know that; we've been communicating privately. You both know I really enjoyed the book. It's quite popular here at Lokad already.
My feeling is you guys have taken incredibly complex and consequential topics and you've broken them down in a way that is actually accessible, not just to the average practitioner but to people you wouldn't even classify as practitioners. I would count myself in that group: I am very interested, I work at Lokad, my background is philosophy, I'm not an engineer. I was able to follow perfectly and learn quite a lot, as the discussion will show.
So I think you guys are great teachers and I’m really looking forward to talking about the book today. So, I can’t look at you both because it’s, I’m being sincere, but I’ll get straight to the first question. What is the book about and why did you write it, Adam?
Adam Chans Jr.: So the book is simply about making decisions under uncertainty. And the reason why it was written is because we've experienced the suffering caused by the old static models we've produced over the years.
And during our tenure at Toyota, we were particularly burned by uncertainty. As you know, it's one of the world's largest supply chains by footprint if you include all the suppliers, and there's almost no guarantee of an optimal solution. So this is really a story of everything we've learned along the way, John and I. I met him there as a product owner and we had many, many discussions about this very topic. Actually, a lot of it was inspired by Mr. JV over there and Warren Powell. The two of them together really opened our eyes to it, and I talked to John a lot about this.
Out of the dialogues we've had, we said, you know what, this is a great way for people to learn. Maybe we should write a book in this manner. So that's kind of the backstory.
Conor Doherty: John, is that how you see it?
John Elam: Yeah. No, I'd say that's pretty accurate. One thing I share a lot is my journey, because it's been this sort of linear progression since graduating toward decision science.
I didn’t know this is where I was heading, but it becomes obvious the more and more you look at, you know, what are we doing with data? What are we doing with machine learning? What’s all this AI trying to do? And the more you think about it and the more you see what’s actually trying to get done is making better decisions.
And so, you know, in working with Adam on building optimizers and recommendation systems, you know, it filled a lot of gaps in my head and I realized that I’m probably not the only one with this gap. And there’s a lot of people that are curious. They want to do better and they’re just unaware of what tools are out there.
So that was really what a lot of this is about: making people familiar with what you should even be asking and what's possible. That's the other side, right? You start thinking of multi-stage stochastic optimization. That is complicated. It's often intractable. It feels like the correct answer. It mathematically probably is, but you can't do it. So what do you do? What do you actually do? It's cool theory.
Working with Adam and the things we've been able to build here at Toyota, it's been incredible to see what we can actually make that is actually going to help us make a better decision. And so, like Adam said, we brought a lot of those conversations that we've had, and also ones I've had with some of our peers, whether that be other product owners or actual analysts, right? Because they're the ones who experience the uncertainty, because it's their phone that rings.
And of course working with other data scientists. So it’s just been very illuminating and we’ve realized that this is a topic that more people need to be aware of because it’s the correct thing to be studying.
Conor Doherty: Well, to follow up on that, because it is a point I think everyone on the panel has expressed before: the gap between what works in theory, in a vacuum—academic theory—and what actually happens on the ground floor, in the shop, in Fulcrum Logistics, the factory in the story. So what is the single most common mistake that you guys saw around uncertainty that inspired you to write this book?
John Elam: I’ll take a stab.
Conor Doherty: Good.
John Elam: I’ll say, I’ll just say one quick thing and then I’m sure you can run with it more, which is that we average it out. Uncertainty gets averaged out instead of being embraced. That’s probably the most common thing.
Adam Chans Jr.: Yeah. I'd also say the most common thing you see is management demanding we make a better forecast. A better forecast meaning a singular, you know, a point forecast. Like, how do we get this better?
You'll hear really crazy things too, really absurd things. Like: two years out, we want to have 99% accuracy. That's not possible. That's not anywhere near possible. And a lot of management takes this very seriously; there's a very serious directive: we've got to get to 99%. Okay. Now, let's step back.
If we had 100% accuracy right now, like what do we do? Even if you know that there’s a certain demand, like you might not even be able to fulfill that demand. So what do you do in that case? So, you know, demand’s 100 units, you only have 50. What do you do now? How do you disperse it? How do you mix it?
So that’s just, that is kind of the core, one of the core inspirations and faults that we see in reality and just not looking at the broader picture of uncertainty.
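To make Adam's "demand is 100, you only have 50" point concrete, here is a minimal Python sketch of two allocation rules. The channel names, demand figures, and margins are invented for illustration; the book does not prescribe these particular rules.

```python
# Two ways to disperse 50 units against 100 units of demand.

def fair_share(demands: dict, supply: int) -> dict:
    """Allocate supply proportionally to each channel's demand."""
    total = sum(demands.values())
    alloc = {k: supply * d // total for k, d in demands.items()}
    remainder = supply - sum(alloc.values())
    # Hand any rounding remainder to the largest demanders first.
    for k in sorted(demands, key=demands.get, reverse=True)[:remainder]:
        alloc[k] += 1
    return alloc

def highest_margin_first(demands: dict, supply: int, margin: dict) -> dict:
    """Serve the highest-margin channels first until supply runs out."""
    alloc = {}
    for k in sorted(demands, key=margin.get, reverse=True):
        alloc[k] = min(demands[k], supply)
        supply -= alloc[k]
    return alloc

demands = {"north": 60, "south": 40}      # 100 units of demand in total (assumed)
margin = {"north": 12.0, "south": 20.0}   # dollars of profit per unit (assumed)
print(fair_share(demands, 50))                    # {'north': 30, 'south': 20}
print(highest_margin_first(demands, 50, margin))  # {'south': 40, 'north': 10}
```

The two rules fulfill the same 50 units but produce different economics, which is exactly why the allocation decision, not the forecast, is where the judgment lives.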
Conor Doherty: I actually think, and correct me if I'm wrong, isn't that the quote that begins chapter 7? "Perfect forecasts don't exist. A perfect forecast would be a fact and we don't have facts about the future," by Evelyn Adimi. That's just from memory. Joannes, I know you love point forecasts, so you would disagree with what they just said, right?
Joannes Vermorel: Yeah. No, I really think that, you know, if you step back, the idea of having a plan is one of the ideas that has captured the intellectuals for literally a century. The whole USSR was built, among various things, on the idea that we will have a grand plan, and they even set up an institution called the Gosplan. For 70 years it produced plans for the entire USSR, and you would have thought that with the end of the USSR in 1991, the Gosplan and all the ideas of this kind of top-down planning would have died with it. But it didn't.
It survived and even thrived in large companies. That's very strange. And the problem—I think that's why uncertainty bites so much—is that we are talking of thousands of decisions a day even in relatively small companies, and tens of millions in the large ones. At this scale—again, even in a small company we're talking of thousands of decisions—it's mostly resource allocation. What do I buy? What do I build? Where do I move this inventory? Etc.
And so that’s why you cannot just rely on human intuition because you need to operate at scale. So if I have just one decision such as do I buy this apartment, yes/no, I need to look at the future price of the market and whatnot. I can spend literally months thinking very carefully about this decision. That’s fine.
So it's completely fine to have no process whatsoever, because it's a one-time decision. You invest literally days of mental power in it. But now if you want to take thousands or more decisions at scale, you have a coordination problem, because it's many distinct people who are taking those decisions. So it's not going to be as perfectly consistent as you thinking about an apartment with your wife, for example—two people in a very close relationship who are very aligned on what they want. No, it's going to be a thousand people who will try their best through meetings, emails and whatnot to get some kind of alignment, but it's very difficult.
And so what I see is that this idea of the plan is, it’s very much wishful thinking. It’s to think that if we could just know the future, everything would be kind of simpler.
Conor Doherty: Yeah.
Joannes Vermorel: And yes, I agree. If anybody could know the future, certainly supply chain planning would be a lot easier. But that’s not something that is going to come true ever.
And the consequence of that is indeed that anything that is built that assumes some kind of reliable knowledge, as you were pointing out, about the future, treating the future as if it was facts, is going to blow up in the face of the company trying to do that. I would say it is the necessary consequence.
Conor Doherty: Well, the thing is that is literally how the book opens. John and Adam, like again it starts, Fulcrum Logistics: there’s a plan. And again later you get into the difference between plans and policies, but like “I have a perfect plan. It was built by our MIP and by 6 we had a plan, it was perfect. It was theoretically optimal and then by 6:47 it was completely redundant. It was a complete catastrophe.”
And the thing is, and this then leads to the next question because you take that example and then you also, then you were talking about like the combinatorial complexity in the book. The central character Evelyn very intuitively breaks down like for Fulcrum Logistics here are all the possible decisions and all the possible scenarios that you could have and for a company of Fulcrum’s size it’s like more possible scenarios than there are grains of sand in the world, just mathematically.
And the thing is, you explain all that in a very, very straightforward way, but you explain it through a story. And the question is, why did you choose to tell it through a story, John?
John Elam: Why would I tell it through a story? This is something that we've been doing a lot—and I'm not just talking about Bit Bros, I'm talking about humans. We don't remember facts very well. We remember stories. We remember how things make us feel.
And whenever you can present people with something that is going to help them make better decisions, they see the emotional impact of that, right? My life is better, right? Not just, yeah, we've made the company more money. Cool. That's great, right? My bonus is tied to that and I want the company to do well. Absolutely.
But you know what would be really nice is if my day wasn’t crazy. That’d be really, really nice.
And so like showing people, you know, we can go, we can run through objective functions. We can look at charts. You can, you know, “Hey, look, if you think of things stochastically, you can avoid these situations.” And I could just say those facts, and those facts are, you know, presented in the book, but whenever you feel like what does that turn into, it turns into your phone not blowing up at 3 a.m.
That more concretely ties these technical concepts—simulators, probabilistic forecasts, optimizers—to the actual operational real world they're trying to help. All of this is in service of making operations better. And if you have any inclination to think otherwise, you're wrong. I don't know what else to tell you. We're not here to do math. We're here to run an operation.
And really getting people to think happens in stories. One of the things we wrote in our very first book is that there are lots of different ways to tell stories—the hero's journey, rags-to-riches, etc. There's a reason that playbook is copied: it's what people will actually remember. I don't want people to just have a reference book. They can reference this material, but I wanted to show them why they should do it.
Adam Chans Jr.: I would add to this. Another reason for this format is actually the dialogue that's in there. You can present people with facts, but a lot of people are afraid to ask stupid questions, as they say—questions they don't feel comfortable asking. And this allowed us to have characters from different backgrounds ask the types of questions that normally you wouldn't see asked, especially not phrased this way, and not answered this way.
But it also allowed us to show that the characters asking these—what technical people would call simpler—questions bring value to the company as well. It's not just technical people bringing value. There's institutional knowledge that has to be brought into your models.
A sort of tangent from that—and Joannes might disagree with a lot of this; this might be where we actually disagree—is that a lot of practitioners use MIP models. That's just the way it is. They're trained this way and they think this way: objective, decision variables. And if there's ever any uncertainty, they think, okay, stochastic optimization.
Okay, I need to do this multi-scenario thing, right? And our whole point is that you can still make use of your model. There's structure there. You can add tunable parameters, and this is where we go more into the Powell side of things.
Through simulation, we can run over these probabilistic forecasts. We can look at what happens with correlated catastrophes, right? Something goes wrong, then another thing goes wrong. They're not one-off events—how does that affect things?
I don't think the average mathematician would even consider these ideas presented that way, because they've been trained a certain way. But through the story, they implicitly gather: okay, how do I actually work with these types of problems? How can I expand my thinking from static MIP models to still using MIP, still using the tool, but in a different way?
So that was another reason for doing that.
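Here is a minimal sketch of the "tunable parameters plus simulation" idea Adam describes. For brevity, a simple order-up-to policy with one tunable parameter stands in for a full MIP model (an assumption, not the book's formulation); the demand distribution and cost figures are also invented.

```python
import random
import statistics

random.seed(42)

HOLDING, STOCKOUT = 1.0, 9.0   # dollars per unit left over / per unit missed (assumed)

def avg_cost(base_stock: float, n_paths: int = 500, weeks: int = 52) -> float:
    """Average cost of an order-up-to policy across sampled demand paths.
    Zero replenishment lead time is assumed to keep the sketch short."""
    totals = []
    for _ in range(n_paths):
        cost = 0.0
        for _ in range(weeks):
            demand = random.lognormvariate(3.0, 0.6)  # ~24 units/week, fattish tail
            sold = min(base_stock, demand)
            cost += HOLDING * (base_stock - sold) + STOCKOUT * (demand - sold)
        totals.append(cost)
    return statistics.mean(totals)

# Tune the parameter against the simulator, not against a point forecast.
best = min(range(10, 101, 5), key=avg_cost)
print("best base-stock level:", best)
```

The structure of the decision rule stays fixed; only its parameter is tuned by running it over probabilistic scenarios, which is the spirit of keeping your existing model but using it differently.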
Conor Doherty: Just as a quick follow-up question, because you've mentioned Warren Powell—he might even be watching this; I was talking to him earlier—and you've mentioned Joannes. I am curious, because the central character, Evelyn, I presume is a composite of multiple people. Is it true—and this is just my speculation as someone who knows Warren, knows you guys, and knows Joannes—that there is a bit of Warren and Joannes in Evelyn? Because of the way they speak, the way they approach things. There are passages, I swear to God, I could read and think: that sounds exactly like Warren, and that sounds exactly like Joannes. Like "Safety stocks: you're thinking about it the wrong way." Or "Service level: is that a constraint or is it a goal? For some people it will be a constraint, for some it will be a goal, but if you focus on one over the other as your objective function, that changes the decisions you can take." That is a very Joannes way to present it. So I'm just curious: is it a composite character? You don't have to say that I'm right.
John Elam: I mean, I'll say it. You're right. And I've told Joannes this. I said you are one of my favorite practitioners—and I mean it.
Conor Doherty: [laughter]
John Elam: Yeah. It's because it's true. When I was doing my graduate work, I had an adviser, and a piece of advice he gave me is: you have to understand the theory to know when to ignore it.
And so I think like he’s a very good representation of this, right? Like he gets the theory, understands what actually happens in practice. When do you ignore pieces? When do you use it?
And I think that’s a skill that people need.
Conor Doherty: Right. Well, now that we’ve confirmed that I can read between the lines perfectly, we’ll push on to the substance because again I think across the book there are many—we could talk for hours—but I think there are a few core ideas. And I think the first one is again the difference between plans and policies and how like pursuing optimality, as you guys put it, might not be the best objective.
So what is the lesson here that you want people to take at a high level? What is the lesson? So if you’re pursuing like the perfect plan, that’s a mistake. Okay. Why? And what are you replacing it with?
Adam Chans Jr.: So I will say that the optimal solution doesn't exist. I know solver companies—and me being at a solver company in particular—love to say "the optimal solution," and in some cases it actually makes sense. There are many, many cases where everything is automated end to end, truly in MIP-style fashion. You can take the output and implement it.
But there's also a large category of problems and industries where you can't do that. And so the optimal solution for the model is not the same as the operationally optimal decisions. They part ways, and in supply chain this is very evident. Now, there are also many cases where you can do things more the academic way, as I'll call it.
But my big takeaway that I want people to understand is: you have this tool. The tool is great. The MIP solvers are amazing. And part of what's amazing is you don't really need to care about how they work. You no longer need to care about the algorithm. If you give it a decent formulation, it's going to take care of it. You're going to get an answer out. So how do you use that as a tool to deal with your uncertainty? Of course there are other methods too, but this is such a prominent method in industry that I think it was important to talk about how to use it.
John Elam: Yeah, I would add that like you can’t think your way to good solutions. The best method that I’ve found is to experiment. Now whether that’s experimenting online or offline depends on your cost structure and you have your explore-exploit trade-offs you’re always, you know, calculating whenever you’re actually going to try things in the real world. But you can think about a solution all you want. You can formulate it and spend years creating this perfect formula for this insanely complex system, but you don’t know if it works. You won’t know if it works until you try.
And the sort of like two main methods is that, right, you can try in a simulator. Try to faithfully represent your world as best you can. It’s not going to be perfect. And I really like, like Evelyn has a quote about thinking of simulators as wind tunnels, right? They’re not telling you that I’m perfectly going to get to this destination at this time. It’s “Will the wing break if I put this much wind on it?”
That’s what we’re trying to measure. And so I want to measure like will this policy collapse under, you know, catastrophic situations, and if so, what does that look like? So that’s a simulator that I get to play with and that’s going to actually validate, partially at least validate, your claim that this policy is good. What’s actually going to validate it is reality.
So being able to put it out there, test it, and update that information intelligently—one of the main things I wanted people to take away is: try things out. That's your best way. Things will fail. It's really "fail fast," right? I don't really like that phrase so much because I think it's been abused, but the sentiment holds true. Go try it out. That will tell you if it's working.
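A minimal "wind tunnel" in the spirit of Evelyn's analogy: instead of predicting when a disruption arrives, we check whether a given policy survives one. The outage model, the demand distribution, and the failure threshold below are all assumptions for illustration.

```python
import random

random.seed(7)

def failure_rate(base_stock: float, outage_weeks: int, n_runs: int = 2000) -> float:
    """Share of simulated quarters in which the policy 'shears off':
    cumulative unmet demand exceeds one average week of sales (100 units)."""
    failures = 0
    for _ in range(n_runs):
        start = random.randrange(0, 13 - outage_weeks + 1)  # when the gust hits
        on_hand, unmet = base_stock, 0.0
        for week in range(13):
            demand = max(random.gauss(100, 25), 0.0)        # weekly demand (assumed)
            sold = min(on_hand, demand)
            unmet += demand - sold
            down = start <= week < start + outage_weeks
            # No resupply while the supplier is down; otherwise refill to base stock.
            on_hand = on_hand - sold if down else base_stock
        failures += unmet > 100
    return failures / n_runs

for weeks in (1, 2, 4):
    print(f"{weeks}-week outage -> policy fails {failure_rate(250.0, weeks):.1%} of runs")
```

The simulator never claims to know when the outage happens; it only reports whether the wing holds at one, two, or four weeks of gust, which is the question the operator actually needs answered.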
And you don’t need to be a genius to do this, which was our other point. Like if you look at the practitioners in the book, they contribute. Yes.
They very much contribute to the bottom line without knowing any math and they start to realize, “Oh, you know what? I do have a part in this.”
Conor Doherty: Yeah. So that’s in the chapter where you talk about the heuristics and that you can fold heuristics into your policies.
But again, Joannes, again I know this is very much what we do here. So again, do we pursue optimal plans or what is our approach? [laughter]
Joannes Vermorel: Yeah. No, not really. And you see, if I really look at how we think about the problem: typically I'm not using the word "policy," because we prefer the term "numerical recipe." It's a small detail, a slight change, but it's also a slight change of focus.
A policy is a mathematical instrument that lets you do the sequential part of decision-making. It's very useful if you want to sequence your decisions, and mathematically you would carry this object, this policy, through everything. But when I look at real-world systems, this policy component is like 1% in terms of lines of code. It's not even 1%. Very frequently it's like 0.1%.
And so what do we have? We have a massive amount of assumptions, for example, on how we process the data. Tons of data. It's going to be pre-processed in different ways. You're going to select the tables that you extract from the ERP, select the fields, and do all sorts of, I would say, massaging of the data.
And the gist is twofold. First, I think "policy" is good because it forces you to think of the future with something that is not a plan.
That's the first step: as long as you have a completely black-and-white perspective on the future—either we have a plan or we have nothing—I think you're stuck. So that's the first thing: "policy" clarifies that there is at least one third way. In fact, there are many, many third ways.
But the first thing is to start thinking that thinking about the future does not necessarily mean perfecting a plan—at least not a plan as in God's plan. And then, on focus: the difference between "policy" and "numerical recipe" also touches on why I use the latter. It's precisely to make it very obvious that there is no optimality whatsoever. If you use the words "numerical recipe," nobody expects any kind of well-behaved mathematical behavior from it. You acknowledge from the start that it's going to be a mess, and hopefully you want to make it better in some way. That doesn't mean all recipes are equal.
But "better" can mean so many things. It can mean that you have a clever business idea, a hack, and you do something you don't normally do. You extend the range of possible decisions because you accept doing something you would usually not even consider. Or you just negotiate different terms with your suppliers, and then a constraint that you used to have doesn't exist anymore, etc., etc.
And so it is fundamentally a way to avoid a premature rigidification, a premature formalization of the system before you've really started to assess everything. And I completely agree with the idea of doing experiments: even once you've started to look at what is possible and not possible, you test the water. Most likely, the first time you actually run anything on the data, you'll get feedback that blows up your plans. You were thinking directionally, "I want to go in this direction," and bam, you realize something is completely broken with your approach, because the data doesn't match your expectations in ways you could not even have thought of. And that's good. It just means you need to iterate fast. In fact—
Conor Doherty: Well, I do want to make it concrete because while that’s, again, high-level theory and it’s perfect, one of the things about the book is, again, Adam, John, to come back to you, the book goes systematically through how you would move or transition from thinking in terms of plans to thinking in terms of policies, experimenting. I think there’s a chapter on like a laboratory—I can’t remember the exact—Decision Laboratory, I think that’s the name of the chapter where it’s like, okay, let’s walk through what does a Monte Carlo simulation look like, and we get into again all the optimization theory.
But what I want to ask is, perhaps even taking the analogy of Fulcrum Logistics, could you explain in practical terms what it looks like when they transition from purely plan-based thinking to policy-based decision-making?
Adam Chans Jr.: First I just want to touch back on the numerical recipes point.
Conor Doherty: Oh, feel free.
Adam Chans Jr.: So, I agree with this. I agree with almost everything you say, usually. The one issue that I have—and John will probably agree; I'm speaking for him. [laughter] I'm making an executive decision. The thing is that in reality, one of the hardest parts was getting people to change, and that's something we had to deal with at massive scale—when you're talking about Fortune 50 companies, it's so hard to get people to change. While everything you said is true, people are still caught up in their ways of doing things.
And so one of our philosophies is meeting them where they're at, speaking their language, which in this case is usually MIP, because that's what everybody uses. And then going from the plan—you don't want to start with a plan, but they already have one—it's easier to meet them where they're at, grow them along the way, and then maybe get into "numerical recipes" as the term.
We intentionally did that. Just to be clear, that is why we did it. Although we agree—and I keep saying "we" because I'm speaking on behalf of this company. [laughter]
John Elam: I agree as well. I agree as well.
Conor Doherty: John, anything to add there? Sorry.
John Elam: No, no. I think Adam summarized it pretty well. I was going to talk about the path: how do you make the big transition?
Conor Doherty: Yeah.
John Elam: Yeah. Answering your earlier question, I think chapter three actually does a lot of the heavy lifting on that, which is framing the problem. It's SDA—sequential decision analytics.
And yeah, it's very much the first core part of SDA. If you're keeping up with Warren, you'll see he's very focused on this topic right now.
Conor Doherty: Yeah. Yeah.
John Elam: And frankly, I think we do this very poorly as a species. This is something we do very poorly. And I've actually been thinking about this—this isn't in the book currently—for a second edition. I haven't talked to Adam about it much yet. So, surprise.
Conor Doherty: Exclusive. Okay. Exclusive.
John Elam: You know, one of the things I've actually been thinking about a lot lately is how stoicism—a stoic mind—is a prerequisite to good decision-system design. Marcus Aurelius used the phrase "to see things naked and unadorned": to see things for what they are, not for what you want them to be or what you think they will turn into. What is it? Where has it been? And where is it likely going? If you can observe things in that way, framing the problem becomes a lot easier.
But that is hard. That’s like anti-human nature.
Conor Doherty: Yes.
Joannes Vermorel: Absolutely. I would even say that being able to think about the problem instead of the solution is so incredibly hard. What comes naturally is to think of a solution first, and then when you ask what the problem is—oh, the problem is exactly what the solution solves. That's very, very difficult. And by the way, I've been struggling with that for years. I think it's just another aspect of the same intellectual mechanism.
Elon Musk was saying that at SpaceX they were struggling with constantly getting bogus requirements. I think it's the same sort of thing: what is it that you're trying to solve? A requirement is a way to specify the problem, but it's very easy to frame the problem in a way that is completely bogus—so that it has no solution, or a solution which is good locally but not very good for the whole system of the rocket. And it is intellectually very difficult.
And I have even more bad news. LLMs are absolutely terrible at that. I’ve tried. They are very, very bad. [laughter]
Conor Doherty: So ChatGPT will not save you, at least not the current version.
Joannes Vermorel: Actually again—
John Elam: I’d say it would reinforce bad habits, in fact.
Joannes Vermorel: Oh yeah, absolutely.
John Elam: From my experience.
Joannes Vermorel: Exactly. Exactly.
Conor Doherty: Well, again, to bring it back to the book: when people think about making improvements, the thing they go to first—and I think it was a strategic choice on your part to push it towards the middle of the book—is accuracy. And I believe it's chapter seven, "Forecasts as Variables." It begins with Liam Chen, I believe the character is called, and I actually think—correct me if I'm wrong—that he's a proxy for pretty much everyone in this conversation, or anyone who reads the book, because their first thought is, "Oh, I need to make things better, I need a more accurate forecast." And it's a hundred pages into the book—once you've learned the problem, the complexity, how to frame the problem—that you finally have that discussion. Adam, you said earlier that people are afraid to ask potentially simple or, you know, dumb questions.
Liam is sitting there, afraid, waiting to have his forecast criticized again, because every time something goes wrong, the forecast gets the blame. And you methodically demonstrate that even with a small forecast error, what matters is what you did with it. That's actually where the damage lies—forecast as input, not as goal.
But you guys wrote the book. Please tell everyone: why is it wrong to pursue forecast accuracy in isolation?
John Elam: I’ll just give an example that I use when I’m communicating with my colleagues here at Toyota. It’s concrete, very simple. Let’s say I’m predicting mud flaps, right? We sell a lot of cars. It takes a long time to source millions of mud flaps as you can probably imagine. So, a good forecast out in the future would be great, wouldn’t it?
Well, first we got to figure out what the heck we’re forecasting. And most people think they’re forecasting demand or some proxy of demand. And so let’s say we do in fact forecast demand of like how many mud flaps are going to get ordered next month. Let’s say it’s a thousand. And it’s perfect. Like I predict now it’s a thousand. Next month happens, it’s exactly a thousand. So first off, I need a Fields Medal. And then after we cross that hurdle, what if I told you that the EPA will only allow us to install 800 because they are aero impacting. It impacts the miles per gallon. It’s tiny. Well, it’s tiny, but it’s enough, right? And that car was certified without mud flaps in this case.
So you can only install so many before you have to like recertify the car because you’re selling so many of them. There are so many things that go into that decision, right? I could—but making that forecast better just tells me I’m going to miss.
Now what I do about that: I could recertify the car. I could intelligently pick who’s going to get those mud flaps. Perhaps I give it to high profit areas. Perhaps I do a fair share. Perhaps I do first-come, first-served. There’s so many different ways to allocate those resources. But they will be constrained for sure, at least next month. That’s for sure. Now longer term they might be—maybe we can unlock that by, like I said, recertifying the vehicle or something.
That accuracy doesn’t matter. Like if I was 80% accurate on a thousand, right, plus or minus—800 to 1200—I’m still going to be well outside of my ability to actually install. So it’s not—that forecast is not a decision. That is not a plan.
And I like to use that example because it’s just so—it flies in the face of “Let’s get more accurate.” Okay, cool. You literally can’t fulfill that accuracy. That’s not legally or physically or whatever. Like there’s some sort of real limit.
So now you have to make a decision. Yes, you have a constraint. So it doesn’t matter if your accuracy is high. I mean it matters. It’s definitely going to help make a better decision, but it’s not the decision.
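A toy version of John's mud-flap example, as a sketch. The cap of 800 (standing in for the certification limit) and the forecast values are assumed numbers; the point is that once a hard constraint binds, extra forecast accuracy no longer changes the decision.

```python
INSTALL_CAP = 800   # hard limit, standing in for the certification constraint

def install_decision(demand_forecast: float) -> float:
    """The decision is capped regardless of how sharp the forecast is."""
    return min(demand_forecast, INSTALL_CAP)

for forecast in (900, 1000, 1100, 1200):
    print(f"forecast {forecast:>4} -> install {install_decision(forecast)}")

# Every forecast above the cap yields the same decision: install 800.
# The leverage is in allocating the 800 (fair share, margin, first-come,
# first-served) or in unlocking the constraint itself (recertify the car).
```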
Joannes Vermorel: Another way to approach it: with a focus on accuracy, you're going to do your calculation to the gram, and then something else will round it up to the next metric ton. Okay, so you have improved some measurement gram-wise, and then there is stuff doing those massive roundings.
And the stuff doing the rounding, it can be the EPA, it can be the fact that you want to do full truck shipments, it can be the fact that your storage capacity doesn’t let you store the stuff, or the fact that you don’t have the workers, or the fact that you can negotiate with a client that they wait longer. I mean there are so many other things.
And so obviously you don't want to purposely make your forecast inaccurate, because that's not going to do any good. But it's a matter of giving your attention the right coverage. The problem with forecasting accuracy is not that accuracy in itself is a bad thing; it's that very frequently, almost all the brain power of all the people doing analytics in a company goes to that.
And that's crazy. It would be as if at Toyota 90% of the engineers were focusing on how much paint to put on the car. Yes, you need people who think about exactly how much thickness of paint to put on the car.
But if 90% of your engineering workforce is doing that, it's probably a massive misallocation, because there are other problems than the thickness of the paint. It's not that the thickness of the paint is not important; it's that there are plenty of other things that are also important.
That’s kind of the way I see it.
John Elam: Adam’s been talking a lot about this—like not this specifically but tangentially—like the economics of your decisions are what matter. And in this case like what you have your engineers work on is a decision, right? Like are we going to work on forecasting or optimization or better contracting, right? Maybe we spend more resources on purchasing.
And knowing where to put effort is a skill in and of itself.
Joannes Vermorel: And it flies in the face of mainstream supply chain theory, which does not see supply chain as an economic game. So you see, there is that also. When you say we need the economics of decisions, I completely agree. But we are already saying, okay, we have a supply chain, and it's an economic game. And thus the criterion to judge whether we are better or worse is going to be economics.
And again I very much agree with that and I think this book captures this idea very well. But I would like to point out for the audience that it is, I would say, a very, very fringe belief. The vast majority of papers that are being published or books do not embrace this perspective at all.
Conor Doherty: No, sorry, just as a side note: when you say fringe—fringe but prominent. I'm not going to disclose any names, but I was in a conversation yesterday with a practitioner and he referenced both of your books. So just for the record, it's growing. I would say that much.
Joannes Vermorel: Yeah. I mean, the interesting thing is that if you go back to operations research in the 50s, it was economics. And then it went off on tangents where people were chasing, I would say, very abstract mathematical models, and the economic aspect was lost. In fact, it was not lost; it was sterilized, to the point where the economics was not there. You just had a theme of economics, but not the substance.
And there are tons of papers about supply chains still being published that have a veneer of economics, but it's not the real thing. It's very superficial. It's a little bit like visiting Disneyland and thinking Midwest towns look like that. It looks kind of like that, but it's so profoundly different.
Conor Doherty: I think a key part of the economic perspective—and this pushes the conversation on just a bit—is the idea of risk. This is why, listening to some of the passages from Evelyn—again, I think particularly chapter 7—I thought it was very reminiscent of you. The idea that the risk is not in the mean; the risk is not when things go well. Being right on average is not that rewarding—sorry, the damage of being wrong in a fat-tail event is far more cataclysmic than the rewards of being right. And if you're not planning for those rare events, you are essentially blind, not just to the uncertainty but, in more financial and economic terms, to the risks.
So again, there’s a whole chapter about, you know, fat tails, why the rare events are important. So, Adam, why should people, if they’re listening to this, care more about the fat tails than the mean?
Adam Chans Jr.: Yeah. I mean, it's exactly what you said: businesses live and die in the tails. Now, the chance of any one of these tail events happening is quite small.
But the thing is—well, one, it can be catastrophic—but beyond that, part of why it's catastrophic and why people overlook it is that they almost try to be too smart. They look at the chance of this happening, and the chance of this other thing happening, and the chance of this other one, and they essentially multiply through these probabilities.
But you're trying to be too smart and mathematical. Just think about it: all this stuff happens in correlation.
Joannes Vermorel: Yes.
Adam Chans Jr.: Right. When the strike happens, the shipments don't come in, the customers complain, the regional managers start freaking out—all these things happen at once, together, and in relation to each other.
It’s not even about—
Conor Doherty: Yeah.
Adam Chans Jr.: And you don’t need this. Like it’s almost like people try to be too smart. Like don’t be too smart. I have somebody that I admire that says, you know, “Stay in the middle of the bell curve.” Like there’s a lot of problems that people think are solved and they’re not solved and it’s all being—there’s a lot of opportunity to be had in simple things that people overlook.
And I think if you seek out these things that you think are solved and that just sit in the middle, you’ll find that there’s great success to be had.
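A small simulation of Adam's point: multiplying marginal probabilities as if disruptions were independent badly understates the joint tail once a common shock drives them together. All probabilities here are invented for illustration.

```python
import random

random.seed(1)

# Three disruptions (strike, late shipments, customer exodus), each with a 5%
# marginal probability. Independence says all three co-occur with 0.05**3.
P_MARGINAL, P_SHOCK, P_GIVEN_SHOCK = 0.05, 0.04, 0.60
# Choose the no-shock rate q so each event keeps the same 5% marginal:
# P_SHOCK * P_GIVEN_SHOCK + (1 - P_SHOCK) * q = P_MARGINAL
q = (P_MARGINAL - P_SHOCK * P_GIVEN_SHOCK) / (1 - P_SHOCK)

N, all_three = 200_000, 0
for _ in range(N):
    p = P_GIVEN_SHOCK if random.random() < P_SHOCK else q
    all_three += all(random.random() < p for _ in range(3))

print(f"independence says: {0.05 ** 3:.4%}")        # 0.0125%
print(f"with a common shock: {all_three / N:.4%}")  # roughly 0.9%
```

Same marginal probabilities, yet the correlated joint tail is dozens of times fatter than the independent calculation suggests, which is why "multiplying through" is being too smart.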
Joannes Vermorel: Yeah. I mean, the way I see it is that we have a problem with even English, you see, the language. Because when we say “risk” I see two different sorts of risk that belong to different worldviews. You have what I would call the natural science perspective—that’s physics, chemistry, all of that—and what does that mean? For example, for Toyota, that means we don’t kill people because the braking system explodes on the highway or something like that. So it is something which is about controlling the atoms. It’s about controlling the physics.
And here when people say risk, they don’t want risk. You want to have a risk that is so low that it becomes not measurable. You know, that’s the attitude to risk. And that has been one of the great successes of modern mass production, is to be completely intolerant to this sort of risk. So here we are talking of a class of risk that belongs in terms of study and analysis and phenomenon to natural sciences. So it is fundamentally about—think of it as physics, chemistry, material science, all of that.
Now the problem is that the same word qualifies risk both for a failure of the braking system and for the fact that, let's say, one of your suppliers decides you are not their top client anymore and sends their production to somebody else instead of you. That's also called risk. But here we are in a different worldview. We are in what is technically called a praxeological world: a world of human action, of intent.
And unless you remove people's free will, you cannot eliminate this risk, because the risk literally is people changing their minds. And because they're free, they can always change their minds. The clients can decide that they are not going to buy. Your supplier can decide to serve somebody else. Etc., etc.
So if you are in the realm of praxeology—human action, intent, human decisions—then there is no such thing as engineering away the risk, because fundamentally that would mean engineering away people's capacity to change their minds. And I hope we never get there—I don't want to live in a world with mind control. So as long as people can change their minds, you will be at the mercy of the risk that emerges from that.
And I think that’s why many companies, when it comes to risk, have such a schizophrenic [approach] because they are trying to reuse the risk management techniques that they have in engineering—which is, for example, it has to be right the first time; we don’t tolerate risk; things have to be, like, I would say pixel perfect within tolerances that are so small, etc., etc. But supply chains do not live in this universe. That’s why the risk is not something that you can engineer away. You can hedge your risk just like traders do on the stock exchange. But fundamentally, as long as people can change their mind, the risk will be there.
And that is why, fundamentally, you have to embrace the risk instead of trying to wish it away with perfectly accurate forecasts. A perfectly accurate forecast would be, intellectually, a way to say my risk is not in this praxeological area; it's natural science, something where I can have tolerances so small they become irrelevant.
But that's not going to happen, because we have people with free will. And there it is: this is what risk in supply chain is about. It's those future decisions that have not yet been made and that depend on people we do not control.
John Elam: So you're actually getting at something else I've been thinking and talking about a lot lately. Someone introduced me to the term "red words": words that are factually correct but do not translate well in your receiver's mind.
I’m coming to the conclusion that terms like “risk,” frankly “operations research”—I think you guys saw a post that went fairly viral about my thoughts on that name. I would say even “uncertainty.”
Take "uncertainty." When you say that to a marketing executive or an operations leader, they think, "You don't know." And everyone on the call would say, "Yes, I don't know," and we'd smile about it, like, "Yeah, we can use that." But that's not what they're thinking. The translation in their mind of "uncertain" is: I just don't know. I don't even have a good guess. It means I don't know.
And so that’s another reason why we wrote this as a story, is because it allows a vehicle for people who are unfamiliar with some of these terms to ask those dumb questions like, “What do you mean that’s optimal?” And it’s like, well, we’re talking about math and mathematically speaking that’s optimal, but that doesn’t translate to your real world. And like we got to have this vehicle that allowed us to call out words that mean other things in other places.
Heck, me and Adam debated quite a bit—we’re still debating, honestly—about a term that I think a lot of the folks in the audience will resonate with, but it struggles for me, which is “online learning.”
Conor Doherty: Online learning.
John Elam: Such a bad terminology.
Joannes Vermorel: Yes, that’s an awful word.
John Elam: I'm going to continue to fight Adam on this one, because "online learning" already means something to 95 percent of people. It means something to my little cousin, who's seven years old. He knows what "online learning" is. That's trouble, right? Communication is about me taking my idea and giving you the words, visuals, and mechanisms so that you can reconstruct in your head what I have in mine. "Online learning" is taken.
So I firmly believe we need to drop that term—or get everyone else to change what "online learning" means, and that feels like a harder battle, so I'm going to pick my battle. So yeah, words like "risk": that's why we used the novel as a vehicle to communicate these ideas.
Joannes Vermorel: Yes. Yes. And by the way, I complain frequently about the bad terminology of supply chains. But computer science is terrible. You know, you have things that are, for example, “dynamic programming.” It’s not dynamic at all. Certainly not in the modern sense. When people say a dynamic programming language, they would mean Python. But dynamic programming has nothing to do with that.
John Elam: Yes. Yes. Yes. Yes.
Joannes Vermorel: And “serverless”—
John Elam: Yes.
Joannes Vermorel: And we have—but you see, that's why I think these sorts of stories are interesting. You need to unpack some ideas, and it takes pages and pages, and you don't necessarily want to do it like a philosopher who introduces 20 very precise definitions, very carefully constructing his jargon, paid by the word.
Then you can start explaining your ideas. It is the most precise way to do it. But the problem is that it makes your writing so full of jargon that it's almost opaque to someone who didn't pay the price of learning all your definitions. And that's a real challenge: how much jargon are you even willing to carry through your writing?
Conor Doherty: Well, there are some audience questions I want to get to, but there is still one looming question that ties all of this together and does change, in a fundamental way, how people approach planning—by which I mean actually making decisions about what inventory they have, where they send it, etc. And that is taking the understanding that my risk lies in the tails—the low-probability events, the sigma-3 events, as you describe in the book. You say something like: decisions that cost 3% more on average but reduce the tail risk by 20% are something we really should consider.
Okay. Now, mathematically you can say that, but how do you translate that—which essentially means paying more—to people who say, "Yeah, I want to reduce my risk, but I also don't want to spend more money if I don't need to"? Adam, I'll go to you first.
Adam Chans Jr.: Okay. [laughter] Go for it.
Yeah. It's quite simple—in our case, you just show them examples of what this looks like. That's part of why we like to have a simulator when possible. One thing you can do with a simulator that you cannot do with just a static optimization model: a lot of times you have multiple objectives, and sometimes it's not even an objective but a KPI.
Depending on how you implement this, of course, a lot of people implement some weighted objective and it comes out as a number. That number doesn't mean anything at all. So you need some way to post-process metrics, and those metrics don't necessarily have anything to do with the objective. They can be separate ways of measuring the quality of a solution.
If you're doing production planning, for example, a tangential metric that you might not put in your model is the month-to-month smoothing of your vehicle builds. It's important, but maybe it's a secondary objective. Somehow you want to capture it, maybe even post-process it.
But you can show actual concrete examples of these scenarios: hey, we'll pay a little bit more, but—and I like to relate it back to something that happened—"Remember that port strike in December? This is what would have happened. This is how much we would have saved if something like that comes up again."
Making it very concrete for them and running through different scenarios is the only way to do it. If you start showing them graphs and statistical terminology, it just doesn't make any sense to them. Show them different examples of different outcomes, measure them, and get the economic value. Does it even make sense? Maybe it does, maybe it doesn't. "Let's pay 2% more on average so we're covered in this 40% risk area"—maybe that doesn't even make sense to do.
So you have to run through the numbers. If you don't have the numbers, none of this is really worth talking about.
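A sketch of "running through the numbers" the way Adam describes: simulate two policies, report dollars, and let the mean premium and the tail relief speak for themselves. The baseline cost, the hedging premium, and the disruption model are assumed figures, not the book's.

```python
import random
import statistics

random.seed(3)

def quarter_cost(hedged: bool) -> float:
    """One simulated quarter's cost in dollars."""
    cost = 1_000_000 * (1.03 if hedged else 1.00)  # hedge costs ~3% more up front
    if random.random() < 0.005:                     # rare port-strike-style event
        damage = random.uniform(2e6, 6e6)
        cost += damage * (0.2 if hedged else 1.0)   # hedge absorbs 80% of the hit
    return cost

for name, hedged in (("status quo", False), ("hedged", True)):
    costs = sorted(quarter_cost(hedged) for _ in range(50_000))
    worst = costs[int(0.99 * len(costs)):]          # the worst 1% of quarters
    print(f"{name:>10}: mean ${statistics.mean(costs):,.0f}, "
          f"worst-1% avg ${statistics.mean(worst):,.0f}")
```

The output pairs a modest average premium against a dramatically smaller worst-case bill, both in dollars rather than percentages, which is the comparison an operator can actually reason about.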
John Elam: I like to take that method that Adam just described and visualize it in what the term we used in the book is “Pareto Explorer,” but you basically plot out your Pareto frontiers across different objectives and KPIs that you’re tracking. And then you’re not building this perfect model for somebody. You’re building a mechanism through which somebody can evaluate their decisions that they would make. That is power. And it’s adoptable. Like you can get someone to use that.
I really struggle to get people to just use an optimizer. That takes months of, “What do you mean it’s optimal?” and “How do you trust it?” and blah blah blah blah blah. And it’s like, well, you didn’t, you know, you didn’t take stats and you sure as heck don’t have a master’s in applied mathematics, so you’re not actually going to know. You’re going to have to trust me, right? That doesn’t—that takes forever to build through.
But when you say, "Hey, actually the choice is yours," everything changes. You're basically providing people a ruler with which to measure their decisions, and that ruler is the Pareto Explorer. Taking the methods Adam just described and visualizing them for folks is a really powerful mechanism to get people to adopt what you're trying to do, because it's not really your decision. The other thing is, you're not going to own it. My phone doesn't ring when the truck doesn't show up. They don't call me. They don't call Adam.
They call Maya. They call the operator. The operator needs to have the control. They’re going to override the system occasionally, right?
You should know why and try to embed that into the system as much as you can and be continually working with them. But they need to have the control. It is ultimately their responsibility.
Joannes Vermorel: Well, as a vendor, I tried that 10 years ago. It didn't go very well. What would happen is that bad decisions would be taken. We would reanalyze the thing and say, "The tool actually told you at the time it was very, very bad." The clients would go through it and at some point say, "Okay, okay." But then they would say, honestly, "We have your stuff, and we keep taking bad decisions." We'd say, "But you agree the tool told you—you knew exactly." And they'd say, "Yes."
"But nevertheless, this is a problem that is on you." So we changed our attitude to: Lokad generates the decisions unattended, and people have to come up with a justification when they want to change one. We inverted it, you see—the override has to become the exception.
Obviously we can’t go into production like crazy. That means that we need to reach what we call essentially the zero insanity target, which is we generate decisions and nobody’s raising their hand anymore and saying, “Oh, this number is so absolutely bogus that it would literally blow up the company if we were to do that.” So we have this zero insanity target. And typically it takes one, two months to get there.
But once we are at the zero-insanity—not zero-inaccuracy—zero-insanity target, we invert the burden of proof. There is an unattended decision-making process that generates each decision, and the decision is self-justified with economic factors. If you challenge the decision, we say you have to at least challenge one of the economic factors reported by the tool.
And that can be purely judgmental. It can be, "You're telling me the cost of a stockout here is this much; I just don't believe that." They don't need to have a model themselves; they can just say, "My rough assessment is that it's too high—or too low." Or even, "There is another economic force you're not reporting that I believe would be worth reporting." Okay, fair point.
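[Editor's sketch: what a decision that "self-justifies with economic factors" might look like—each recommendation carries its dollar-denominated drivers so a planner can challenge one factor and re-score. Field names are hypothetical, not Lokad's actual schema.]

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    sku: str
    quantity: int
    factors: dict[str, float] = field(default_factory=dict)  # $ per unit

    def net_value(self) -> float:
        """Net economic score: rewards minus costs, summed over factors."""
        return sum(self.factors.values())

d = Decision(
    sku="MUDFLAP-42",
    quantity=800,
    factors={
        "gross margin":      +12.0,
        "avoided stockout":  +4.5,   # the factor a planner might dispute
        "carrying cost":     -2.1,
        "obsolescence risk": -0.8,
    },
)

# The challenge process: override one factor, re-score, compare.
challenged = dict(d.factors, **{"avoided stockout": +1.0})
print(d.net_value(), sum(challenged.values()))
```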
But you see, this economic mindset is not a given. Most of the companies we serve have been operating under percentage-based targets for so long that the idea of counting dollars—of targets denominated in dollars—is foreign.
I mean it’s very interesting because people will—it’s, I would say, corporate schizophrenia. When you ask people a question, “What is the purpose of the company?” Make money. Okay. How do we make money? More dollars in the bank account. Okay. Now, what about your division? “Oh, no. My division, no, no, we don’t count dollars. Just percentage. No, no, no, no. Yes. So the other divisions, yes, they do make dollars, but we—it’s just going to be a percentage.”
And it’s very funny. I mean there is this corporate schizophrenia where people know the theory—like the company is there to make money—but in practice, whenever they can shoot for a percentage instead of shooting for dollars they would shoot for the percentage. And I guess it’s a little bit like bureaucracies at play. You know, if you shoot for a percentage you have so much leeway to make your own performance look good. It’s very tempting.
When you play with dollars—because, you know—
Conor Doherty: —the game changes fundamentally.
Joannes Vermorel: Yeah. That’s why, for example, people who are traders, it’s so brutal, is because if your job is to be a trader, you know, buying, selling commodities, anybody sees your performance in dollars in real time. It is absolutely brutal. But in a large corporation because you are several steps remote from the actual clients, actual suppliers, actual production lines, this percentage can pretty much insulate you from the real world consequences of your bad decisions. And that’s very tempting. I understand at a human level it is very tempting to just pick those percentages. And thus that makes you say, “Oh, look, we were very compliant with the percentage targets.” Oh yes, it created so much chaos down the line, but we were compliant with the targets.
Conor Doherty: Gentlemen, any closing thoughts on that before I push on to the audience questions?
Adam Chans Jr.: No, I very much agree, and it is a major challenge to get people to think in dollars.
John Elam: Yeah.
Joannes Vermorel: Probably the biggest hurdle in this whole process.
And again, that’s the economic perspective. The problem is that the people who already embrace this economic perspective, for them it’s like self-evident. But my casual observation is that this economic perspective is embraced by maybe, you know, like 1% of the population. The vast majority of the population do not have this economic perspective on anything.
And it is a major challenge. It is a major challenge.
Adam Chans Jr.: I’d say it’s the hardest part of all this, frankly.
Joannes Vermorel: Yes. And when I say 99%, I mean, for example, that you could run a survey and ask people, "If you buy a house, does that make you richer?" A house, obviously, is a liability, not an asset, but I'm pretty sure that in France and in the US at least 90% of people would say, "Oh yes, I own a house, I am richer now."
No—you have a liability, not an asset. You're not getting any richer by having it. It doesn't mean you should not have a house. It's very nice, very pleasant; it's quality of life and everything.
But let’s be clear, this is not the class of stuff that you can own that will make you richer. That’s again economic perspective. And I stress, I would still think at least in France it’s absolutely sure that it’s going to be at least 90% of the people who fail this test. And in the US, I’m not sure, but I would say probably kind of the same.
Conor Doherty: All right.
Joannes Vermorel: More.
Conor Doherty: I don’t know. I don’t know. I mean, the French illiteracy in terms of economics is really, really, really high. So, you know—
Well, I’m going to sidestep our—what the kind of rants we have privately on that one. So, I’m just going to go straight to possibly a money-making opportunity. Someone asked, “Could you share which real customers inspired the fictional company in the book?” I told them, for a fee we’ll share that information, but to let us know, to message me privately.
Are you okay if I take that? Can I take that one? That’s for me.
Adam Chans Jr.: I just want 20%.
Conor Doherty: 20%. All right, 80/20. That's fine. But a real question now—this one is from Jacob: you devoted considerable attention to the decision simulator in the book, but perhaps less to real-time response to unexpected or expected events. A well-designed, problem-specific GUI paired with the decision engine could be a powerful complement here; people in the field may need advice for immediate action. What are your thoughts, panel? Adam first.
Adam Chans Jr.: First of all, if you really want to know the companies that were—
Conor Doherty: Well, we don’t have to get into that.
Adam Chans Jr.: We are not associated, for the record.
Conor Doherty: No, no.
Adam Chans Jr.: Yeah, I know. I'll just say that this comes from work I've done in the past in consulting, as well as at a large automotive manufacturer. [laughter]
But really, there was a large logistics company—I'm bound, I cannot say the name—that represents the problem in this story quite well, at least a good portion of it. Especially the "Hey, we need to forecast ETAs better; we have to have ETAs." Well, take a step back. Of course you want a better ETA—it's a better customer experience. But if the ETA says, "We're going to deliver your package two months from now," would the customer rather have a wrong ETA and get the package tomorrow? I think they would.
So like there’s this whole forecast versus decision aspect. I’m glad you liked my consulting phrasing of “a large automotive manufacturer.” [laughter]
And the other part of the question is about online learning—well, apparently you don't like that term—"learning while doing" is another way we say it. We didn't talk much about it because, one, it's quite hard to do, and I think it's an ongoing piece of research both in practice and in academia.
The number one thing is: can you get something that looks reasonable? I really like Joannes' explanation of the zero-insanity piece—I found that extremely useful. Can I make some type of decision, in an automated fashion, such that if I showed it to another human, it looks like something a human might have made, and it looks reasonable? That's the first step.
And then throwing something out there and getting feedback immediately—like how agile was supposed to work before it became a meeting and a bunch of meeting theatrics. Getting it out there, getting feedback, iterating quickly.
I think there’s a part about a UI. Yeah. You have to have contingency plans for when things go wrong. Like that’s part of the whole policy aspect. You don’t know when there’s going to be a severe storm, but how do you behave when the storm shows up?
Conor Doherty: Oh, John, if you have something to add there.
John Elam: I want to take a little bit of a different angle on it, actually. You're already doing online learning, right? That's the natural state of the world. That's how babies work: they touch stoves, their hand hurts, they don't do it again. Or maybe they do—maybe their knowledge gradient isn't very steep and they take a little longer. But—
Adam Chans Jr.: My son is. But yeah.
John Elam: You know, apples falling from trees and whatnot. People are already learning online. People already have Tableau dashboards. You already have a UI around that.
Now, I think what the question is hinting at is: couldn't we create systems that show—using a knowledge-graph kind of approach—"How much do I actually know about this policy, and how much of it have I not explored?" It's basically the same method either way; online, you're just more conservative.
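[Editor's sketch: "learning while doing, just more conservative"—the same update rule you might fit aggressively offline, run online with a small step size so one odd observation can't yank the policy around. Parameters are illustrative.]

```python
def online_update(estimate: float, observation: float, step: float = 0.05) -> float:
    """Exponentially-weighted update. Offline you might fit aggressively;
    online, a small step keeps the policy stable while it still adapts."""
    return estimate + step * (observation - estimate)

demand_rate = 100.0
for obs in [98, 103, 97, 250, 101]:   # 250 is an outlier / unexplored region
    demand_rate = online_update(demand_rate, obs)
print(round(demand_rate, 1))          # nudged, not yanked, by the outlier
```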
That’s one of the things we try to talk a lot about in the book is that we are kind of trying to get people to kind of get on the simulator bandwagon a little bit. So we did focus on that—probably I would almost say we over-indexed on it in a real-world situation, in fact. But there’s a reason for that. It’s less natural. It’s less used and it’s highly effective and cheaper than I think people think it is. You don’t need to simulate every leaf on the tree to simulate that there is a forest.
So, I think that’s part of it. We’re kind of trying to shock the system a little bit.
Conor Doherty: Thank you, John. Joannes, I could see you were waiting to say something.
Joannes Vermorel: Again, that's why I bring back the idea of the numerical recipe. The way I see it, the main problem is not what academia would call online learning—you have new data and you need to nudge your parameters a little—yes, but that is such a small portion of the operational problem we face. At Lokad, for example, the main problem is that this zero-insanity principle is very frequently, heavily violated.
For example, you have a client and they suffer a ransomware attack. Their data is gone; they have backups, but it's going to take a month and a half to restore from them. So now you need to operate in a completely different way, and you have 24 hours to deliver whatever the substitute is going to be. And because we make decisions at scale, "let's bring people back to do it manually" is not even an option—there are just too many things to decide. So it's going to be software no matter what; it can be super crappy software or slightly better software.
And knowing that you’re in a crappy situation—and this sort of situation at Lokad, it happens very frequently. I mean a few years back we had a client in Germany and they literally had—one day they called—there is like half of their warehouses, they had dozens in Germany, were flooded.
Conor Doherty: Okay.
Joannes Vermorel: Okay. They said, we can measure it in centimeters—we can tell you how many centimeters of water, if you want. And those warehouses are not coming back anytime soon; it will take months to dry and restore. So now you have 24 hours to make whatever you can with the ones that remain.
And that’s the sort of, I would say, problems where you want your solution to be very resilient to this class of problems.
So you see it’s not about the fact that, for example, the dollar lost 1% against the euro today or whatever. Those are not the sort of things that just explode completely. It’s like, for example, we had clients who were suddenly having major problems for their supply chains when the de minimis rule in the US was removed for small packages because they had pretty big pieces of machinery where the spare parts were sent as small parcels.
And those small parcels would end up stuck at customs, just like all the Temu parcels, even when one was a piece of electronics that would make a multi-million-dollar machine work again—stuck for essentially random reasons.
And again, what do you do with that? You need approaches where the software mass is not so big that a rescue mission becomes impossible. It has to remain sufficiently small that you can rewrite the whole thing—super simplified, dumbed down—ideally in one day, maybe a few.
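[Editor's sketch: the "rescue mission" constraint Joannes describes—a degraded-mode substitute small enough to rewrite in a day: one replenishment rule, no forecasts, no solver, no external dependencies. Thresholds are placeholders.]

```python
def degraded_mode_order(on_hand: int, sold_last_30_days: int) -> int:
    """If stock covers less than ~2 weeks of recent sales, top up to 30 days.
    Crude, but it keeps goods flowing while the real system is restored."""
    daily = max(sold_last_30_days / 30.0, 0.1)   # avoid divide-by-zero stalls
    if on_hand < daily * 14:
        return int(daily * 30) - on_hand
    return 0

print(degraded_mode_order(on_hand=20, sold_last_30_days=90))   # 3/day -> order 70
print(degraded_mode_order(on_hand=60, sold_last_30_days=90))   # covered -> 0
```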
Conor Doherty: Well, I’m going to push on to Antonio’s question. It’s actually a long comment. He’s bought the book, so thank you, Antonio. I get no share of that, but I know Adam and John appreciate it. I’ve summarized it because it’s a very long question. But in the book field learning is a central concept. However, in big companies, information often flows one way. HQ will push decisions out while the field and teams have no reliable way to push real constraints and their knowledge back in.
Question: What’s the main technical or organizational requirement to build a truly two-way feedback stream that will allow field realities to enter the process without slowing down decision-making?
Conor Doherty: So, John might know a large automotive manufacturer that works on a push model. Do you know a large automotive manufacturer that works on a push model? Maybe you can speak to this, Adam—oh, sorry, John.
John Elam: I'll be honest: that sounds like a question with an answer I'm unaware of, if I'm being candid.
This is human nature—it isn't a math problem. And because of that, frankly, in my experience you're going to need a big problem to happen before headquarters wants to listen to the field.
Conor Doherty: A catastrophic event potentially.
John Elam: Yeah, a catastrophic event. That could be the trigger, the impetus to want to take this seriously.
And at that point, things will get serious. We build mechanisms that allow the field to communicate back to us directly, both on strategic planning—when we have whole new vehicles coming out, we have very robust mechanisms to get their sentiment on things that have never been seen before—and, with maybe less robust mechanisms, on the day-to-day, month-to-month ordering: "Here's what we think we need—so many yellow, so many gray, so many blue." We have mechanisms for that as well. In fact, there's a full computer system around it that we're enhancing at the moment.
It took a long time to get there, though. We used to just shove vehicles out, and we've had inventory sit—we have a long history, right? So we have a lot of lessons we've learned from, and one of those lessons is, "Hey, if we listen to the field, they sell stuff faster," as it turns out. I feel like we've actually learned that lesson pretty decently, especially for our size and scale. We are the largest automotive manufacturer on the planet; the fact that we have any sort of sensing of what's going on in, say, Arkansas is impressive in its own way.
So you’d need something that will trigger the organization to care enough to make the investment. Once the investment’s made, then it’s shepherding and making sure that the correct constraints are flowing back into the decision system and that those constraints have ownership. This is one of the things that I have seen kind of—
Conor Doherty: Well, that would be the last—
John Elam: Yeah, it needs to be brought to the forefront. I talk about it at least weekly, if not daily, at work: "We've got these guides, we've got these constraints. Who updates them? Who owns them? If I found out we could make a hundred million more dollars if that constraint got bigger, who do I call?" Because we can do anything given enough time and attention, right? So I need to know who to call to unlock that constraint. It needs to get assigned and owned.
That’s one of the things we’re working on right now. We’re assigning very clear ownership to even the finest of constraints. So we’re starting to take this very seriously. It’s taken a lot of problems and headaches to want to take it seriously, though—just is my sort of candid response to what—I think they’re wanting a silver bullet or a method, a phrase, or an example. I don’t have one. People have to feel that pain before they change their behavior in my experience.
Conor Doherty: Thank you. Joannes?
Joannes Vermorel: I have a slightly different perspective. We need to understand why you have those top-down mechanisms in the first place—and that's a discussion I had very frequently with my parents. The short reason is that those mechanisms made the company successful in the first place.
For example, Procter & Gamble in France in the '70s.
Conor Doherty: That’s where your parents worked.
Joannes Vermorel: Yes, that’s where my parents worked. So the strategy was very simple. You have smart people in Cincinnati. They—smart chemists—they come up with something that is like a new sort of shampoo. It is easy to produce. It is cheap. It is safe. It smells good. Perfect. Then you have like brilliant marketers. They would have the very good packaging. They would have—so they would essentially think it through, the whole thing. And then when you enter French markets like France, it’s completely underdeveloped where, again, there is like no industrial-grade mass-produced shampoo. So all the competitors are craft. All the practices are craft because it was, you know—and that’s why Procter & Gamble had literally a military command structure where it was just “Obey,” and you had to be very good at taking the thing exactly where it was designed because if the American chemistry team decided that the shampoo was exactly like that, there were reasons. I don’t care about you second-guessing the American chemistry team. They have done their job. So don’t second-guess. Don’t create like a variant of the product for the French market. Just do it exactly as you are told.
And obviously this—if it’s products that you’re going to put on your body, the chemistry has to be exactly the same because if you, for example, have a small divergence in pH that’s not going to be good. So that made those companies like economic war machines. They would just conquer markets because they had standardized products they can replicate. That would be: you build a factory, you put it on the shelf of every single supermarket, you do a TV ad, and bam, you have like 20% market share, and you move to the next country.
Okay, those are old stories, but look at Microsoft doing data centers—same story. Very smart people in Redmond pioneer the right way to do data centers, and then: "Okay, we have 100 data centers across the world, and no, you don't get a French way of doing data centers. We do it the American way, because that's the way that is proven and reliable." This is very rational: you invest a lot up front to make really, really sure your recipe is very good, and then you copy and paste, no discussion. And if people second-guess the engineering decisions that were made, even explaining why those decisions were made entails a lot of cost. So at some point it's just "Shut up, do it as you're told." It is the rational way to do it.
The problem for supply chain, the way I see it, is that we live in a world that is more and more complex. How did Procter & Gamble gain 20% market share in the '70s? They were first to market with industrial-grade, mass-produced shampoo—bam, 20% market share. Same thing for detergent, bam. Same thing for a coffee brand, again.
Back then, the total number of products managed by Procter & Gamble was something like 200, and that's it. Now think of Toyota: how many different cars can Toyota produce? I think it's tens of millions, if you take all the combinations.
John Elam: Oh. Hundreds. You can make a RAV4 a thousand different ways.
Adam Chans Jr.: Yeah.
John Elam: Yes.
Joannes Vermorel: That’s just one—that’s one car.
John Elam: Yes.
Joannes Vermorel: So we are talking about tens of millions, and that means crazy things—a given configuration might only ever be produced in exactly the same way maybe 10 times.
And that's why field learning becomes so important. The recipe of "let's have a lab in Japan that tests everything" breaks down. If Toyota produced 20 cars and that's it, they could test everything in Japan, and those would be the very best cars, no discussion. But they have millions of variations.
And the same story plays out everywhere nowadays—Procter & Gamble has something like 50,000 products. It's happening for every single company. The complexity is sky-high, and the way you win markets is very frequently through a lot of subtle differentiation.
And thus the feedback from the field becomes so much more important, because you have so much diversity. Look at companies that don't have much diversity—Apple, say. They don't really care about this: "We're going to have 12 different iPhone variants with colors, and that's it." They don't really need field learning, because everything is so incredibly standardized across a small number of products.
So I really see the importance of field learning as something that is gradually magnified by the increasing complexity of your supply chain.
Adam Chans Jr.: So there’s a large effort in automotive that I know across many of the manufacturers in trying to—now what they wanted to do is create this like custom customer experience. You could have sangria sunrise paint and whatever you want. And now they’re spending like tens of millions over and they’re spending many, many years trying to build these products where they’re doing the reverse now. So now they’re like, “Hey, can I simplify this? Can I only have five colors?” And this is a huge, huge Herculean effort right now across all of them because—and now you have to do things like, “Well if I get rid of orange, how many people walk away? Why?”
And also, everybody wants the new stereo. They don't want the sunroof, but we make money on the sunroof. So can we force the sunroof in combination with the stereo?
So this is a huge thing. They're trying to walk back the customization now.
Conor Doherty: Yes. Well, gentlemen, we’ve been going for almost an hour and a half and the sun is setting here in Paris, but it’s funny because the last question I had is actually one that’s been asked by someone in the audience, so I will just read it verbatim and perhaps you can provide your closing thoughts on the book.
This is from Joshua Bradshaw. Hey, Joshua. This leads to a key question that often comes up: who truly owns the supply chain decisions? As you both introduce this new way of thinking, what are the most effective strategies you’re using to get organizations to adopt it?
Adam Chans Jr.: Yeah, I can go, I guess. First of all, thank you, Mr. Bradshaw. You know, I love you.
Conor Doherty: We’re all friends with Joshua. [laughter]
Adam Chans Jr.: Yeah, great guy, for real. Beyond that, I think one of the key things—and John might have meant to say this, but I'm stealing it first; that's why I wanted to go first—is that a lot of times you hear "change management," and change management means you're taking a solution and pushing it: you're working with people on changing the way they adopt something you're pushing at them. You're essentially forcing the adoption.
Whereas if you co-create and bring the stakeholders in from day one, you get them involved. You even take some of the crazy things they say that you know don't matter, and you entertain them, and they start to trust you. You build a relationship over time—you might be able to take that constraint out later, for example.
And what that does is stop them from saying, "I don't like your model, I don't want to use your model," because it's not yours—it's ours. They no longer reject something they helped build. You don't reject your own proposal; that doesn't make any sense.
So I think that is probably the key strategy. And I stole that from John.
Conor Doherty: So I can skip John is what you’re saying?
John Elam: Go, John. I'm going to package it into something really tight that I like to say: I don't like the phrase "change management." I've written a whole article on "If you're using the words change management, you effed up," because it means you built something and now you're pushing it on people.
And that can work—it does work. But what I find works far better, as in it gets rolled out faster and we get more features that matter sooner, is co-creation.
So it’s messier. It’s not a paved road. Like we are not going to walk on some paved, you know—normally we’re paving a road for an analyst or an operator to follow and their boots stay clean and we did all the work. It’s messier to hold hands and walk down a path that may not have been walked before and that you’re going to get scratches on your legs, but at the end you walked straight to your destination, a lot of times, or at least a lot straighter. And there’s none of these extra paths going to buildings and scenes that they don’t care about, right? It’s much more effective to co-create, frankly.
Conor Doherty: Well, on that very, very pleasant note, we’re out of questions. We’re definitely out of time. Thank you, Adam and John. Really, you’ve given us so much of your time. You sent us the book. Really appreciate it. Joannes and I are very happy to have you. And I know you’re too kind to say it, so I’ll say it. Everybody, I stake my reputation on the book. I really like it. You should buy it. It’s available on Amazon. It’s great. Not sponsored, just for the record.
John Elam: Thanks, Conor. Thanks, Joannes. It was great to be here. I love these conversations. This really gets my creative juices flowing.
Conor Doherty: Well, if you write another book, we’ll bring you back. So, is there another one in the works?
Adam Chans Jr.: We might be thinking of one. So, that’s a lot of work, though.
John Elam: But it’s worth it just to spend time with you. So that’s right here.
Adam Chans Jr.: We do this for the love. [laughter]
Conor Doherty: All right. Well, guys, thank you very much again. Thank you all for attending, and thanks for your questions. And again, everyone's already on LinkedIn—if you're not already connected with John, Joannes, Adam, and myself, please do connect. We're lovely; we're all happy to talk. And on that note, thank you very much for joining us, and we'll see you next week.