00:00:00 Supply chain optimization overview and Toyota context
00:07:28 Communicating value to executives; decisions and money
00:12:45 Digital transformation: simplifying complexity at scale
00:17:49 Embracing modern supply chain methods and language
00:24:00 Unfiltered data and edge cases in production readiness
00:29:44 Improving decisions: objections, disruptions, and forecasting
00:34:05 Safe decisions and managing supplier risk
00:36:23 Toy problems and real-world uncertainty in supply chain
00:41:13 Empowering executives, scenario testing, and ROI
00:46:10 Business impact: moving beyond accuracy metrics
00:49:45 Demand is engineered: accessories, forecasts, and history
00:55:01 Co-authoring optimization and the value of relationships
01:00:51 Stakeholder alignment, culture, and management tactics
01:09:29 Driving digital transformation and changing KPIs
01:15:19 Software, incentives, and lessons from business giants
01:21:17 Audience, language, and effective communication; book advice

Summary

In an interview on LokadTV, hosted by Conor Doherty, supply chain optimization takes center stage with speakers Joannes Vermorel, CEO of Lokad, Adam Dejans Jr, and John Elam from Toyota. The dialogue explores decision-making in global supply chains, emphasizing cultural shifts and simplifying complex processes. Adam Dejans Jr stresses the necessity of systemic reconsideration, while John Elam focuses on the operational scale as a source of complexity and emphasizes aligning language with desired outcomes. The conversation stresses transparency and trust-building, urging gradual introduction of complexity. Insights are shared on the importance of simplifying communication for engaging executives, showcasing varied cultural approaches, and advocating continuous learning to transform legacy practices.

Extended Summary

In an interview hosted by Conor Doherty, we delve into the intricate realm of supply chain optimization, featuring Joannes Vermorel, CEO of Lokad, Adam Dejans Jr, and John Elam from Toyota. This discussion unveils a multifaceted exploration of optimization in supply chains, navigated with diligence and clarity.

The conversation kicks off with an inquiry into decision-making within global supply chain transformations. Adam Dejans Jr emphasizes that transformations extend beyond mere automation, requiring cultural shifts and adaptations to disruptions. He underscores the importance of reconsidering systems rather than just upgrading processes, a sentiment echoed by Joannes Vermorel, who highlights the complexity introduced by the division of labor within large organizations. Simplification, Vermorel argues, is a critical counterpoint to this complexity.

John Elam enriches the discourse by pinpointing the operational scale as a source of complexity rather than the questions themselves. He advocates for aligning language with desired outcomes, particularly when persuading executives. Adam Dejans Jr continues this line, advising a gradual introduction of complexity to build trust and emphasizing political obstacles’ role in complicating decision sequences.

Through pointed comparisons, Joannes Vermorel discusses competitors like SpaceX embracing efficient practices despite political challenges, asserting that companies unwilling to streamline face existential threats. A consensus emerges: understanding and integrating political and business perspectives is crucial.

As the dialogue progresses to engaging executives with optimization, Elam and Dejans Jr reiterate the centrality of embracing uncertainty and aligning plans with financial metrics over purely technical terms. They advocate for starting with simple models to gradually introduce complex layers, fostering transparency and building relationships to gain acceptance for optimization frameworks.

In examining Lokad’s approach, Joannes Vermorel describes the emphasis on probabilistic forecasts and the necessity to prioritize outcomes over technical means. He emphasizes the importance of iterating and refining decisions, particularly when addressing edge cases to ensure comprehensive, production-grade solutions.

John Elam and Adam Dejans Jr draw parallels with Toyota’s operations, focusing on understanding complex supply chains and validating processes across managerial responsibilities. They stress the importance of building trust through objective performance improvements, merging transparency with practical strategies despite partial understanding.

Conor Doherty’s inquiries lead to exploring change management, with John Elam introducing toy problems to illustrate integration of uncertainty in decision-making. This approach, along with Adam Dejans Jr’s experience in crafting relatable suggestions for dealerships, highlights the efficacy of simple communication for engaging uninterested executives.

The conversation shifts to cultural management approaches, contrasting styles between US tech companies and French practices, showcasing differing impacts on company dynamics. New creative leadership approaches emerge as crucial in overcoming legacy systems and driving innovative change within established companies.

An important facet revolves around language simplification for effective communication with executives. John Elam shares insights from teaching rhetoric and technical writing, enhancing engagement by adapting messages to audience and context. This discourse culminates with reading recommendations, underscoring the value of continuous learning and adaptation.

Throughout the interview, profound insights emerge about bridging technical expertise and executive decision-making. It’s a journey into navigating complexities and fostering collaborative evolution, underscored by humility, strategic communication, and relentless pursuit of tangible outcomes within the supply chain optimization landscape.

Full Transcript

Conor Doherty: Everybody wants a successful supply chain optimization, but people often don’t take the time to make sure that all stakeholders, particularly executives, understand what that even means. Now, luckily for you, today’s panel is going to discuss that very topic. Joining me today, Adam Dejans and Johnny Elam from Toyota, and in studio, Lokad founder Joannes Vermorel.

Now before we get to the panel, you know the drill: like the video, subscribe to the YouTube channel and follow us on LinkedIn. And with that out of the way, I give you today’s panel.

Well, Adam, John, thank you very much for joining us. This might be the fastest turnaround we’ve ever had on LokadTV, because we reached out to you, Adam, a couple of weeks ago and here you are already. Thanks for joining.

Adam Dejans Jr: Thanks for having us. We’re glad to be here.

Conor Doherty: John, we’ll start with you. Could you introduce to the LokadTV audience your background and what it is you do at Toyota?

John Elam: My name is Johnny Elam. Right now at Toyota, I’m a manager of business insights and strategy. I work as sort of a tech lead, product owner sort of role, where I have three different products in the supply chain that help fulfill the supply chain functions.

There’s all the different sorts of functions going on, right? There’s demand sensing, supply fulfillment. We even have a customer preference engine, where we’re trying to understand what people like by looking at past sales. So I manage three different teams doing that. My background that helped me get to this state is I’ve been an application developer, a data analyst, a data engineer, and kind of had this natural linear progression in my career where now I’m a product owner, connecting my technical knowledge to the broader business vision. Excited to be here.

Conor Doherty: Adam, I know you both work at Toyota, but I know you’re in Michigan, John, you’re in Texas. So Adam, how did you get to Toyota and what is it that you do?

Adam Dejans Jr: Yeah, so my background is in mathematics and operations research. I had a career in the automotive industry for some time, since I was born in Detroit; that’s kind of all there is here. I worked at Ford for a while, did a stint of consulting, and then joined Toyota to come back to the ownership side of products, rather than leasing my time.

At Toyota right now, I work as a principal decision scientist. I mainly work in the supply chain transformation with John. Although I’m in Michigan, I do not work at the Michigan Center. I actually work at the Texas headquarters, remotely for now.

My focus is I work with John on a lot of the products he’s on as well, but I tackle it from more of a technical perspective, doing more of the systems design, the mathematical underpinnings of the different algorithms and products we have. Our goal is a North American supply chain transformation that touches globally, taking our learnings to expand globally. We’re under a huge digital transformation effort at the moment.

Conor Doherty: Thank you. Well, thank you for the introduction. When you talk about a nationwide and possibly even a global transformation, how do decisions fit into that? Because that is something that I’ve seen Adam post a lot about on LinkedIn, and your focus is not on anything in isolation. It’s always looking at decisions. So how do decisions fit into the supply chain optimization that you’re describing?

Adam Dejans Jr: I think the biggest thing is it’s really a cultural shift, especially for companies that have been around for a long time. A lot of the processes, even at Toyota today, there’s a lot of manual processes that happen. The first thing people do when they do a transformation is they just want to automate what’s existing.

So they want all these steps that happen in sequence done manually to be automated. But that doesn’t do much; that’s more of an automation, not a transformation. A transformation requires rethinking the whole system. Those steps might not exist as they do today with computers taking over. To determine how the system should operate, you start with what business metrics you’re trying to achieve, then what decisions you need to make to achieve those metrics, and lastly, what could go wrong with the system and how to recover from that.

We’re concerned with things like the pandemic, port strikes, or when parts are not available. When parts are supposed to arrive but don’t, or if there’s a batch of defective parts—how do you decide what to do next? How do you decide to do that intelligently, automatically, and in a way that can adapt in real time?

John Elam: Also being pragmatic about the boundaries of control. We are a global company with control in America but outside of that we request. Japan is the mothership; they make the decisions on many globally controlled pieces like engines and other strategic supply resources. We’re competing against Toyota Motors Europe and Toyota Motors Asia to win supplies distributed globally.

Part of decision-making is figuring out what decisions we can actually make. It’d be nice to make certain decisions but we can’t yet. Part of it is proving our value, growing, and showing how we can help the globe. Starting small and knowing our limits is key; people trip over envisioning a dream state without understanding reality—you can make certain decisions, but some things are outside your control.

Conor Doherty: Joannes, I’ll come to you in a moment, but just to follow up, John, when you talk about Toyota being a huge, well-established, successful company, how do you start to orient people in the direction Adam talks about? Going from old ways of doing supply chain decisions to what’s more optimization and operations research focused?

John Elam: That one, I kind of have the same answer for like all projects. It’s the thing most technologists don’t want to hear, but it’s the cold hard truth. Meet them where they are. If they are in mean, median, mode world, go to mean, median, mode world and use that language. Figure out what they know and talk to them with that language. Nobody wants to learn a new language.

I don’t want to learn Portuguese; I can have a successful life without it. In Portugal, it’s helpful. There are languages needed for survival in local environments, but not everyone needs them. As a young engineer learning new things, I realized most people don’t care about the math or tooling; they care about what it can do for them in their world.

For business leadership, it’s money. How much money can we make, how much time can I save, etc. I leave my language at the door and adopt their language, starting to get understanding. I may know a better direction and illuminate it using their language. It’s the only way to get people to move along.

Conor Doherty: Well, Joannes, your thoughts on this again? Because we operate in Europe, that’s North America, similar experience or no?

Joannes Vermorel: Yes or no. You see, my take when I look at large organizations is that when you’re talking—so we are talking of several ingredients here: large organization, digital transformation, going for decisions. Okay, let’s put those pieces together.

The reality is that in supply chains, decisions are extremely simple. They don’t need a specific language. We are talking, for example—let’s say Toyota is able to produce 50 million engines per year. Let’s assume that’s a made-up figure. I don’t know the numbers.

And then there is a question on how they have to allocate this production: a portion to North America, a portion to this, a portion to that. Fine. So the question is really an allocation of resources and of what engines. Okay, the fine print is obviously a lot more complicated—there are plenty of different engines, etc. Fine. The reality is we have a series of simple decisions across supply chains.
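The allocation Joannes describes is, at heart, a resource-allocation problem. As a toy sketch only (all capacities, regions, and margins below are invented, and a real system would use a proper optimization rather than a greedy pass):

```python
# Illustrative only: a toy greedy allocation of a scarce resource
# (e.g. engines) across regions. All figures and names are made up.

def allocate(capacity, requests):
    """Allocate `capacity` units across regions, highest economic
    priority first. `requests` maps region -> (quantity, unit_margin)."""
    plan = {}
    # Serve the highest-margin requests first until capacity runs out.
    for region, (qty, margin) in sorted(
        requests.items(), key=lambda kv: kv[1][1], reverse=True
    ):
        granted = min(qty, capacity)
        plan[region] = granted
        capacity -= granted
    return plan

plan = allocate(
    capacity=100_000,
    requests={
        "north_america": (60_000, 1200),  # (qty requested, margin per unit)
        "europe": (50_000, 1000),
        "asia": (40_000, 1400),
    },
)
```

The point mirrors Joannes's: the decision itself is one number per region, however many departments contribute to it in practice.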

It’s mostly about allocating resources, moving them. It’s not fundamentally very abstract. It’s not even really hard to grasp. It has a physical, tangible element to it. However, large companies, when they were thinking about doing that manually, what they did—they had to organize a division of labor.

And thus you end up with a decision that was very simple, and you end up having 20 different functions in the company who are piecewise contributing to this decision. This is only a consequence of division of labor. If you were having a super-intelligent AI or whatever, it would not need this division of labor—it would be just one entity taking the decision directly.

And so very frequently, when I see this sort of complication, languages—my take is that, okay, what we’re looking at is mostly a side effect of the division of labor, where you have an explosion of complexity. But it’s completely made up, you see? It’s not real. It’s something synthetic, created to support your very large organization.

And very frequently, the question is to expose the basic thing that is being decided through all those layers of complexity. Usually that’s where the real surprise is—that you can end up with 200 people interacting with that, but in the end, it’s just one number. And maybe you should—and that’s the interesting thing—if you have something digital that can compute like a computer, you realize that you don’t necessarily need to have 200 people involved in this thing.

That’s where I think that the transformation is very significant. That’s what at Lokad we have done. We have very frequently replaced processes that were enormously complicated because of the division of labor with something that in the end is fairly straightforward. You just don’t need that many people. And having so many people was creating a class of problems that cease to exist when you put a numerical recipe in place.

Conor Doherty: John, you were nodding along there at many intervals. I’m just curious about your thoughts.

John Elam: No, it’s—he’s totally correct. There’s a lot of made-up complexity that’s there because—just because of the size of the problem we need to solve, right? Like, we have one tool that has like 21,000 different constraints that come into it.

So it’s not—no human’s going to manage that. Humans were managing it, but not like managing-managing it. More like “move from left over to right,” this kind of management—not like “how should I reorganize this information so that I’m making a better decision?”

There was no way you were going to organize that information in a way that a human was going to make a good decision. So yes, I’ve seen firsthand how tools can just cut—the right tool at the right place—it just cuts through the complexity and reduces it down to something simple.

Because you’re right, a lot of times we’re either just trying to—if it’s forecasting world, we just want to understand what the demand is. If it’s allocation, we just want to optimize for putting these things where they should go. The actual decision, like you said, it’s very tangible—you can see the engine going to that plant to make that car.

But yeah, the size of the amount of things going on is what’s created the complexity. It’s not the question itself.

Adam Dejans Jr: So I guess I have something to add to that. As I was saying, yeah, you have these sequences of decisions. I didn’t say it as elegantly, but that process—one of the issues is that sequence is under different pillars of management.

And it could be under different pieces of the organization, and they won’t let you come in and even learn some of these pieces. So there’s a lot of this—maybe you can come up with a solution, but politically it doesn’t work. Politically, it’s almost like you have to reverse it.

You might have one decision that’s split into 20 pieces, but now you need to take five steps and automate those at a time, and then the next five. Then you kind of work your way up. But this is also a problem that’s really overlooked: the political side.

Joannes Vermorel: I completely agree. But here, my message is—well, don’t let those political angles destroy you. Look at a concrete example: an incredibly successful American company, SpaceX. They decided, unlike NASA and Ariane Group, to have the supply chain for their rockets streamlined and organized in a sensible fashion.

It may sound classic, because in fact, it’s not super innovative. In general, the vast majority of modern companies are organized that way. Except when it came to rockets, Ariane Group in Europe was spreading the construction of the rockets across literally all of Western Europe.

So you construct your rockets in 50 different places just to keep every single European state happy. Turned out that NASA was doing exactly the same thing with its own rockets—spreading the manufacturing across every single state in the U.S. Turned out that’s completely dysfunctional.

The consequence is you end up with an organization that produces stuff at a price that is extravagant. It works fine until you have a competitor that just decides, “Screw this politics, we’re going to streamline things.” Politics be damned—do something that kind of makes sense.

My take is that you can afford to take it slow and preserve the boundaries and the fiefdoms of this boss and that boss—as long as you’re not feeling too much pressure from competitors. If you have competitors who are really pressing you, then you don’t have this luxury.

I agree this is a big challenge. But historically, a lot of companies that were otherwise excellent just went bust because they failed to do this transformation. The competitor just thought of a way to simplify, sometimes drastically, how they were doing the business—and suddenly they had lower prices.

And the older companies could not survive in this new environment.

Adam Dejans Jr: We agree—totally agree. I guess my point is that there are two aspects: one, if you’re looking at it from the business perspective, and one if you’re looking at it as an individual. So there’s two different ones.

Joannes Vermorel: But Toyota is very, very competitive. Currently, for example, the UK has almost no car industry anymore. They all went the way of the dodo, largely due to not being able to adopt more modern production methods.

Conor Doherty: Well, if I can actually just tie together a couple of points there and bring it back to the main topic—convincing people, particularly executives, to get on board with optimization. A key element of that, John, is embracing uncertainty. I’m just curious—in your context, be it at Toyota or in consultancy work—how do you, when you’re in companies with legacy systems, established practices, and internal politics, convince people to see things from your perspective when it comes to decisions—be that probabilistic forecasts or anything else?

John Elam: Yeah, it starts in the same way as my first response. First figure out where they’re at. Where are you at? Learn the language. But then, getting someone to this point of thinking even just in deterministic optimization—for some folks, it’s a whole new way of working. And then, much less a quantitative supply chain or an SDA, where you’re actually adding a time component to it—is a whole other level.

And so honestly, a lot of times I just add them on in layers. Right, let’s start with where you’re at. Right now they’re doing just good old-fashioned min/max stock levels or some refill logic. Frankly, you just start—if you can, and this can be difficult because there can be a lot of interdependencies, so trying to build something optimal becomes a local optimum—but find something that can hopefully be chopped up and containerized. Prove it out as a POC and show them the value of it.
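The “good old-fashioned” baseline John mentions, a min/max refill rule, can be sketched in a few lines (the thresholds and stock figures here are hypothetical):

```python
# A minimal sketch of classic min/max refill logic. This is the
# deterministic baseline; the conversation is about layering
# uncertainty on top of it later.

def min_max_reorder(on_hand, on_order, min_level, max_level):
    """Reorder up to max_level once the inventory position
    drops below min_level."""
    position = on_hand + on_order  # inventory position, not just shelf stock
    if position < min_level:
        return max_level - position  # order enough to refill to the max
    return 0

qty = min_max_reorder(on_hand=12, on_order=5, min_level=20, max_level=60)
# position = 17, which is below 20, so the rule orders 60 - 17 = 43 units
```

Its appeal is exactly what John describes: everyone already understands it, so it is the natural starting language before introducing optimization.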

And then connect it to them. A lot of it, frankly, is just talking and using their language. They care about dollars, they care about hours, they care about safety factors. They don’t care about p-values. They don’t care about what the variance is. They don’t even know what that means, half the time. As unfortunate as that is—yeah, it’s unfortunate—but that’s just where they’re at. That’s not the language that they use. We’ve adopted and used it for years, but they’ve used market share, profit, revenue, volume. Those are the terms that they use.

And so connecting and showing—creating toy examples. Start out with: here’s where you’re at, here’s what I’m thinking, with a toy problem. Some small, simple Excel thing. Here’s how that would play out. And then you actually grab a real thing that’s working and run it in parallel for a while. Here’s your process, here’s my process. Especially if you’re not even doing deterministic optimization—holy cow does that blow it out of the water in the first shot if you set up a good model.

And then now you’ve got trust. And now that you’ve got trust, you get a lot more free rein to go experiment. And then again, you bring the next one in. You’re just bringing in the next layer of complexity each time. And then eventually you get them to understand the framework—here’s thinking about optimization problems over time, here’s thinking about forecasting decisions over time. They’ll let you start to expand. But there’s no—you can never just show up and sell your way through it. Selling your way through it is proving.

One of the things I talk a lot about with the product teams I work with is: if you want to go fast, you have to have trust. And if you want to have trust, you must have transparency. So I’m being really transparent with them about what I want to do and how I want to get there. A lot of times they’ll reciprocate. Reciprocity bias is extremely strong. And so people are willing to give up some of their information. A lot of it is relationships. I wish I could math my way to the answer, but it is just getting people to like me, honestly, is nine-tenths of the battle.

Conor Doherty: Joannes, again—how much of that tracks with your experiences at Lokad?

Joannes Vermorel: At Lokad, we tend to do it quite differently. For us, the way we approach the problem is—the means, especially the technical means, are kind of irrelevant. At the end of the day, yes, we use probabilistic forecasts—that’s fine. Stochastic optimization—that’s fine. I mean, a lot of things that they have never heard of, don’t know, don’t care, no time for that. And it’s fine.

What we want is to get to a point where we have those decisions identified. Plenty of scopes—it can be allocations of production, allocation of inventory, purchasing quantity, even pricing optimization, with prices that are up or down. No matter. The milestone to earn trust is to achieve—by the way, this is really technically our primary milestone to go to production—0% insanity. So we need to generate decisions, ideally by the millions, at scale—massive. We go directly for massive scale.

And there’s a reason for that—it’s actually easier, and faster, and cheaper. May be counterintuitive, but most statistics work better when you have more data. And extracting data from an ERP—if you want to filter it, it’s more logic. So if you don’t filter it, it is actually simpler—if you have the proper tools. Usually, filtering creates a lot of complications, especially in data extraction.

So we prefer to say: we are going to work with our systems, we don’t filter, we just take all of it. It’s fine. It just makes everything easier. And then the question is—when I say those decisions should have 0% insanity—it means people should be able to look at all the decisions we have generated and not find any objections.

Initially, we will iterate, because people have objections. They say, “Oh this decision is interesting, but we can’t because of this and that.” Very well. We change the logic and we fix that. Or: “Here, you’re not really paying attention. This is a VIP client.” Oh—new concept, VIP client. I didn’t know. It was not documented that you have VIP clients. Tell me more. Explain why this client is so much more important. Very well. Then we are going to factor in those VIP clients for you, etc., etc. Rinse and repeat. Iterate. At scale, with the maximum perimeter.

And at the end, the idea is that within—it just takes a series of weeks—but you have something where people can’t even object anymore to anything. That’s where we get the trust. Suddenly, they have a system that just generates decisions that are very easy to understand—because it’s decisions. And nobody really has any objection to anything.

For us, that’s how we gain trust. Usually by embracing all the edge cases, all the weird stuff. So that it doesn’t have a “POC” vibe. It literally has a production-readiness vibe. Even if it’s technically just a pilot—it’s really maximum scale, maximum coverage of all the weird stuff. Which means that if you’re not as optimal from an optimization perspective—your tools are crude and whatnot—it is fine. It can be postponed to later on. That’s the sort of thing where, for us, initially the problem is not so much having something hyper-optimized, but having something where there is not a single line where people have valid objections.

John Elam: I think we’re kind of getting at something similar. When I say “chop something out,” what I mean is—the Toyota supply chain involves… I mean, I’ve been here three years, and I still can’t fathom. We have fourth- and fifth-tier suppliers. We’ve got accessories—accessories installed at a factory, accessories installed at a plant, accessories installed by the dealer, or accessories installed by a vehicle distribution center. And then you can also just buy accessories from us.

And that’s just accessories. Then the engines—we make them all over the world. So a lot of times when I say a “POC,” what I mean is one of those swim lanes. You’re not going to pick all—because they all intersect with one another. That’s another problem. I have to get my vehicle forecast right in order to get my accessory forecast right. Because I’m trying to forecast: how many mud flaps am I going to put on Siennas? Well, how many Siennas are you making?

So it’s like trying to chop up—okay, which one can I actually… can I stay in a swim lane? And a lot of times, when I say a swim lane, it’s like a manager’s responsibility, frankly. Because their sphere of influence has boundaries. So you’re absolutely right. One thing that I love that you said is: you cover all the edge cases in the thing that we’re going to tackle. Yes—we’re going to make that thing. If you flip the switch, it’s production-ready. It solves all of the problems.

Yes, I cannot agree with you enough on that. Whenever—I’ve been calling this my “methodical data transformation” approach. It’s kind of like, how do you approach things? Do you go process through process and drag the entire organization through it? Or do you start with one part of the organization and do all of its processes, and then cascade that down? Kind of like two different ways you could do it.

But either way that you do it—whether you’re going process by process and covering all of the different sales verticals or whatever, or you’re doing just North America and trying to do it all the way across—whichever lane you pick, it needs to be 100% done. Because that’s the only way I get trust—by showing that I’m actually doing just as good as you are. And in a lot of these cases, I’m objectively doing better. And that’s what I mean by POC.

So yeah, I think you’re right. I don’t mean POC in that it’s a science fair experiment—I mean that it truly has proven the concept. And that ideally, when the POC is done, you have a true MVP. This is a usable product. It helps the business add value. Once you’ve flushed out all those edge cases. But yeah, that’s a really good point. I don’t want people to think we’re building like Jupyter notebooks and calling it done.

Joannes Vermorel: Yeah, exactly. Notebooks. Exactly. That’s—I would say—the data science pitfall I’ve seen so many times. The thing has just so many lines that are blatantly wrong that people—operational people, you know, the people who would in the end be responsible for the decisions—just look at the numbers and every ten lines, they find an insanity. Something that is just nuts. It would not work, would not fly, would cause damage, complications.

And that for me, that’s the fastest way to lose all credibility. Doesn’t matter which technology is used—if the managers who are reviewing the decisions can spot insane stuff, it needs further iterations. And you want to iterate until there are no objections. People look at those decisions and say, “Well, if it was a colleague doing them, I would just greenlight all of that.” Probably, time will tell, some of those decisions will turn out to be mistakes—because again, forecasts are not perfect. But at the time being, considering the information that I have, I would greenlight all of that. And that’s it.

John Elam: Yep. That’s a good way to think of it. That’s a good mental model. Would a colleague consider this to be a reasonable forecast, decision, whatever? And if they can’t get to that point, you don’t get to go any further. You’ve not won the trust yet.

Joannes Vermorel: And very frequently, when there are objections, very frequently, there is stuff in the modeling that is just wrong. It can be things that are dumb, such as—the purchase quantities that you are asking from our suppliers are good, but you forgot that our capacity to absorb deliveries at the warehouse is limited. And here, we are going to have a collision—too many trucks delivering at the entrance of our warehouse on the same day. So you see, maybe the quantities you order are correct, but unfortunately there is something else—seemingly unrelated—that still prevents you from doing that.
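The dock-capacity objection Joannes gives can be made concrete with a small sketch: each order quantity may be fine in isolation, yet too many trucks land on one day. A naive smoothing pass (the days, truck counts, and daily cap below are invented for illustration) could look like:

```python
# Illustrative only: defer truck arrivals that exceed a daily dock
# capacity to later days. A real system would extend the horizon and
# weigh the cost of delay; this just shows the shape of the constraint.

def smooth_arrivals(arrivals, cap):
    """arrivals: dict day -> trucks requested.
    Returns day -> trucks scheduled, capped at `cap` per day,
    with any excess carried forward."""
    schedule = {}
    carry = 0
    for day in range(min(arrivals), max(arrivals) + 10):
        due = arrivals.get(day, 0) + carry
        schedule[day] = min(due, cap)       # never exceed dock capacity
        carry = due - schedule[day]         # push the overflow forward
        if carry == 0 and day >= max(arrivals):
            break
    return schedule

schedule = smooth_arrivals({1: 9, 2: 2, 3: 1}, cap=4)
```

This is exactly the kind of “seemingly unrelated” constraint that, once surfaced as an objection, gets folded into the decision logic.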

Again, there are plenty of things. And those objections—it’s very important to integrate them. So that people don’t have super blunt objections like “This number is not even in the realm of the feasible. You have this and this and this that just wouldn’t make it even a workable solution.”

Adam Dejans Jr: I guess for me on this—I ask a lot of questions that kind of hit home with some of the management. Such as: “You remember that port strike last year? That was no good, right?” Stuff like this. And—losing money sucks.

Really, a lot of what you hear is: “We need to get the accuracy of the forecast better.” And what I explain to them, or try to talk about and think through, is: it’s pretty easy to predict when everything’s going right. If everything’s stable and everything’s going well, then yeah, it might be more accurate. But when the time comes—and let’s say a port strike happens—you’re not predicting that happening.

So when you need this forecast the most is when it fails. When you need it the most is exactly the same time when it fails. So what if instead of trying to avoid that and pretend it doesn’t exist, we embrace it and put it into our process? That’s kind of the approach I’ve been hitting. It’s mostly working. It’s a slow process. Anytime you’re at a really large organization, it’s very hard to make changes. A lot of it is because you’re siloed in these verticals.

But that’s the gist—I’m trying to really relate it back to real-life examples that have happened to them, where they want to fix it and they don’t really know how. And then it gives you an opportunity to address their pain points that they just suffered through.

Joannes Vermorel: That’s interesting, because modeling those disruptions is not that complicated—you just say, “Okay, I will just put a 5% chance to have a major supply-side disruption yearly.” Boom, okay, why 5%? Well, in the last century we had two world wars plus a lot of stuff happening, so a 5% annual chance of a major supply-side disruption is not even super high. And you can have similar risk on the demand side and on other things. So those percentages are mostly guesstimates—and that is fine.
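Joannes’s yearly guesstimate is easy to drop into a simulation. A minimal sketch, assuming a normally distributed base lead time and converting the ~5% annual disruption chance into a per-month probability (all figures here are illustrative, not from the interview):

```python
import random

def sample_lead_time(base_days=30, p_disruption=0.05 / 12, disruption_extra=60):
    """Sample one replenishment lead time in days.

    p_disruption is a rough monthly conversion of a ~5% annual chance
    of a major supply-side disruption; the exact numbers are made up.
    """
    lead = random.gauss(base_days, 5)
    if random.random() < p_disruption:
        lead += disruption_extra  # major disruption: deliveries stall for weeks
    return max(1.0, lead)

random.seed(42)
samples = [sample_lead_time() for _ in range(100_000)]
mean = sum(samples) / len(samples)
p95 = sorted(samples)[int(0.95 * len(samples))]
print(f"mean lead time: {mean:.1f} days, 95th percentile: {p95:.1f} days")
```

Even a tiny disruption probability visibly fattens the tail of the lead-time distribution, which is the whole point of embedding the risk rather than pretending it does not exist.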

The interesting thing is that’s why at Lokad, we don’t deliver forecasts because it’s way too complicated to understand that. We focus on the decisions. And the decisions—usually when we get to this insanity—people would say, “Oh, this decision, for example here, this inventory looks a little bit high.” And here the discussion becomes: yes, but is it crazy high? You know, there might be problems, so are we so high that it’s unreasonable?

And then you see that the interesting thing is that if you look at the forecast people have, it’s so difficult to think at the same time—the demand might be 100 or it might be 50 because we have some weird thing going on. Thinking of all those possible futures is very difficult. But when you look at the decision, people say, “Yes, this decision is an aggregation of plenty of risks,” and you would say, “Well, it looks a little bit conservative, but guess what? You can have so many problems: suppliers unreliable, delays, port strikes, and whatnot. Ultimately, it feels safe.”

And that’s the interesting thing: when we displace—we try to displace—the discussion onto the final decision, for example the allocation of resources, that’s where suddenly, I think, people, especially on the management side, are much more comfortable with the idea that this decision embeds tons of risk that they don’t even really understand—it’s just a package, you know. And that kind of works. It works much better than trying to communicate on forecasts that have weird modalities with fat-tail events and whatnot.

Conor Doherty: Well, if I can just follow on that—sorry, I might just build into the next point, which is again a part of change management, which I know John, you like to talk about, and you speak about quite passionately. But a part of that is not just generating user buy-in. And a part of that—even if people see that it works—people still want to have some level of understanding of how the thing works. Like most of the time, people don’t want to just say, “Oh yeah, that works fine, good enough.” They want to be able to at least have an executive summary of, “Well, okay, how does a probability distribution work exactly? How do you take that and transform that into a decision?” So, John, to you first—how exactly do you handle that part of the change management, like getting people to at least understand on some level the intricate maths that is actually happening under the hood?

John Elam: Honestly, toy problems are amazing, right? Simple, easy-to-follow toy examples. Like, I want to embed uncertainty into our decision for how much inventory to keep in ground stock—well, let’s just go with some assumption. We order cars monthly (currently we’re looking to speed that up a lot), but right now we order our cars monthly. So let’s assume that each month I have, let’s say, a 1% chance of there being a port strike.

Well—and perhaps that even changes, right? Like, the likelihood of a port strike can go up as contract term comes to expiration. The odds of a port strike might go up, and so I’ll just plot that, real simple, right? Like 1% chance, and let’s assume it goes to 10%—just making, literally just picking numbers for an example. And we show that going up. And then what I’ll do is I’ll say, “Hey look, each month, right? Even just thinking about safety, right—we probably want to order like 1% extra cars or a certain amount of extra cars to hedge against the fact that we might go—there might be a strike at any moment.”

Now, the odds of that are pretty low right now, but the odds would go up as this approaches. And so I might want to increase my ground stock, knowing that that very uncertain thing is more likely to happen—that disruption is more likely to occur. So I want to build that up. And then we just play through—there’s two possible scenarios, right? We don’t have a strike, or we do have a strike. And you just show them the outcomes, right? “Hey look, there was not a strike, we had a little bit of extra inventory. The next month, I order a little less because now my uncertainty has dropped, and my inventory goes back down to my normal level of uncertainty. And look at this extra holding cost that I carried for the next two months as we burn down that extra inventory to hedge that. Okay, there’s a cost to that—a few million bucks or whatever it is.

“Let’s look at the opposite. Okay, on the other side, there is a strike. And let’s say—how long do strikes normally last? Two weeks, right? You know, you can look back and say, ‘Look, how long do these typically last?’ Alright, two weeks of not getting any cars. Holy cow, how much does that cost you?” And just show the two numbers. Which cost would you rather have? You will absolutely incur one of these, right? And anybody can agree that we either will or will not have a strike—that’s pretty straightforward. And just kind of showing people, “Well, look at the different outcomes that could play out in this toy example.” That’s how you get people thinking about distributions, thinking about uncertainty, and thinking about uncertainty changing over time.
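John’s two-branch playthrough boils down to a single comparison of expected costs. A toy sketch, with every dollar figure and probability invented purely for illustration:

```python
# Toy version of the whiteboard exercise: compare the cost of carrying
# hedge inventory against the expected cost of an unhedged strike.

def expected_costs(p_strike, hedge_holding_cost, strike_loss):
    cost_if_hedged = hedge_holding_cost    # paid whether or not the strike happens
    cost_if_unhedged = p_strike * strike_loss  # expected loss with no protection
    return cost_if_hedged, cost_if_unhedged

# Assume a 10% chance of a strike near contract expiration, $2M to carry
# the extra cars for two months, $30M of lost sales from two idle weeks.
hedged, unhedged = expected_costs(p_strike=0.10,
                                  hedge_holding_cost=2_000_000,
                                  strike_loss=30_000_000)
print(f"expected cost if we hedge:   ${hedged:,.0f}")
print(f"expected cost if we don't:   ${unhedged:,.0f}")
# Hedging wins whenever p_strike * strike_loss > hedge_holding_cost.
```

The one-line inequality in the final comment is the whole decision rule the toy example is teaching.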

Toy Excel problems are amazing—simple, easy to follow—and people want to learn that way. They want to feel smart—like anybody, right? I like to feel smart, you want to feel smart, we all want to feel smart. So how can I help lead them to that? Not necessarily showing them each thing, right? But like, “Hey, let’s build this together.” If you show up with a PowerPoint, you messed up. Whiteboard, right? We’re going to whiteboard this together—together we’re going to learn, we’re going to figure this out.

And I like to even just start with their problem—they might be in accessories, they might be ordering engines, they might be ordering who knows what. Well, what’s your world? And let’s just play through, and you know, and then ask them, “What kind of uncertainty do you have?” “Well, sometimes we have whatever,” right? “There’ll be a strike on the trains, there’s a train strike last year, that was a big problem. Let’s talk about it, let’s talk about modeling those two different decisions.” And now I’m using their language, they’re kind of guiding, you know—we are guiding, you know, it’s truly collaborative. I’m bringing this sort of way of thinking, and they’re bringing their very real pain that they have. That is the best way to build products—is around a pain that’s being experienced, because you know the problem’s good when the pain’s gone.

Adam Dejans Jr.: I think another thing is, oh yeah—when they ask, like, “How does it work?” Sometimes they don’t really care about the algorithm, but what I’ve noticed is something they really do care about, that helps a lot, is they want to know what levers they can pull and change as well. So, like, can I—like, I don’t know—can I test this? One thing is scenario testing. They love scenario testing. Like, “Well, what if, you know, it was instead of 10% it was 50%?” Or, “Can I add more safety stock or whatever it is?” I noticed having these levers available to them and knowing what they can play with also really helps get the buy-in.

Conor Doherty: Yeah, exactly. I was just about to say—the internal locus of control. You’re empowering people to feel part of it. And on that, do you want to—does that align with, again, your approach for this?

Joannes Vermorel: Again, there are parallels, but we are doing it quite differently. The typical Lokad way of doing that is to decorate every single decision with half a dozen of what we call economic drivers. So, the idea is, depending on the case, we would have, what is the projected cost of inventory, cost of stockout, cost of supplier delays, cost of this, cost of that—obviously it varies depending on the vertical.

But the idea is every decision comes with half a dozen assessments in dollars of what is at stake. And the interesting thing—and I go back, that’s why there are similarities—is that we definitely go to the stakes in dollars, and then when it comes to challenging the decision, what we try to have is people challenging our assessment in dollars. You see, because it should be a way for people to say, “I disagree with this cost that you’re putting on that.”

They don’t really care how exactly we ended up with this calculation, but what is usually very useful is when they say, “Okay, the risk that you’re stating in dollars for supplier delays is way too low, for example.” That’s very interesting—it might be because we are not looking at the lead times correctly, or maybe because we are not taking other things into account. But fundamentally—and that’s why I connect with the levers—that’s a way to head off the people asking for 100 simulations.

In fact, it’s more like, “Okay, where do we have a divergence of business vision on what costs money?” For example, for the strikes in ports, you can think of it as insurance that you would have to pay. Are we getting the cost of this insurance right? Are we even in the right ballpark? And that’s where very frequently we go back to those toy examples—the simulator or similar methods—but typically circling back from the cost: “Okay, we have this cost that we assign for the risk of strikes and whatnot; can we have a back-of-the-envelope calculation that shows us whether we are in the right area or not?”

And again, we always have this no-insanity rule—this cost needs to be in the ballpark of what we think is correct. And if it is, we’re good. If we realize that we are vastly over- or underestimating one of those economic factors, that is what requires correction.
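Decorating a decision with economic drivers can be sketched as a plain data structure. This is only an illustration of the idea, not Lokad’s implementation; the SKU, driver names, and dollar figures are all invented:

```python
from dataclasses import dataclass, field

@dataclass
class PurchaseDecision:
    """A decision 'decorated' with dollar-denominated economic drivers."""
    sku: str
    quantity: int
    drivers: dict = field(default_factory=dict)  # driver name -> dollar assessment

    def net_value(self):
        """Sum of the economic drivers: the projected net stake of the decision."""
        return sum(self.drivers.values())

decision = PurchaseDecision(
    sku="ACC-ROOF-RACK",   # hypothetical accessory SKU
    quantity=120,
    drivers={
        "expected margin": 54_000.0,
        "carrying cost": -6_500.0,
        "stockout risk": -12_000.0,
        "supplier delay risk (insurance)": -3_000.0,
    },
)

# Stakeholders challenge the individual dollar figures, not the algorithm.
for name, dollars in decision.drivers.items():
    print(f"{name:35s} {dollars:>12,.1f}")
print(f"{'net':35s} {decision.net_value():>12,.1f}")
```

The point of the structure is the review workflow: each driver is a dollar figure a manager can dispute in isolation, which is exactly the "I disagree with this cost" conversation described above.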

John Elam: I love that language—insurance. Because that’s exactly what it is. Yes, that is wonderful language to help people kind of understand, “Why am I paying this cost?” It’s like you’re hedging a bet, you’re getting insurance here. I love that.

Conor Doherty: Thanks. Again, that sort of ROI perspective on decisions—essentially treating your decisions in some cases as insurance—deviates, or can deviate, from, let’s say, the established approach to decisions. So, Adam, you said earlier about, “Well, I just want more accuracy,” for example. And to be honest, it’s an evergreen topic, but it’s one that anytime I go out, if I’m at a trade show or talking to a potential prospect, that’s what they’ll talk about. They’ll be like, “Well, understandably, I think that my pain point is I need greater accuracy.” So, again, Adam, we’ll come to you first: how do you tease apart those two? Because I know on LinkedIn yesterday you posted about how better decisions beat better forecasts.

Adam Dejans Jr.: The key there is to just keep pounding home: what are you doing with the forecast? You need to decide first what business metrics you are trying to make better, and then everything else is really a support to that. So you could have the most accurate forecast, but depending on how you use it or don’t use it, that could change things.

The other thing with chasing accuracy is, well, one, you’re never going to be 100% accurate anyway, because things change. You need to embed this change into your decision-making framework. But beyond that, let’s say you’re at 95% accuracy—at what cost do you really want to go to 96%? If you can’t tie that percentage gain back to your business metric, you’re just going to have your whole data team chasing some arbitrary accuracy measure with no real sense of how it’s going to affect your business.

So, like, does a 1% gain get me—can we quantify that? Does it resolve back to the business in some quantitative money figure? Like, how are we using it? That’s one of the key things that I see happen, especially at large, old companies like Toyota. Toyota’s a Japanese company. They have never done a layoff. Everybody stays there—it’s a company you go to to have a career and stay. And what they’ve done previously has gotten them to where they are, so they like to follow what they’ve done. Because they’re the number one automotive company for a reason, right? It’s like, “Well, if we keep doing what we’ve been doing, maybe we’ll just keep getting the same results. Let’s just do what we’ve been doing, but better.”

And sometimes, like as we mentioned earlier, eventually you’re going to have to change, because somebody else is going to come along and change. So, kind of going on a rant, but I don’t know.

John Elam: One thing I want to talk about is like, I don’t know, when you focus on decisions, there’s so many more things you get to talk about now. You don’t have to focus on forecasts. For example, one tool that we’ve built is a suggestion engine. It doesn’t know about forecasts at all—it doesn’t care about the forecast. Its objective is to create more revenue, purely. And when I say create more revenue, it’s a recommendation that pushes more accessories on the vehicle, up to a breaking point, right? Like, how much accessories can I put on here that people still like the car and that it still sells at a good clip?

I don’t really know how much faster it will sell—that wasn’t part of the measurement. The part of the measurement was, will it sell just as fast on the average, and does it have more dollars on that car when I sell it? We did a paired and unpaired t-test, looking at pilot and control groups and looking at historical averages over the same time period for these two different—you know, some were getting recommendations, some were not. And we made a lot more money. There’s no forecast there, right? There’s no accuracy around it.
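The pilot-versus-control comparison John describes can be sketched with a two-sample (Welch, unpaired) t statistic. The dealership revenue figures below are entirely made up; only the shape of the analysis is being illustrated:

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (unpaired, unequal variances)."""
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    se = math.sqrt(va / len(sample_a) + vb / len(sample_b))
    return (ma - mb) / se

# Hypothetical accessory revenue per vehicle ($): pilot dealerships got
# the recommendations, control dealerships did not.
pilot   = [820, 910, 1005, 760, 980, 1100, 890, 940]
control = [610, 700, 655, 590, 720, 640, 675, 705]

uplift = statistics.mean(pilot) - statistics.mean(control)
t = welch_t(pilot, control)
print(f"mean uplift: ${uplift:.0f} per vehicle, t = {t:.2f}")
```

A large positive t (compared against the appropriate t distribution for a p-value) is what lets you claim "we made more money" without any forecast entering the picture.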

We literally just copied strategies from winning dealerships and pasted that strategy onto dealerships that are struggling, and we made more money. And I get a lot of questions about that product, around “When’s it going to forecast? When’s it going to tell me what to—” and I’m like, “That’s not what it’s going to do. It tells this set of folks that do this one function that if they follow this recommendation, it will probably sell just as fast, and it does have more money on it.” That’s all it does, and that’s a decision. It’s a super simple decision, but it does help the business—it does help us make more money.

So it is—that’s why I like focusing more about what decision are you going to make and less about this perfect forecast. Because there’s so many decisions that can be made, and forecasts—frankly, forecasts don’t add value. Making decisions based on what that forecast is telling you is how you create value. So yeah, everybody wants their crystal ball, but we’re never going to get one.

Joannes Vermorel: I could not agree more, and I think your example about preloading the cars with the right accessories is a blatant example of that. The typical mindset in the mainstream supply chain view about forecasting is to think of demand like the future position of planets—something that will happen no matter what, and that you can nail down to 0.00001% inaccuracy. That is just nonsense. Here, what you’re demonstrating is that demand is engineered—that if you put a better car at a higher price point in front of the customers, well, they may buy the more expensive, better car at that higher price point.

Obviously there is a limit, because at some point people say, “It’s really, really a nice car, it has so many good things, but I’m afraid that I cannot afford this car anymore.” So obviously there is a limit, but until you’ve kind of tested the limit, you are leaving money on the table. And the problem is that if you were kind of conservative on that in the past, your projection just reproduces the mistake that you were doing, which was not putting cars that were equipped enough in front of the customers.

So that’s really the planet trajectory mindset—you just look at the past, but the reality is that the future is contingent on decisions that have not yet been made. And that’s why I very much agree with you that decisions are superior to forecasts, because to a large extent, the future is the outcome—the consequence—of decisions that you’re about to make, not the other way around.

Adam Dejans Jr.: You also see this with—sometimes there might be a recall, or we don’t have some part or some accessory, and then it’s missing from the historical data for, whatever, six months. Does that mean nobody wants it now? Well, historically it’s going down—guess nobody wants mud flaps on their car. But that’s obviously not true.

John Elam: That is such a good point. Sometimes, as a manufacturer, we’ll have a quality hole. This is very core to our Toyota culture—if you’ve ever studied TPS at all, like, we literally stop the line if there’s a problem. And so sometimes we’ll stop—if it’s a big enough problem, we’ll stop it for days and weeks, and we will solve that problem before we start making more cars. We don’t make bad cars, at least not knowingly.

And so there was a point where we stopped making a certain vehicle line—a very, very popular vehicle line—for months. So if you just took the averages and went forward, you’re going to have this significantly reduced forecast, when the actual demand is extremely pent up. We’ve got backlogs at dealerships for hundreds and hundreds of these vehicles at each dealership. And so you got to know where to look, right, as far as understanding what forecasts even should be—are you forecasting demand, or are you forecasting your histories?

Joannes Vermorel: One of the biggest mistakes of the mainstream supply chain theory is, again, this focus on time series—as if demand were a one-dimensional vector. For the quasi-totality of businesses, that just cannot reflect what is happening.

An example, even for cars: the demand is not a one-dimensional thing. Are you okay to wait for the car? Take Mercedes: you want a Mercedes? No problem—wait one year, and Mercedes is selling cars. So the answer is, it depends. Demand is conditional on price, it’s conditional on delay, it’s conditional on location. And if you just flatten your demand as if it were a “number of vehicles per day,” you are completely missing the plot on all those dimensions.

And they are not necessarily super complicated. That’s the interesting thing: I’m not saying that you need crazy complication. As you were saying, you had, for example, a simple logic that was pushing more accessories, based on simple heuristics—copying the winning strategies of the very best dealerships. That’s also something that is beautiful: sometimes, engineering a good decision is an order of magnitude simpler than producing a good forecast. You can get to good decisions while being fairly in the dark about the fine print of the future.

John Elam: Even just simple order-up-to logic, if you have nothing else, that’s very helpful.
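The order-up-to (base-stock) logic John mentions fits in a few lines; the numbers below are arbitrary examples:

```python
def order_up_to(on_hand, on_order, target_level):
    """Classic order-up-to policy: order whatever quantity brings the
    inventory position (on hand plus on order) back to the target."""
    position = on_hand + on_order
    return max(0, target_level - position)

# Example: target covers expected demand over the lead time plus a buffer.
print(order_up_to(on_hand=40, on_order=25, target_level=100))  # orders 35
print(order_up_to(on_hand=90, on_order=30, target_level=100))  # above target: orders 0
```

The target level is where the earlier risk discussion plugs in: raising it ahead of a likely disruption is exactly the hedge described in the port-strike toy example.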

Conor Doherty: Well, again, so listening to the discussion on how to arrive at decisions, a key part of this—again, to come back to change management—really is, how do the non-maths experts fit into this? Because again, if you come in as the maths geniuses, the wunderkinds, you’re still trying to implement that in a room with people who have expertise in other fields. So I’m just curious: how exactly do you leverage that to sort of co-create or co-author an initiative, an optimization? Because again, you still need the information in other people’s heads—how does that fit into your process?

John Elam: Like I said on the earlier comment—don’t show up with a PowerPoint. Because that means you have the answer, and I don’t want—you know, I have an ego, right? I think everybody has a little bit of an ego. I want to build it, right? Well, then let’s build it together. And it sounds so simple, and I almost repeat myself, but it is really that simple. And that’s probably the hard part—is that it is so simple. What is their problem, what is their language, what do they know? And then pull them to where I feel like we should be going with their problem set.

And a lot of that is uncovering the pain—so what is, you know, people know what’s painful in their work. It’s oftentimes what they spend a lot of their time on, or—I say that sometimes we spend a lot of time on problems that don’t need to—that, when solved, don’t help that much. But they do often understand: this is what we’re doing today, and here’s where this is broken. And we can talk about that quite a bit. And a lot of times where it’s broken is something that either can be automated, or something that can be forecasted, or something that can be simplified even.

So it’s really just meeting them where they’re at, learning that language. And like I said, you don’t build anything without them—you’re building with them, you get the requirements from them, and if necessary you lead them to the right answer, but you don’t tell. You cannot tell people. I can’t stress that enough—a lot of people don’t want to be told what to do. They love the “aha” moment, and so if you can help guide them—“Hey, I think it’s over there, let’s go check it out together”—sometimes I already know the answer, but that’s okay, I don’t need to shove it down their throat. We can spoon-feed it to them.

Adam Dejans Jr.: I’ll give a different, even simpler perspective. To me, over all these years, relationship building is even more important than any of this. And also recognizing what we refer to as the “power chart,” which is kind of like an informal influence chart. You have your organizational chart, right—these people report to them, this is the manager, this is the executive. And people think that power and persuasion go up that chain, but oftentimes they don’t.

Oftentimes there’s somebody whispering in an executive’s ear because they’re friends with them or whatever, and just opening your network, building relationships, just listening to people, just talking to them, gives you this avenue to later come at them with more technical things and they’re open to listening because at this point you have some relationship—they trust you. And that piece is so important, and a lot of people, especially junior engineers and scientists, miss this because they think—and they’re usually right—that they objectively have a better answer, and they usually do. But you’re not going to implement it that way.

And that’s why people get frustrated in corporations too, because an objectively worse solution can win—because it’s sold better, because that person is more influential, whether through speaking, selling, or just the relationships and network they’ve built. Part of what we do is career coaching as well, and we have a whole section where we talk about this, because it’s very important in order to drive changes. It’s often overlooked.

Conor Doherty: I know you talk about coordinating with Toyota, the Japanese headquarters. And I know, having worked in China for five years, I know the importance—like, when you said “relationship,” the word I thought of was “guanxi,” which roughly translates to networking or relationships, but in China it’s just so much more powerful. If you don’t have good guanxi with your superior or your colleagues, nothing gets done—or, sorry, it’s so much more complicated to get things done, even if, as you said, you have an objectively superior or the most objectively impressive idea.

It’s, well, the way you framed it, the way you walked into the room, you made people feel like idiots, you didn’t involve them, as you said, John, you didn’t involve them in the process. So my question—when you say relationships, do you mean that? How much of that is informed by the sort of intercultural work you do with Japan as Americans working with a Japanese corporation, or how much of that is just in general?

John Elam: Yeah, it’s both—it’s definitely both. But I would totally agree that in the Japanese high-context culture, it is paramount. There’s even a term for it—we use it all the time, there are websites about it internally—a Japanese term called “nemawashi.” The direct translation is “to prepare the soil,” as in preparing the soil to plant something, but the cultural, societal meaning is, “Let’s all get on the same page.”

And like Adam was saying, whether it’s the informal power chart or even, and actually especially in this culture, both the regular hierarchy as well—we all need to get on the same page, and basically the decision gets made before the decision gets made. So what I mean by that is, I’ll go have—and this is, frankly, I mean, you said it’s kind of the only way—in my experience, it’s the only way to get stuff done in this culture.

I have to do these one-on-one connections with basically every single stakeholder who’s going to see even just an intermediate impact on their work, and then get them to understand what we’re doing, how it’s going to benefit them, and what’s going to change in their world. And then, when we have the meeting where we’re deciding what to do, we’ve already decided—everyone already knows the answer. And frankly, if there are any problems at that point, you’re not going to move forward. You’re going to go back and do some more nemawashi. So it’s highly important.

But even in the American context, like when we do our consulting work, the relationships are still paramount—it’s not quite as strict in that you need every single person on board, but you do need a critical mass. Now, certain people have different weight, right? And that’s the whole power chart thing. But you do need a critical mass to move forward. You’re not going to change an organization’s way of working with a cool idea or a really cool metric.

Conor Doherty: Joannes, I immediately want to throw that to you, because, I mean, we’re a French company—France is a high-context culture—yet as a French company, we deal—the majority of our clients are outside of France. So, in terms of culture and getting things done, what are your thoughts?

Joannes Vermorel: Yeah, that’s very interesting. Clearly, what I have observed is that in the US, certain companies, especially the tech companies, have a very, very hardcore, confrontational approach to management. For example, in 2002 Jeff Bezos basically sent a memo to his entire team, saying, “Every manager who in two weeks does not have a plan to expose the data of his own department through an API (if it’s not already the case)—if I don’t have a plan from this manager, he’s fired.” And he ended up firing—I forget—15% of the managers.

And yet—so that’s extreme. In France, that would be unthinkable and almost impossible—extremely costly. It’s possible to fire, but if you just do it like that, that would be just insanely costly. But the reality is, if you look at tech companies—well, they are kind of all from the US. So Amazon did not emerge in Europe, you know, it emerged in the US. And when you look at the other tech giants and what they have done in terms of management—for years, Microsoft was—yes, they were not firing, but they were so incredibly brutal in many ways, and yet, what a success.

So my take is that the amount of nemawashi versus brutality that you need depends a little bit on how fast your industry is changing. If your industry is changing slow, then probably, you know, the Japanese-style, steady, just make everything happy and always steer for something better incrementally, you don’t lose your human assets and whatnot—it’s probably best.

If you have things that evolve at, you know, like software, super fast, then if you do that, you are likely to be a company where the vibe is good, but you’re just obsolete, and you’re just replaced entirely by people who have eaten your lunch. So I agree with that. I would say, I don’t—I think that the answer is really dependent on what your competitors are doing and how much disruption are they bringing to the game. That would be my take—so, again, different industries, different times.

John Elam: That’s a really good point, because in software, right, let’s use Lokad as an example. If Lokad has this feature and they push it out and it’s just not that well received, you guys go change the feature, right? You’ll get that feedback presumably pretty quickly, and then you’ll iterate on it, and you can get a new output out pretty quickly.

Whereas, when we make a Prius, that Prius is in operation for 20 years—those vehicles are likely to operate longer than my entire professional career. And so that’s why getting it perfect on the first go is so critical. But you’re right—as we’re building software, that’s a cultural change that we run into at Toyota; it is a challenge, right? We’re building software, we’re building things that I can just update: “Tell me what’s wrong, I’ll iterate, we’ll get you something next month, give me more feedback.”

And that whole way of thinking is a challenge. The company is definitely adopting it, though. You can see the gears turning and shifting to be, at least on the software side, a little more flexible, a little more iterative. But it is—it’s definitely a challenge. But you’re right, I think there’s the right sort of culture and philosophy for the right place in time.

Joannes Vermorel: If massive disruptions are coming, then I think brutality wins. But on the other hand, if things are steady, then you’re just creating chaos for no reason. A prime example of this absolute brutality that was nevertheless a good move was the takeover of Twitter. They ended up firing 90% of their employees, and in the end the product has more features than ever and traffic has gone up. Obviously, the question for Toyota would be: is it conceivable that by firing 90% of the people working at Toyota, they would produce more and better cars? No, not a chance.

But in software, these sort of things do happen, and that’s where—but again, that’s very different—different culture. But I think where it’s interesting is that digital transformation, you have this element of this much more brutal, fast-paced, chaotic nature of the industries that is kind of, you know, seeping into companies where things were not done that way traditionally, for good reasons.

John Elam: It’s definitely a paradigm shift that’s being felt, and yeah, everyone’s growing through it, whether, you know, technologists joining companies that have a more traditional framework and way of working, and vice versa—these very traditional companies that are hiring a lot of AI, machine learning-type talent. There’s a bit of friction there, but I think good management finds the right balance there between the harmony that’s necessary for a manufacturing-type environment, and the progress and innovative thinking that’s required for, you know, changing what we’re doing.

Adam Dejans Jr.: It’s a slow process. It’s a slow process at Toyota. It was the same at Ford as well, though—it’s an automotive thing. At Ford, I worked in the autonomous vehicle group for a stint of my time there, and they kind of treated it as a startup, but funded through Ford Motor Company, right? So, like, a lot of money backing it. Yeah, I mean, I could tell the difference, and I’ve seen software done properly because we had to move fast in that environment, so there wasn’t as much of this hierarchical, like, automotive culture. But yeah, it’s different—it just takes time.

Conor Doherty: Like, we’ve been working this way for 30, 40 years, and we’re a multi-billion-dollar company—who the hell are you to come in and say, “Hey, you need to stop doing all of those things.”

John Elam: Yeah, it’s like, “Show me someone who’s making more cars than us.” You’re going to struggle.

Conor Doherty: It’s fundamentally true. So, again, if you were walking in with your degree in maths from MIT and you say, “Alright, all of this is trash,” or even if you—sorry, even if you take the very velvet love approach, which I love that you described, John, the whiteboard approach—even still, you are still butting against decades and decades and decades of almost unrivaled, highly profitable success. So how much resistance just comes from that? You can have resistance because of, “I don’t like the technology,” or, “I’m not familiar with the technology,” and then there is, “No man, status quo—we’re good, we don’t need this.”

John Elam: I’ve been fortunate enough to have been hired into a digital transformation, so I was literally explicitly hired with them knowing that that is the challenge, and there’s a reason we got to pull some folks from the outside. So I’m kind of privy in that aspect, that the reason I’m even at the company is because what we have been doing is not going to get us where we would like to go. So I’m kind of lucky in that way, but that doesn’t change the fact that basically every single stakeholder I run into, other than my direct leadership chain, is in that sort of mode of, “Well, John, I’ve been successful doing this this way for a long time.”

Honestly, that response can come in a bunch of different flavors. It can be, as I described before, white-glove, whiteboard service—let’s co-create. Sometimes it can be showing their leadership or their executives. A lot of this has to do with the power chart, and whether you can identify the actually influential people at the decision-making level—that VP level is where real decisions get made at a company this size. So, finding out who’s connected with them—frankly, sometimes if I’m just hitting a roadblock with a stakeholder, you have to just go around them and let their management know what is possible, with other allies, right? It’s not just me showing up—it’s not Frank versus Bob, it’s Frank and friends versus “Bob’s been doing this this way for 40 years,” and here’s what that means for you, Mr. and Mrs. Executive—here’s how your KPIs are going to change—and kind of guiding them through that.

But fortunately, I don’t run into—I’ve been privy to being hired in as a digital innovator—digital transformation manager is exactly what they brought me in for, so a lot of folks know when the phone rings and they see John Elam, they know I’m going to talk to them about changing something, because it’s just literally part of, like, the—it’s actually next to our titles that we were hired into this digital transformation team.

So when folks reach out, they know why we’re there. Yeah, I’m not going to lie, there’s no silver bullet for that one. It’s working with folks, a lot of patience—a lot of patience. I mean, you’re going against people who have been doing this as long as I’ve been alive in some cases—literally. I’m 36 years old, and there are managers with 40 years’ experience with the company. So patience will get you a really long way—slowly.

Conor Doherty: And I know that when you talk about this, you often use the examples of the tech industry, which obviously is typically a lot more agile than large, well-established, multi-decade, possibly even 50- or 60-year-old companies that have all these pre-established processes and a legacy of enormous success. So, when you have those conversations, how does the “well, here’s how it works in the tech world”—how does that rhetoric land for people?

Joannes Vermorel: I think the reality is that when you look at the history of business, for those very established companies, it is only an illusion of stability. You know, if you go back—for example, one of the greatest retail chains of all time, A&P, barely anybody remembers, but they were the largest retail chain worldwide through most of the 20th century, they were in the US, and now I think they don’t have any stores left.

So there have been so many giants that seemed unassailable that have gone. So my take is that markets are excellent filters. And by the way, there is this general tendency of software eating the world: more and more, I see industries following the dynamics of the software industry, good or bad, just because software is an increasingly larger portion of everything.

For example, it’s very interesting: if you look at SpaceX, SpaceX is, for the most part, a software company. It is not a rocket company; it is a software company first and foremost. The vast majority of the improvement that they have brought to their rocket engines is through superior software to conceive the engines—that’s where the true magic lies for their rockets. Much of the magic is this superhuman piloting ability, so that they can bring back their rockets—something like 30 seconds before touching the launchpad, the rocket is still traveling at hundreds of miles per hour.

By the way, the rocket is braking at 20 Gs before landing. If a human were aboard, they would die—the thing is braking way too fast. No human could pilot a deceleration of 20 Gs; only software can do that. Again, that was very, very tricky, and there were plenty of spectacular failures, but, you see, that’s an example.

And tomorrow, for example, for the car industry, if autonomous vehicles become, I would say, production-grade—it’s not clear where exactly we are right now—then to a large extent it will become a software battle, platforms and whatnot. So, that’s very interesting, because I see many industries like that. And the idea—I forgot the name of the VC who said, “Software is eating the world.” I think it was Andreessen Horowitz.

So anyway, I see that, and I think that for many companies, the digital transformation of their supply chain is going to be one of the vectors where software brings, I would say, the largest amount of transformation to otherwise fairly traditional companies.

Conor Doherty: Did you say Andreessen? Marc Andreessen?

Joannes Vermorel: Oh yes, that’s right, you’re correct, yes, exactly.

John Elam: Yeah, like Circuit City comes to mind, right? I don’t know if those are popular in Europe, but they were very popular in the US—they’re gone, they’re bankrupt. I used to clean them in high school, actually.

Joannes Vermorel: Radio Shack, same. Radio Shack, gone. Nokia, Kodak.

Conor Doherty: Kodak’s an interesting example that you’ve talked about before. Kodak, correct me if I’m wrong, they invented the digital camera, or am I misremembering?

Joannes Vermorel: The portable digital camera, yes—and they did nothing with it. And the interesting thing—this is also interesting about Kodak—is that they had the projection, they had the forecast, right? Literally, there was an executive who, in the early ’70s, basically timed the dominance of the digital camera to the early 2000s, and, give or take three years, the forecast was correct. And that’s the interesting thing—it’s even worse: you can have the correct forecast and not act on it, and it’s absolutely terrible.

John Elam: I’ve got a hypothesis. I’d imagine in Kodak there’s a lot of different divisions—there’s probably, like, lens, camera, film, services, etc.—and I bet you the film and services were probably the largest part of the company. So the executives in charge of that have an outsized say in the decisions we’re going to make, and they made decisions that protected their work.

Joannes Vermorel: That was exactly the reason.

John Elam: Yeah, that’s because politics are always going to be around, and if we’re not incentivizing people to help the company, they’re going to help themselves. Incentive structures are something I talk about with my leadership and other tech leaders: you will get what you incentivize. People are coin-operated. I’m coin-operated. You get what you pay for. Salespeople—I love it, they’re the purest, right? You can see it directly. But frankly, everyone is. And so if you incentivize people to protect their realm and their kingdom, they will absolutely protect their kingdom. So we’ve got to watch what we incentivize, or you end up with some bad, bad larger decisions.

Conor Doherty: Adam, John, if there’s anything you want to go back to that you want to amplify or just let me know, we can circle back, or are you good?

John Elam: Well, I’m trying to think of how to tee it up, because it was related to language and communicating with folks. And there’s something that we’ve put in our book, and it’s an image that you won’t be able to see on here, but I’ll try to—I’ll get you guys an image of it, so you can—however you want to look it up. And that is, it’s this concept from our book that we call the “word wheel.” We stole it from the emotion wheel. Zoom in a little bit, and I know you can’t read the words, but the concept’s pretty simple.

Out at the edge, you have the most technical, specific word that you’re looking for, and as you move into the circle, they get more generic. And the concept is really simple: your peers—a lot of folks on this call—we would use these around here on the edge, and honestly, heck, I’m even maybe a little more in the middle rung, personally, if we’re being candid. I know I would, so like, yeah, it’s like, I don’t—you know, do I know the—you know, what’s a good one on here, like, the greedy best-first search?

I don’t know, I never studied pathfinding algorithms, but if you told me it was a pathfinding algorithm, I could go, “Okay, at least I can picture what bucket I need to put this conversation into.” And for an executive, though—the executives and customers we put at the most centermost—it’s the most basic: it’s just an algorithm, right? We’re just using an algorithm. They don’t even say pathfinding, because then they’re like, “Pathfinding? What’s a pathfinding algo—?” They don’t know, they don’t talk about this stuff.

And so picking the right language helps you connect with the right audience—the people who want specifics, give them specifics. The people who don’t care—they don’t, they truly don’t care, please don’t inundate them with it, they don’t care. It’s literally noise, and noise is bad—noise is always detracting from your actual message.
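For readers at the outer edge of the word wheel: the “greedy best-first search” John names in passing is a pathfinding algorithm that always expands the node that looks closest to the goal according to a heuristic, ignoring the cost already paid (unlike A*). A minimal sketch on a toy grid might look like this—purely illustrative, not from the interview or any Toyota system:

```python
import heapq

def greedy_best_first(grid, start, goal):
    """Greedy best-first search on a 2D grid of 0 (free) / 1 (wall).

    Always expands the frontier cell with the smallest Manhattan
    distance to the goal; unlike A*, it ignores the path cost so far.
    Returns a list of (row, col) cells from start to goal, or None.
    """
    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), start)]   # priority queue ordered by heuristic
    came_from = {start: None}        # also serves as the visited set
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:          # reconstruct path by walking backwards
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for neighbor in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = neighbor
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and neighbor not in came_from):
                came_from[neighbor] = current
                heapq.heappush(frontier, (h(neighbor), neighbor))
    return None  # goal unreachable

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
path = greedy_best_first(grid, (0, 0), (2, 3))
```

The point of the word wheel is that this level of detail is for peers; for an executive, the whole block above collapses to “an algorithm that finds a route.”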

Conor Doherty: Well, I’m somewhat confused, because if you don’t use big words, how do people know that you’re smart? Do you, like, show them your degree, or how does that work? I’m just—I’m making notes, John.

John Elam: It’s real tough to, you know, use big words, seem big, right? The point is humility, right? And I think one of the big things is everyone out there knows a lot more than I—I just try to approach it as, there’s so much more for me to learn than there is for me to teach.

Try to teach—you know, teaching means connecting; teaching doesn’t mean saying words or saying concepts. Teaching means getting my idea into your head, and sometimes I even have to use the wrong word—technically it’s the wrong word, or technically it’s not a perfect analogy—but if you end up with a better understanding, yeah, let’s get there. The point is just for you to have more understanding of what I can and can’t do. So even if it’s not perfect: depending on the audience, a simple explanation that is not perfect is often better than a perfectly detailed, highly nuanced answer.

Joannes Vermorel: A slightly contrarian take—slightly. Obviously I agree that if you give an answer that goes completely over the head of your audience, it’s just not a good answer. But the slightly contrarian answer is: in this age of LLMs, I find myself very frequently lacking words, and I’ve realized that probably one of the most important things one should be taught is a very large vocabulary, just to be able to ask the LLM the questions you’re looking for. Very frequently it comes down to putting it into words: what is it that I’m asking? And sometimes there are very specific words that are just eluding me.

And that’s very interesting, because that’s where it’s slightly contrarian: name-dropping words, in this age of LLMs, can be the greatest thing you do for your audience. You say: here are all the words—I’m not going to spend an hour giving you everything; you can do this work on your own with an LLM, ask questions. But I am giving you the list of words that gives you something to chase and ask the LLM about. The LLM may not be very wise, but it is extremely, extensively knowledgeable, especially when it comes to concepts—for example, “nemawashi”—very interesting. You give me the keyword, and now I’m pretty sure ChatGPT can give me the three-page brief. If I want the 10-page brief, I’m pretty sure it can do it. If you just want the one-paragraph brief, same thing.

So that’s why I really think it’s slightly different now—this approach to vocabulary. In the past, I would have said that name-dropping tons of concepts to students was probably a waste of time, but in this age of LLMs, that’s quite interesting, and maybe the one thing they should take away is one page with a hundred words and pointers.

Adam Dejans Jr.: No, I was just going to say that it’s very audience-dependent. Even if you give an executive these words, they don’t really care, and they’re not going to go look them up anyway. If you didn’t get your point across within three minutes, they’re not going to go search it, and they’re not going to use an LLM, even if they have one. Understanding your audience and playing to that is also key. So it depends on what we mean here.

John Elam: Take a senior analyst who does not know what stochastic optimization is—almost every analyst I’ve interacted with is a highly curious person, like a lifelong student. For that kind of person, I’m going to name-drop some big words and let them go look it up. So yeah, I guess I agree with both of you. LLMs have made it so that I have learned so many things I would never have looked up, because I can now get it digested at sort of my level. And especially now that they have history, they know the kinds of things I know, so they can say, “Well, you know how you’re working on this other project? It’s kind of like that,” right? The LLM can answer stuff like that, and it really helps.

But to Adam’s point, though, a lot of the executives that at least I’ve interacted with don’t have that natural, innate curiosity to go, frankly, double-click on a concept. So, it has to land.

Adam Dejans Jr.: Even middle—yeah, even a lot of middle management doesn’t care.

Conor Doherty: I do very much like the way that you—well, everyone—has described that. As someone who teaches rhetoric and has also taught technical writing, the prism I apply to basically all forms of communication (and you’ll notice it in the way I send messages to you) is audience and purpose. Audience—who am I about to talk to, what do they already know, what do they need to know, what are their pre-existing skill sets? Purpose—what exactly is it that I want to convey; do I want to obtain something from them? And those two apply to every email, every text, every brief, every PowerPoint, every speech, every video—audience and purpose. Who’s watching, what are you trying to convey to them or get from them, why are you doing this thing? And understanding your audience—again, to your point, Adam—means there are constraints: does the audience have the pre-existing knowledge to understand what’s happening? Do they have the time, do they have the inclination? These are all shifting priorities. Are they tired, are they glycogen-deficient in their brain because it’s 6:00 p.m.?

For real—because it’s 6:00 p.m. due to the time difference. You started your day fresh, you just had a coffee—they’re exhausted. Again, that would be the context. So: audience, purpose, and then context—where is the conversation happening? But in any case, to transition to my final question: if people want to learn more about rhetoric, I recommend Aristotle. But if people want to learn more about probabilistic forecasting or supply chain—there was a question pitched in a poll I did for this interview, and it was very simply, “Conor, please ask the panel for some recommendations, be it supply chain optimization, probabilistic forecasting, or even just pieces of advice.” So, last question, in reverse order: Joannes, any book recommendations or advice you’d share for people who want to learn more?

Joannes Vermorel: I mean, the series of lectures that I produced on YouTube—if you have the hours. To be fair, I hope they are kind of good, but it’s a 100-hour journey, so you need the time, and it’s a commitment, let’s say.

Conor Doherty: But there are also LLMs that can summarize the transcripts. Yes, the full transcript is on the website as well—if you have an LLM, you can condense that into a one-pager. Yes, all right. John.

Adam Dejans Jr.: Why don’t you do that for us and chew our food.

John Elam: One book I would recommend—and it’s, of course, going to come from me, the product guy—is “The Lean Startup” by Eric Ries. It’s not necessarily a technical book by any means—in fact, it’s probably not a recommendation this audience hears a whole lot—but this book is all about the product and the problem solving. And so, yeah, Eric Ries did this really good book; he’s got some really good examples of how you test your idea, just in a general sense.

And he talks about how different government agencies have been able to get leaner and actually create more value for citizens. He has numerous examples of different startups starting without any technology—zero tech. “Is this even a real problem and will people pay for it?” And they’re literally handholding things manually, emailing things manually, just to test out: is this problem even worth solving? Because I think sometimes we spend a lot of money solving a problem that is there, but no one’s willing to actually pay money for it to go away. So that’s one book I would recommend: “The Lean Startup” by Eric Ries.

Adam Dejans Jr.: Yeah, I think if you’re looking for technical books, there’s plenty to find. I go back to a book I read during the consulting era of my life called “Just Listen” by Mark Goulston. This book is more about how to get people to move from a defensive standpoint to really sharing empathy with them and persuading, which I think is more important than the technical piece. You can always find the technical concept somewhere. And then, of course, our own book, “You Got the Data Job, Now What?”

“You Got the Data Job, Now What?” is a book that John and I authored. It came out of the fact that we saw many of our colleagues, who are very intelligent, often have their ideas overlooked simply because they didn’t know how to present them, or they didn’t have the right relationships or the right foundation built before moving forward. John can elaborate.

John Elam: Yeah, this book was a lot of fun to put together because it basically was a big culmination of all the problems that I’ve felt throughout my career, and a lot of the problems Adam’s felt throughout his career. So, like, it kind of goes through some of the foundational things to having a good career and making an impact at work. And when I say impact at work, I mean that even in academic spaces—like, if you create this really cool new algorithm and it gets reviewed by all these journals but no one uses it—you know, hopefully your research gets used once you pass, I guess, maybe, I don’t know, but you want it to matter.

And so the whole book starts out with communication as the first chapter. There are so many little things in communication that we tried to cover. As a young engineer, I was like, “I’m going to just show people that this is the right way—it’s just objectively right, why not?” And I learned that that’s not how it works—we’re humans, we’re very emotional, we’ve evolved to be very social creatures that love stories; we connect to stories.

And so in the book you’ll find a lot of things about the word wheel that we just shared, you’ll find different storytelling techniques, you’ll find actually one of the things you were talking about, Conor, of like, when you’re presenting, there’s a whole framework in there of five questions you need to ask yourself: why are you here, why is your audience here, what state are they in, what do you really want to communicate, and what’s the call to action afterwards.

And if you’re not doing that, then you’re just kind of just talking, and you might get your point across, but if you can come across with, “This is what I want people to know and this is where they’re at,” you can guide them there. And there’s other things in there, like how do you kick off your first data project, what does that even look like? For folks that maybe have participated in a project but maybe have never run one end-to-end.

And then one of the last things I want to mention about the book, which I think is important but often overlooked, is our section on leadership—formal leadership and informal leadership. One of the main things I wish I had learned earlier was business cases. If I had known how to put together a business case when I was a young engineer, there are a lot more projects I would have gotten funded, and that would have helped the companies I worked for.

And the main thing I want folks to take away about business cases is how ridiculously simple they are. I’ve never seen a business case with more than 10 line items. It’s always, “Here’s what we’re doing today, here’s what it costs each month, here’s what I’d like to do tomorrow, here’s the fixed cost, here’s our variable cost, here’s the delta,” and then people are like, “Where do I sign?” It’s just very simple. When we say “back of the napkin,” I can’t stress that enough—I don’t think I’ve seen a business decision around money that wasn’t a back-of-the-napkin decision.
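The back-of-the-napkin arithmetic John describes really is just a handful of line items. A sketch with purely illustrative numbers (none of these figures come from the interview) might look like this:

```python
# Hypothetical back-of-the-napkin business case: under 10 line items.
# All dollar figures are made up for illustration.

current_monthly_cost = 120_000   # "here's what we're doing today"
proposed_monthly_cost = 70_000   # "here's what I'd like to do tomorrow"
one_time_fixed_cost = 300_000    # implementation / migration cost

# "here's the delta"
monthly_delta = current_monthly_cost - proposed_monthly_cost
payback_months = one_time_fixed_cost / monthly_delta
first_year_net = 12 * monthly_delta - one_time_fixed_cost

print(f"Monthly delta:   ${monthly_delta:,}")
print(f"Payback period:  {payback_months:.1f} months")
print(f"First-year net:  ${first_year_net:,}")
```

Three inputs, three derived numbers, and the executive conversation is essentially “where do I sign?”—which is the point being made about simplicity.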

We’re just trying to make the best thing we can at the time with the information that we have. So, yeah, hopefully people find some value in the book—there’s also some fun stories of our wins and our snafus, so hopefully people find some value in it.

Conor Doherty: I’ll finish that by saying it’s available on Amazon. That’s right—well, because you were too shy to do it, I’ll do it for you. But anyway, thanks guys, appreciate it. No worries. I have no further questions. Adam, John, really, I know I’ve kept you for a long time, so thank you so much for joining us—really appreciate you.

John Elam: It’s been a blast, thanks for having us.

Conor Doherty: Thank you to everyone else—I say get back to work.