00:00:00 Chapter six: intelligence as supply chain glue
00:04:49 Profitability defines intelligence, not human likeness
00:08:17 Superior choices measure operational intelligence
00:12:50 Mechanization reaches white-collar supply chain work
00:17:24 Coding agents shatter readiness objections
00:21:12 Mainstream supply chain playbooks are robotizable
00:26:10 Try coding agents, see proof firsthand
00:30:20 White-collar automation turns OPEX into CAPEX
00:34:24 Software becomes an appreciating competitive asset
00:39:23 Systems of intelligence are only asset fractions
00:42:34 No incremental path toward software-first operations
00:46:23 Unattended automation should handle normal days
00:50:26 Humans improve assets through wicked problems
00:55:02 Every department becomes partly software-driven
01:00:46 Practitioners must rethink value against machines
Summary
The episode argues that intelligence in supply chain is not a matter of prestige, but of results: whatever turns information into more profitable decisions is, by definition, more intelligent. On that basis, much routine supply chain work can already be automated, and firms that ignore this will be outcompeted. The deeper human role remains in solving wicked, strategic problems that machines still handle poorly. Thus supply chain should no longer be seen merely as a cost center, but as a productive asset: a body of decision logic that can appreciate through improvement or depreciate through neglect.
Extended Summary
The discussion in this episode turns on a simple but neglected distinction: intelligence is not a compliment, it is a function. In supply chain, intelligence is not whatever sounds sophisticated, nor is it whatever flatters human judgment. It is the capacity to turn information into decisions that produce better economic results. If one method generates more profit than another, then, in this limited and practical sense, it is more intelligent.
That definition immediately strips away a great deal of confusion. Many companies like to say they employ smart people, but this says little unless those people consistently make better decisions than the alternatives. The relevant comparison is never between human beings and some abstract ideal of machine perfection. It is between one decision-making system and another. A mediocre human process is not vindicated merely because it is human. Nor is software disqualified because it is software.
From there, the conversation moves to the larger historical pattern. Physical labor was mechanized first. White-collar work is now being mechanized as well. What computers once did for arithmetic and bookkeeping, coding agents and automation tools are beginning to do for a much broader range of intellectual tasks. Whether firms feel “ready” for this is beside the point. Markets do not wait for psychological readiness. Competitors who use superior tools gain advantages, and those advantages compound.
But this is not presented as a blank check for fashionable claims about AI. Quite the opposite. Routine, mainstream supply chain work—the standard formulas, categorizations, and repetitive decision processes—can increasingly be automated with ease. The harder problems remain the genuinely strategic ones: valuation, trade-offs under deep uncertainty, and the creation of distinctive advantages competitors cannot easily copy. Those are not spreadsheet problems dressed up in jargon. They are wicked problems, and for now they remain largely human problems.
This is why supply chain should be seen not merely as a cost center, but as a productive asset. When decision logic is embodied in software, maintained, improved, and made economically effective, it becomes something like intellectual capital. It can appreciate through refinement or depreciate through neglect. In that sense, a company’s supply chain capability begins to resemble a piece of productive code rather than a collection of administrative routines.
The practical implication is not despair, but reassessment. Human value does not vanish. It moves upstream. The mundane should be automated. The rare, the difficult, and the strategic remain. The real question for practitioners is no longer whether machines can think exactly like people. The question is where people still create value once machines can do more than they used to.
Full Transcript
Conor Doherty: Welcome back. This is episode six of a special series where Joannes and I take his new book, Introduction to Supply Chain, and we go through the ideas chapter by chapter.
For this series, I assume a very specific position: someone who does not know Lokad, does not know Joannes, and certainly has not worked at Lokad for three and a half years. I am, in this series, one of the 10 million or so average practitioners in the world who might see the book, perhaps on Amazon, where it’s widely available, start reading, and have questions. I take those questions and I bring them to Joannes.
Now, as I said, this is episode six. That means there were five before this one. If you haven’t seen them, I kindly suggest you watch those first, because there will be things that we mention today, in fact in the first few questions, that reference things we talked about previously. And with that out of the way, Joannes, good to see you again.
So, chapter six is cryptically called “Intelligence.” Now, I know in chapter five we talked about data, information, and knowledge, and now we’re moving on to intelligence. So, as concretely as possible, what is the core thesis of chapter six, “Intelligence,” and how does it build on what we covered already?
Joannes Vermorel: Intelligence is the glue. We have established the goal in the chapter on economics: the goal is to maximize rate of return. That’s a given. And in the previous chapter, information, we have seen that we have, I would say, the ingredients. The ingredients are information.
So now we know what we want to do, rate of return, and we have gathered all the relevant information with a clarified understanding of what is data and what is information. So we have the thing, and now the only missing ingredient is very smart execution. But what do we mean by smart exactly?
Obviously, if you’re speaking about a person, you would say, “Oh, that’s someone who is kind of clever,” and whatnot. You have plenty of imprecise definitions. People say, as most companies say, “Oh, we hire talented, smart people,” which might be true. And certainly some very successful companies do, but it is not a very actionable definition, because when you fundamentally ask people, “But how do you define intelligence?” they basically say, “I don’t know, but I can recognize it when I see it in a fellow human being.”
To be fair, that is, in a sense, the de facto standard. But it is not a very actionable, practical definition, because we live in a world where software is very important. That’s more a statement of fact than an opinion: those computers are not a detail of the supply chain landscape. They are the landscape, to a large extent. And that’s what I describe by saying that supply chains can only be observed through essentially electronic records. So the software part is not a detail. It’s literally the foundational layer. Realistically, you can only see your supply chain through those electronic lenses.
And guess what? Those computers can also, in some forms, deliver intelligence, and they already do. Arithmetic is clearly some kind of smart information processing. People could say, “Oh, it’s just mechanical,” but the reality is that until the late ’70s, it was actually people with degrees doing the calculations. So clearly computers have already relieved us of part of the burden of intelligence, of information processing, at least its most mechanical parts, starting with arithmetic.
Now we need to discuss what is left on the table. We have computers that can be kind of half-smart machines. What does that mean? And what is the role of human intelligence in all of that? And let’s even clarify what we mean by intelligence, because we need an understanding that goes beyond “I will tell you when I see it.” That definition is so far from actionable as to be impractical.
Conor Doherty: There’s a few points there, and I’ve already written down some notes, so I’m already off the prepared questions. But there’s an important distinction, I think, to be drawn when you talk about intelligence. And you made a good point right at the start about, you know, you’re a company, a big company, you hire intelligent people, and that might be true.
One of the points in the book, or in this chapter, that you cover is: what is intelligence in the specific context of executing the day-to-day decisions required to operate a supply chain profitably? And do you need that human-level intelligence for that? Do we already have software that can execute that level of intelligence reliably and at scale? And I think you very much argue the latter, and that we’ve had it for many, many, many decades.
Joannes Vermorel: Yes. First, I do not want to have a human-centric definition of intelligence, because the problem is that if your definition is human-centric, then you end up with such a problem, because you end up asking, “What is a human?” Obviously this question is very interesting at a philosophical level, but it’s not a supply chain question. So I try to disentangle.
I’m not trying to define what humanity is, the soul, the consciousness, all that. Those are incredibly important concerns, but they are not supply chain concerns.
Conor Doherty: Profitability and so on.
Joannes Vermorel: Exactly. So I tie everything down to profitability. I’ve explained in chapter three, epistemology—sorry, not that—chapter four is economics.
Conor Doherty: Yes.
Joannes Vermorel: And it is in chapter four, economics, that I expand on this. So I say: what does that mean, to operate intelligently? It’s just to generate decisions that are very profitable, that maximize this rate of return. So in this book, quite literally, I equate intelligence, in the special supply chain context, with the capacity to generate profits.
That’s how you want to turn this raw information into profits, essentially by picking the right decisions. And that will be my definition. It’s very operational: if you have something that generates more profits, then it is more intelligent. If it generates better decisions that turn out to be more profitable, it is more intelligent, no matter what it looks like.
If the thing that generates more profits is using basic percentages, then those basic percentages are smarter. That’s the way I approach the problem so that I can have a definition that is very grounded. And this definition of intelligence is from the perspective of what we need for supply chain. Again, we are not here trying to solve the philosophical problem of human intelligence. We are trying to clarify what is the next ingredient once you have the information, the raw information about everything that you can possibly source about the state of your supply chain, the state of the market, the state of your clients, suppliers, et cetera.
Once you have all this information gathered, how do you convert this information into decisions? I’m saying this is where intelligence takes place. That’s the glue, the magic that will convert one into the other.
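The operational definition above (whichever system converts information into more profitable decisions is, in that limited sense, more intelligent) can be made concrete with a toy comparison. The following Python sketch is purely illustrative: the demand series, margins, and both ordering policies are invented for the example and appear nowhere in the book.

```python
# Two made-up ordering policies scored purely by the profit they generate
# against the same realized demand. All figures here are illustrative.

def profit(orders, demand, unit_margin=5.0, holding_cost=1.0):
    """Total profit of a stream of order decisions against realized demand."""
    stock, total = 0.0, 0.0
    for order, d in zip(orders, demand):
        stock += order                 # units ordered arrive
        sold = min(stock, d)           # sell what demand and stock allow
        stock -= sold
        total += sold * unit_margin - stock * holding_cost
    return total

demand = [8, 12, 9, 15, 7, 11]

policy_a = [10] * len(demand)          # always order a fixed quantity
policy_b = [10] + demand[:-1]          # naively reorder last period's demand

pa, pb = profit(policy_a, demand), profit(policy_b, demand)
print(pa, pb)  # 282.0 268.0
```

By this yardstick, the fixed-quantity policy happens to win on this particular demand series; the point is only that the verdict comes from realized profit, not from how sophisticated either policy looks.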
Conor Doherty: Well, actually, I found the quote, but I already had it in my notes. So just for a bit of context, you write in the book: “Supply chain demands substantial intelligence to execute profitably,” and intelligence, by your definition, is “the capacity to make choices that yield superior future rewards.”
Now, someone might say, “Well, we already have non-software systems for that. We have our human-centric processes to arrive at decisions that yield superior future rewards.” So how do you respond to that?
Joannes Vermorel: Again, I would say yes, but now: is it smart? Does it generate all the profit that it could? It’s a comparative point.
Conor Doherty: So it’s a comparative point you’re making.
Joannes Vermorel: Yes. Two things can be good, but one is better than the other.
Conor Doherty: Exactly.
Joannes Vermorel: The thing is that resources are being allocated. Stuff is being bought, transported, transformed, distributed, et cetera. So the allocations of resources happen. The question is: do they happen intelligently? Do they happen profitably, or as profitably as possible?
So the crux is always a counterfactual. It is about whether you are leaving options for profits on the table. And if those missed options are very obvious, that’s when you would say, “Oh, you are doing something dumb,” because you are leaving so much money on the table. So it’s always comparative.
Now, if we have people who are already doing that, fine. The question is the comparison between what they are doing collectively—so it’s kind of a collective intelligence—and what anything else, including an alternative organization, could be delivering.
Conor Doherty: When you say “alternative organization,” you mean a system of intelligence, like software intervention, or just people organized differently?
Joannes Vermorel: Or just people organized differently. Again, you can imagine that it’s just people doing the work, like an S&OP process. So it can be just people, but organized differently, and it can be people with software, a little bit of software—that would be people with Excel—or people with a lot of software, or just one piece of software that happens to be powerful.
So you have the whole spectrum of having more or less people. Again, when we are in the territory of intelligence, more people doesn’t mean necessarily smarter results. As an analogy, can you just bring 10 people from the street to play chess against a chess grandmaster, and will they win? If you just pull 10 people from the street, the answer is no, probably not.
Conor Doherty: Mhm.
Joannes Vermorel: And if you now pull a thousand people from the street, would those thousand people manage to beat the chess grandmaster? Still not. At least not reliably.
Conor Doherty: Yes. Probably not even in reality.
Joannes Vermorel: Exactly. And that’s where people tend to get confused, because when we think of “unity is strength” and these sorts of things, it is essentially for material tasks, not tasks of intelligence.
If the question is carrying bricks from one place to another, yes, if there is a thousand of us, we are going to carry a lot more bricks than if I was alone. But if it comes to creating the Mona Lisa, there can be a thousand of us, and none of us will ever manage to get anything close to the Mona Lisa.
So that’s why intelligence is so specific. Intelligence is incredibly elusive. It is difficult. And so what I’m saying is that I really want to anchor my definition into something that can ultimately be measured: profits. And I am not making a massive assumption that it has anything to do specifically, in the context of supply chain, with people.
I’m just saying that whatever generates the profits is good. How many people are actually involved in the final recipe, well, that will be something to be discovered, and that will be something very much dependent on the state of your software technology.
Conor Doherty: Well, to build on this, and this is something that we actually started talking about off camera, we don’t need to mention any names, but you mentioned to me today that you received a bit of pushback online for your own perspective on this very concept, the idea of AI—we’ll get to that later—the idea of intelligence and the standard that you were trying to set within supply chain.
People were challenging you, pushing back against you, by making the claim that supply chain is simply not evolved enough for the kind of paradigm that we’re talking about here.
Joannes Vermorel: Yes, but here we need to look at the general trend of technological progress. Work has been mechanized, and it’s still being mechanized. It’s something that has been in progress for literally millennia, and that has accelerated tremendously for anything physical over the last five decades.
So we have progressed. Even irrigation is a way to save labor. There are plenty of things that took literally millennia to really get started, and in the last half-century, physical labor has undergone dramatic productivity gains.
Conor Doherty: Automation, essentially, we’re talking about.
Joannes Vermorel: Yes, automation, essentially, but on every scale. People can think that sometimes it’s not even smart automation. It can be just bigger machines. For example, a containership: it’s still one person piloting the ship, except that there are 20,000 containers on board, and if you wanted to move the same containers with trucks, you would need 20,000 truck drivers.
So the containership is not a very advanced piece of technology, but just by making it very big, you can boost productivity enormously. So productivity gains can be achieved sometimes with fancy technologies and sometimes with just brute force. Container ships are very much brute force: gigantism and batching.
Conor Doherty: Or even, sorry, just to bring it then back to the supply chain context.
Joannes Vermorel: Yes. And now we are talking about white collars, white-collar workers. With the emergence of computing machines, computers have been automating away white-collar work, at least the mundane aspects of the positions, and many, many aspects.
People don’t know this: when my parents started working, there were still people who would do arithmetic by hand. Those were literally full-time jobs. You would have someone who would add up the numbers for all the money that needed to be paid to suppliers, and you would maintain the ledger of every supplier by hand. Then Excel came in, later accounting packages came in, and then ERPs came in to, again, improve this process.
Conor Doherty: Make it more efficient, essentially.
Joannes Vermorel: Exactly. So we started to see productivity gains not just for blue-collar people who were actually moving stuff around, building and crafting, but also for white-collar people. And the process is still ongoing, and lately it has accelerated a lot, a lot. That’s essentially the coding-agent story of the last few months.
The interesting thing is that on LinkedIn, where I routinely post, if I summarize the feedback, it’s mostly positive, but a recurring objection goes: “Joannes, you are advocating for something that will remain the future 100 years in the future.”
Conor Doherty: Yeah, maybe not a hundred, but you’re a decade ahead.
Joannes Vermorel: The automation—the world is not ready. Essentially they say supply chains are not ready for the level of automation you’re speaking of, what you advocate, what Lokad does. It is present, but you are an outlier. Companies are not ready.
And that’s very interesting, because with coding agents, what I see is that nobody is ready, but it’s coming. It’s already there. What coding agents—Claude Code, OpenAI Codex, and a few of their competitors—are showing is that it doesn’t matter if you’re not ready. It’s already there.
And the bar is—what the world, including me, is discovering, and it’s really a punch in the gut—the bar is so much higher than I thought.
Conor Doherty: Both of those arrived after you even wrote the book, actually.
Joannes Vermorel: Yes, since I wrote the book.
Conor Doherty: Yeah. Your discussion of LLMs is now actually out of date. That gives you the scale of progress, which is not a slight on you. It’s just that it was written before Codex.
Joannes Vermorel: Exactly. The progress is absolutely exponential, and it’s very, very strange. Even those AI companies—their marketing brochure is out of date. Usually, as a software vendor, you are always promising the moon. You’re selling something that might be there next year.
And here, they are actually selling stuff that was already done six months ago. So even the marketing departments are late compared to the actual capability of the software, and it’s fantastic.
In the book, I describe general intelligence as the capacity of an intelligent system to improve itself. And guess what? For the latest generation, Opus 4.6 and GPT 5.3, from Anthropic and OpenAI respectively, in both cases the software development teams are saying those coding agents have been literally writing the next version.
Obviously under human supervision, but still. It means that the idea that we are getting close to a software system capable of rewriting itself is—well, in the book I was mentioning with LLMs that maybe we had a spark of general intelligence. Now it’s not a spark anymore. We have a little fire. It’s not a big fire, but it’s clearly building.
So the criticism of saying, “We are not ready,” I think it’s irrelevant. It’s irrelevant. The market does not care if you’re ready or not, because those tools are so brutally efficient that somewhere in the world there will be a competitor of yours that will be using those tools. That’s it.
And again, how long will your company last if you bring—there’s the expression, “bringing a sword to a gunfight”—but here it’s more like you bring a sword, they bring a tank. The magnitude is that large. And by the way, this is a punch in the gut even for a software vendor like Lokad. We are not immune. It is also something that we take very, very seriously.
Conor Doherty: Well, again, we don’t have to get too deep into the weeds on this, but there are some myths or some poorly conceived preconceptions that we would like to address here. So for example, people selling the idea that LLMs, as large language models, are a cure-all, a panacea, you might say, in supply chain—you point out that that’s nonsense. Now that is possibly a bit dated, but where do you now fall on the idea of AI in terms of supply chain decision-making?
Joannes Vermorel: What is clear is that if you want to completely robotize the mainstream supply chain approach, agents are way beyond that.
Conor Doherty: By that, again, you mean—
Joannes Vermorel: I mean safety stocks, ABC analysis, all the usual suspects. So if you want to take all the usual suspects, that would be the entire training material from ASCM, coding agents are way beyond that. If you embrace the mainstream supply chain theory, I would say there is not even a single question left on whether OpenAI Codex or Anthropic Claude Code can robotize that.
Yes, it can. It can with ease. That is not even something really challenging. That’s not even pushing those tools. Not anymore.
Conor Doherty: Why aren’t companies doing that then? This is commercially available for 20 bucks a month. Billion-dollar companies—
Joannes Vermorel: But some do. Again, we’re talking about the majority. Markets are filters, not educators, so you should never assume that a new technology will be embraced by all companies. It never is: perhaps five percent of companies embrace the new technology; the rest just go bankrupt. That is the essence of markets, and it’s very painful. Schumpeter spotted that: the creative destruction of innovation.
Let’s go back to e-commerce. Walmart could have crushed Amazon. All it took was to allocate a few engineers. Jeff Bezos was working in a garage. He had nothing. He could not even hire very good engineers; all of that came later. Very early on, he could only afford cheap engineers. He was a small startup, and he had so many problems. Nothing was ready. He didn’t have the connections with the banks. He didn’t have the logistics networks. So it was a complete mess.
And why did Amazon become such a giant? The answer is because everybody else dropped the ball. Nobody paid attention for decades until it was way too late. And here, what we will see will be a repeat, because e-commerce was not a one-off.
If you look at the end of the 19th century, it’s very interesting. Electricity happens. Electricity is a game changer. If you have electricity, for example, pick any factory, it doesn’t matter which factory, you put light bulbs, and suddenly your factory can operate 24 hours a day. So it is the conquest of the night. That means that your investment into hard assets—if those machines are expensive—suddenly your machines will be working 24 hours a day instead of working, on average through the year, 12 hours a day.
Before light bulbs, operating a factory in the dark, or with candles, was just not possible. So you see, it is the sort of thing where electricity, just adding a few light bulbs, could suddenly double the rate of return of your factory. And guess what? Every single industrial company who missed the opportunity to add light bulbs in their factories went extinct.
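The light-bulb argument reduces to a utilization calculation: a fixed asset that runs twice as many hours returns twice as much per year. A back-of-the-envelope Python sketch, with all figures (machine cost, hourly margin) invented for illustration:

```python
# Illustrative only: doubling the operating hours of a fixed asset
# roughly doubles the annual rate of return on that asset.

machine_cost = 1_000_000.0   # upfront investment in the factory's machines
profit_per_hour = 50.0       # contribution margin per operating hour

def annual_return(hours_per_day):
    """Annual profit as a fraction of the upfront machine investment."""
    return hours_per_day * 365 * profit_per_hour / machine_cost

before = annual_return(12)   # daylight-only operation
after = annual_return(24)    # with light bulbs: the "conquest of the night"
print(before, after)         # 0.219 0.438
```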
And the story has been repeating over and over. It’s the same, for example, for automotive companies. If you look at the beginning of the 20th century, there were hundreds of brands in Europe. Hundreds. And almost all those companies went extinct, and the reason is Taylorism, the idea of organizing your assembly line in a very specific way that maximizes throughput, which is what made the fortune of Ford. Any company that did not become like Ford, in a sense, went bankrupt.
So the question is: is it real or not? Because the thing with tech is that people have been making those grandiose claims over and over, sometimes for very shallow things: all the buzzwords I’ve been talking about. For example, blockchain was over-the-top hype for complete nonsense. Closer to us there was demand sensing; again, that’s vaporware. There is no demand-sensing technology. It’s just a made-up buzzword from some vendors.
So the question is: are we talking about something real? And my answer is: don’t trust me. Just take code. That’s what, by the way, I say to my colleagues at Lokad: take one of those coding agents, use it, and within an hour you will be creating your first app, and your mind will be blown away. It is that easy.
You don’t have to trust Joannes Vermorel from Lokad. You can get the proof within one hour by just doing a test. And by the way, I have seen dozens of YouTubers who are literally not even software people. They are literally discussing cooking, politics, whatever, and they say, “Oh, by the way, I tried this thing and I built an app in an hour, and it’s like a very fancy app.” And they are blown away.
So yes, this time it is something with real impact. And it bears directly on this idea of automating the white-collar job: to restate my claim, the mainstream supply chain theory as it is can be completely robotized.
By the way, the Lokad alternative theory, the one that I present in this book, not quite. Not quite, for reasons that are, by the way, still very valid, and that I discuss in the book. For example, the problem of valuation is a very difficult problem. Your LLMs will just give you nonsense for that. And I give a list of very hard problems where I say: for those, I do not expect an automated solution soon, because they are just too hard, especially the wicked problems.
Again, this book was written well before LLMs became mainstream, and certainly before coding agents became mainstream, but I’m relatively pleased on this front. Those predictions were broadly correct. Claude Code cannot yet solve them properly, and neither can OpenAI Codex. Yet. Those valuation problems are just too hard.
Conor Doherty: Well, this is the thing, because I think we’ve covered intelligence quite a bit, and AI, but there are elements of economics that creep into the chapter, which again is natural because supply chain is a branch of applied economics. And that is the idea of—you mentioned assets earlier—the idea of OPEX versus CAPEX, and of treating supply chain more as an asset.
So I just want to read a couple of quotes, and then I’ll let you take away. But you advocate that what we need to do is turn operational expenditures into capital expenditures. “The practice of supply chain is reified as a productive asset,” or should be reified as a productive asset. “It generates ongoing returns beyond its operating costs.”
Now, the idea of turning OPEX into CAPEX, I’m sure appeals to a lot of people. You’re not the first person to advance that as a concept: “Oh, we should view it more as CAPEX.” But you are advancing a technological way to make that happen. So, yes, in the book and also just from your own perspective, what does that actually look like? What is the productive asset? What is it? What does it look like?
Joannes Vermorel: Again, think blue collar, white collar. A blue-collar worker: if you want to turn OPEX into CAPEX, you replace your worker with a machine, and instead of having to pay a salary, you make an upfront investment, and then the machines will generate the same work for a much, much lower price.
You see, that’s the beauty of automation, that you can remove what is the scarcest resource of all, which is labor.
Conor Doherty: Mhm.
Joannes Vermorel: Ultimately, the scarcest resource of all is labor. What I’m saying is that blue-collar and white-collar are two different kinds of labor, but it’s labor all the same. On one hand, blue collar, you are buying the arms, the physical capabilities, of an employee. In the case of a white-collar worker, you are essentially buying the cognitive capabilities of this employee.
But the bottom line is that for a long time, the idea of converting OPEX into CAPEX was the privilege of blue-collar stuff. You just buy a machine, and this machine does the physical labor.
With computers, we started to enter the territory of white-collar automation. Again, just think of the clerks who are doing the arithmetic to track the ledgers. They have already been replaced by computers. And I’m saying this is just a continuation.
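The OPEX-to-CAPEX conversion described here comes down to a break-even calculation: a recurring salary disappears, and an upfront build cost plus ongoing maintenance takes its place. A hypothetical Python sketch, with the salary, build, and maintenance figures all invented:

```python
# Illustrative break-even of automating a routine white-collar task:
# recurring salary (OPEX) versus upfront software investment (CAPEX).

annual_salary = 60_000.0       # clerk performing a routine decision task
build_cost = 120_000.0         # one-off cost to automate it in software
annual_maintenance = 10_000.0  # the software asset still needs upkeep

def cumulative_saving(years):
    """Net savings after a given number of years of running the software."""
    return years * annual_salary - (build_cost + years * annual_maintenance)

savings = [cumulative_saving(y) for y in range(1, 6)]
print(savings)  # break-even falls between year 2 and year 3
```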
The interesting thing about accounting is that accounting is essentially a cost center. No company ever becomes super profitable because it has good accounting. It’s only that if you have bad accounting—
Conor Doherty: Yeah, we’ve covered this last week.
Joannes Vermorel: Exactly. If you don’t have good accounting, money will be lost, stolen, bribed, whatever. So you need it. It’s survival for your company. You need double-entry accounting. It’s not an option, at least if you operate at any scale.
Conor Doherty: Yes.
Joannes Vermorel: Now, the interesting thing about supply chain is that this is—if you play the game right, that’s what I defend—supply chain is not a cost center, it’s a profit center.
First, to make supply chain a profit center, you need to understand why it can be a profit center, because if you’re stuck with the mainstream supply chain perspective, it says: we have requirements (service levels, that sort of thing), and then the game of supply chain is to minimize the cost of just meeting this bar.
I say if you frame the problem like that, supply chain can never be anything but a cost center. So this is just a wrong framing. Again, that’s a philosophical position, more than a financial one. When I get into the meat of it, I approach supply chain as something where decisions can have a rate of return. This rate of return can be very nice, meaning that one dollar will become two after a while, or very bad, meaning it will become half a dollar after a while.
So what I’m saying is that when I say “a profit center,” I don’t mean that your supply chain will make money. I just say it has the potential to.
Conor Doherty: Yeah.
Joannes Vermorel: If you play the game badly, you would just lose money. Now, if we embrace this idea that supply chain has the potential to be a profit center, then the question is: can this machine, this computer with the right software, generate automatically, machine-style, those profits for me? And the answer that I’m giving is yes, and it’s not even science fiction.
Conor Doherty: Yeah. Well, again, allow me to perhaps reframe the question. When you say you want to reframe supply chain from a cost center to an asset, you’re being quite literal. An asset doesn’t necessarily have to be something you hold physically. I mean, stocks are assets. But those are things that, by definition, appreciate or depreciate.
You are actually making the argument that you can essentially convert supply chain into a productive or appreciating asset. What does that mean, though, a productive asset? What do I look at and go, “There’s my supply chain”?
Joannes Vermorel: In practice, the thing will be software. Because you have other kinds of productive assets that are just intellectual. It could be, for example, a brand.
Conor Doherty: Yes.
Joannes Vermorel: That’s an asset. Obviously, when Disney purchased Marvel as a brand, that was a productive asset. It’s something that can generate revenue and does generate revenue.
So what I’m saying is that this is an intellectual property asset. What sort of asset? Well, it’s not a patent, it’s not a brand, it’s a piece of software. It’s code, essentially. And why is it exactly an asset? The short answer is because fundamentally you are playing a game that is very competitive.
And thus the idea is that if you play the game of supply chain right, it is something where you can have in your software and your logic something that is very unique to you, and that lets you play better than the competition. Not necessarily everywhere all the time, but that gives you a niche where you are the strongest.
So that’s why I say this becomes a productive asset, because it’s something that, if done right, will generate profits and will help your company carve out an island of superiority in the market, with kind of a moat. Because if you do not have an island of superiority with a moat, then you’re going to be replaced by competitors. By definition, a company that survives is a company that has an island within the market where they are not just good, they are the best. If they have zero segments where they are the best, they will disappear.
Conor Doherty: I used the term “appreciate” earlier and “depreciate” because if we’re talking about assets that look like a house, it either appreciates or depreciates. You buy a car, the moment you drive it off the lot, it has depreciated. You buy a Rolex, a good one, a Submariner, it can appreciate in value. So things go up or they go down.
Applied in this context, from my read of what you’ve described, you do turn the supply chain into code. It is an asset, and by actively maintaining it or improving it, it makes better decisions, because again it’s not a static concept. By allocating resources sensibly, you can make that code (again, I know it depends) produce better and better decisions that are more financially rewarding to you. It has appreciated.
If I don’t look at it for six months, it will produce worse decisions, or not as good as it was at the beginning. That asset has depreciated. Have I understood you correctly?
Joannes Vermorel: Absolutely. Perfect. And by the way, for software, this game has been played for half a century by software vendors. Microsoft has an asset called Microsoft Excel. They can let it depreciate, and then the software gets older and older and people buy fewer and fewer licenses. Or they can invest in it to make it better and give people an incentive to buy the new versions.
You see, this is a productive asset, because every single day tons of people are paying Microsoft for a license for this software, as a subscription or otherwise. So for Microsoft, this is an asset. This is something that generates revenue, and if they don’t do anything, then it slowly but surely depreciates.
For example, for some of its assets, Microsoft has said, “We are never going to reinvest again.” That would be the Microsoft Access story. Microsoft Access is still an asset, but a depreciating one. Some people still buy licenses for it, so it’s still generating revenue for Microsoft, although it is on a declining trajectory. Microsoft decided something like 15 years ago that they would make no further investments in Access except minimal updates, just to make sure that the software keeps running. But that’s it. Nothing really revolutionary, just minimal maintenance.
Conor Doherty: I also want to be clear about the nomenclature, the naming we’re using here. We talk about supply chain and say that it is invisible, it can’t be touched (that’s chapter one), it can’t be directly observed. We’re now saying that it can be CAPEX, an asset, an appreciating asset, yes. But we also talk about systems of intelligence. The system of intelligence is the asset, which is then your supply chain. Are these all basically becoming synonymous?
Joannes Vermorel: No, no. The reason is that introducing systems of intelligence is really a software-centric discussion.
Conor Doherty: But it is the asset. That’s what you’re buying. That’s where someone might get confused.
Joannes Vermorel: No, no, no, no. The system of intelligence is only a fraction of the asset. Let me explain why, from the company’s perspective.
Conor Doherty: Yes.
Joannes Vermorel: For the company, you need to have the people and the institutional knowledge to operate your asset. To have really a full asset—because we don’t have Skynet, because it’s not fully autonomous—you need people to operate the software. So if you think about your asset, you would need to include the people who are in charge of operating the software and maintaining the software as part of this asset.
But the system of intelligence is really about clarifying the responsibility boundaries of different pieces of your applicative landscape. It’s just about saying this piece of software should be responsible for this, but not that.
Conor Doherty: Yeah. And the system of intelligence generates the decisions, which is what we were just talking about in terms of the lines of code that generate decisions. So that’s where I’m just trying to clarify.
Joannes Vermorel: But what I would typically describe as a system of intelligence is really a class of software. I would not include in that the fact that you need people to maintain it. Again, when I discuss systems of records or systems of reports, I’m really within software. I’m just classifying different sorts of software here.
I’m taking a software-centric perspective and thinking what sort of features belong to the software or not. Here, when I say asset, I’m taking not a software-centric perspective, I’m taking an economic perspective. One way to think of it is the opposite: a liability. Is it an asset or a liability? That’s the way I’m thinking. So you see, this is an economic perspective, an economic stance.
When I was discussing systems of intelligence, it was a software stance. I know that there is a lot of overlap, but fundamentally I’m not looking at the thing from the same angle.
Conor Doherty: Okay.
Joannes Vermorel: At the end of the day, there are massive overlaps, but again it’s a matter of perspective. Am I looking at the problem from a software angle, or am I looking at the problem with an economic angle?
Conor Doherty: Okay. Okay. Well then I do want to follow up on that, because again, from the perspective of a practitioner, there are sections in the chapter that are very technical. You start talking about stochastic gradient descent, for example. That’s all cool.
The thing that really stands out to me is the economics, because that’s stuff that, oh, that’s speaking my language. That’s my day-to-day: money in, money out. And if I am somebody who picks up the book, starts reading, and says, “Oh, I love this idea. I like the idea of finally converting from cost center to productive asset,” what is my first step? Is it system of intelligence? What’s the linear path between “yes, I like this idea” and “I’m making progress to institute it”?
Joannes Vermorel: The problem is that there is no incrementalism. You cannot linearly progress from horses to cars. That’s the problem. Again, here we are talking about something that is a radical departure from conventional wisdom.
We need to embrace the idea that intelligence is a spectrum, and within this spectrum, computers and machines are eating more and more of the spectrum of what was in the past the privilege of the human mind. And now we have reached, with coding agents, a situation where the philosophical position should be: anything, everything, should be done by a piece of software intellectually unless proven otherwise.
That is even stronger than what I put in my book, because I think those coding agents are now putting the bar so high that the correct intellectual stance should be: unless you can prove that this thing is beyond what an agent can do, we should assume that by default an agent can and will do it. That is the correct stance.
Just like transportation: unless you can prove to me that carrying the stuff by hand is the right approach, by default I will assume that using a vehicle for transportation is better. You need to prove to me that not using a vehicle is the correct option.
In the world of blue-collar work, this is so obvious that nowadays people look at anything repetitive and ask, “Can’t we use a machine for that?” That is their default thinking. If I tell you there is a Charlie Chaplin, Modern Times situation where you have to strike a hammer 5,000 times a day, people would say, “Surely there should be a machine. Nobody should be holding a heavy hammer and hammering 5,000 times a day.” Obviously a machine is needed.
What has happened, and I think it happened last year with coding agents, is that now the correct philosophical paradigm is: until proven otherwise, for intellectual work it should be a machine. And yes, it doesn’t mean that human intellectual work has disappeared. But the correct philosophical stance should be: you automate everything until you have very good reasons not to.
Conor Doherty: Well, okay. So right on the heels of that, to quote you from chapter six: “At scale, unattended automation of the mundane operational decisions has been within reach in supply chain for well over a decade now.” The mundane being the day-to-day, simpler stuff.
Just two points to this question. I’ll let you address them in sequence. One, operationally, what does unattended mean? If you’re talking about end-to-end, sketch that out for me. But also, where do humans fit in that if something goes wrong?
Joannes Vermorel: So, unattended means the following. Let’s take an example, any kind of normal business. Think of Walmart. It’s just a normal business. There is nothing really extravagant going on. Yes, the news may say that the US has some geopolitical woes somewhere in the world, and some dictator is doing crap somewhere, but fundamentally just think of Walmart on a reasonably normal day in the US.
There is nothing really extravagant happening. It’s not like the snowstorm of the century. It’s just kind of normal weather.
Conor Doherty: Average day.
Joannes Vermorel: Yeah, it’s a normal day. What I’m saying is that if it’s a day that feels just like any of those days, and you had a hundred of those days in the year already, then your system of intelligence, this piece of software, should be able to take all the supply chain decisions on your behalf unattended.
And it should do it not with people who press a button to say yes, yes, yes. It just takes the decisions, and you have enough confidence. The system has been proven sufficiently reliable so that you can actually let the system do it for you.
People would say, “Oh, can we trust a computer with that?” And the answer is absolutely. There are plenty of examples. Take the ABS in your car, the anti-lock braking system. It’s taking an incredibly life-critical decision. It’s going to decide that you are braking too much. It’s literally not a system to brake more. It’s a system to brake less so that your wheels don’t start slipping on the road.
So fundamentally, if the ABS malfunctions, it means you could lose your brakes. So obviously this is a safety-critical system, because it’s literally at the point you need it the most that it has to be super, super reliable. We have had these sorts of things for decades. It’s a non-issue, granted that we have proper engineering.
And so what I’m saying is that in supply chain, by definition, most days are mundane; otherwise we have the wrong definition of mundane. I would say a system should probably, at least 19 days out of 20 (again, that’s a guideline), be able to operate completely unattended. If you need to step in more than twice a month, then you have a problem. Yes, there will be months that are really, really crazy. Once a decade you have a crazy month where five seemingly impossible things happen in the same month.
But again, if we go to normality, most days are normal. Your system should just work completely unattended. And what do I mean by unattended? It means that if at the end of the day you look back at those decisions—what quantity you sent where—you say, “It was good. It was good. Nothing to see. It was good. It worked just as expected. There is nothing that I regret. The system operated exactly as it should,” and that’s it.
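The "unattended by default, escalate on the abnormal" stance described above can be sketched in a few lines. This is a hypothetical illustration, not anything from Lokad's actual software: the normality test, the 1.1 buffer, and the three-deviation tolerance are all invented for the example. The point is only the shape of the control flow: on a day that looks like the hundred normal days before it, the system decides on its own; otherwise it hands over to a human.

```python
# Hypothetical sketch of "unattended on normal days, escalate on crazy days".
# The threshold, the buffer, and all names are illustrative assumptions.

def is_mundane(todays_demand, historical_demands, tolerance=3.0):
    """A day counts as mundane if demand sits within a few deviations of history."""
    n = len(historical_demands)
    mean = sum(historical_demands) / n
    variance = sum((d - mean) ** 2 for d in historical_demands) / n
    std = (variance ** 0.5) or 1.0  # guard against a perfectly flat history
    return abs(todays_demand - mean) <= tolerance * std

def decide(todays_demand, historical_demands):
    if is_mundane(todays_demand, historical_demands):
        # Normal day: take the decision unattended, no human pressing "yes".
        return ("auto", round(todays_demand * 1.1))  # reorder with a small buffer
    # Abnormal day: hand over to a human, ideally a couple of times a month at most.
    return ("escalate", None)

history = [100, 95, 105, 98, 102]
print(decide(101, history))  # ('auto', 111)   -> a normal day, decided unattended
print(decide(400, history))  # ('escalate', None) -> a crazy day, human steps in
```

The confidence Vermorel describes comes from the track record of the "auto" branch, not from a human approving each decision.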
Conor Doherty: Well, so that’s the exception. That addresses the idea of what happens when exceptions arise. Again, like you buy a Rolex, you buy it as an asset, something goes wrong, well then you call a watch repairman. That’s fine in terms of improving the asset, because earlier you agreed that an asset can appreciate. Appreciate means it gets better. If it’s getting better, presumably that is from some sort of human intervention at some node in that process. Sketch out how that works for the listeners.
Joannes Vermorel: Again, we are in the software paradigm. So it’s about gradually refactoring your software so that it becomes a better version of itself. It is an engineering problem, a software engineering problem.
Conor Doherty: So a human insight problem.
Joannes Vermorel: Absolutely. That’s exactly what I’m saying. It is a creative problem. Fundamentally it’s something where you will have to invent edges against your competitor. You will have to do things that your competitor does not. So it’s a wicked problem.
There is no correct solution. A solution might only be correct because your competitors are not doing it yet. If they do, you need to differentiate. So we are in the territory of wicked problems, open problems, problems that do not have what I described as formal boundaries. And yes, that’s the privilege of the human mind, at least for now.
But here we have a clear understanding of why we need a human mind. It’s because those problems are hyper-hard. Truly, they cannot be brute-forced. They cannot be enumerated.
Conor Doherty: For example, sketch an example, if you can. Again, you’re a retail company, whatever. You’re an aerospace MRO. Pick anything. Example of where you have a productive asset and a person at that company helps improve that asset.
Joannes Vermorel: So, for example, you might ask: can I establish a partnership with one of my suppliers and frame the partnership in such a way that I have privileged access on the market, access to something at a cost lower than anybody else’s?
You see, what does this partnership look like? It can be anything. It can be a joint venture, it can be building a train line for this supplier.
Conor Doherty: Strategic insights.
Joannes Vermorel: Yes. It can be anything. It can be me handing over one of my technologies to this supplier to make them better, with a contract. It can be the supplier investing in my company. It can be just the teams going on a retreat together every month. It can take so many forms.
It’s not a proposition that can be enumerated with option A, option B, option C. No. It’s like the possibilities are endless.
Conor Doherty: Okay. And in terms of—again, to take an example—this is how I kind of framed it, and perhaps I’m wrong, you can correct me. But if you’re in a situation where at the end of the day—you took the example of Walmart—you look back, yeah, things could have been good, but you know what, I actually think that that decision could have been slightly higher, because that reflects something that I kind of know about that supplier or that store. I just happen to have a very unique human insight into that.
If that’s a repeatable event or a recurring event, that can be folded into the code, or expressed as code, absorbed into this asset, and then in future that has now trained the model to produce decisions, and you don’t have to repeatedly—
Joannes Vermorel: Yes. It’s exactly the way Microsoft improves Microsoft Excel. They look at what people say, their feedback, and if they see that something comes up very frequently, then at some point they decide to act and say, “This is not noise. This is not just a fluke. This is actually fundamentally correct feedback about the product, and thus we modify the product in this direction.”
And you see, here the main problem is that at the end of the day, you can see many areas where, ah, this decision could have been better. But the problem is that at the end of the day you know more than the software did at the beginning of the day. You have more information. You have to sort out what is your genuine insight versus hindsight bias, which is just that you have postmortem information that wasn’t available to your system of intelligence at the beginning of the day. So you cannot blame the system for being ignorant of what only unfolded during the day.
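The distinction between genuine insight and postmortem information can be made mechanical: judge a decision against what was knowable at decision time, not against the demand that only materialized later. The sketch below is purely illustrative (the scenario list and prices are invented); it shows a case where ordering 100 units was the better ex-ante decision, even though hindsight, after demand came in at 120, makes the larger order look smarter.

```python
# Hypothetical sketch: scoring a decision against the demand scenarios that
# were knowable *before* the day unfolded. All numbers are illustrative.

def expected_profit_at_decision_time(order_qty, demand_scenarios, unit_cost, unit_price):
    """Average profit across the demand scenarios known at decision time."""
    profits = [
        min(order_qty, d) * unit_price - order_qty * unit_cost
        for d in demand_scenarios
    ]
    return sum(profits) / len(profits)

# What the system knew that morning: demand could plausibly be 80, 100, or 120.
scenarios = [80, 100, 120]

# The system ordered 100. Demand turned out to be 120, so ordering 120 *looks*
# better in hindsight, but only with postmortem information.
ex_ante_100 = expected_profit_at_decision_time(100, scenarios, unit_cost=5.0, unit_price=10.0)
ex_ante_120 = expected_profit_at_decision_time(120, scenarios, unit_cost=5.0, unit_price=10.0)
print(ex_ante_100 > ex_ante_120)  # True: ordering 100 was the better ex-ante call
```

Only a recurring gap between ex-ante scores and realized outcomes is the kind of "fundamentally correct feedback" worth folding back into the code.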
Conor Doherty: Okay. Well, we’ve been going for just about an hour, so I do want to start wrapping up a little bit. But something that I think is worth clarifying is: if supply chain becomes an asset, what department owns that asset? You can say the company, but where does that show up? Is it finance?
Joannes Vermorel: The problem is that with the automation of white-collar work, all the white-collar jobs are being automated. For me, that was evident even 20 years ago. And by the way, I even have a lecture, my second lecture, where I state: “The 21st century will be the century of the mechanization of the white collar.” That’s literally my opening statement.
So bottom line, this idea that we are going to convert companies, their divisions, into mini software companies for every single department, it’s coming. It has been coming for a long time, and now it’s coming strong. So the problem is that if you say software belongs to IT—
Conor Doherty: Mhm.
Joannes Vermorel: What is the endgame of that? It runs the entire company. So we don’t want that. The point is that we don’t want that, because it makes the notion of IT meaningless. If you say I have legal, marketing, supply chain, whatever, it’s a division of labor. It is a way to slice and dice people.
So if you say everything goes into IT, then okay, that means that it is now the company, and then the question becomes: how do I slice it? I’m just pushing the problem, kicking the can. If I put everything in IT, why not, but then the question will be how do I slice it?
So what I’m saying is that putting everything in IT makes the word meaningless. It just means IT becomes the company. So if we want to preserve words that have meaning, it cannot be “everything goes to IT,” because then the division of labor is absent and our words stop having any meaning.
So what I’m saying is that we are in a world where pretty much anywhere you used to have lots of white-collar workers, you’re now going to have a lot of software instead. And the interesting thing is that coding agents give a fascinating glimpse of what a future company will look like. Maybe every single division will have people operating coding agents to create their own assets for their own division.
It means that you have a legal department, those people are using agents to operate at scale and be extremely productive. Same for HR. Same for every single division. So what I’m saying is that if you say that supply chain becomes essentially a software game, and that’s what I advocate in this book, then it just means that the division of supply chain becomes kind of a specialized software division within the company, and their goal is to create, operate, maintain, and develop an asset that is productive and that generates profits by playing the game of supply chain very profitably.
And this will be a parallel track, because marketing will play the exact same game, and they will have the same problem, and they will need to also develop their own productive assets so that, at the sort of branding level, market-awareness level, they cultivate something that is generating money for the company.
Again, with a real productive asset, the human labor force is still there, but it is not dominant. That’s kind of the future of many large corporations. It is already the case for manufacturing companies: in most manufacturing companies, what dominates is not the labor component. If you look at Nvidia, their spending on labor is not dominant. What dominates is capital, the price of the machines they operate. That is way more significant than the blue-collar workers who operate them.
And if you go to Microsoft, the valuation of their software assets is much higher than the price that they pay for the employees. So fundamentally software companies have been operating in this paradigm for decades, which is that the valuation of your intangible software assets is literally the bulk of what you do. Yes, you pay salaries on top of that, good salaries for software engineers, but comparatively to your assets, they’re small.
And now we are just looking at the next stage, where pretty much all companies start to become a little bit more like software companies. That is a strange world, but I would also say that it is a world where a lot of people had seen this world coming for decades. So it’s not super new either.
Conor Doherty: All right. Well, my last question, and it kind of comes full circle to the idea of the supply chain practitioner reading this. For the 10 million or so supply chain practitioners who might get to the end of chapter six, what are they supposed to take away from what you have just said?
Joannes Vermorel: They need to rethink their own human intelligence and where their value-add is compared to machine intelligence. That’s exactly the question that, two centuries ago, blue collars were asked. This very same question: what is your added value?
And by the way, there are plenty of excellent answers. You have a lot of people who, with their hands, are still being valued by the market quite a lot. If you go to a grand restaurant in Paris, you have a chef. The chef is a blue-collar worker, and usually this chef is earning very well.
So there are plenty of very good answers. The answer is not pain and misery. If you look at the people who are earning the most money in France, many of those chefs rank quite high in terms of wealth percentile compared to the general population.
But nevertheless, this is a real question that needs to be addressed. All workers, blue collars, have been forced to ask themselves the question: what is my value compared to the machine? And now white collars have the exact same need to really think about what is their value compared to machine intelligence.
My quick reassurance is: don’t worry, you still have plenty of room. But it’s going to be different, and you just need to think about that carefully, and not assume that the world is going to stay the same, because it’s really not.
Conor Doherty: So if I were to summarize that, the general advice 10 years ago was “learn to code,” and today it’s “learn to cook.”
Joannes Vermorel: Again, or learn to meta-code, because a coding agent is just coding for you at a much greater speed. If you already knew how to code, it’s good. People who already know how to code, guess what, they are the ones for whom it is easiest to adopt those technologies. So you see, it’s just that the mix of skills, what you need to learn extra, becomes kind of different.
But fundamentally, being very knowledgeable about coding, for example, doesn’t make you ill-equipped for the next stage of this revolution. Quite the contrary. So 10 years ago, learning to code would have been good advice. It’s still good advice, by the way, if you want to really embrace coding agents. It’s better if you yourself are at least a little bit fluent with a programming mindset.
Conor Doherty: Yes.
Joannes Vermorel: The syntax of languages is much less relevant. But again, the world is vast, opportunities are very, very numerous, so I cannot suggest one size fits all. It will be a journey that will be very personal, just like, again, two centuries ago, blue collars who were mechanized away. Some decided to go a path of becoming an artist, some became something completely different, et cetera, et cetera. There is a zillion different paths.
But you need to pay attention, because the mechanization of basic, mundane intellectual operations is already there. It’s already there.
Conor Doherty: All right. Well, Joannes, thank you very much. I don’t have any other questions. I’ll see you soon for chapter seven.
And to you for watching, thank you very much for your time. As always, I say it every single week, every single video: if you want to continue the conversation, feel free to reach out to Joannes and me. The easiest way is on LinkedIn, or failing that, you can send us an email at contact@lokad.com.
And with that, we’ll see you for chapter seven next time. And yeah, get back to work.