00:00:00 Introduction of the show and AI’s impact on white-collar jobs
00:02:20 Introduction of Large Language Models (LLMs)
00:03:27 Joannes’ realization about generative AI and GPT-4
00:05:44 LLMs as a universal templating machine
00:07:05 Joannes on automation potential and change of his perspective
00:10:38 Challenges in automating supplier communication
00:12:24 Impact of LLMs summarized
00:14:11 Mundane problems in automation
00:15:59 Scale of extinction level event
00:17:48 AI’s understanding of industry jargon
00:19:34 AI’s impact on jobs questioned
00:21:52 Automated response system and importance of end-to-end automation
00:28:08 Companies already automating processes
00:30:36 Improving efficiency and gaining competitive advantage with AI
00:34:28 Market readiness for AI
00:36:27 Origins of AI development
00:38:55 Automating hard parts in supply chain
00:41:36 Society gets richer with job automation
00:44:23 Future of studying supply chain science
00:46:54 Transition to audience questions
00:50:12 Rethinking Lokad’s technological roadmap
00:52:29 Q&A session begins
00:55:00 Financial perspective in supply chain
00:58:46 Question about pricing and bottom line impact
01:01:00 LLMs for small, mundane problems and tasks
01:07:17 General intelligence and extinction events
01:09:49 AI’s impact on small and medium enterprises
01:13:02 Benefits of automation in Lokad
01:17:55 Predicting extinction events for non-adopters
01:20:32 Paris as an example of progress through automation
01:22:20 Question about successful AI and human collaboration
01:26:19 Limitations of high-level human intelligence
01:28:31 Question about AI in small countries’ supply chains
01:32:25 AI as an equalizer for developing countries
01:34:05 Sewing machine simplicity and impact
01:35:31 Evolution, not revolution
01:36:19 Closing thoughts

Summary

In a conversation between Conor Doherty and Joannes Vermorel of Lokad, Vermorel predicts that artificial intelligence (AI) will lead to a mass extinction of back office white-collar jobs by 2030, much sooner than previous predictions. He attributes this to the success of Large Language Models (LLMs), which he believes will impact all back office jobs, particularly in supply chains. Vermorel argues that the goal should be complete automation for repetitive jobs, unlocking significant productivity improvements. He predicts that many companies will start removing people from these positions within months. Vermorel suggests that tasks requiring high-level human intelligence, such as strategic decisions, are still beyond the capabilities of LLMs.

Extended Summary

In a recent conversation between Conor Doherty, Head of Communication at Lokad, and Joannes Vermorel, CEO and founder of the same company, the two discussed the implications of artificial intelligence (AI) and supply chain optimization. Vermorel expressed his belief that AI will lead to a mass extinction event for back office white-collar employees. He referred to past predictions that AI would eliminate 90% of white-collar jobs by 2050, but he believes this will happen much sooner, by 2030.

The game-changer, according to Vermorel, is the success of Large Language Models (LLMs), which will impact all back office white-collar jobs, especially those in supply chains. Vermorel admitted that he initially missed the revolution of LLMs and only realized its potential about 18 months ago when he started to work with generative AI. He shared his experience with GPT-4, a model by OpenAI, and how it made him realize the production-grade potential of this technology. He explained that GPT-4 is an order of magnitude smarter than GPT-3.5, and once you understand how it works, you can adapt it to work with GPT-3.5 as well.

Vermorel described LLMs as a universal templating machine, which is incredibly powerful and resilient to noise. He shared that he rewrote the entire roadmap of Lokad more than a year ago and they have been automating one task after another over the last 12 months. He expressed his amazement at the magnitude of what can be delivered through automation and the difficulty in finding problems that can’t be automated. He shared that Lokad automated the most difficult part, the quantitative decisions, a decade ago, and the rest has been automated in the last 12 months with the help of modern LLMs.

Vermorel explained that automation, particularly in linguistic tasks, has already surpassed human skill and intelligence in terms of speed and accuracy. He gave an example of how large language models (LLMs) are good at avoiding common mistakes, such as misinterpreting the color in a product’s name as the actual color of the product. Vermorel likened LLMs to a servant that is familiar with the terminology and jargon of every industry, making it superior to an average person who may not be familiar with specific technical terms. He stated that the real question is what can’t be automated, as so far, everything they have tried has worked.

Vermorel disagreed with the view that AI will act as a co-pilot to help humans make decisions, arguing that the goal should be complete end-to-end automation for repetitive jobs, which can unlock significant productivity improvements. He responded that non-repetitive jobs would survive, but many tasks that seem non-repetitive on a daily basis are actually repetitive over the course of a year. He confirmed that the timeframe for automation is going to be very compressed and that many companies are already moving quickly towards automation.

Vermorel explained that previous technological revolutions were limited to specific industries, but large language models apply to almost all white-collar jobs, especially back-office ones. He explained that the market will move slower than the back office in terms of automation, as the wider audience’s expectations dictate the pace. He emphasized that clients do not care if the execution of production is completely automated or done by clerks.

Vermorel predicted that the automation of white-collar jobs will be a surprise, but he pointed out that blue-collar jobs have been undergoing similar changes for the last 150 years. He used the example of water carriers in Paris, a job that went extinct with the introduction of plumbing. He discussed how automation has already made certain tasks extinct, such as comparing contract drafts line by line, a task now done by Microsoft Word. He described the pace of these changes as gradual until the introduction of LLMs, which he says feels like walking 20 years into the future in just one year.

In response to a question about the future of supply chain science, Vermorel asserted that the fundamentals taught in his supply chain lectures will not be automated. He encouraged focusing on these fundamental questions and not on the trivialities that will be automated by LLMs. He summarized that LLMs represent an extinction event for back office corporate functions, predicting that many companies will start removing people from these positions. He described this as a matter of months and urged quick action.

In response to a question about which job titles might become obsolete, Vermorel listed supply and demand planner, inventory analyst, and category manager. He suggested that roles like supply chain scientist, which involve crafting numerical recipes and strategic thinking, will not be automated. Vermorel explained that Lokad has automated not only fundamental decisions like planning, scheduling, buying, producing, allocating, and pricing updates, but also the surrounding processes such as master data management, communication, and notifications to clients and suppliers.

In response to a question about advice for young people entering the supply chain, Vermorel suggested focusing on strategic understanding, critical thinking, and programming skills. He believes that LLMs will not replace these skills but will enhance productivity. Vermorel predicted that AI-driven supply chain solutions will have a more pronounced effect on smaller companies than larger ones. He explained that the high productivity of these tools makes automation accessible for small companies, allowing them to compete with larger companies.

Vermorel shared that the automation at Lokad has improved quality and increased productivity. He noted that it’s too early to see the impact on subscription rates and other metrics due to the slow sales cycles of enterprise software. He warned against relying on numbers, using the example of Kodak’s failure to adapt to digital photography. He predicted that companies that can automate their processes will be more agile and reliable, and those that don’t will not survive. He likened this to an extinction event.

Vermorel emphasized the importance of liberating people from tedious jobs for the betterment of supply chain and overall company growth. He believes this will allow people to think strategically and not be distracted by minor tasks. He explained that the cooperation between humans and machines is not as envisioned. It’s more about automating tasks completely, such as the Lokad website translation and Lokad TV video timestamps, which are now done automatically. He suggested that the real question is what cannot be automated.

Vermorel suggested that tasks requiring high-level human intelligence, such as strategic decisions and macro questions for the company, are still beyond the capabilities of LLMs. He explained that LLMs are incredibly accessible and do not require a super talented workforce or high bandwidth Internet connection. He emphasized that this technology is cheap and can be a great equalizer for countries with limited resources.

Vermorel agreed with Conor’s comparison and warned companies not to miss out on this evolution. He suggested that companies that do not adopt this technology now may not be able to catch up in the future.

Full Transcript

Conor Doherty: Welcome to Lokad Live. My name is Conor. I’m the Head of Communication at Lokad. And joining me in the studio is Lokad founder, Joannes Vermorel. Today’s topic might be the most serious one we’ve ever had on the show. A frank, dispassionate discussion on the true state-of-the-art of AI and supply chain and importantly, what it means for people in this space. This is intended to be an interactive discussion, so if you have questions, please submit them in the live chat and we’ll answer as many as we can in the time that we have. So, Joannes, let’s not bury the lead any longer. Why are we here?

Joannes Vermorel: I believe that we are facing something that will probably be characterized as a mass extinction event for back office white-collar employees. Five years ago, there were a lot of consultants making studies and saying that by 2050, AI would have eliminated 90% of the white-collar jobs. The reasons and the technologies that were invoked in those reports were completely bogus, and it turned out that the timeframe was also completely bogus. But the only thing that was true is the 90%. And the timeframe, as I see it now, is going to be vastly accelerated. It’s not going to be 2050, it’s going to be 2030. What changed everything was the success of LLMs (Large Language Models). This will impact pretty much all the back office white-collar jobs, especially the ones in supply chains. The change is coming super fast, much faster than what I thought 18 months ago.

Conor Doherty: You referenced consultants before, speculating on the trajectory of this evolution or extinction as you put it. What exactly happened last year with the emergence of LLMs to expedite this evolution?

Joannes Vermorel: The true revolution was about three years ago. I missed it. I only realized what was happening about 18 months ago. At that time, I started to play with generative AI. We did an interview about it almost a year and a half ago. At the time, I was looking at these technologies. Generative AI had been around for 20 years and was making progress every single year. I was starting to realize that it could be used for production purposes, but that was small touches like generating a few illustrations for lectures.

Then there were LLMs and chatbots. They were kind of nice, but I thought they were just a fancy gadget. I hadn’t really realized what they could be used for. Then I came to my senses when I started to work with GPT-4, the beta, a little bit more than a year ago. I realized that this technology is production-grade. There was a jump that was absolutely enormous. I realized with GPT-4, a model by OpenAI, how GPT-3.5, which had been around for several years, was intended to be used. The interesting thing is that it took a second breakthrough, GPT-4, for me to understand. GPT-4 is an order of magnitude smarter than GPT-3.5.

But once you start to understand how it works, and it’s much easier with GPT-4 because GPT-4 is so much better, then you can adapt what works and make it nice and smooth so that it works with GPT-3.5 as well. The realization was that the LLM is incredibly powerful, but if you want to use it for production purposes, for enterprise, for corporate purposes, it’s not about having a chatbot. The interesting thing is that you have a universal templating machine. This is what is completely incredible. This is very resilient to noise. A little bit more than a year ago, I realized that it was production-grade. We had missed something that was an absolute stunning breakthrough 18 months ago. I rewrote entirely the roadmap of Lokad more than a year ago. We have been frantically upgrading pretty much everything for the last year. We have been quite quiet in terms of communication on that, but over the course of the last 12 months, we have been automating one thing after the next. Things that seemed almost impossible to automate a few years ago have been automated.

When I look at jobs in supply chains, I can see that the magnitude of what can be delivered in a completely robotized way is just stunning. It's even hard nowadays to find problems that you can't automate. In the past, automating every single task was a challenge. At Lokad, we automated the most difficult part, those quantitative decisions, a decade ago. The quantitative decisions were things like figuring out how many units to purchase or produce, or whether you should increase or decrease your price point. Those quantitative answers were automated a decade ago. But what happened during the last 12 months was all the rest. All the rest became cheap, super fast, and easy with these modern LLMs.

Conor Doherty: I don’t want to get this backwards because the next question would probably be, “Well, what jobs are going to disappear?” But actually, I kind of want to go back a step. As you explicitly stated a year ago, we sat here and you called GPT-3.5 a BS artist and likened its intelligence to that of a cat. So, the question is, when you describe that vertigo sensation, what caused you to go on that journey from “it’s a cat” to “we are in an extinction event”?

Joannes Vermorel: It is still pretty dumb, but that is not what matters. The thing is that LLMs are not about having an intelligent discussion. It’s nice, and GPT-4 goes quite far in this direction; that was quite stunning. But again, the strength is those universal templating machines I was describing. Let’s go with an example. You want to place a purchase order with a supplier and you realize that you don’t have the MOQ. You should send an email to the supplier saying, “By the way, for those products, what is your MOQ, your minimal order quantity? Give me the figure.” Then the person will reply, and then you just need to add this value somewhere in your system so that you can compute the decision. This is part of the decision-making process.

Lokad was automating that. What we were doing is: if we know the MOQ and all sorts of other data, we give you the correct answer for how much you should purchase. But getting the MOQ value itself, how do you deal with that? It’s not a difficult problem. You can certainly create an automated system where you have an email template, but then you will have to parse the answer, and parsing the answer is tricky because the supplier might reply with something that gives you two MOQs: for this product, it’s this one; for that one, it’s something else. How do you deal with that? This is not fundamentally a hard problem. It’s not a fancy calculation. But it was the sort of thing that was preventing us from robotizing the entire execution of the process end to end.

We could automate the decision-making part, but not the end-to-end execution of the decision, taking into account what needs to happen before and after the decision-making step. Now, with LLMs, where you have those universal templating machines, you can take the email and ask, “What was the MOQ reported by the supplier?” and you can literally automate those things super, super fast.

So if you ask GPT, an LLM, to invent stuff, it will hallucinate. That’s what they do. But if you use them right, meaning I have an input, I want a transformation, and I get the information out of the transformed input, then you get something that is incredibly robust and production-grade. And it turns out that when you look at what back office white-collar workers are doing, they are fetching little tidbits of information here and there. Something like 90% of their time is spent doing that, plus a little bit of chitchat with their environment. And now you have a universal machine to just automate all of that, and it’s very, very straightforward and cheap.
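To make the “universal templating machine” point concrete, here is a minimal sketch of the MOQ extraction described above, assuming the OpenAI Python client (v1.x); the model name, prompt wording, and sample email are illustrative, not Lokad’s actual pipeline.

```python
# Minimal sketch: raw supplier email in, one structured field (the MOQ) out.
# Everything here (model name, prompt, sample email) is illustrative, not Lokad's code.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_moq(email_body: str) -> dict:
    """Return a {product_reference: MOQ} mapping found in a supplier reply, or {} if none."""
    prompt = (
        "Extract the minimal order quantity (MOQ) per product reference from the "
        "supplier email below. Reply with a JSON object mapping each product "
        "reference to its MOQ as an integer. If no MOQ is stated, reply with {}.\n\n"
        f"Email:\n{email_body}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # extraction, not creative writing
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# The supplier answers in free-form prose, with two different MOQs in one email.
email = "Hi, for ref A-1001 the MOQ is 250 units; for A-1002 we need at least 1,000."
print(extract_moq(email))  # e.g. {"A-1001": 250, "A-1002": 1000}
```

The pattern is always the same: plain text in, one narrow structured answer out, which is what keeps the approach resilient to noisy phrasing.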

Conor Doherty: So again to summarize all of that up until this point, and again Lokad has been doing this for years, the more quantitative decision-making aspect was automated using other forms of AI. Today you’re talking about the more qualitative interpersonal elements that are also subject to automation through LLMs.

Joannes Vermorel: Yes, just think: you want to place your purchase orders. There is computing the quantity, which is what Lokad has been doing for more than a decade now. But then you have all sorts of small things that need to happen. What if there are two products that are duplicates? You have two products, and it’s twice the same reference. How do you detect that? In the past, the answer was complicated. You could engineer a little bit of machine learning and a little bit of specialized NLP, natural language processing techniques, to de-duplicate your catalog automatically. But it used to take, let’s say, 50 hours of software engineering to get something that works.

Now, with an LLM, this de-duplication that I’m talking about is literally 20 minutes of work, and then you will have a production-grade solution to de-duplicate. So you see, the magnitude of it is absolutely stunning. And you have all those little things that were in the way, and that’s why companies need to have all those people: it’s not the grand problem that takes a lot of time. The grand problem, like computing this quantity, was already kind of mechanized; it was the small, super mundane problems that were in the way: small data quality issues, duplicate entries, a missing data point like an MOQ.

Some supplier is late; you want to send an email to get a revised estimated time of arrival and then get the answer. These sorts of things are not super complicated, and it was already possible to automate them before, but every single question and micro-task like that cost something like 50 hours of engineering to get solved. That is a lot, because if you have a hundred of those, we are talking of thousands of man-hours, and you end up with a huge project. Something that I discussed in the lectures is that the cost to manage a software product is not linear, it’s super-linear.

So if you double the complexity, you tend to multiply the cost to maintain this thing not by two but by four. By adding all those things, you were creating a monster piece of software that was very difficult to manage, update, and extend. If instead you have LLMs as a building block where those things are centralized, then not only can those tasks be solved in about 20 minutes, but the overall complexity of your software product grows much more slowly than it used to, because it stays a very, very simple product, and it’s the same trick, this LLM, that is used at every step to solve all those small accidents that you have along the way.
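As an illustration of the “20 minutes of work” claim above, here is a sketch of LLM-based catalogue de-duplication in the same spirit, again assuming the OpenAI Python client; the prompt and product descriptions are made up for the example.

```python
# Sketch: ask an LLM whether two free-text product descriptions refer to the same item,
# instead of engineering a bespoke NLP pipeline. Illustrative only.
from openai import OpenAI

client = OpenAI()

def same_product(desc_a: str, desc_b: str) -> bool:
    """Return True if the model judges two catalogue entries to be duplicates."""
    prompt = (
        "Do these two product descriptions refer to the same physical product? "
        "Ignore wording, abbreviations and ordering differences. Answer YES or NO.\n"
        f"A: {desc_a}\nB: {desc_b}"
    )
    answer = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    ).choices[0].message.content
    return answer.strip().upper().startswith("YES")

print(same_product("Bearing 6204-2RS, sealed, 20x47x14mm",
                   "6204 2RS sealed ball bearing 20 x 47 x 14"))  # likely True
```

On a large catalogue one would first narrow down candidate pairs (for instance with a cheap text-similarity pass) rather than querying every possible pair.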

Conor Doherty: What is the scale then of this when you talk about extinction level event? Can you unpack the implications?

Joannes Vermorel: I’ve been working with LLMs and we have been automating tasks right and left. I see some other companies who are doing that, especially when it comes to linguistic tasks like rearranging information, summarizing information, extracting information from an email and so on. The incredible thing is that we are already beyond human skill, beyond human intelligence.

When I say beyond human intelligence, I mean under a limited amount of time. If I give you an input email and ask you to extract the key information in 20 seconds, you will sometimes get it wrong as a human. If I give you a thousand emails and ask you to extract the key information from each one in 20-30 seconds, you will have a hit rate of maybe 98% and sometimes you will get it wrong.

The LLMs, however, will not only do it in a second instead of 30 seconds, but their accuracy will be way above what an average person would do, even one that has training. That’s why I say we are literally beyond human.

For a lot of things, like not making newbie mistakes, LLMs are incredibly good. For example, if there is a color in the name of a product, maybe the color doesn’t mean that the product is that color. Maybe it’s a device to do a check and the color in the name has nothing to do with the actual color of the product. This is the sort of things where LLMs are actually incredibly good.

It’s like having a servant that is very familiar with the terminology and the jargon of pretty much every single vertical that exists on Earth. So, suddenly, you have something that is a little bit beyond human because if you take a random person, this person is just not readily familiar with the specific technical terms of your industry, and this person is likely for months, if not years, to just make mistakes out of ignorance and not knowing that this term is misleading and, for example, this is a term that is a color but doesn’t refer to a color in this context. It is more like a characteristic of the product or whatnot, and the examples go on and on and on.

And at Lokad, literally, we have been mechanizing tons of things, and the real question is literally, “What can’t we automate?” And it’s a tough question because, so far, pretty much every single thing that we have tried has pretty much worked out of the box. That’s the thing that is mind-blowing is that once you kind of understand what those LLMs are for, you can automate so much, so much.

Conor Doherty: To take the position of a potential naysayer, if you contrast white collar and blue collar jobs, for decades people have talked about how robots and machines and other forms of technology will take the tools out of the hands of mechanics. Yet in many sectors like in MRO, there are parts of the world where there’s a critical shortage of technicians to work on planes. That extinction level event was wrong. How confident are you in what you’re saying today in the context of similar proclamations that people have made? What makes you so sure?

Joannes Vermorel: I mean, first, what makes me confident and sure is that we have been doing it for a year, and literally, I mean, just to give you an example of the stuff that we have mechanized at Lokad, RFPs, you know, requests for proposals. We get gigantic Excel documents sent by large companies with an insane number of questions, like 600 questions. And earlier this year, I think it was in May or something, I said, “Okay, we have yet another 600-question RFP. It takes literally a week, 10 days, full days, to answer all of that. I mean, it’s such a pain to go through those massive documents.” Sorry.

And then I decided, “Okay, I’m just going to mechanize that and recycle all the knowledge base that we had already at Lokad and make an answering machine.” You know, an answering machine. So, we have already the documents, we have already tons of things, and the job of the machine was just, “Write an answer to the question just like Lokad would. Reuse the knowledge base that we have. And if there is a gap in the knowledge base, just answer ‘fail,’ and then we will actually do it manually.”

And literally, doing one RFP was taking more than a week, and automating the RFP thing took me a week. So by the time I had engineered the robot, I already had a positive payback. You know, by the time I completed my automation, I was regenerating the answers with the tool, with something like less than 10% of the questions where I still had to do it manually and extend Lokad’s knowledge base.

But the interesting thing was when we submitted our answers. There was an online submission process: you submit your answers to the, say, 600 questions, and then you get an automated response based on your answers and the boxes you’ve ticked, with 100 more questions that have been generated. So we reapplied the tool. We had thought, oh, we were done with the 600 questions.

By the way, in the end it was more than 100 pages’ worth of answers, so it was a very, very long document to submit. You submit that, and the system managing the RFP process just gets back to you with 100 more questions. Again, we reused the tool, and within a few hours we were finally done. And for all the RFPs that we have done ever since, we just use this tool. That’s just one example among literally dozens.
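For readers wondering what such an RFP “answering machine” might look like, here is a deliberately simplified sketch: answer each question from an existing knowledge base and output FAIL whenever the base has a gap, so a human can fill it in. The retrieval step is naive keyword overlap, and the knowledge-base entries, prompt, and model are assumptions, not Lokad’s actual tooling.

```python
# Sketch of an RFP answering machine: reuse a vetted knowledge base, flag gaps as FAIL.
from openai import OpenAI

client = OpenAI()

KNOWLEDGE_BASE = [
    "Lokad delivers daily purchase order suggestions through a web dashboard.",
    "Data is exchanged as flat files over SFTP; a typical setup takes a few weeks.",
    # ... a real knowledge base would hold hundreds of vetted answers
]

def best_passages(question: str, k: int = 3) -> list[str]:
    """Crude retrieval: rank passages by the number of words shared with the question."""
    words = set(question.lower().split())
    return sorted(KNOWLEDGE_BASE,
                  key=lambda p: -len(words & set(p.lower().split())))[:k]

def answer_rfp_question(question: str) -> str:
    context = "\n".join(best_passages(question))
    prompt = (
        "Answer the RFP question using ONLY the company knowledge below. "
        "If the knowledge does not cover it, answer exactly FAIL.\n\n"
        f"Knowledge:\n{context}\n\nQuestion: {question}"
    )
    return client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    ).choices[0].message.content

for q in ["How are purchase suggestions delivered?", "Do you support SAP IDoc feeds?"]:
    print(q, "->", answer_rfp_question(q))
# Questions the knowledge base cannot answer come back as FAIL and are handled manually,
# which is also how the knowledge base gets extended over time.
```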

Conor Doherty: There are companies though that have a very dissimilar attitude to yours. They believe that generative AI, the large language models that we’re describing, will basically become something of a co-pilot for people currently in the space, like demand planners and supply chain practitioners. They believe that you won’t be replaced by AI, but it’s going to support you, it’s going to be a co-pilot to help you make all the decisions, both quantitative and qualitative, that you’re saying will disappear. Why is that wrong?

Joannes Vermorel: That was, by the way, the sort of thing I was envisioning prior to my epiphany, you know, 18 months ago. If you think like that, LLMs are going to be complete crap. They are going to be a gadget, and you’re completely missing the point. Why? This is the thing: a conversational user interface on its own is a toy. I mean, it’s nice to have GPT as a replacement for your search engine, okay, that’s fine. But if you want to do a repetitive job, what you want is just complete end-to-end automation.

So, in this case, the LLM has to become a programmatic component of your software. As I said, let’s go back to this automated RFP answering machine. The goal is not to have a co-pilot that chats with us to answer this 600-question RFP. What we wanted was a machine that takes the document in, spits the answers out, and is done with it, you see, plus a short list of questions where the FAQ needs to be extended, and be done with it. It’s not about having a co-pilot that you can interact with and whatnot. That’s a complete waste of time. This is not what true automation is about.

And so, my take is that people who are thinking like this, they are not thinking clearly. They just think in incremental terms. They just think of adding a new technology to improve a little bit the way it’s done, as opposed to rethinking entirely how it’s done and unlock a 100x productivity improvement in the process.

Conor Doherty: What jobs in particular do you think would survive the extinction event that you’re describing? What’s left if the quantitative is gone, the qualitative is gone, and everything’s automated?

Joannes Vermorel: So, what’s left is everything that is strictly non-repetitive, okay, but at a high level. Because you see, if you look at a back office white-collar worker, people would say, “Oh, it’s not repetitive. Look, I have to send an email there, I have to ask a colleague over there, I have to do plenty of things that are a little bit heterogeneous.” Yes, but over the course of one year, it’s always the same things that repeat over and over and over. And by the way, that has always been looming ahead. For the last four decades of the software industry, I have always been convinced that it was not an if, it was just a when: all this mundane stuff would eventually be automated.

And back in the 80s, there had been several AI winters where people made grandiose claims that didn’t pan out, with expert systems, with data mining. Expert systems were the late 80s, early 90s; data mining was the year 2000, etc. So there was a series of waves, but the difference is that now it works. I mean, it works, and literally, Lokad is one year down the road and we have automated things that I would not have thought possible to do so easily and so fast. And I’ve seen, again, other companies doing it as well, and the net result is absolutely staggering. It also works without, I would say, that many specialized skills or that much technical overhead. These technologies, LLMs, are very straightforward to adopt.

Conor Doherty: I just want to clarify a phrase that you just said there. You said you’ve seen so many companies doing that. Are you suggesting that behind the scenes, that’s kind of already what’s happening?

Joannes Vermorel: Yes, and that’s why I believe that the timeframe is going to be very compressed. That’s why I said 2030 for the end game, not 2050. Quite a few companies are already moving at maximum speed on that. If you follow the news, you will see that some companies announce like 5,000 people being laid off here and there. This is happening very fast and the speed of the market will not be defined by the speed of the usual company or the average company, it will be defined by the fastest. Because you see, the savings are so large that if you’re late, this technological turn will end you.

And I believe that AI is less significant in terms of technological achievement than the Internet itself. You know, the Internet itself is bigger. But in terms of competitiveness, the Internet took kind of two decades to be set up. You know, it was a slow process, laying out the cables, getting fast, reliable Internet connection everywhere, gradually updating everything, the OS system, making the most of email, etc.

So, it was a slow process where even if you were a laggard, it was not clear that there were immediate productivity gains. So, if you were late to the Internet party, and instead of adopting email in 1995, you only adopted email in 2002, you were seven years late, but that was kind of okay. Your competitors had not slashed their cost by a factor of 10x thanks to the Internet.

And by the way, the Internet created also a lot of bureaucracies because you needed a lot of system administration. It created tons of problems. So, it took literally two decades for companies to digest and become really better with it. Where it’s different here is that to become better with it, it’s a matter of months. And you can slash the amount of manpower that you need for a lot of tasks, and I would say pretty much every single back office task, supply chains being one of those, in months. And that’s where it’s going to be, I believe, very different this time.

Conor Doherty: When you said there are different departments, you’re not including IT, you are specifically focusing on supply chain centric activities?

Joannes Vermorel: Every department needs its own discussion. IT, to a large extent, is going to be tougher to automate. First, because robotizing the sysadmin creates all sorts of potential security problems. It’s going to be difficult. It will come, but I suspect it will take more time. And most of the decisions done by IT are already quite complicated. So, I will say for IT, I suspect it will be productivity savings of the order of 50%. And here, it will be the sort of co-pilot. By the way, that’s what happens at Lokad.

Now, you have a question about something like an obscure piece of software. You used to spend half an hour on the web to kind of get to the bottom of the technical documentation of the vendor. Now, with ChatGPT, you can do that but much faster. Fine, that’s the sort of co-pilot assistant we are talking of. Yes, that’s going to be IT. But I believe for other functions, it can be much faster and on a scale that is going to be much bigger.

Conor Doherty: So, basically, it sounds like the ROI for embracing the technology now is the difference between actually having a return on investment and basically going extinct.

Joannes Vermorel: Yeah, exactly. I think it’s a sort of sharp technological turn. I believe it’s really a mistake to think of that in terms of ROI, because the ROI is so large that if you don’t take the turn, your competitors will end you. So stop thinking that way. Just imagine you are in the garment industry and somebody has just invented the sewing machine. You have people with needles, you know, making the garments, and they would take three days to do one shirt. And then somebody invents a sewing machine and they do a shirt in 5 minutes. That’s the scale of the difference. So, what is the ROI of the sewing machine? The answer is: either you have a sewing machine or you’re out of business. This is it.

Conor Doherty: You gave examples before on Lokad TV about Kodak. You mentioned explicitly Kodak. They invented the digital camera and didn’t embrace it and went bust.

Joannes Vermorel: Yes, and the thing is those revolutions were kind of limited to one vertical. You know, with the digital camera, yes, something like 90% of the players in the market for film photography equipment just went bust. But again, that was a verticalized event. It was an extinction event, but limited to a specific vertical.

Now, the interesting thing with LLMs is that they apply to literally pretty much all the white-collar jobs and more specifically to the back-office ones. Because you see, front office, if you’re talking to someone, if you need this personal touch and whatnot, even if you could in theory mechanize, it’s not clear that the market is willing to do that.

For example, Amazon, back in, let’s say, the year 2000, you could technically buy furniture online. But people were not ready. E-commerce was still, they did not trust e-commerce enough to buy a $3,000 sofa online. It took a decade later. So, a decade later, now it’s part of, I would say, yes, people say, yes, you can buy a sofa online. You can even buy a car online. It has become part of the culture.

You technically could have sold cars and furniture in the year 2000 online. It was not a technical problem. It was more like, are people willing to do that yet, or does it take time? So, I would say for the front office, things will happen a little bit slower because even if you could robotize, just like Amazon could have sold furniture in the year 2000, but this segment only took off a decade later.

The market will move, I would say, slower because you will move to the pace of the expectations of the wider audience. So, it’s going to be a little bit slower. But for the back office, absolutely no, there is no such limit. Nobody cares in the slightest if your execution of your production and your production schedule is completely robotized or if you have an army of clerks to do that. Your clients do not care, nobody cares, except internally.

Conor Doherty: Except the clerks who you’re arguing will be made redundant, is what you’re saying.

Joannes Vermorel: Again, Lokad was not the one who invented LLM. It was done by other people. I think it was invented by people like OpenAI. They went into that, they didn’t know what they were doing by the way. It’s very funny because there were interviews of Sam Altman who is now saying, well, if we had known, we would not have set up OpenAI like a nonprofit. We would not have published every single trick that we uncovered along the way.

So, you see, they were really into this idea of an LLM. It was just sequence continuation: you have a piece of text and you continue it. I think I will do a lecture on that. There was a series of innovations that made the LLM the true, I would say, technological marvel of our time. But the bottom line is that it was, I believe, unexpected even for the companies who invented it. The Transformer architecture originated from Google, but Google was not the one to unlock it; that was another company. So, bottom line, it was a little bit of an accident. Obviously, chance favored people who were well prepared. There were people doing very smart things, looking in the right direction. But the consequences were incredibly surprising.

It’s very interesting because even AI researchers, like let’s say Yann LeCun at Facebook, are very skeptical about the power of LLM. My firsthand experience using them is that it’s a real deal. So, that’s very interesting. It was such a surprise that even the people that have pioneered the field do not see the landmark that they represent.

Conor Doherty: It is worth dropping a pin just there because, like to make a meta comment, Lokad’s role in this is just observer. As you’ve described, we’ve been using both sides of AI, both for the quantitative and qualitative.

Joannes Vermorel: Lokad is not doing research on improving LLMs. It’s a super specialized topic, and there are companies that are doing it very well.

In France, we have Mistral AI, which is a very strong team doing it, and they now rival OpenAI. So yes, good, I want to see a lot of competition in that. But for Lokad, it has a lot of consequences. We were automating the hard part in supply chain, the quantitative decisions: forecasting, orders, allocation, pricing, all the quantitative stuff. But now we can also handle the end-to-end execution of that, with all the minute things that you need to do beforehand, acquiring the missing tidbits of information: a few things are missing, you need to look them up, either shoot an email or look it up online, etc., you know, plenty of small tasks.

In the past, we used to say to clients, well, if you have this problem, please handle it on your side. We could automate that, and sometimes we did, but it was kind of expensive. Now we can really robotize it, and the same goes for what happens after the decision, such as following up with suppliers, following up on small issues and whatnot. All this kind of mundane, repetitive stuff can be robotized as well.

Due to the fact that it’s already out there, we cannot afford not to do it. This is what our very pressing clients demand from us. Because again, it’s not, when you have a sewing machine, not using the sewing machine is just not an option. You can’t just say, you know what, we’re just going to pretend we have never heard of sewing machines and we’re going to keep sewing shirts with needles. No, you have to use it.

Conor Doherty: You’re talking about it, it’s survival on every level. It’s employees, it’s companies, it’s the entire industry, market sector, everything. What can be automated, will be.

Joannes Vermorel: Yes, and by the way, this will be a surprise, I think, for white collars. But if you look at blue collars over the last 150 years, they have been going from one revolution to the next. The introduction of electricity was a mass extinction event. There were like a thousand different things where suddenly they were automated. Again, 150 years ago in Paris, the most common job, that was almost something like 10% of the population, was people who were carrying water. So there was like 10% of the people had buckets and were carrying water, and that was the number one job, and it went extinct.

So obviously, the positive side is that every single time you kind of eliminate those jobs, society as a whole gets richer because it means that people are doing things that are more interesting, of more value, and things just sort out. The things will sort themselves out just like they did for over the last 150 years for all the industrial revolutions. It’s just the only thing that is surprising is that it touches a class of people, that is white-collar jobs, that were, so far, had been relatively, and I say relatively, protected. So now, well, it just happened here, but you know, it will happen again.

Conor Doherty: You don’t even have to go back 150 years. I mean, in the last few decades, most people have lived through a few extinction events in certain areas, like VHS was made redundant by DVDs, email made fax redundant.

Joannes Vermorel: For example, my parents started at Procter & Gamble more than 40 years ago. One of the things they told me they did as entry-level employees, and that was somebody’s whole job, was that during a contract negotiation, they would take a young employee and have this person compare the two documents line by line, the draft and the counterproposal returned by the supplier, partner, or whatever, and just mark with a pen the sections that had differences. And that would take hours.

And so they were paying a lot of people just to find the differences. Now Microsoft Word just does a diff of the documents, or you use track changes, and it’s done. So literally, there are quite a few tasks that have already gone extinct. But it happened, I would say, gradually. That was the thing: those changes happened at a slow pace. The interesting thing with LLMs is that it’s quite a step, a step where, I mean, literally, we just walked 20 years into the future in one year. That’s what it feels like after putting these technologies into production for the last year.

Conor Doherty: Lokad often collaborates with universities, training people to enter the supply chain. Does that mean we’re just going to abandon all of this? Is it a waste of time to study any supply chain science because it’s going away?

Joannes Vermorel: No, I don’t think so. What we’re teaching is not the mundane tasks, like how to send an email to a supplier to get the latest MOQ. If you look at the content of the lecture, it’s more about understanding the cost in dollars of a stock out and how to think about it.

If you ask ChatGPT for an answer to those questions, it is going to hallucinate nonsense. Even with GPT-4, we are not there yet, not quite. The sort of stuff that I touch on in the lectures is not the stuff that gets automated. But when I look at supply chain companies, the percentage of time that people in supply chain spend thinking long and hard about fundamental questions, like what quality of service even means in the eyes of our clients, is very small.

Most of what I cover in my lectures are the fundamental questions that are very frequently deceptively simple such as what does the word future mean? What does that mean anticipating correctly the future or adequately? These are genuinely difficult questions and if you’re thinking about those questions and you’re capable of bringing relevant answers to your company, you’re not on the verge of being automated. That’s why I say, and I still stand by this position, the LLMs are still incredible bullshitters.

So if you want to get this level of understanding, we’re not there yet. You’re going to have like hallucination and whatnot. But if what you want is to get this humdrum out of the way automatically, then it’s a done deal. That’s why I say, focus on the fundamentals, focus on the questions that require you to think long and hard. That is not going away. What is going away is the ambient noise, the endless trivialities. That is going to be solved with Large Language Models (LLMs).

Conor Doherty: All right, well, there are some audience questions to get to and we’ve been going for 50 minutes. A lot of those questions actually address what I would have finished with anyway. So before we get to the audience questions, I would like you, if at all possible, to give an executive-level summary for anyone who missed the first few minutes, and, importantly, your call to action for all segments: people working in the back office, CEOs, right across sectors.

Joannes Vermorel: The short answer is that LLMs, Large Language Models, represent an extinction event for back office corporate functions, the white-collar jobs where you have people who just take data in, transform it, reshuffle it, and push it out. Look at your organization at all levels: you have literally armies of people doing just that. They take some emails and maybe 20 different small, disparate, heterogeneous data sources, they do a little bit of crunching, and they move the stuff one step forward.

The message is all of that is already possible to automate. And quite a few companies are moving at full speed doing it right now. You can already see in the news that not only they are in production, but they have already started to remove people from these positions. And I’m not talking of like a few people right and left. I’m talking of large companies making announcements like, we had 6,000 people to do this, now we have 50. And there was like a gigantic layoff of that. And they proceed at full speed and I expect these things will just intensify.

Again, back office white-collar jobs, that’s going to be the target. Supply chain is one of those functions, and I suspect there are half a dozen other functions that will be hit as well. Accounting is probably going to be massively impacted too, because in accounting you have the super smart, high-level reasoning, which is how you want to organize your accounting structure and whatnot, but you also have the humdrum: somebody sends me a PDF, I need to extract half a dozen relevant pieces of information just to generate the clerical entry that matches this PDF document. Okay, that’s done. That can be entirely automated.

So all of that is going to disappear super fast. And for some companies, it’s not even the future, it’s already done. It’s already present day. And we are talking of months. So, executive summary is, extinction event, it’s a matter of months. Yes, you have to act fast. And by the way, Lokad, I mean, when I realized that, I spent the first three months of 2023 just rethinking the entire technological roadmap of Lokad because everything that I had been envisioning before was toast.

So for us, it has been quite a drastic turn. I mean, internally at Lokad, and we have been automating quite a few things. And literally, we were so busy just doing it that we were not communicating that much about it. But that has been my daily schedule for a year now.

Conor Doherty: Before we transition to the audience questions, it is really worth pointing out that traditionally we would talk a lot about probabilistic forecasting, stochastic optimization, all of that. That wasn’t even part of this conversation because that’s already settled. That’s been the state of the art for years. So those quantitative decisions, worth highlighting, inventory, purchase orders, allocation, pricing, as far as at least Lokad’s concerned, that was settled years ago. That was already done. People are at least semi or ambiently aware of that. The thrust of today is you’re talking about everything else that was left.

Joannes Vermorel: Yes, exactly. The humdrum, the noise, the ambient small stuff, all those thousand small accidents, the things where you didn’t need 10 years in supply chain. Again, just think of asking one random question to a partner, a transporter, a supplier, and whatnot. And by the way, companies were routinely hiring hundreds of people with just a month of training, and those people would be able to operate. Anything that requires less than six months of training is most likely something that an LLM can automate.

You know, if it’s something somebody picks up in just a few months, okay, that can be automated. If it takes ten years of skill and dedication and patience, no, and that work would probably be more strategic anyway. That’s the distinction. Yes.

Conor Doherty: All right. I recommend taking a sip of water because there are some questions to get through. So everyone, thank you very much for your questions. Our producer has collated the questions. I don’t have access to the YouTube chat, so I don’t know how many were asked, but any that were similar were lumped together. Any questions that we don’t answer today, we will provide more detailed responses to either in a follow-up video or on LinkedIn.

The first question is from Constantine. He asks, “Which job titles might become obsolete? Do you see a future for forecasting and planning roles?”

Joannes Vermorel: Roles like supply and demand planner, inventory analyst, category managers, all of that, I would say it’s gone. It’s literally already gone. The people watching might not agree, but you’re just reporting. The forecasting part, for example, Lokad automated that almost a decade ago. The thing that was not automated was usually the small data accidents like duplicate products or identifying which product is the descendant of another product. Like this product is just, for example, you just take two product descriptions and you say, oh, this is just generation four of a device and that’s generation five. So this one is just kind of the same, a little bit better. That’s the sort of thing an LLM can definitely automate. So, all of that is completely automatable and that’s gone.

What cannot be automated would be something more like a supply chain scientist where you craft the numerical recipes that automate everything that is not automatable. You can automate the work, but you can’t automate yet crafting the numerical recipes and you can’t automate yet the strategic thinking that goes into turning everything into dollars of error, dollars of profit. You need to be able to have this financial perspective that an LLM is not able to do. But all those mundane management jobs where you have a lot of repetitive stuff, that is already gone. And for forecasting, it’s the same thing. That has been Lokad’s position for the last decade. So I mean, there is nothing new.

Conor Doherty: Well, I’m sure everyone will be satisfied with that response. Moving to the next one. Thank you. Moving to the next question from Sherar. “Joannes, could you please elaborate on the benefits of AI and supply chain with real-time examples?”

Joannes Vermorel: First, let’s define real-time. In supply chain, we’re not talking about real-time in the same sense as keeping an aircraft flying, which requires millisecond response times. In supply chain, even if you want to give instructions to a truck driver to steer the traffic, a minute lag is not that bad. Real-time in supply chain would be a robot within a warehouse doing automated picking. Most problems in supply chain are not real-time. The vast majority of problems can afford an hour of delay. There are very few supply chain questions that need an answer within less than an hour.

So again, with LLMs, you want to get information: do a web search, grab the results, bring them back. You want to know the address of a supplier and automatically retrieve it. It’s very straightforward to have some logic that does it automatically. It is easy. We are talking about all the fundamental decisions, everything like planning, scheduling, buying, producing, allocating, pricing updates. And then what we have added is all the humdrum around it: master data management, communicating with the network, notifying clients of delays, notifying suppliers that they have problems and whatnot. All of that can now be automated. It’s not super smart, but it’s a second layer that can be automated as well.

Conor Doherty: But the point being that it doesn’t necessarily require lateral thinking. If it’s just, the term you use was templating, if it’s just, “Look for this type,” it’s almost Boolean. If this, then.

Joannes Vermorel: Yes, exactly. And the big difference with, let’s say, the expert systems of the ’90s, when I say LLMs are universal templating machines that are noise resilient, is that it doesn’t matter if the email is poorly phrased. It doesn’t matter. I mean, it doesn’t even matter if the email is in Russian or Japanese. These things can read pretty much every single language, unless somebody sends you a message written in a rare language like a Zulu dialect. If it’s written in any language spoken by 100 million people worldwide, it’s done. And I’d say 100 million is high: any language spoken by at least 10 million people worldwide, you’re good.

Conor Doherty: The next question is from Tamit. “Isn’t the pricing itself representative of the perceived bottom line impact, particularly pricing of Chat GPT relative to Gurobi or CPLEX?”

Joannes Vermorel: Gurobi and CPLEX are mathematical solvers, so they are not even in the same class of products. They are completely different tools and they don’t even remotely address the same things. They are mathematical solvers. So, Gurobi and CPLEX, for the audience, is you state a problem expressed as a list of constraints and an objective function, and that gives you the answer. It’s a mathematical component.

And I’m not sure if the episode has been published yet, but we have just shot one about stochastic optimization. No, it’s coming soon. The bottom line is that the reason Gurobi and CPLEX are kind of non-starters for supply chain is that they don’t deal with stochasticity. We will discuss that in a different episode, but it is a completely different class of tools. LLMs are about templating text and doing all sorts of text reformulation, extraction, and quick analysis on text data. And when I say text, I mean plain text: sequences of letters, numbers, and characters. So they address completely different problems, and they don’t even remotely address the same things.

Conor Doherty: Like my previous attempt to plant a flag, many people still don’t fully understand AI’s impact. It’s an extinction-level event for nearly all jobs, except for those at a very high level.

Joannes Vermorel: The reason why, I mean, for example, Gurobi and CPLEX are a non-thing is that those tools have been around for four decades, and the problem is, again, we will discuss that in another episode, that they don’t address the stochastic aspect head-on. So this is a non-starter. And even if they did, you would still need someone like a supply chain scientist to use them. So it is not a quick win; it’s not something that can happen in hours. With LLMs, for the sort of very mundane, small problems, you can get solutions within literally minutes.

Conor Doherty: Could the increasing number of court cases for royalties or financial compensation from individuals whose data contributed to the training of LLMs potentially hinder progress and adoption by driving up prices?

Joannes Vermorel: Forget about it. Some companies are showing that they can have LLMs close to the performance of OpenAI by using a corpus that is much smaller, like just Wikipedia. So, the answer is no. We are not talking about generative AI for images that could infringe on the intellectual property of Disney or whatever. We are talking about stuff that is super mundane: here is a piece of text, tell me who sent the email, what the MOQ is, whether this person gives a definitive answer or a vague one, whether this person is confident in the accuracy of the answer being given. Those are the sorts of things where you can get the answer straight from an email, automatically. That’s what we’re talking about. So that’s going to be a complete non-issue.

Even if people have to retrain their LLMs because they must discard 3% of the input dataset, it’s fine. That’s what Mistral, the French company, proved: you can retrain a production-grade LLM, say at the OpenAI level, for just a few hundred thousand euros. So this is already done; there is no going back. Workarounds around those legal issues will at best add a little bit of noise, but it’s already a done deal, and it’s not going to change anything.

Conor Doherty: And ultimately, you’re using it as a templating machine. Again, you’re telling it exactly what you want to find and giving it the input, like, “Find this information in this email.”

Joannes Vermorel: And again, we’re talking about back-office jobs. We’re not talking about writing the next Harry Potter and getting sued by the lawyers of J.K. Rowling because your model is hallucinating a close copy of Harry Potter. That’s not what we’re talking about. Just think of the last 100 emails that you’ve written and how much ingenuity, originality and, I would say, genuine human-level intelligence went into them. Chances are, not that much. Even if I look at what I write on a daily basis, most of it is very mundane. And that is what is being automated, super fast.

Conor Doherty: It is worth appending a little comment to that: if the impression has been that ChatGPT itself is what we’re talking about, no, we’re talking about LLMs as a technology in and of themselves, not specifically the chat product you interact with online.

Joannes Vermorel: And more specifically, LLMs as a programming component. Just as your software has subsystems, the relational database, the transactional layer, the web server and so on, here you have one more subsystem that happens to be an LLM, and that’s just one way to do certain steps in your program. Do not think of LLMs as something that comes packaged with a chat interface. Nearly everything that I’ve automated over the last year has no interface. It’s literally a script that does something end to end, no user interface. It’s an element of an overall stack, in other words.
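
To picture what “an LLM as a subsystem, no user interface” can look like in practice, here is a minimal sketch, assuming the OpenAI Python client (v1 API) and hypothetical file and field names; it illustrates the pattern rather than Lokad’s actual code:

```python
# Minimal sketch: an LLM used as one step inside a batch script, with no chat UI.
# Assumptions: OpenAI Python client (v1), OPENAI_API_KEY in the environment,
# and a hypothetical "supplier_emails.jsonl" export; field names are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "From the supplier email below, return JSON with the keys "
    '"sender", "moq", "definitive_answer" (true/false) and "confidence" (low/medium/high).\n\n'
    "Email:\n{email}"
)

def extract_fields(email_body: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",   # a GPT-4-class model works the same way, it just costs more
        temperature=0,           # extraction, not creative writing
        messages=[{"role": "user", "content": PROMPT.format(email=email_body)}],
    )
    return json.loads(resp.choices[0].message.content)  # sketch: assumes the model returns clean JSON

# End-to-end batch job: raw emails in, structured rows out, nobody in the loop.
with open("supplier_emails.jsonl") as f:
    for line in f:
        print(extract_fields(json.loads(line)["body"]))
```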

Conor Doherty: James asks, what advice would you give to a young person looking to enter supply chain in terms of skilling themselves and how do they best sell this skill in the context of the extinction event you’ve described?

Joannes Vermorel: LLMs force you to up your game in terms of strategic understanding. Mastering dumb recipes like ABC analysis or whatnot is a non-starter. Equip yourself with the finer instruments that GPT-4 still lacks, like the capacity to conduct in-depth thinking: to think long and hard about problems and end up with lines of reasoning that are generally correct, and to produce a synthesis of a problem that is superior to the one GPT-4 produces. Those are very valuable skills that will remain, the ones where you really draw on upper-level intellectual capabilities that are still beyond the machine.

And those are the sorts of skills that I believe we are still not close to automating. For example, if you look at the voices in the research community that are very critical, you could look at what Yann LeCun is saying. He says, yes, LLMs are not the answer to general intelligence, and on that front I agree with him. Where I disagree is that I believe we don’t need general intelligence to still face an extinction event for back-office jobs. We just need LLMs, and LLMs are a lower level of intelligence, but that is more than enough to replace something like 90% of the manpower, and that’s going to be quite radical. For the 10% that are left, we will see.

So, for a young person entering the field: go, for example, to my lectures, and you will see that most of it is not about the trivial, not about the humdrum, not about small details. It is about fundamental questions, like, “What are the problems we are fundamentally even trying to solve?” I have a whole chapter about personas; it’s difficult. What are the problems we are trying to solve for this supply chain? The answer varies from vertical to vertical. This is difficult.

What are the programming paradigms that are relevant? Because, again, as I said, LLMs can automate tons of things, but the numerical recipes don’t write themselves. Yes, LLMs can help, but they lack the higher-level judgment needed to assess adequacy. So even if the code is written with the help of a machine, you still need, to a large extent, and I think that will stay, human-level judgment to see whether it is really adequate.

So here, LLMs will be a booster, but they will not replace programming skills. If you have programming skills, those skills are going to be even more valuable, because you will be even more productive with LLM technologies. So my focus is fundamentals, critical thinking, strategic-level analysis, and then all the hardcore topics like programming paradigms and the relevant mathematical instruments. For example, if you want to reason correctly about a probabilistic forecast, you need a high-quality understanding of those mathematical instruments, and that is not going away. GPT-4 is not automating that.
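
To make the probabilistic-forecast point concrete, here is a toy sketch with made-up parameters (not a recommended demand model), showing why the output is a distribution you reason about rather than a single number:

```python
# Toy illustration: a probabilistic forecast is a distribution over future demand.
# The negative binomial parameters below are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
demand = rng.negative_binomial(n=5, p=0.3, size=100_000)  # simulated weekly demand scenarios

point_forecast = demand.mean()            # what a classic point forecast collapses this into
stock_for_90 = np.quantile(demand, 0.90)  # stock level that covers roughly 90% of the scenarios

print(f"mean demand ~ {point_forecast:.1f} units; 90% service stock ~ {stock_for_90:.0f} units")
```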

Conor Doherty: Lionel asks, how do AI-driven supply chain solutions affect small and medium-sized enterprises compared to larger corporations?

Joannes Vermorel: I believe that the effect will be even more pronounced for smaller companies. Why? Because large companies could afford large, specialized bureaucracies; small companies could not. Small companies knew they could not compete with the large ones because they could not have a department with 200 people and 10 different specializations and whatnot.

But the interesting thing is that productivity is so high with these sorts of tools that going all-in on automation suddenly becomes super accessible even for small companies. And by the way, Lokad is about a 60-employee company now, and I’ve been automating tons of things left and right, and it can be done super swiftly.

That’s the interesting thing: it doesn’t take a project with 20 software engineers to get things done. The sort of achievement you can obtain within a matter of hours is staggering if you’re doing it right.

So my take is that medium-sized companies, maybe not the very small ones, but any company that is, say, $50 million in revenue and above, will be able to mechanize things at an incredible pace and rival what the super-large companies are doing, because very quickly the bottleneck will just be the LLMs, and you have access to the exact same LLMs as, say, Samsung or Apple or whichever giant you have in mind.

You have access to the same tools. If you were competing in terms of analysts, yes, Apple probably has vastly more talented demand analysts than you do, but they have access to the exact same LLMs as you. So it’s kind of a great equalizer in terms of capacity for automation.

Conor Doherty: Next question from Nick, how has the utilization of LLMs, as one of the pioneering methods at Lokad, influenced your business’s performance metrics such as churn rate, new subscriptions, and customer satisfaction?

Joannes Vermorel: Overall, I would say we are only about 12 months into production-grade usage. What we have automated now is done with a quality beyond what we used to achieve manually. You can literally see the sorts of things we have automated: they are done better than before, and very frequently with 100 times less manpower than we used to need.

For the audience who is not familiar: at Lokad, as an enterprise software vendor, we are talking about sales cycles that are long. I would love to be able to close deals with clients in three weeks, but unfortunately it’s more like a three-year process, sometimes with an 18-month RFP process that just drives me nuts, and that’s even with AI doing the RFP. So, it is slow. That’s why I say the front-facing stuff, sales and whatnot, is slow, but the feedback from our clients has been incredibly positive.

It can be mundane things like automatically generating a report of a two-hour discussion with a client. The report is very well organized and captures all the key points discussed. We have our own internal technology for crafting these high-quality memos after a meeting. It works beautifully, and we’ve received very positive feedback from our clients on that.

My perception is that the tasks we have automated are done better than before, and at the bare minimum we have achieved something like a 20x productivity gain. That’s absolutely stunning.

As for subscription rate and other metrics, it’s too early to tell. Being an enterprise software vendor, my sales cycles are impossibly slow. We will discuss that in a few years.

I believe it is wrong to just follow the numbers. The numbers come too little, too late. Just think of Kodak: digital photography was nothing until it was everything. Kodak was like the guy in free fall who says, “So far, so good.” No, you’re in free fall; you’re going to hit the ground hard. You are not fine.

By the time people see it in the numbers, those companies will already have robotized their back-office armies. My prediction is that there will be companies moving forward on this, and I see those companies becoming the Amazons of the next decade.

So, the bottom line is that they are moving at full speed, and if I think five years down the road, I can already see those companies outcompeting all their peers with prices their competitors will simply not be able to rival. And then there is agility. The problem when you have an army of people, and I’m talking about a large company, with hundreds of people involved in planning, S&OP, forecasting, all of that, is that you’re slow.

You’re a big bureaucracy. By definition, if you have 200 people, you cannot be agile; you have way too many people. If you can trim that down to 20 people, then you can be like a tiger: super agile, super fast. And that means those companies will outcompete massively on cost and massively on agility. That’s a lot; that’s really a lot.

And beyond that, they will outcompete on the quality of execution. There is a saying in the software industry that everything that depends on manual intervention is unreliable. You cannot achieve reliability if there is a manual intervention in the middle.

So what I see is that, even in terms of quality of execution, the reliability will be off the charts compared to manual processes. That means agility, cost, reliability, performance; again, that’s why I say extinction event. The companies that do this will survive; the ones that don’t will be gone within a decade.

So, it will be slow to unfold because, again, there is some inertia. In France, for example, prior to starting Lokad, and I remember it vividly, it was 2004, I was a student just back from two years in the US, and I was telling retailers, “Amazon is going to end you.” And people were saying to me, “Oh, e-commerce is just a fad. They don’t even have 0.1% market share; we don’t care, it’s nothing.”

And for me, it was already written. There was no question; it was just a matter of timing. Either, as a retailer, you take the turn toward e-commerce, or Amazon and its peers end you. And that has unfolded, by the way. I’ve seen quite a few of those companies go bankrupt. It took a decade, but it happened. And that is what is going to happen to lots of other companies.

And the thing that makes LLMs very interesting is that the impact is not specific to a vertical. Some verticals will be hit harder than others, but the bottom line is that anything with those back-office support functions will be impacted massively.

Conor Doherty: It is important to add to the point you made that, in Lokad’s example, the functions you described as automated with LLMs come on top of everything else that had already been done with AI. So again, it’s not just, “Oh, we have a few things.” What you’re talking about is a highly trained workforce where all the mundane, humdrum stuff, both quantitative and qualitative, has been automated as much as humanly possible, thus liberating all those smart people to focus on the issues that actually matter. And if you have a company doing that versus one that is not, it’s Darwinism, basically.

Joannes Vermorel: Exactly. The beauty of it is that it’s Schumpeterian creative destruction at play. It is for the greater good that companies become richer. For example, if Paris still had 10% of its population carrying water, Paris would be a very poor city.

Paris has become a first-world city, by our modern-day standards, only because we don’t keep 10% of the population busy with such menial work. It is by liberating people from incredibly tedious jobs that we can afford to do art, to be creative, to be inventive.

In companies operating supply chains, everybody is firefighting all the time, dealing with small, minute messes, grains of sand in the machine that derail everything, not in grand, epic ways, just in stupid ways, and it consumes all the oxygen.

All those small things consume all the oxygen, and people can’t even think because there is so much of it. So I believe this is going to be something for the betterment of supply chain, because suddenly people will be able to think strategically and not be entangled in a zillion tiny distractions that do not deserve their human attention.

Let’s have a million semi-dumb assistants, because that’s what LLMs are, a million semi-dumb assistants that just deal with these things that do not deserve human intelligence.

Conor Doherty: Last two questions. This one’s from Lionel. What examples of successful AI and human collaboration in supply chain operations can we learn from?

Joannes Vermorel: Do not think in terms of collaborations. That’s a mistake. There will be no generic co-pilot.

In the end it’s always humans and machines, obviously. So yes, there is a form of cooperation, but it doesn’t take the form you might envision. It is not a co-pilot. When I automated the RFP answering machine, what did the cooperation look like? I sat down at my desk, I spent a week coding this answering machine, and then I had an answering machine.

Every single time an RFP comes in, we run the machine and get the answers. That’s what cooperation looks like. And then, when OpenAI releases a GPT-4 Turbo or whatever new model, I do a little update to my code to take advantage of the latest thing, and we are back in business.

This is cooperation, but in the sense that I’m coding something, and when things change, I revise my code a little bit. That’s the sort of cooperation we’re talking about. It’s not that I’m dialoguing with a machine. I don’t dialogue with GPT or anything. This is not how it works. This is not how the game plays out.
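
As a rough sketch of the shape of such an answering machine (hypothetical file names and prompt, assuming the OpenAI Python client; this illustrates the pattern, not Lokad’s implementation):

```python
# Sketch of a batch RFP answering machine: no dialogue, just a loop over questions.
# Hypothetical file names; assumes the OpenAI Python client (v1 API).
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4-turbo"  # "revising my code" when a new model ships can be as small as changing this constant

reference = open("company_reference.md").read()  # curated internal facts used to ground the answers

def answer(question: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": "Answer RFP questions using only the reference material below.\n\n" + reference},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

# Batch run: questions in, answers out; the human work was writing this script once.
with open("rfp_questions.txt") as qs, open("rfp_answers.txt", "w") as out:
    for q in qs:
        if q.strip():
            out.write(q.strip() + "\n" + answer(q.strip()) + "\n\n")
```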

So, do not think of LLMs as something cooperative. Most of the stuff that we automate, we just automate it completely, and then there is nobody involved anymore. This is just done.

To give examples: the Lokad website is entirely translated automatically, and the beauty of it, you can look it up online, is that we don’t translate the English copy, we translate the HTML directly. Take the raw HTML and translate it as-is; we have saved 90% of the effort, because suddenly we can iterate on everything, and the LLM is smart enough to know which parts are HTML tags that must not be touched and which parts are actual English that needs to be translated. Beautiful.
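
A minimal sketch of that “translate the raw HTML” idea (hypothetical directory layout and prompt, assuming the OpenAI Python client, and glossing over the chunking that very long pages would need):

```python
# Sketch: translate raw HTML with an LLM while leaving the markup untouched.
# Hypothetical directory layout; long pages would need to be split into chunks.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def translate_html(html: str, target_lang: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[{
            "role": "user",
            "content": (
                f"Translate the visible English text of this HTML page into {target_lang}. "
                "Do not alter tags, attributes, URLs or code samples.\n\n" + html
            ),
        }],
    )
    return resp.choices[0].message.content

# Hypothetical layout: English pages in site/en, French output in site/fr.
out_dir = Path("site/fr")
out_dir.mkdir(parents=True, exist_ok=True)
for page in Path("site/en").glob("*.html"):
    (out_dir / page.name).write_text(translate_html(page.read_text(), "French"))
```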

So that is already done. For the audience: all the pages we have for the Lokad TV videos include timestamps. We used to do the timestamping manually; now it’s done automatically.
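
For illustration, a sketch of that timestamping pattern; the transcript is processed in chunks because an hour of text may not fit into one context window (hypothetical file name and chunk size, not Lokad’s actual pipeline):

```python
# Sketch: turn a long, timecoded transcript into "HH:MM:SS topic" chapter lines.
# Chunked because an hour of transcript may exceed the model's context window.
from openai import OpenAI

client = OpenAI()
CHUNK_LINES = 200  # illustrative; sized to stay well under the model's context limit

with open("episode_transcript.txt") as f:   # lines like "00:41:36 SPEAKER: spoken text"
    lines = f.readlines()

chapters = []
for i in range(0, len(lines), CHUNK_LINES):
    excerpt = "".join(lines[i:i + CHUNK_LINES])
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "List the topic changes in this transcript excerpt, one per line, "
                              "formatted as 'HH:MM:SS short title':\n\n" + excerpt}],
    )
    chapters.append(resp.choices[0].message.content)

print("\n".join(chapters))
```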

So that’s the sort of thing where you want to take a one-hour discussion and create timestamps automatically: done. I could mention things that are more arcane, because where we get the most benefit is in back-office jobs at Lokad, not customer-facing work, but it is arcane stuff.

The point is that it would take me too much time to explain why we need those things in the first place, but the bottom line is that the examples go on and on. Usually we try something, and within the day it’s automated. That’s what it looks like. Yes, there is a little bit of messing around with the prompts, but nowadays the question “what cannot be automated?” is harder to answer than “what can be automated?”

Conor Doherty: It’s interesting you make that point, because when you gave the example of summarizing discussions, and this speaks to what you just said, how much further could we go? In the back office, the discussion now is: how can we take summaries of discussions, say with clients or prospects, and, based on what has been discussed, automatically search the website and insert relevant links in the relevant places? What can’t we do? Well, it turns out we can do that; we’re working on it. It’s hard to shortlist what can’t be done with an LLM.

Joannes Vermorel: Right now, everything that is truly high level, and I’m improvising the wording here because we lack the words for it, I would call high-level human intelligence, or higher forms of intelligence: things where you need to think long and hard, potentially for hours, to get the answer. If it’s something where you can give an instinctive answer, the LLMs can do it too.

But just think of very difficult questions such as: what does quality of service mean for our clients? This is a very difficult question. What should our priority target segments be? Macro questions for the company, the sort of questions where you can literally spend weeks getting to the answer, are where LLMs still fall short.

If you have a question so important that you can spend weeks answering it, high-level human intelligence will give you a better answer than GPT-4. But if it’s a question where you only have, say, 60 seconds of brain time to get the answer, then the answer you’re going to get from a human is not going to be very good. The clock is ticking. If you give me 60 seconds to answer anything, it’s not going to be a good answer.

Conor Doherty: The point being, perhaps you can do that once, but not every 60 seconds, every hour, seven or eight hours a day, 300 days a year, for 50 years. That’s the difference.

Joannes Vermorel: That’s the difference. Obviously, if I can rest for 30 minutes first, then yes. But the LLM doesn’t get tired. You can run it and literally automate millions of operations a day, and it’s not even difficult.

Conor Doherty: This is the last question from Lionel. How can small countries leverage AI in Supply Chain management to overcome their unique geographic and economic challenges, and what are the implications for local job markets?

Joannes Vermorel: The beauty of it is that LLMs are incredibly accessible. The bandwidth requirements to use LLMs are negligible; you can literally send kilobytes of data and it works. LLMs are operated remotely, so if you’re in a poor country, as long as you have a halfway decent, low-bandwidth Internet connection, you’re good.

These things do not require high-speed connectivity, so that’s fine. They do not require a super-talented workforce either; that’s the beauty of it. Of all the quasi-engineering skills I have had to acquire over the last two decades, prompt engineering is probably the easiest. It’s literally something where, in a matter of hours, you will get it.

That’s why there are now children getting all their homework done by ChatGPT. It is easy, child-level easy. And that’s why I say adoption is going to happen fast. Don’t buy it when somebody tells you, “I have a prompt engineering degree.” What are you talking about? It’s the sort of thing where, if you work on it a little, you will get the hang of it within literally days. It’s more difficult to master Excel than to master prompt engineering.

So, bottom line, if you’re in a poor, remote country, it’s super accessible. By the way, did I mention that this technology is cheap? Dirt cheap. Just for the audience to get a sense: look at our website, it’s gigantic. We are talking about a thousand web pages; if we were to print it, it would probably be around 3,000 A4 pages. The FAQs alone are enormous, more than that.

So we are talking about a big, fat website, and we translate it into seven languages. The cost of one batch from English into all those languages, again something like 3,000 pages’ worth of text, is $150 with OpenAI. That’s what I pay. And by the way, the cost of doing it with freelancers, which is what we used to do, was something like $50,000 per language.

So the cost went from something close to a quarter of a million dollars, or above, for the translation, down to $150. And by the way, the cost is going to drop even lower, because OpenAI just lowered its prices recently. And to do this, we are not even using GPT-4; we are still using GPT-3.5. And Mistral, which we should try, is even cheaper.
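
Taking the quoted figures at face value, roughly $50,000 per language with freelancers across seven languages versus a $150 LLM batch, the back-of-the-envelope comparison is:

$$7 \times \$50{,}000 = \$350{,}000 \ \text{(freelancers)} \qquad \text{vs.} \qquad \$150 \ \text{(LLM batch)}, \qquad \frac{350{,}000}{150} \approx 2{,}300\times \text{ cheaper.}$$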

So probably, three years down the road, translating those massive 3,000 pages will cost something like $50. The beauty of it is that, for developing countries, this is a massive opportunity, a massive equalizer. Just think of it: for dollars, not even hundreds of dollars, you can play with the same tools as the big guys, and you’re at the same level as the people who have millions at hand, like Apple. You’re playing with the same tools.

So that’s going to be an incredible equalizer. And if you’re smart and have some passion, you will learn along the way; again, it’s not super difficult. It’s probably one of the most accessible revolutions ever. And I believe that even in poor countries, a crappy internet connection is enough to take advantage of LLMs. You don’t even need broadband. If you have 20 kilobytes per second both ways, reliably, you’re good.

Conor Doherty: I believe we’ve spoken for north of an hour and a half. So, if I could just summarize all of this, Skynet?

Joannes Vermorel: No, not Skynet. That was my wrong expectation 18 months ago. I was saying, oh, it’s as dumb as ever, so it’s nothing. No, it is a universal templating machine, and that is a game-changer. It will be for white-collar work what the sewing machine was for the garment industry.

The beauty of it is the simplicity. Even at the time, a sewing machine was orders of magnitude simpler than a clock. By the standards of the 19th century, it was not a complicated machine; there were already machines orders of magnitude more complicated. It was deceptively simple, and yet, almost overnight, it sped up the garment industry by a factor of 100. If you think sewing machines were not a revolution because they don’t make a garment end to end, you’re missing the point. With a sewing machine, you still make garments 100 times faster.

We don’t have Skynet. GPT-4 is not going to replace high-quality strategic thinking. But all the humdrum, yes, it will replace. This is an evolution. My message to the audience is: don’t miss the train. A lot of companies have already boarded it; some people, to my shame, did so earlier than I did. There are a lot of people on board, and the results come so fast that if you don’t act now, you won’t be able to catch up four years down the road. The discrepancy will be so great that it will be a Kodak effect: you’re toast, even if you were not such a bad company in the first place.

Conor Doherty: All right, well I have no further questions, Joannes. Thank you very much for your time and thank you very much for watching. We’ll see you next time.