00:00:00 Introduction to the interview
00:01:01 AI’s impact on traditional jobs
00:04:36 AI automation in supply chain
00:09:08 Need for unified system in 2012
00:10:30 Re-injecting decisions into systems
00:13:08 Avoiding hallucination problem in AI
00:16:11 Impact and delays due to IT backlog
00:20:04 Comparing Lokad and other vendors’ setups
00:23:06 Discussing LLM hallucinations and confabulations
00:30:38 Emphasizing progress over perfection in AI
00:33:00 Fetching missing information and order ETA
00:36:17 Quantitative tasks and LLMs in supply chain
00:38:28 Future of AI Pilots in supply chain
00:41:18 Value of conversations and automating low-value tasks
00:44:57 Leveraging AI Pilots to reduce backlog
00:49:00 AI Pilot vs copilot and lockdown scenario
00:53:36 Skepticism towards conversational AI and process analysis
00:57:18 Understanding business reality and AI replacing processes
01:00:12 Challenges of open sourcing Envision
01:06:21 AI’s approach to bottlenecks and supply chain
01:09:17 Inefficiency of verbal commands and automating orders
01:14:12 Supply Chain Scientist as copilot for AI Pilot
01:17:32 Checking data correctness and automating checks with LLMs
01:20:15 Making Envision friendly to git
01:21:14 Free resources for learning Envision

Summary

In a dialogue between Lokad’s CEO Joannes Vermorel and Head of Communication Conor Doherty, they discuss the impact of AI on supply chain management. Vermorel highlights the advancements in AI and large language models, which have revolutionized task automation. He introduces AI Pilots, a Lokad offering that automates decision-making and clerical tasks, facilitated by Lokad’s proprietary programming language, Envision. Vermorel also discusses the potential of AI to automate tasks related to Master data and contrasts Lokad’s approach with competitors. He predicts that AI Pilots will become the norm in supply chain management, leading to significant productivity improvements. The conversation concludes with a Q&A session.

Extended Summary

In a recent conversation between Conor Doherty, Head of Communication at Lokad, and Joannes Vermorel, CEO and founder of Lokad, the duo delved into the transformative role of artificial intelligence (AI) in supply chain management. The discussion, a continuation of a previous conversation on AI’s impact on employment, focused on the potential of AI to act as a standalone pilot for supply chain decision-making.

Vermorel began by highlighting the landmark achievement of generative AI and large language models (LLMs) in 2023. These advancements, he explained, have revolutionized the automation of tasks involving text, such as reading emails or categorizing complaints. The year 2023 was particularly significant as it saw a substantial reduction in the operational cost of natural language processing techniques for companies. Vermorel predicted that this would lead to the automation of many internal support functions, with supply chain operations at the forefront.

Vermorel then introduced AI Pilots, a Lokad offering that automates the decision-making process and handles mundane clerical tasks. He emphasized Lokad’s unique approach, where one Supply Chain Scientist can take full ownership of an initiative. This is facilitated by Lokad’s proprietary programming language, Envision, dedicated to the predictive optimization of supply chains. However, Vermorel admitted that Lokad had previously struggled with data hunting and dealing with various SQL dialects.

The introduction of GPT-4, Vermorel explained, has been a game-changer for Lokad, enabling the company to automate SQL query composition. These queries can then be proofread by a Supply Chain Scientist and tested to ensure accuracy. This development, coupled with a secure cloud-to-cloud connection, allows Lokad’s team of Supply Chain Scientists to chase clients’ data on their own, thereby reducing delays.

Vermorel also highlighted the potential of LLMs to automate many tasks related to Master data, including surveying, monitoring, and improving. He contrasted Lokad’s approach with that of competitors, stating that Lokad typically involves fewer people in an initiative, with each person having competency across the entire pipeline. This, he argued, is a stark contrast to competitors who often involve many more people in an initiative, including project managers, consultants, UX designers, database administrators, network specialists, and programmers.

The conversation then shifted to the role of Supply Chain Scientists in validating or monitoring scripts generated by LLMs. Vermorel acknowledged that LLMs can sometimes produce inaccurate or “hallucinated” results, but these are usually directionally correct and can be corrected with a few iterations through a feedback loop. He suggested that while LLMs can make mistakes, they can still provide a lot of value, and their rate of false positives and false negatives can be measured.

Vermorel further explained the day-to-day orchestration between the Supply Chain Scientist, the AI Pilot, and the client. The AI Pilot, composed by the Supply Chain Scientist, handles the daily operations of the supply chain, managing the minutiae of data preparation and purchase order decisions. The client, in this setup, is likened to the captain, giving overall strategic directions.

In terms of takeaways for supply chain practitioners and executive teams, Vermorel predicted that in a decade, AI Pilots will be the norm in Supply Chain Management (SCM). This, he believes, will lead to a massive productivity improvement, with a potential 90% reduction in headcounts for previous functions. He encouraged supply chain practitioners to spend more time on strategic thinking and in-depth conversations with suppliers and clients.

The conversation concluded with a Q&A session, where Vermorel addressed questions on a range of topics, including the role of AI Pilots in reducing IT backlog, the difference between an AI Pilot and a copilot, the importance of process analysis before implementing an AI model, Lokad’s plans to open source Envision, and how AI addresses random bottlenecks. He also confirmed that Lokad is working on a Lokad copilot and plans to make Envision more friendly with GitHub.

Full Transcript

Conor Doherty: Welcome to Lokad live. My name is Conor. I’m the Head of Communication here at Lokad. And I’m joined in the studio by Lokad’s founder, Joannes Vermorel.

Today’s topic is how AI can act as a standalone pilot for supply chain decision-making. Feel free to submit your questions at any time in the YouTube chat, and we’ll get to them in approximately 30-35 minutes.

And with that, Joannes, on AI Pilots in supply chain: it occurs to me that this conversation is very much an extension of one that we had, I think about four weeks ago, where we talked about the implications of AI on employment and the future of traditional jobs versus AI in supply chain.

So before we get into the specifics of AI Pilots, for anyone who didn’t see that, could you give a refresher, just an executive summary: what is our perspective on traditional jobs versus AI in supply chain?

Joannes Vermorel: The refresher is that there was a landmark achieved pretty much in 2023. This landmark is generative AI and, more specifically, large language models (LLMs). In terms of pure research, it’s just a continuation of some four or five decades of continuous improvement in machine learning. So if you look at this from a research perspective, 2023 is just a year like the others within a long sequence of progress. And there has been relatively fast-paced progress for the last two decades.

Now, in 2023, what comes to the market is packaged generative AI for images and, more importantly, for text. And there is one product that popularized that: ChatGPT by OpenAI. What does that mean? Very specifically, it means that with these large language models you have a universal, noise-resilient templating machine.

That means that all the sorts of steps in enterprise software, and I’m talking in the context of white-collar workers in corporate environments, that could not be automated in the past because we had to deal with text in some shape or form, can now be automated. That means reading an email; extracting a reference, price, or quantity from an email; categorizing the type of complaint or request coming from a partner, supplier, or client; identifying whether a product label is nonsensical, for example if the product description literally reads “product description is missing”, okay, we have a problem. All those sorts of things could not be done easily in the past. They could be done in other ways, but they could not be easily automated.

If we go back, let’s say, five years, text mining was already a thing. It was already possible to have text classifiers and to use all sorts of natural language processing techniques, but they were costly. 2023 was a landmark because, thanks to the degree of packaging achieved with essentially GPT-4 served through an API, all those NLP techniques, natural language processing techniques, had their operational cost for companies slashed by a factor of 100, if not 1,000. And not only the cost, but also the timeframe it takes to just set the thing up.

So the bottom line, and that’s my prediction, is that a lot of support functions in companies that are purely internal functions, that take some data in and produce an output for other divisions, will be automated. Supply chain is on the front line because it’s not exactly a customer-facing function. It is very much an internal function, an important one, but internal. And in these cases, large language models were the missing brick needed to largely automate, end to end, the vast majority of mundane operations in supply chains.

Lokad has been automating the quantitative analysis and the quantitative decision-making process for a decade, but there are a lot of mundane operations that come before and a lot that come after, and those are the ones that can now be automated thanks to large language models, quickly and cheaply.

Conor Doherty: Well, thank you. And we do already have a video where we talked for I think about an hour and a half on that topic, so I won’t dedicate any more of today to that, but that sets the table for the rest of the conversation. I kindly invite anyone who wants to hear more on that to review the previous video. Now, on that note, AI Pilots, how do they fit into everything you just said? What are they? What do they do in reality?

Joannes Vermorel: AI, generally speaking, has been used consistently for the last two decades by vendors as a catchphrase, an umbrella term for whatever they had to sell. So when I say AI Pilots, it’s very much an offering of Lokad. It’s an evolution of our offering, probably the biggest we’ve had in years. And what is the difference? Well, the difference is that an AI Pilot is a piece of software, what we call a series of numerical recipes, that not only executes the decision-making process, the pure quantitative aspect of supply chain: literally figuring out exactly how much to order, where to allocate the stock, whether to move prices up or down, exactly how to schedule production with all its steps, etc.

That part we were already doing, plus now everything that comes before and after in terms of mundane clerical tasks: mostly master data management prior to the data analysis, and then the execution of the decision, which may involve unstructured channels like email, for example when you want to send an email to a supplier to request that something be expedited or, on the contrary, that an order be postponed.

And the gist of this offering is obviously large language models, which Lokad did not invent, but which we have been using extensively for 14 months, a little more than that now. And the key insight in the way Lokad operates is that one Supply Chain Scientist should be able to take full ownership of an initiative.

For very large companies, we may have several people on the case, but unlike most of our competitors, they are typically not specialized. It’s not like we take a team of three with Mr. Database, Mr. Algorithms, and Mr. UI/UX. That’s absolutely not the way Lokad operates. A Supply Chain Scientist is able to do everything from start to finish.

And that’s one of the reasons why Lokad has engineered its own technology the way it has: we have our own programming language, a domain-specific programming language named Envision, dedicated to the predictive optimization of supply chains. It may sound very weird to have come up with a bespoke programming language, but the gist of it is, and that’s a decision I took back in 2012, that we really needed something unified so that one person could do the whole thing, start to finish.

Up to a few years ago, that meant getting the raw transactional data from the ERPs, CRMs, EDI, and all those transactional systems; completing that with an array of spreadsheets for all the structured data that unfortunately lives in shadow IT rather than regular IT; and then crafting the decision-making numerical recipes. It was the responsibility of the Supply Chain Scientist to do all of that, then to craft all the instrumentation, including dashboards and reports, both to convince himself or herself that the numbers were correct and to reassure our clients about the validity of what we’re doing, plus all the instruments to monitor the quality of the decisions over time, plus the plumbing to get the data out of the systems and to re-inject the decisions into the systems as well.

So that was the scope that Lokad had, and there were two things that we couldn’t really do. First, we had to be the recipient of the data; we couldn’t really hunt for the data. And when I say hunt: the Supply Chain Scientist could request the data, and we were not asking the IT divisions of our clients to do any kind of fancy transformation. It was just dumping the tables, pretty much, you know, select star from table, bam, you do that once a day and you’re done. So those were super simple extracts, but still, it was pretty much the IT division of our clients that was doing that.

The Supply Chain Scientists were not expected to hunt through the applicative landscape of our clients for the data that the initiative needed. The reason for that was very simple: there are about 20-something SQL dialects out there: Oracle SQL, Microsoft SQL Server’s T-SQL dialect, MySQL, PostgreSQL, DB2 from IBM, etc. Up to a few years ago, a Supply Chain Scientist would have struggled immensely, because even if what this person wanted to accomplish was extremely straightforward, like just dumping a single table, this person would have spent literally tens of hours searching online to compose trivial queries. And whenever there was an error message, it would again have been on this person. Even for someone generally familiar with SQL databases, dealing with the dialect of a system you don’t know was a massive hurdle.

In 2023, with ChatGPT, the problem is solved. ChatGPT, as an assistant programmer, is not necessarily super great at composing sophisticated apps, but when it comes to composing super simple SQL queries in dozens of dialects, it’s super fast. A Supply Chain Scientist will ask for a SQL query to be composed. This person is also intelligent and will proofread the query to make sure that it reflects the intent. It’s just about removing the hurdle of discovering the syntax; once the correct syntax is presented to you, it’s pretty much self-explanatory.
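The compose-proofread-test loop described here can be sketched as follows. This is a minimal illustration only: SQLite stands in for the client’s database, and the draft query is hard-coded where, in practice, it would come from an LLM such as GPT-4; the table and column names are hypothetical.

```python
import sqlite3

# Hypothetical draft query; in practice this string would be composed by an
# LLM in the target dialect, then proofread by the Supply Chain Scientist.
DRAFT_QUERY = "SELECT * FROM orders"

def run_extraction(conn, query):
    """Execute a read-only extraction query and return all rows.

    A wrong dialect or syntax raises sqlite3.OperationalError immediately,
    which is the instant feedback loop described above."""
    return conn.execute(query).fetchall()

# Tiny in-memory database standing in for one client table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (ref TEXT, qty INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [("A1", 5), ("B2", 3)])

rows = run_extraction(conn, DRAFT_QUERY)
```

A syntactically wrong draft fails loudly on the first execution, so one small tweak and a re-run close the loop.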

If you want to test it out for yourself, just ask ChatGPT to give you a walkthrough to set up git on your machine and create a git repository, or whatever; you will see what sort of super high-quality answer you can get.

That’s really a game changer, because it means that suddenly Lokad, which trains Supply Chain Scientists, can take on the responsibility of hunting the data. And I know that, through ChatGPT, we have the tooling, so we are not overcommitting ourselves when we say that we’re going to hunt for the data. It’s a game changer. Instead of asking IT to send us the data, we can just have an IP address whitelisted, set up a very secure cloud-to-cloud connection, and then let the team of Supply Chain Scientists chase their own data.

Why does that make such a difference? Well, the reality is that even if Lokad only needs days of work for a given initiative, we’re talking maybe 10 to 20 days of work for a sizable initiative, to get the 20 to 50 tables that we need. It’s just dumping tables: no crossings, no joins, no fancy filtering; it is very straightforward. Still, the problem is that many of our clients have IT divisions with huge backlogs. I mean literally years of backlog. And when you have three years of backlog, even if Lokad is only asking for 10 days, it’s 10 days plus three years of backlog. So even if we are not put at the very end of the queue, if we are just in the middle of the queue, those 10 days of work from IT may take up to a year to get done. That was, I would say, a frustration we had: the majority of the delays we were facing came from IT, not because they were incompetent or slow, but because they had so much backlog that it was very difficult for them to allocate those 10 days.

So here, instead of requesting 10 or 20 days of work, we are talking about maybe less than a day of work, something like just a couple of hours, just to open a very secure, tight access to the few systems that we need. And then the Supply Chain Scientists themselves are going to survey the tables, set up the data extraction logic, and make sure that the data extractions are really, I would say, light touch.

The way we can do that is typically by monitoring the performance. Whenever the performance of the database drops, it means that there is a lot going on in the database, and thus you typically want to relieve the pressure and delay your own data retrieval process. Typically, at Lokad, we need to refresh the data on a daily basis, but it’s not super urgent. I mean, it depends; there are some situations where we have really tight schedules. But very frequently, as far as supply chains are concerned, if we delay the retrieval by 30 minutes just because there is a spike of activity on the database right now, it’s a non-issue.
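The throttling idea, postponing the daily extract while the database is under load, can be sketched like this. The load probe, threshold, and retry counts are hypothetical stand-ins for whatever monitoring the client system actually exposes.

```python
import time

def retrieve_when_quiet(fetch, load_probe, threshold=0.8,
                        delay_s=1800, max_delays=4, sleep=time.sleep):
    """Light-touch retrieval: postpone the extract while the database is busy.

    fetch      -- callable performing the actual table dump
    load_probe -- callable returning the current database load in [0, 1]
    threshold  -- load above which we back off instead of extracting
    delay_s    -- wait between probes (1800 s = the 30 minutes mentioned)
    """
    for _ in range(max_delays):
        if load_probe() < threshold:
            return fetch()
        sleep(delay_s)  # spike of activity: relieve the pressure, retry later
    return fetch()  # the refresh is daily, not urgent, but must still run

# Example with a fake probe: one busy reading, then a quiet one.
loads = iter([0.95, 0.4])
waits = []
result = retrieve_when_quiet(lambda: "table dump", lambda: next(loads),
                             delay_s=1800, sleep=waits.append)
```

Injecting `sleep` as a parameter keeps the sketch testable without actually waiting 30 minutes.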

The first block of commitment is to chase the data ourselves and thus eliminate the number one cause of delay, vastly accelerating the initiatives. Again, those delays very frequently became the bulk of the delay for the whole Lokad initiative to go to production: just waiting for IT to be able to allocate those days.

The second block of commitment is master data improvement. Here again, in the past, when you were facing a catalog with, let’s say, 100,000 product descriptions, some of them, maybe 1%, are garbage. It is a lot of work to go through those 100,000 references and identify the descriptions or product labels that are incorrect; sometimes it can just be a price point that is completely inconsistent with the description. If it says a screw and the price point is 20,000 dollars, it’s probably not just a screw, it’s probably something else, etc. There were a lot of basic sanity checks that seem obvious and simple, but they were very difficult to automate, and there was frequently no alternative but to have a person screen the entries looking for things that were really bad.
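A minimal, purely heuristic sketch of such sanity checks is below. A real screener would add an LLM judgment on whether a label is plausible for its category and price; the field names and thresholds here are hypothetical.

```python
def flag_suspicious_entries(catalog):
    """Flag catalog entries that fail basic sanity checks.

    catalog: list of dicts with 'label', 'category', and 'price' keys.
    Returns (index, reason) pairs for a human or an LLM to review."""
    placeholders = {"", "product description is missing", "n/a", "tbd"}
    flags = []
    for i, item in enumerate(catalog):
        if item["label"].strip().lower() in placeholders:
            flags.append((i, "placeholder or missing label"))
        elif item["category"] == "screw" and item["price"] > 1000:
            # A 20,000-dollar "screw" is probably not just a screw.
            flags.append((i, "price inconsistent with category"))
    return flags

sample = [
    {"label": "Product description is missing", "category": "misc", "price": 5.0},
    {"label": "M4 hex screw", "category": "screw", "price": 20000.0},
    {"label": "Phillips screwdriver", "category": "tool", "price": 7.5},
]
flags = flag_suspicious_entries(sample)
```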

With an LLM, and potentially an LLM that is able to digest images as well, you can do a lot when it comes to surveying, monitoring, and improving everything that is master-data related. In the specific case of Lokad, that means the master data needed for piloting supply chains.

Conor Doherty: Well, there’s a lot there, thank you. I have a lot of questions that I want to follow up on. I am going to take a slight step back, because, if I can collapse down everything you’re describing, and correct me where I’m wrong: with one Supply Chain Scientist and access to one good LLM, a tremendous amount of work can be done. Work that up until this moment would have taken a lot of time and involved many people. Now, in a non-Lokad-style setup, how many more people would be involved? How many more fingers in the pie would there be? You can speak to the efficiency of it later, but just in terms of headcount, what’s the difference between a Supply Chain Scientist with an LLM and, I don’t know, S&OP for example?

Joannes Vermorel: Our clients are typically baffled by the fact that even a large initiative involves just two or three people, and always the same ones. I’m very proud that Lokad, as an employer, manages to keep people for quite a while. So, the bottom line is that at Lokad it is typically one person, start to finish. If we have several people, it is, again, for redundancy: one day you focus on this part of the pipeline and I do this other part, and the next day we switch. It’s not that people specialize; each person has competency across the entire pipeline. There are some variations, and some people have a particular specialty, but still, on the whole, people can really substitute for one another.

With our competitors, it’s very different. Even a small initiative involves literally half a dozen people. You will have the project manager who is just there to coordinate the other guys, then the consultant, the UX designer, the configurator, the database administrator, the network specialist, and potentially a programmer, a software engineer, for the customizations that are non-native. Again, Lokad is a programmatic platform; most of our competitors’ platforms are not. So whenever you want something like programmatic behavior, you need a full-fledged software engineer who will literally implement the missing bits with a general-purpose programming language like Java or Python. Lokad is really not like that. With our competitors, pretty much by default, it’s a dozen. S&OP initiatives may involve several dozens of people, but that’s not necessarily that many different skills; it’s mostly different people from different departments, and very frequently it’s not very much IT-driven.

So, when I say one person versus a dozen, I am comparing Lokad to competitors who sell APS, advanced planning systems, or inventory optimization systems, this sort of enterprise software.

Conor Doherty: Thank you. And to come back to another point you mentioned at the start when you talked about Supply Chain Scientists: you gave the example of the different SQL dialects, and the Supply Chain Scientist, who may or may not be fluent in that specific client’s dialect, would validate or monitor the scripts that were automatically generated, because LLMs occasionally hallucinate.

Well, to that point, LLMs do hallucinate very often. Again, it’s improving, but you can give an LLM a piece of text and ask, “Find this hidden word, can you see it?” and it will say yes, when the word is not there. I know it’s not there, you know it’s not there. At scale, how can you have security and quality control when you’re talking about leveraging LLMs in an automated fashion?

Joannes Vermorel: Hallucinations, or confabulations as I prefer to call them, really happen when you use the LLM as a knowledge base of everything. If you are using an LLM like a knowledge base of everything, then it does happen. If you ask for medical papers and say, “Give me a list of papers about this pathology,” it’s going to give you something that is directionally correct. The authors frequently exist, they have published, they have worked in this area, they have published papers that were kind of in the same vein, but not quite the ones cited. Again, think of it as if you were asking a scientist to remember things off the top of their head.

It’s very difficult, and if you insist that they do it anyway, they would probably come up with something halfway plausible, with correct names of colleagues or researchers and semi-correct titles, that sort of thing. So that’s a situation where you get confabulations, but you’re kind of begging for it. You’re asking the LLM to behave like a database of everything, so of course you’re going to have this sort of problem.

Same thing with the SQL dialects: you try that and you get something that is approximately correct. If you ask, “I want to read a table,” it’s going to do a “select star from table” this and that. When you ask, let’s say with GPT-4, to read a table, it’s not going to give you “drop table”. It may give you a syntax that is slightly inadequate, so you test your SQL query, you get an error message, you make a small tweak, and it works. But you see, it’s still directionally correct. If you ask to read the database, it’s not going to produce a command that deletes a table or modifies the permission rights of the database.

Same thing when you ask for made-up knowledge. For example, if you ask, “Okay, I have an industrial compressor of 20 kilowatts of power, what is the price point of that?”, GPT may say something like $10,000. It’s just guesstimating. This is literally made up. It is plausible, but is it correct? Probably not, and it depends on hundreds of variables, because there are so many different compressors for various situations.

So the bottom line is that confabulations do not happen at random. There are specific sorts of tasks where they happen much more frequently. When you’re begging for it, when you’re using the LLM like a database of everything, then it’s better to double-check. It can still be extremely useful: for the SQL dialects, for example, it will give you a hint about the sort of keywords you should use and what the syntax looks like. It may make a small mistake, miss a comma or something, but with a few iterations you will get it right, especially because once you have the SQL query, you can actually test it on the database, see the output, and validate it. You have an instant feedback loop for validation.

If you want to detect, let’s say, weird product labels, labels that look fishy, like a product label that literally reads “product description is missing”, okay, that’s obviously wrong. But you can have all sorts of wrong. For example, you can have a product label that says “tournevis cruciforme”; the problem is just that it’s in French, it’s a Phillips-head screwdriver, I think. So you can ask for these things, but at some point it’s not perfect, it’s a judgment call. A human would make mistakes just the same, so you cannot expect the LLM to be an oracle that answers every question right. If you’re doing a task such as reviewing the 100,000 products of your catalog for anomalies in labels, the LLM will have false positives and false negatives, just like a human. But the interesting thing is that you can actually measure the rate of false positives and false negatives and then decide whether it is worth it or not. And very frequently, you get something quite good, something that gives you a lot of value even if it still makes some mistakes.
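Measuring those rates against a manually reviewed sample is straightforward. A sketch, assuming flagged items are represented by their index in the reviewed sample:

```python
def error_rates(predicted_flags, true_flags, n_items):
    """False-positive and false-negative rates of an automated screener,
    judged against the ground truth from a manual review of n_items entries."""
    predicted, truth = set(predicted_flags), set(true_flags)
    false_positives = len(predicted - truth)   # flagged but actually fine
    false_negatives = len(truth - predicted)   # bad but missed
    fp_rate = false_positives / (n_items - len(truth))
    fn_rate = false_negatives / len(truth)
    return fp_rate, fn_rate

# 10 reviewed items: the screener flagged {0, 2}, the truly bad ones were {0, 3}.
fp_rate, fn_rate = error_rates({0, 2}, {0, 3}, 10)
```

With those rates in hand, the go/no-go decision described above becomes a simple cost comparison against manual screening.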

Conor Doherty: Progress, not perfection.

Joannes Vermorel: Exactly. If you can reduce by 90% your master data problems with something that is very cheap and can be re-executed within literally hours unattended, that’s very good.

Conor Doherty: There’s also the value of the time not spent doing that manually, which could be spent on something else that generates or adds value. So again, there are both direct and indirect drivers of value.

Joannes Vermorel: Plus, the reality is that when you do a super repetitive task for one hour, you can maintain a certain level of quality. When you do it for 100 hours, by hour 67… humans are not constant-performance engines. After a few hours, concentration drops, and the number of false positives and false negatives would probably skyrocket, even with a fairly diligent employee.

Conor Doherty: Thank you. And I want to be mindful of the fact that we have some questions from the audience that address things I was going to ask, so I will skip some points, but they will be covered in the audience questions. One final thought here, though: when you talk about the Supply Chain Scientist, the AI Pilot, and the client, how does that orchestration actually work on a day-to-day basis? Does the client have access? Do they interact with the LLM?

Joannes Vermorel: Typically, we see it this way: all the numerical recipes composed by the Supply Chain Scientists are the AI Pilot. This is the thing that pilots the supply chain on a daily basis. It’s unattended, and it generates the decisions. Now, with LLMs, it handles the minutiae of data preparation and of the purchase order decisions as well. For example, a pre-decision task would be asking suppliers for their MOQs: this information is missing or not up to date, you need to fetch it, and LLMs can help you get it. A post-decision task would be sending an email to ask for an ETA, the estimated time of arrival, of orders if you don’t have an EDI bridge in place, or asking to expedite or postpone an order. That’s the sort of minutiae that comes afterward, where Lokad could generate the decision to expedite an order but not handle the minutiae, which was just composing an email and sending it.
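The post-decision minutiae of composing such an email can be sketched with a plain template. In the setup described, an LLM would draft and tailor the prose; the template text, order number, and supplier name here are hypothetical.

```python
from string import Template

# Hypothetical template for an expedite request; an LLM would normally
# draft and adapt this prose per supplier and per order.
EXPEDITE_TEMPLATE = Template(
    "Subject: Expedite request for PO $po\n\n"
    "Dear $supplier,\n\n"
    "Could you please share an updated ETA for purchase order $po, "
    "and let us know whether delivery can be expedited?\n"
)

def draft_expedite_email(po, supplier):
    """Turn an 'expedite this order' decision into an email draft."""
    return EXPEDITE_TEMPLATE.substitute(po=po, supplier=supplier)

message = draft_expedite_email("4711", "Acme Fasteners")
```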

All of that is pretty much the AI Pilot: the big numerical recipe that runs the process end to end. All of it is implemented by the Supply Chain Scientist. So this is an extension of scope for Lokad. The Supply Chain Scientist is actually the copilot. And when I say pilot, I really mean an autopilot in an aircraft. By the way, in aircraft nowadays, the most difficult maneuvers are done on autopilot. For very scary airports, like the legacy airport of Hong Kong, which was literally in the middle of high-rise buildings (the new one is much easier), the autopilot is mandatory for those maneuvers. So it’s the machine all the way; people just monitor.

So here, the Supply Chain Scientist is engineering the numerical recipes, and they are the copilot. They do pretty much the navigation to direct things, and they engineer the course plan for the pilot. But fundamentally, the Supply Chain Scientist plays the role of the copilot: figuring out the directions, thinking ahead, and making sure that the pilot can operate as smoothly as possible. All the high-frequency adjustments are done by the pilot, not the copilot. And then the client plays the role of the captain.

You know, a bit like in the old TV series Star Trek, where the captain sits in the chair and gives the super high-level instructions to the crew. The client in this setup sets the strategy and gives the overall directions. It is then the role of the Supply Chain Scientist to implement that, and the role of the pilot to execute all the micro-adjustments and daily decisions that are needed, and thus to run the supply chain per se.

Conor Doherty: And this is also, just to be clear because we didn’t talk about it, in addition to all the automated quantitative tasks that are already done, and have been done for years. In case anyone’s listening and thinking, “Oh well, that’s just the qualitative tasks”, we’re talking about end to end. The quantitative side, like factoring in the economic drivers, generating allocations, purchasing, pricing, is also already AI-driven and automated.

So the Supply Chain Scientist is overseeing both these sort of consoles, the quantitative and qualitative.

Joannes Vermorel: Exactly. And the reason why Lokad at last started to embrace this AI keyword, which is an umbrella term, is because we are now throwing LLMs into the mix. We already had machine learning with differentiable programming and stochastic optimization, but now we are adding LLMs on top. And so that’s a very complete toolkit.

The effect, literally, is that clients have supply chains that can run unattended for weeks. People are surprised by how long you can actually operate fully unattended once you have those economic drivers in place. The beauty of it is that you don’t need any kind of micro-adjustments. For example, adjustments to the forecast are completely non-existent for most of our clients. And most of the other adjustments are also done completely automatically, such as the introduction of new products, the phase-out of old products, the introduction of new suppliers, and the phase-out of non-performing suppliers.

So all of that is kind of business as usual, and when the recipes are done correctly, even in the past they didn’t require that many interventions. But with this AI Pilot that includes LLMs, things like adding a new supplier into the mix can require even less manual intervention than they used to.

Conor Doherty: Okay, Joannes, thank you. We’ve been going for about 40 minutes. There are questions to get to, so I will push us towards that now. But before we do, as a sort of summary or a closing thought, again in the context of the much larger conversation that we had a month ago and now that we’re having today, what’s the bottom line for both the supply chain practitioner and let’s say the executive teams who oversee the average, the normal supply chain practitioner? What’s the call to action or takeaway for them?

Joannes Vermorel: I believe that a decade from now, this vision of AI Pilots, maybe under a different name, will be the norm. Maybe it will be so much the norm that people will just say supply chain, not AI Pilots for supply chain, and it will be obvious that supply chain means you have those AI Pilots. Just like when you say a computer, you don’t say I have a CPU, I have memory; it’s a given that your computer has a CPU, so you don’t even mention it.

My take is that something like a decade from now, these functions will be extensively automated, and Lokad, with those AI Pilots, has a packaged offering that just does it, with a Supply Chain Scientist. For our clients, it means that the supply chain practice changes in nature. They have those AI Pilots that liberate a lot of bandwidth. We were already liberating bandwidth on the decision-making process and the complex calculations. But now we are also liberating the time that was spent monitoring the list of SKUs, the list of suppliers, the list of clients, just to keep the master data correct, clean, and sane. All of that is also going away, which removes the need for those manual interventions that very frequently were not even really quantitative in nature. You had to fix a label, fix an entry, remove a duplicate, or something like that. All of that, again, Lokad will take care of.

So the bottom line is that it’s a massive productivity improvement. With some clients, I think we are literally achieving a 90% reduction in headcount for the repetitive tasks. The reality is that now you have more time, and you can actually do the things that are much more difficult to automate, which I believe add more value. That means really thinking carefully about the strategy, spending a lot more time thinking about the options and what you should be exploring, instead of wasting your time on spreadsheets.

So spend a lot of time thinking long and hard about the difficult problems, and then talk to suppliers and clients and have genuine, in-depth conversations, so that you can organize your own supply chain to make your suppliers happy, and thus they will be willing to give you better prices, better quality, better reliability, etc. Organizing yourself to suit the needs of your suppliers may sound a little backwards: “Oh, suppliers, I am the client, they have to accommodate me.” But if you can accommodate your suppliers better, it’s a team effort, and you can get more reliability and better prices.

And you can make the same effort with your clients, because the same collaboration should happen there. Again, that takes a lot of intelligent discussion, and that’s the sort of thing that is beyond what those LLMs can deliver nowadays. So I believe that at Lokad we can automate the tasks we would like automated, the low-value minutiae and mundane clerical tasks, and have people do the high-level, semi-strategic things. I say semi-strategic because you will talk to one client at a time; the strategy is then to summarize all of that, create a vision, and support the supply chain leadership so that they have a very clear, well-founded strategy for their company.

Conor Doherty: So again, just to take two examples to conceptualize that. A low-level decision: going through an Excel spreadsheet and saying, “Oh, it says ‘block’ in the color field and it must be ‘black’”, and autocorrecting that; it’s trivial, it’s mundane, it’s not worth your time. “Should I open a warehouse in Hamburg?” Strategic.

Joannes Vermorel: Yes, it’s strategic. Plus, the problem is that there are so many options. For a warehouse: should I buy or rent, what sort of contracts, what degree of mechanization, do I need a contract for the equipment so that I can give it back, should I lease the equipment or not. There are hundreds of such questions, and very frequently, when people have to spend 99% of their mental bandwidth on clerical tasks, they don’t have any time left for those big questions.

You see, it’s Parkinson’s Law at work. I’ve seen many companies where, if you compute the sum total of minutes spent on something like ABC classes, every year we are talking about man-years invested into ABC classes. And when it comes to deciding on a new warehouse, we are talking about weeks of time. You see the imbalance: people collectively spend literally years of human time on something completely nonsensical like ABC classes, and when it comes to a 50 million euro investment to open a warehouse, it’s literally weeks of time and then, bam, a decision is made. It should be the other way around.

Conor Doherty: Alright, well thank you for that. On that note, I will switch over to the audience questions. Thank you very much. Feel free to submit them up until basically I stop. So, Joannes, this is from Massimo. Could you elaborate please on how IT can leverage AI Pilots to reduce backlog and why you believe this approach should be proposed?

Joannes Vermorel: AI Pilots are not about reducing the backlog of IT. They are about coping with the fact that IT has years of backlog. So, our plan at Lokad is not about reducing the backlog of IT; that would require rethinking IT the way Amazon did, which would be another episode. Look for the 2002 memo of Jeff Bezos concerning APIs at Amazon. The bottom line is that all the departments in a modern company need tons of software. Every single division, marketing, finance, accounting, supply chain, sales, wants tons of software tools, and all of that falls onto the shoulders of IT. IT is collapsing under that.

So, my point is that at Lokad, we are supply chain specialists. We are not going to address everything for marketing, sales, accounting and whatnot. What we say is that with LLMs, we can liberate IT from having to take care of Lokad. The bottom line is that we go from needing, let’s say, 10 to 20 days of work from IT to get the Quantitative Supply Chain initiative up and running, building the pipeline, to just a few hours. So, we are not solving the backlog. IT does what IT does. They can also benefit from LLMs, but in their case, the situations are much more diverse and much more difficult.

So, my proposition is not that LLMs can help IT slash its backlog. It’s just a way for Lokad, in this specific case, to say, “Well, instead of adding 20 more days to the backlog, we just add something like four hours and we’re done.” That’s how we cope with the backlog. More generally, the solution to those years of backlog is that every single division needs to internalize most of its software needs. The years of backlog exist because companies overall are demanding too much from IT. There should be digital practices in every division, marketing, sales, accounting and whatnot. You should not ask IT to solve all those problems for you; you should have digital experts in every area to do that. And that’s exactly the essence of this 2002 memo, if I don’t confabulate, from Jeff Bezos to his team. It’s a very famous memo; you can search for “famous memo Bezos”. He was saying, in essence, “You have two weeks to have a plan for every division to expose all its data, so that we don’t have this sort of siloing and power games being played in the company, in Amazon.”

And Bezos was concluding, “Every single manager that doesn’t have a plan on my desk two weeks from now is fired or something.”

Conor Doherty: Okay, well thank you. This next comment, and it’s a question I didn’t ask because I saw that it was mentioned in the comments, so it’s phrased as a comment but take it as a question. This is from Jesse. “One Supply Chain Scientist plus one LLM still sounds like a copilot. So again, delineate AI Pilot versus copilot.”

Joannes Vermorel: The reason why we say it’s a pilot is because we have some clients where, for weeks, all the decisions are generated and then executed unattended. And when I say unattended, I really mean it. During the lockdowns of 2020, we even had a company where, for 14 months, all the white-collar workers were literally staying at home, not working, paid by the state, because several states in Europe were giving subsidies to essentially stay at home and not work. So we had that situation, and when you have a supply chain that operates pretty much unattended for 14 months, I call it a pilot, not a copilot. If there is nobody to supervise the numbers generated by the system, I really think it’s a pilot.

But we weren’t using an LLM back then. And it was a situation where the data was clean and there was no dramatic need to improve this master data management. And that was a client that had very high maturity in terms of EDI integration and whatnot. So, the sort of things that were needed before and after were very, very limited.

Anyway, back to the question of the copilot. Most of the competitors of Lokad are saying that they are doing a copilot, and indeed, the way they do it is a completely different thing. At Lokad, the Supply Chain Scientist is using a programming language. So when we’re using an LLM, it is to help us generate pieces of a program. That’s what we’re using it for.

So, the LLM is used to generate essentially pieces of programs; that can be SQL dialects, that can be a few other things. And then we engineer the pilot, and the pilot runs unattended.

Our competitors, especially those who say they are going to bring conversational AI, a conversational UI (user interface), to the market, do something completely different. What they do is typically retrieval-augmented generation (RAG). That’s literally what all our competitors who say they do AI with LLMs in the supply chain space are doing right now. They compose, let’s say, a dozen prompts fitting various scenarios. Then, after the prompt, they inject data retrieved from the database; that can be basic descriptive statistics. So they inject a few dozen numbers, average sales over last week, last month, last year, or whatever, basic statistics that fit the scenario. Then they add the extra query from the user, and the LLM completes the answer.

You see, again, LLMs are just about text completion: you compose a text and it completes it. In retrieval-augmented generation, the retrieval part is just retrieving some numbers from the database to compose the prompt. The bottom line is that, okay, you now get something where you can ask a question. But the reality is that, unless you’re clueless, you can read the numbers directly from the screen. There is no magic. The LLM just sees the numbers, exactly as you can see them on your report. So fundamentally, it can only answer questions that are readily answered by a dashboard.
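As an editorial aside, the RAG pattern described here can be sketched in a few lines. This is a hypothetical illustration only: the prompt template, the figures, and the `complete()` stub are invented, and no specific vendor API is implied. The point it demonstrates is that the model only ever sees the numbers that were retrieved, exactly the numbers already visible on a dashboard.

```python
# Hypothetical sketch of the RAG-style "copilot" pattern described above.
# The statistics, template, and complete() stub are invented for illustration.

def retrieve_stats(sku: str) -> dict:
    """Stand-in for the 'retrieval' step: pull a few descriptive
    statistics from the database for the given SKU."""
    fake_db = {"SKU-42": {"sales_last_week": 120,
                          "sales_last_month": 510,
                          "stock_on_hand": 75}}
    return fake_db[sku]

def compose_prompt(sku: str, user_question: str) -> str:
    """Compose the prompt: scenario template + injected numbers + user query."""
    stats = retrieve_stats(sku)
    lines = [f"- {key}: {value}" for key, value in stats.items()]
    return (
        f"You are a supply chain assistant. Known figures for {sku}:\n"
        + "\n".join(lines)
        + f"\n\nQuestion: {user_question}\nAnswer:"
    )

def complete(prompt: str) -> str:
    """Stub for the LLM text-completion call; a real system would
    send the prompt to a hosted model here."""
    return "(model completion based solely on the figures above)"

prompt = compose_prompt("SKU-42", "How did SKU-42 sell last month?")
answer = complete(prompt)
```

Whatever the model answers, it can only draw on the injected figures, which is why Vermorel argues a plain cheat sheet on the dashboard achieves the same effect.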

So yes, if you’re not really familiar with the definition of every number, it can clarify that for you. But you can also, and that’s how Lokad does it, have a cheat sheet that gives a one-liner description for every number present on the dashboard. That effectively plays the exact same role, with no AI involved.

So, bottom line, I’m very, very skeptical of those conversational AI, those copilots, because essentially they are gimmicks that are overlaid on top of existing systems that have never been designed for any kind of machine learning system, not even the classic sort of machine learning, let alone LLMs.

That’s why I say that, to my knowledge, all our competitors are doing copilots: essentially, a chatbot sitting on top of dashboards. But it doesn’t do any automation. It doesn’t let you automate anything in the manner of an AI Pilot. It is, yeah, a gimmick on top of a legacy system.

Conor Doherty: Okay, well thank you. I’ll push on. This is from Kyle. “Can you please discuss the criticality of process analysis before instituting an AI model?” I’ll take it as a supply chain context.

Joannes Vermorel: As surprising as it may be, process analysis is very important, but not necessarily in the ways people think it is. The reality is that, especially in supply chains, companies have had four or five decades to invent a lot of made-up processes. And I say “made-up” intentionally. Supply chain is a game of bureaucracy; it comes with a bureaucratic core. The game of supply chain for the last five decades has been about organizing the labor, because you cannot have one person deal with all the SKUs, all the warehouses, all the locations, all the products. So you need to divide and conquer the problem, spreading the workload over dozens, possibly hundreds of people for very large companies.

So, you end up in a situation where 90% of your processes just reflect the emergent complications you have because the workload is spread over a lot of people. These are accidental processes, not essential processes. So, I would say yes, you need to do the process analysis, but beware: 90% of the existing processes are not addressing the fundamental problems that your supply chain faces, but accidental problems created by the fact that you need a lot of people to solve the 10% of the problem that actually needs to be addressed.

In industries like chemical processing, where you have a lot of flows and dependencies, it’s very complicated. For instance, when you have chemical reactions, you’re going to have byproducts. So, whenever you produce a compound, you’re going to produce something else. This something else can be sold or used for another process. You need to coordinate all of that. You have tons of constraints, you have processes that operate through batches, and processes that operate continuously. It’s very complicated.

But the reality is that most of the processes do not focus on the physicality of the problem. In a process industry, for example, you have chemical reactions with very specific inputs and outputs, all very clearly defined and known. Instead of focusing on this base layer, the physical reality of your business, the process analysis might just reverse engineer processes that will be gone when you set up the AI Pilot.

The interesting thing is that when you do it AI Pilot style, you don’t need this divide-and-conquer approach anymore. It is a unified set of numerical recipes that solves the problem end to end. So all those coordination problems, which were solved by as many processes, are just gone.

My experience is that 90% of those processes are just going to be gone by the time we are finished. That’s why I say it’s very important to keep a sharp focus on the base physical layer of your supply chain, as opposed to the made-up processes that are just there to coordinate numerous teams. Those things are not going to be upgraded; they’re going to be replaced by the AI Pilot.

Conor Doherty: Thank you. And actually, an example that you listed in that answer provides a nice segue here. So, from a viewer Durvesh, do you have plans to open source Envision for educational or small business use? And can it be programmed with rules to benefit B2B businesses like chemicals that require extensive manual input?

Joannes Vermorel: Yes, we have plans to open source Envision at some point. But let me first explain why not yet. In this world of enterprise software, we have public documentation for Envision. Most of my peers have domain-specific languages (DSLs), but they are not publicly documented. Dassault Systèmes, for example, purchased a company called Quintiq, which comes with a DSL that is not documented publicly. So in the supply chain space there are literally other companies that have DSLs, and they are not public. At Lokad, we document everything publicly, and we have a free sandbox environment for Envision. We even have free workshops available so that you can teach or learn Envision with exercises. So we are doing a lot more.

Now, open sourcing the language is part of the plan, but it is too soon. Why? Because Envision is still under rapid evolution. One of the problems you have when you open source a compiler, which is the piece of software that compiles your scripts into something that executes, is that people are going to operate Envision code in the wild, and Lokad loses the possibility to upgrade those scripts automatically. The reality is that over the last decade, Lokad has modified the Envision programming language hundreds of times. This language is not stable. If you look at my book, the Quantitative Supply Chain book that is now something like six years old, the syntax of Envision has evolved dramatically since; you can get a peek at vestigial syntax that does not exist anymore in Envision.

And so, how do we deal with this constant change of syntax? Well, at Lokad we do weekly releases on Tuesdays, and we apply automated rewrites to all the Envision scripts operated on the Lokad platform. One of the key properties of Envision is to have a very high affinity with static analysis. Static analysis is a branch of language design and language analysis (computer languages, that is) that lets you establish properties of programs without running them. Through static analysis, we can literally rewrite an existing script automatically from the old syntax to the new syntax. And we do that automatically on Tuesdays. Typically, when we do an upgrade, for a few days both the old syntax and the new syntax are accepted. We do the automated rewrites, and when we see that the old syntax is not used anymore, we lock in, with a feature flag, the fact that only the new syntax exists.
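As an editorial aside, the flavor of such an automated rewrite pass can be sketched as follows. This is purely illustrative: Lokad’s real rewrites operate on a parsed program via static analysis, not on raw text, and the old/new syntax pair used here is invented for the example.

```python
import re

# Illustrative sketch of an automated syntax-rewrite pass, loosely inspired
# by the process described above. Real Envision rewrites work on a parsed
# program, not raw text; the deprecated/new syntax pair here is invented.

REWRITES = [
    # (pattern matching the deprecated form, replacement in the new syntax)
    (re.compile(r"\bshow\s+table\b"), "show summary"),
]

def rewrite_script(source: str) -> str:
    """Apply every registered rewrite to a script, returning the
    upgraded source so old and new syntax never need to coexist."""
    for pattern, replacement in REWRITES:
        source = pattern.sub(replacement, source)
    return source

old_script = "show table Orders\nshow table Suppliers"
new_script = rewrite_script(old_script)
```

Because every script lives on the platform, the rewrite can be run over the whole fleet on release day, which is exactly what becomes impossible once scripts live in the wild.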

At Lokad, we have already deployed over 200 of those automated rewrites. We release every Tuesday, but typically we do something like two rewrites a month or so, and we have been doing that for a decade. As long as this process is going on, Lokad cannot realistically release Envision as open source. It will come in due time, but I don’t want to repeat the massive blunder of Python. The upgrade from Python 2 to Python 3 took the Python community a decade and was incredibly painful; companies went into years of upgrades, a nightmare that lasted a decade. That was really, really wrong. Even Microsoft’s upgrade of C# from the .NET Framework to .NET Core took half a decade, and that was a big, big pain. And that’s the problem: once your compiler is open source in the wild, you do not control the code. If you want to bring changes to the language, you need to collaborate with your community. It makes the process super slow, super painful, and in the end, you never really eliminate all the bad features of your language.

If we look at Python, the way object-oriented programming was introduced, for example, the syntax is clunky. You can feel that Python was not designed with object-oriented programming in mind; it was a later addition in the late 90s, the syntax is kind of crap, and now it’s there forever. And by the way, every single language has that. In C#, the keyword volatile serves no purpose anymore. C++ is stuck forever with multiple inheritance; that was a bad design decision that complicates everything, etc. The only way to avoid carrying those big mistakes forever is to fix them. At Lokad, we made a lot of big mistakes in the design of Envision, but we fixed them one after another, and we are still in the process, especially when new paradigms come in. For example, differentiable programming was a big new paradigm, and we had to re-engineer the language itself to accommodate it.

By the way, there is a major proposal for Swift, from Apple, to make differentiable programming a first-class citizen in Swift. But it’s probably not going to happen anytime soon; it is a major, major revamp. Right now, the language closest to having differentiable programming as a first-class citizen would be Julia, and even there, it involves a lot of duct tape.

Conor Doherty: Thank you again. Still three more to get through. Next one is from Victor. This is broadly about AI. How does AI address random bottlenecks given that it operates on large data sets to predict plausible scenarios or recurring issues?

Joannes Vermorel: Let’s be clear: when we say AI, it’s a collection of techniques. At Lokad, we typically have LLMs, differentiable programming, and stochastic optimization. Differentiable programming is for learning; stochastic optimization is to optimize under constraints in the presence of uncertainty, on top of the probabilistic models that are typically regressed with differentiable programming; and LLMs serve as a universal, noise-resistant templating engine.

When you approach supply chain with probabilistic tools, most of the tasks hinted at by this question just go away. That’s the beauty of probabilistic forecasts: those forecasts are not more accurate, they are just much more resilient to the ambient noise of the supply chain. When you couple probabilistic forecasting with stochastic optimization, you largely eliminate the need for manual intervention. And when I say largely, I mean for most clients it eliminates it completely. We are then left with tasks that require going through text, and that’s where LLMs come in. And again, what I described is that at Lokad we have those AI Pilots that are really automated, and if there is a manual intervention, it is not to make a clerical entry in the system; it is to input a strategic revision of the numerical recipe, typically a profound modification of the logic itself to reflect the revised strategy. It’s not going to be a minor thing; it’s typically something fundamental that changes the very structure of the logic that has been implemented.
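As an editorial aside, the coupling of a probabilistic forecast with stochastic optimization can be sketched in miniature. This is a toy illustration, not Lokad’s actual recipes: the demand model, unit margin, and holding cost are invented numbers, and a real setup would use far richer models and constraints.

```python
import random

# Toy sketch: probabilistic forecast + stochastic optimization.
# Demand distribution, margin, and holding cost are invented for illustration.

random.seed(7)

def demand_samples(n: int = 2000) -> list:
    """Probabilistic forecast: a set of plausible demand futures,
    not a single point estimate (here, sum of two uniform draws)."""
    return [random.randint(0, 20) + random.randint(0, 20) for _ in range(n)]

def expected_reward(order_qty: int, samples: list,
                    unit_margin: float = 5.0, holding_cost: float = 1.5) -> float:
    """Average economic outcome of an order quantity across all futures:
    margin on units sold minus holding cost on leftover units."""
    total = 0.0
    for demand in samples:
        sold = min(order_qty, demand)
        leftover = order_qty - sold
        total += sold * unit_margin - leftover * holding_cost
    return total / len(samples)

samples = demand_samples()
# Stochastic optimization (brute force here): pick the quantity that
# maximizes the expected economic outcome over the sampled futures.
best_qty = max(range(0, 41), key=lambda q: expected_reward(q, samples))
```

Because the decision maximizes an expected economic outcome over many futures, a noisy individual demand realization does not call for a manual forecast adjustment, which is the resilience property described above.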

Conor Doherty: This one is from Ahsan. Could you please explain how AI, specifically, would expedite an order? Would it be able to execute transactions in the ERP system based on verbal commands?

Joannes Vermorel: Verbal commands are not the right approach to this problem. If what you want is faster data entry, voice is a very low-bandwidth channel. You type faster than you speak, unless you’re very bad at typing. So, this is not the sort of gain you can get. If your UI is correctly designed for the keyboard, you will be faster than with voice commands. I know this very well because 20 years ago, I was working at AT&T Labs at the forefront of production-grade voice recognition systems. There were tons of applications where it didn’t work out. The voice recognition was working, but the reality was that your hands on the keyboard were just faster. The situations for voice were either dirty hands or busy hands; otherwise, the keyboard is just faster.

Back to the question: first you want to filter the orders. Here we have a decision-making process where you want to decide which orders need to be expedited. That’s classic Lokad, a pure quantitative decision-making process. You need to decide, yes or no, does this ongoing order warrant a request to expedite it? We would do that with differentiable programming and stochastic optimization. That’s how we get to the correct decisions.

Once we have that, we automatically have, every day, the decisions for the orders. It’s not about having someone give verbal orders for that; it’s part of the set of numerical recipes with which we compute the optimized orders. As time goes on, we realize that some orders were overshooting or undershooting, and we will request to postpone or expedite them respectively. The LLM’s part is just taking this quantitative decision, where you have a binary flag that says “please expedite”, generating an email with the proper context, and sending it to the supplier with something like “please acknowledge that you can do this”. Then the supplier hopefully acknowledges and says “yes”, “no”, “maybe I can”, or “this is what I can offer”.

The LLM will automate the chat with the supplier. The AI is not about deciding to expedite the order. That’s a pure quantitative decision that needs to be approached with quantitative tools, differentiable programming, stochastic optimization. The LLM is there for the interaction with the supplier, which frequently is going to use an unstructured channel like email.
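As an editorial aside, the division of labor just described can be sketched as follows. This is a hypothetical illustration: the order record is invented, and `draft_email()` stands in for an LLM call that a real system would make; the key point is that the LLM only templates the correspondence, while the expedite flag itself comes from the quantitative recipe.

```python
# Hypothetical sketch of the split described above: the quantitative
# recipe emits a binary "expedite" flag; the LLM is only a templating
# engine turning that flag into supplier correspondence.
# The order record and draft_email() stub are invented for illustration.

decision = {
    "order_id": "PO-1234",
    "supplier": "Acme Components",
    "expedite": True,  # output of the quantitative decision process
}

def draft_email(order: dict) -> str:
    """Stand-in for an LLM call that expands the decision into a
    contextualized message; a real system would prompt a model here."""
    return (
        f"Subject: Expedite request for {order['order_id']}\n\n"
        f"Dear {order['supplier']},\n"
        f"Could you expedite purchase order {order['order_id']}? "
        "Please acknowledge whether this is feasible.\n"
    )

# Only flagged decisions generate outbound emails.
emails = [draft_email(decision)] if decision["expedite"] else []
```

The supplier’s unstructured reply would then be parsed by the LLM on the way back in, closing the loop over an email channel with no EDI bridge.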

If you’re thinking about voice commands, it’s not going to work. It is way too slow. I’ve had the privilege to work with the teams that literally brought to the market the first production-grade voice recognition systems 20 years ago. But the bottom line is, you’re not going to use those AI technologies for that. Voice commands don’t have the bandwidth to do what you want to do.

Conor Doherty: On a related note, when we talk about stochastic optimization and differentiable programming, we do have extensive video resources on those topics. We’re not unpacking them here because there is a three-part series (part 1, part 2 and part 3) on differentiable programming, but we’re not ignoring them either. They’ve been covered, and I kindly direct viewers who want to learn more to review those and piece it all together.

Last question, and it is from Isaac. As a customer currently learning Envision, I’m interested in its integration capabilities, specifically with GitHub. Could you discuss the potential for Envision to support GitHub integration, particularly for applications such as explaining code blocks in natural language or identifying changes between versions? Lastly, is there any plan to introduce an Envision copilot in the near future?

Joannes Vermorel: The short answer is yes, yes, and yes. The timelines vary very much depending on which component we are talking about. On using LLMs for a copilot, like GitHub Copilot but a Lokad copilot for Envision code, we are already working on that. The very interesting thing is that because Envision is a DSL, we have complete control over the training materials. That’s very cool, because it means that the day we successfully bring this LLM to production, whenever we change the syntax, we will rerun our training process with the updated syntax, and we will always have a copilot that gives you the up-to-date Envision syntax. As opposed to GitHub Copilot, which gives you some flavor of Python syntax, C# syntax, or Java syntax.

Because you see, again, Java has been around for 25 years, Python has been around for more than 30 years, C# has been around for 22 years or something. So whenever you ask GitHub Copilot to write code for you, the problem is that it gives you a semi-recent flavor of those languages, but not actually the most recent. And sometimes you don’t even want the recent flavor, because your environment is not in line with those super recent versions that you don’t support yet.

We are working on a whole series of very natural features such as comment my code, explain my code, complete my code. We are also thinking of a lot of extended code actions that are very specific to the sort of workflows that happen within Lokad. For example, we are working on automating the generation of data health dashboards. That’s a very typical task.

Data health dashboards are essentially instruments that check that the data we ingest is sane. And we have a lot of tricks and know-how on what to check, because the sorts of problems you find in ERP data are kind of always the same. When we check data for correctness coming from an ERP, we Supply Chain Scientists have literally cultivated our own training methods for knowing what to look for, and we have our own recipes, human recipes, for what to implement and what to check. We could largely automate that with LLMs. So this is something that is in progress at Lokad.
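As an editorial aside, the kind of checks a data health dashboard performs can be sketched in a few lines. The columns and defect categories below are invented for illustration; real checks are tailored per client, and the automation discussed here would have an LLM generate such check code rather than a human writing it each time.

```python
# Illustrative sketch of data health checks on ERP records, in the spirit
# of the dashboards described above. Column names and defect categories
# are invented; real checks are tailored per client.

def data_health(records: list) -> dict:
    """Scan a list of record dicts and count common ERP data defects:
    duplicate SKUs, negative quantities, and missing prices."""
    seen, duplicates, negative_qty, missing_price = set(), 0, 0, 0
    for row in records:
        key = row["sku"]
        if key in seen:
            duplicates += 1
        seen.add(key)
        if row["qty"] < 0:
            negative_qty += 1
        if row.get("price") is None:
            missing_price += 1
    return {"duplicates": duplicates,
            "negative_qty": negative_qty,
            "missing_price": missing_price}

report = data_health([
    {"sku": "A", "qty": 10, "price": 2.5},
    {"sku": "A", "qty": -3, "price": None},  # duplicate, negative, unpriced
])
```

The resulting counts would feed the dashboard; the know-how lies in knowing which checks matter for a given ERP, which is exactly what the transcript suggests could be captured and automated with LLMs.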

We are working on a Lokad copilot. To make Envision more friendly with GitHub, we have already released an open-source Visual Studio Code extension. You can already put Envision code into a git repository: you just create an .nvn file, you commit, and you’re done. If you want to edit the code with nice syntax coloring, you need the Visual Studio Code extension. If you look for the Lokad Visual Studio Code extension for Envision, there is one that is completely open source, and you will get syntax coloring.

In the future, we plan to expose the Envision code that sits in a Lokad account as a git repository. The way Envision code is stored in a Lokad account is pretty much the same as a git repository; it’s versioned in much the same way. It’s not organized exactly like git, and I’m not going to go too far into the technical reasons: git is very much agnostic to the language, and if you are only dealing with one language in particular, you can be smarter and do all sorts of things that are not possible in the general case. But the bottom line is that the Envision code is entirely versioned. We could expose a git repository that lets you export all your code from a Lokad account, and maybe later go the other way, with a two-way synchronization. Git is a decentralized system where every git repository is the whole thing; you have a complete copy, so you can get changes from a remote repository and send your changes to a remote repository. At some point, we would probably first introduce the export and then the reimport, but that will take time. It is part of the roadmap, but we don’t have any timeline for it yet.

Conor Doherty: It is worth pointing out that a few people in the comments did say that they were learning Envision. We do produce a series of tutorials that are done in collaboration with the University of Toronto and a few others that are in the pipeline. There are free resources and we can provide answers if people want. For anyone who wants to learn, our Supply Chain Scientists put a lot of effort into those workshops. They are freely available on our website.

Joannes Vermorel: For those who are not interested in becoming Supply Chain Scientists themselves, Lokad can provide the Supply Chain Scientist as part of the AI Pilot offering.

Conor Doherty: That’s all the questions, Joannes. Thank you very much for your time and thank you very much for watching. I hope it’s been useful and we’ll see you next time.