00:00:00 Introduction to the interview
00:01:00 Rinat’s Lokad journey and supply chain challenges
00:03:59 Lokad’s evolution and simulation insights
00:07:07 Simulation complexities and agent-based decisions
00:09:15 Introducing LLMs and simulation optimizations
00:11:18 ChatGPT’s impact and model categories
00:14:14 LLMs as cognitive tools in enterprises
00:17:10 LLMs enhancing customer interactions and listings
00:20:30 LLMs’ limited role in supply chain calculations
00:23:07 LLMs improving communication in supply chains
00:27:49 ChatGPT’s role in data analytics and insights
00:32:39 LLMs’ text processing and quantitative data challenges
00:38:37 Refining enterprise search and closing AI insights
In a recent dialogue, Conor Doherty of Lokad conversed with Joannes Vermorel and Rinat Abdullin about generative AI’s impact on supply chains. Vermorel, Lokad’s CEO, and Abdullin, a technical consultant, discussed the evolution from time series forecasting to leveraging Large Language Models (LLMs) like ChatGPT. They explored LLMs’ potential to automate tasks, enhance productivity, and assist in data analysis without displacing jobs. While Vermorel remained cautious about LLMs in planning, both acknowledged their utility in composing solutions. The interview underscored the transformative role of AI in supply chain management and the importance of integrating LLMs with specialized tools.
In a recent interview, Conor Doherty, the Head of Communications at Lokad, engaged in a thought-provoking discussion with Joannes Vermorel, the CEO and founder of Lokad, and Rinat Abdullin, a technical consultant at Trustbit and former CTO of Lokad. The conversation centered on the burgeoning field of generative AI and its implications for supply chain management.
Rinat Abdullin, reflecting on his tenure at Lokad, recounted the early challenges the company faced, particularly in aligning technology with customer needs and making complex supply chain data comprehensible and trustworthy. Joannes Vermorel confirmed that Lokad’s roots were in time series forecasting, a critical element in supply chain optimization.
As the dialogue progressed, Abdullin delved into the evolution of Lokad’s technology, highlighting the tension between the explainability and performance of machine learning models. He shared his experiences in using simulations to demystify complex systems, which paved the way for more optimized computational methods.
The conversation then shifted to Large Language Models (LLMs), with Vermorel noting their recent surge in popularity. Abdullin shared his early experiences with language models and their evolution into user-friendly tools like ChatGPT. He emphasized the transformative potential of LLMs, likening them to a personal department of assistants capable of performing a variety of tasks, from drafting documents to automating the search for information within large data silos.
Abdullin addressed concerns about LLMs replacing jobs, asserting that they enhance employee efficiency rather than replace them. He cited examples where productivity increased tenfold to a hundredfold. He also noted that while supply chains have been slow to adopt LLMs, marketing departments have been quick to utilize them for customer interactions and cost reduction.
Joannes Vermorel expanded on the potential of LLMs in automating open-ended communications with supply chain partners, saving time on routine emails, and allowing focus on more complex tasks. He praised LLMs for their linguistic finesse in adjusting the tone of communications, a task that can be time-consuming for humans.
Abdullin highlighted ChatGPT’s advanced data analytics capabilities, which empower business decision-makers to analyze complex data without needing programming skills. However, Joannes Vermorel maintained his skepticism about generative AI in supply chain planning, emphasizing that LLMs are more suited for generating disposable analyses and reports.
Rinat Abdullin suggested that LLMs could be used in conjunction with specialized tools for better results, particularly at the intersection of numerical, textual, and code domains. Joannes Vermorel concurred, clarifying that LLMs are better suited for composing programs to solve problems rather than solving them directly.
In closing, Rinat Abdullin encouraged businesses to embrace LLMs, as they can add significant value when combined with specialized tools. Conor Doherty concluded the interview by thanking Joannes and Rinat for their insights into the dynamic field of generative AI and its role in shaping the future of supply chain management.
Conor Doherty: Welcome back to Lokad TV. The progress made in generative AI in the last 12 months is an extraordinary feat of technological progress. LLMs, or large language models, have gone from niche to mainstream in under a year. Here to explain the significance, particularly in a supply chain context, is Lokad’s very first CTO, Rinat Abdullin. Rinat, welcome to Lokad.
Rinat Abdullin: It’s a pleasure and an honor to be back. I was at Lokad when it was just starting, in a tiny room at the university, I think. And of all the companies I’ve worked with since, including seven startups, Lokad was the most challenging and rewarding place in my life.
Conor Doherty: You don’t have to say anything about Joannes directly, but when you say it was the most challenging, what exactly made Lokad so challenging? And how does that compare with the difficulty of the projects that came after?
Rinat Abdullin: We were a startup back then, and it was an interesting combination of trying to find a match between the technologies and what the customer wanted and actually needed. Balancing this triangle was always a challenge because the technologies back then were nascent. We were one of the first big customers of Azure, just starting to build out a scaled-out library for processing lots of time series from customers. There was no support; everything had to be built from scratch, and that journey took many years. It continued with creating a custom DSL to empower the experts at Lokad, and it’s still ongoing. That’s one part of the triangle. The second part is that customers want better numbers; they want their business to run predictably, without money frozen in inventory. At the same time, they want those numbers to be understandable, because if you provide customers with numbers that come out of a magical black box, the executives might say, “Yep, it works,” but the supply chain experts at the local warehouses will say, “I don’t understand these numbers. I don’t trust the formulas, and my gut feeling, based on 10-20 years of expertise, says nope, it’s not going to work, so I’m going to ignore this.” And you can’t fire everyone, obviously. Balancing these three has been a challenge at Lokad and with every customer I’ve worked with since.
Conor Doherty: Joannes, listening to Rinat, it sounds like Lokad used to work with time series. Is that true?
Joannes Vermorel: Yes, Lokad was literally founded as a time series forecasting service, so I know a thing or two about time series, even if we departed from that path years later. We did time series, and they are a very basic building block. The tension that Rinat mentioned about explainability was eventually addressed, but more than a decade after Lokad was founded. We had to embrace differentiable programming so that we finally had models that were machine learning yet explainable. It came very late. For years, we had the choice between crude models that were white box but not very good, and machine learning models that were better but black boxes, creating tons of operational problems. Sometimes they were not even better along all the dimensions of the problem. That was an immense struggle, and the Lokad journey has been almost a decade of uphill battles. Rinat fought the first half-decade of them, and then other people kept fighting the rest. It has been a very long series of massive problems to address.
Conor Doherty: Thank you, Joannes. Rinat, coming back to you: when we try to explain what Lokad does, it’s through a series of very long articles, lectures, and discussions like this one. But when you’re trying to white-box machine learning in this context, how do you approach it?
Rinat Abdullin: One of the approaches that worked quite well when I was helping to create a hackathon for an international logistics company was through simulations. When you talk about international logistics, there are lots of variables at play. You have cargo that has to be transported between multiple locations using multiple modes of transportation. You have trucking companies and different other companies competing on the open market for cargo deliveries from location A to location B. Then you have actual delivery pathways like roads, rail networks, maybe last-mile delivery somewhere. As trucks bring cargo between those locations, you get delays, traffic jams, and the cargo might arrive at a warehouse outside of the working hours, or the warehouse unloading area might be packed.
We wanted to model all this complexity in a way that is approachable for students or new hires to the company. What we did was rather brute-force, perhaps similar to how early researchers estimated the number pi with coin-flipping Monte Carlo experiments. We built a virtual map of Europe with primary roads. In this virtual map, roads had lengths, time passed, trucks drove back and forth, and trucking companies decided which cargo to pick up and whether they would deliver it on time. That was the entry point for the hackathon participants, because they could code agents that would make decisions like, “I’m truck driver A, and I’m going to take this cargo from location A to location B.” But there was a trick: when a truck carries cargo from one location to another, just like in the real world, it costs money. Out of the money you earn, you have to pay taxes, you have to pay for fuel, and you have to make sure the driver rests.
Because it’s a simulation, you don’t need complex formulas; you’re brute-forcing reality. You just execute it sequentially, like a batch script driving the NPCs of a game, and you can fit all the explainable rules on a sheet of paper. This entire world was so understandable to people that we actually created two difficulty levels. In the first level, companies would simply drive trucks and try to make the most money. In the second level, gas prices went a bit higher, companies had to compensate for CO2 emissions, and truck drivers could get tired. If a truck driver drove for more than 12 or 14 hours, there was an increasing chance of an accident. When there is an accident, the truck driver goes into rest, and the truck does nothing, essentially wasting time. We built this environment, participants coded their agents, and when you run a discrete event simulation at an accelerated rate, you get months of virtual time passing in seconds of real time.
We were able to quickly spin up lots of simulations and say, “Hey teams, given the decisions your agents made in this virtual world, this was the lead time distribution, this was the price distribution, these were the margins, and this was the number of accidents your agents had.” That’s essentially the approach I normally take when I need to explain a complex environment. Start with a simulation, because it’s game-like, the rules are easy to explain, and you don’t have to do any differentiable programming. When you run the simulation, it’s essentially a Monte Carlo analysis that traces the dependencies in a complex system. That means, for example, that in some cases you don’t get a simple distribution on the output; because there is interference between multiple elements of the system, you get interference patterns in the output distributions. It looks like a black box, but people can understand the rules and change the rules of play. Then, if a company finally understands how the environment works and likes the numbers, which come out slowly because the simulation still takes time, there is a way to optimize the computation: “Okay, these are the numbers we’re getting out of the simulation; let’s switch to differentiable programming directly on the probabilities to get the same numbers faster.” It’s just performance optimization. So that’s how I would normally approach it.
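The brute-force simulation idea Rinat describes can be sketched in a few lines of Python. This is a toy illustration, not the hackathon’s actual code; every rate, price, and speed below is invented for the example:

```python
import heapq
import random

def simulate(num_trucks=5, horizon_hours=24 * 30, seed=42):
    """Toy discrete-event simulation: trucks repeatedly pick up jobs,
    drive, and earn revenue minus fuel cost. All parameters are made up."""
    rng = random.Random(seed)
    events = []  # min-heap of (time, truck_id): when each truck is free
    for t in range(num_trucks):
        heapq.heappush(events, (0.0, t))
    profit = 0.0
    deliveries = 0
    while events:
        now, truck = heapq.heappop(events)
        if now >= horizon_hours:
            continue  # past the horizon: retire this truck
        distance_km = rng.uniform(100, 800)   # next job's distance
        hours = distance_km / 70              # assumed average speed, km/h
        revenue = distance_km * 1.2           # assumed rate, EUR per km
        fuel_cost = distance_km * 0.4         # assumed cost, EUR per km
        profit += revenue - fuel_cost
        deliveries += 1
        heapq.heappush(events, (now + hours, truck))  # busy until done
    return deliveries, profit

deliveries, profit = simulate()
print(f"{deliveries} deliveries, {profit:.0f} EUR profit in one virtual month")
```

Run at full speed, a loop like this compresses a month of virtual trucking into milliseconds of real time, and all the rules stay readable on one screen, which is exactly the explainability property the hackathon relied on.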
Joannes Vermorel: What is very interesting is that during the last year, a new class of tools, LLMs, has become available. That’s very interesting because it’s literally an entire class of technologies that had been around for half a decade or so, but they were very niche, and only experts could really fathom their potential, because at the time it was mostly about potential. Maybe, Rinat, how do you see things changing with the introduction of this class of tools, LLMs? How do you compare them? We had various classes of machine learning tools for companies, like classification, regression, Monte Carlo simulations. They were classes of tools that could be put together, and now we have another, completely different class of tools, LLMs. For the audience who might not be familiar with LLMs beyond ChatGPT, how do you wrap your head around them in the context of enterprise software and enterprise workflows? What is your high-level vision?
Rinat: I’ve been in touch with language models since 2015, before ChatGPT came out and made them popular. You’re correct that they were really niche. They were used in language translators, voice recognition, and models that fix spelling errors or help find text in large corpora. When they came out through ChatGPT, their popularity surged. One reason is that they were trained to be helpful and obedient to people.
And that’s actually sometimes why they are so irritating: when you want to get results out of the model and it starts apologizing, saying “I’m sorry” repeatedly, it can be frustrating. In my mind, I separate models, at a high level, into two categories. One category works mostly with numbers: regressions, Monte Carlo, neural networks. The other class, large language models, also works with numbers underneath, but on the surface they work with text, with large unstructured text, and that’s where their core usability comes from.
These models allow a machine or automation to be plugged directly into human interactions. For example, with regressions or time series, you need to plug the model somewhere in the middle of business digital processes. There is a database on one side, a forecasting engine in the middle, and maybe a database or CRM or ERP on the other side. In the best case, you get a report, but it’s still numbers. With LLMs, you plug directly into the middle of the business process, in the middle of human workflows.
This creates so many possibilities, especially since it doesn’t take a lot of effort to implement something that was completely impossible, or prohibitively expensive, a decade ago. Personally, when I’m working with LLMs, I feel like I’ve got my own private department of assistants. They are polyglot, they’re full-stack, maybe sometimes naive, but they’re also intelligent, and they will never complain. You can ask them to move a button on a layout or to rewrite a letter to a magistrate in Germany. They are very helpful, very obedient, sometimes stupid, but they can do great things.
The enterprise LLM adoption I’ve seen is mostly in what they call business digitalization. It helps companies automate workflows that revolve around finding text in a large corpus. For example, a company has a lot of data, they have their knowledge bases, but these knowledge bases are essentially silos. It could be RFCs, questionnaires, or a wiki that nobody really keeps up to date, and people need to perform activities that sometimes require finding information in obscure places. This requires time, effort, and above all cognitive energy.
What LLMs can do is the preparatory work. They can draft articles, they can do research on a company’s private data: “Okay, you’re compiling this answer for the enterprise, so based on your company workflows and the coded prompts, here is my draft.” For each item in the response checklist, they can show where they got the information. So the person no longer needs to do the routine work and can shift to the more intellectually demanding work of checking whether the model got something right. This allows a company to scale its efficiency massively.
When ChatGPT came out, people were really afraid that LLMs and AI would take their jobs, but they do not. Trust me, I’ve been helping customers build products powered by LLMs and ML for quite some time, and it takes a lot of effort to produce something that can replace a human. It’s nearly impossible. But what LLMs can do is make existing employees more efficient, sometimes even 10 to 100 times more efficient; these are exceptional cases. They make people more efficient, but they can never replace people. There always has to be a person in the loop.
Conor: If I can follow up on that point, because again, the context of the discussion is generative AI, LLMs in the supply chain context. From what you just said, Rinat, it sounds as if LLMs will be productivity boosters in general. But do you see there being any specific use cases within supply chain, or is it just, as you said, ‘I have a team of polyglots, I need to translate this RFP into 10 languages’?
Rinat: In my experience, supply chains are a bit slow to adopt LLMs at the core of the process; LLMs rather creep in from the outside. A common case is that marketing departments are the first adopters. The edge between the company and its users, the customers, is where I have seen the biggest adoption. For example, there are marketplaces selling products to their customers, and they want to make this interaction more pleasant and perhaps reduce the cost of having it.
It’s already quite feasible to build systems that automatically crawl through product catalogs item by item, restlessly, tirelessly, 24/7, and report: “Okay, this is a product, but this property was entered incorrectly by the vendor in the supply chain. Why do I think so? Because I’ve scoured the internet, I found similar specifications for this product, I also found the PDF description from the manufacturer, and by my estimate half of the internet has this number right and you have it wrong. Here are the references. Please decide whether it should be corrected automatically.” And then: “Dear manager, I saw that you corrected this product property. Do you want me to regenerate the product description with the updated number, not just the number but also the text? While I was at it, I created three product descriptions so you can pick whichever makes sense. I also created SEO marketing text, updated the keywords in your publishing engine, and drafted a Twitter announcement and a LinkedIn announcement.”
Another edge between customers and the retailers that plug into the supply chain is product listings on marketplaces. Imagine you’re a vendor that has to work with lots of marketplaces, and your catalog is 10,000 items with small variations, like car parts or airplane parts. You want to automate this process, especially if your inventory changes quickly. It’s quite feasible, and I’ve already seen it done. For example, you take a couple of images of the product, which works especially well for reused products in fashion. You pass them through image recognition, which works best when it’s trained on fashion and styling. You get the texts and the descriptions, you tick the boxes, you resize the images automatically, and out of that you generate a description for people to read.
And then comes one of the nicest parts. You also create an LLM-augmented hidden description that is used for semantic search. What does that mean? When a customer of a fashion platform tries to find a piece of clothing, they won’t always search for, say, a boho-style shirt with dragons on it, size M, under $10. They’ll search, “Hey, I’m going to a party tonight with my friends, what can I wear that will complement my shorts?” If you have product descriptions and semantic explanations extracted by the LLMs from the product, and you search over them, not with full-text search, because who knows how people spell boho, but with embedding-based search, which is essentially vector-based search, a search on the meaning of the text rather than the exact wording, then you get results that look magical from the outside, because the model starts suggesting what you meant to ask, not what you literally said.
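The embedding-based search Rinat mentions can be illustrated with a minimal sketch. A real system would obtain high-dimensional vectors from an embedding model; the tiny three-dimensional vectors below are invented stand-ins, just to show the ranking mechanics:

```python
import math

def cosine(a, b):
    """Cosine similarity: how aligned two vectors are, ignoring magnitude."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy stand-in vectors; a real system would call an embedding model
# to turn each product description into a vector.
catalog = {
    "boho shirt with dragon print": [0.9, 0.1, 0.3],
    "formal black blazer":          [0.1, 0.9, 0.2],
    "casual party top":             [0.8, 0.2, 0.6],
}

def semantic_search(query_vec, k=2):
    """Rank catalog items by meaning (vector similarity), not keywords."""
    ranked = sorted(catalog.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# A query like "what can I wear to a party tonight?" would embed near
# the casual/party region of the space:
print(semantic_search([0.85, 0.15, 0.5]))
```

The query never has to contain the word “boho”; closeness in the embedding space is what surfaces the right items, which is why the results feel like the system understood what you meant.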
Conor: Thank you, Rinat. Joannes, your thoughts?

Joannes: When I observe supply chains, I would say the work splits pretty much half and half. Half the time people are working with spreadsheets, and the rest is mundane communication with partners, suppliers, clients, and whatnot. The spreadsheet half is really about automating the quantitative decisions; that’s what Lokad has been doing for a decade now. The second half was mostly not automated because, until the advent of LLMs, there was no technology that was a plausible answer to it. The stuff that requires communication either fits a very tight workflow, and then it could be automated, through, say, EDI to pass an order: we have a bridge that passes the order, and it’s a non-textual problem. But that’s not what people mean when they say they spend half their time on spreadsheets and half managing partners, clients, transporters, and suppliers. It’s more like, “Could you expedite this order, and if yes, at which price point?” It’s fuzzier and more open-ended.
One has to take an edge case and write an email about it, clarifying the intent and what is at stake, and that takes half an hour. Then you repeat with a different situation, a different problem, and you produce another email. You end up with a purchasing department where, out of eight hours of work, everybody spends four hours in their spreadsheet and four hours writing 20 emails to 20 partners. Here, I see a huge potential for improvement. Lokad is literally automating the first part already, but with LLMs, there is huge potential to largely, though not completely, automate the second part: essentially, supporting people by auto-composing the communications that your partners will receive. The LLM is used to produce a reasonably contextualized version of the problem statement and of what we expect from the partner.
If the problem statement has well-defined boundaries, then you have EDI; it just becomes something that is part of your fully mechanized workflow. But I’m talking about the rest, the things that are not quite aligned, such as when you ordered 1,000 units and they delivered 1,050. You’re not going to reject the order because they delivered 50 units too many. You like this supplier, so you will accept and validate the order, you will receive it, and you will pay for 1,050 units instead of 1,000. But you want to communicate in a polite way to your supplier that you would prefer if they stick to the original agreement, which was to ship 1,000 units and not 1,050. There is a bit of nuance here where you don’t want to disrupt the workflow; it’s quasi-correct, but you still want to communicate that it’s not okay to always deliver 5% more so that the supplier can charge you a little more.
This is the sort of thing where LLMs really excel: soft communication where you need to convey a message. It would normally take time to balance the phrasing so that it’s not too aggressive while the partner still understands that you have a strong preference for them to stick to the initially agreed quantity. Someone can agonize for an hour over writing half a page of such an email, and with modern LLMs, this is exactly the sort of task that does not require much intelligence. The intelligence those LLMs have is linguistic, and if you want to set the tone right, they have almost superhuman capabilities. They’re not necessarily super intelligent in the sense of getting the big picture or the direction right, but if you want the same text a shade darker, slightly more aggressive, slightly softer, or slightly more supportive, they are super good at that.
It would take you maybe 20 minutes to do this for half a page, and an LLM can do it in literal seconds. That’s exactly where you get a massive productivity boost on those soft touches where people literally spend hours. Taking this a level higher, imagine a company that has thousands of such communications about edge cases throughout the day. That’s a new capability that LLMs bring. For business owners and stakeholders, getting the big picture takes effort, but now we have LLMs that are very good at scanning massive amounts of unstructured text and finding patterns. Imagine an LLM that reads through hundreds of reports, emails, and back-and-forth exchanges about not sending an extra 5%, and at the end of the day provides a succinct summary to the executives: “Hey, we seem to have a repeating pattern here; more and more suppliers in the last week are trying to send us extra stock.”
Conor: About six months ago, we had a discussion on generative AI and its role in supply chain, and we were overall somewhat skeptical. When you listen to what’s been described about the advancements in just the last six months, do you still have the same perspective, or have you softened a bit?
Joannes: My position remains deeply skeptical on certain aspects. My skepticism was essentially a reaction to most of Lokad’s competitors who say, “We are just going to apply ChatGPT directly to terabytes of transaction data, and it will work.” My take is no, I don’t think so; I’m still very skeptical, because that is simply not what these models do. If instead you say that you can list a couple of tables with their schemas, or have the tool auto-probe the schema of the database, to compute disposable analyses like the average basket size, that’s a completely different proposition. In the past, this would have to be run through the business intelligence team. I’m talking about basic things: what is the average basket size, how long on average do we retain customers, how many units have we sold in Germany, very basic questions. In large companies, you usually have dozens of people in BI divisions producing disposable reports all day long. For these sorts of things, I believe LLMs can really help, but that’s absolutely not what our competitors are proposing. They say, “You have these models, you give them your terabyte database, you give them access to Twitter and Instagram, and you have your planning, your decisions, everything, entirely automated.” I say no, not even close. That’s fantasy land.
Rinat: I have two thoughts to share in response to that challenge. First, about using LLMs to process large quantities of data: I’ve been working with various LLMs for quite some time, and one of the first questions customers usually ask is whether they can run something like ChatGPT locally on their premises. Answering that requires benchmarking LLMs in different configurations and figuring out the costs. LLMs are quite expensive. Running one megabyte of text through an LLM for prediction can cost a couple of euros, depending on the model. If you want to run it locally on the best available models, it could cost you €10 or maybe €20.
GPT-3.5, by comparison, is very cheap. But the point is, it’s simply not feasible to run terabytes or petabytes of data through LLMs. Secondly, LLMs are terrible with numbers. If someone asks an LLM to do mathematical computations or list prime numbers, that’s a misuse. LLMs are linguistic models; they have a large knowledge base and are quite intelligent, although they still have limitations. You don’t hand an LLM a mathematical problem; instead, you ask it to phrase the problem, and the computation is then passed to a specialized Python kernel, or something else that will do much better than wasting LLM compute on it.
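The delegation pattern Rinat describes, where the LLM phrases the problem and a specialized kernel does the arithmetic, can be sketched as follows. The `llm_reply` dict is a hypothetical model output, hardcoded here for illustration; a real system would parse it from the model’s response:

```python
def primes_below(n):
    """Sieve of Eratosthenes: the kind of exact computation an LLM
    should delegate rather than attempt token by token."""
    sieve = [True] * n
    sieve[:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

# Registry of tools the model is allowed to invoke.
TOOLS = {"primes_below": primes_below}

# Hypothetical LLM reply: instead of listing primes itself, the model
# phrases the problem as a structured tool call.
llm_reply = {"tool": "primes_below", "args": [30]}

result = TOOLS[llm_reply["tool"]](*llm_reply["args"])
print(result)  # exact arithmetic done by code, not by the language model
```

The model only has to produce the short, structured request; the exactness comes from the kernel, which is why this split works even though LLMs are unreliable at arithmetic.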
The most interesting things happen at the touchpoints between domains. On one side we have the vast numerical domain; on another, text with its soft and fuzzy edge cases; and code as the third. Code is not numbers and not prose: it’s structured and verifiable, and LLMs are exceptionally good at dealing with it. This creates new cases that might be applicable to the supply chain, pushing the applicability of solutions like Lokad even further.
For instance, one case where I’ve applied LLMs to quantities of text far beyond what an LLM can ingest directly is to have the LLM phrase the problem instead. For example, finding text in hundreds of gigabytes of annual reports worldwide, or helping to solve a numerical problem without doing the actual computation: you come up with a theory of how to approach it, because you’re intelligent, you know the backstory, and these are the controls I give you.
When searching across a huge database, I ask the LLM, in a specific syntax, to come up with embedding searches for me to run, a list of stop words, or a whitelist of keywords to boost. Then another system, dedicated to the task and very good at processing at scale, takes this well-formed request from the LLM and runs it. That’s where the best part comes in, because LLMs are capable of refining searches.
You go back to the LLM and say, “Here was my original problem, this is what you thought about it, this is the query you came up with, and this is the junk it returned. Please adjust and adapt.” Because calling an LLM is pretty much free, you can iterate maybe ten times, perhaps with a chain of thought or a tree of thought, branching on good and bad decisions, and the results get better. The same applies to numerical domains. For example, supply managers want ideas for better balancing their stocks. In theory, they can say, “Here is a tiny simulation of my environment, which is maybe good enough, and this is how you can tweak it. Now please do fuzzy constraint solving and try to come up with ideas that might help me balance my stocks better.”
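The refinement loop Rinat describes can be sketched generically. The mock `ask_llm` and the tiny document index below are invented stand-ins; a real system would call an actual model and a dedicated search engine:

```python
def refine_search(ask_llm, run_query, looks_good, max_rounds=10):
    """Generic refinement loop: the LLM proposes a query, a dedicated
    search system runs it, and the results are fed back for adjustment.
    The three callables are supplied by the caller."""
    feedback = None
    for _ in range(max_rounds):
        query = ask_llm(feedback)     # LLM proposes or adjusts a query
        results = run_query(query)    # specialized system executes it
        if looks_good(results):
            return query, results
        feedback = (query, results)   # "this is the junk it returned"
    return query, results             # best effort after max_rounds

# Mock components to exercise the loop (stand-ins for a real LLM + index):
attempts = iter(["boho", "boho OR bohemian"])
docs = {"boho": [], "boho OR bohemian": ["shirt-123"]}
q, r = refine_search(lambda fb: next(attempts), docs.get, bool)
print(q, r)
```

The first proposed query returns nothing, so the loop feeds that junk back and the second attempt succeeds. In production the acceptance check (`looks_good`) could itself be another LLM call grading the results.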
This is the possibility that opens up when you start bridging multiple domains: numerical, code, and text, and you use the best tools available for each domain together.
Conor: Thank you, Rinat. Joannes, your thoughts on that?
Joannes: Just to clarify for the audience, the interesting thing is that for a lot of problems, the way you want to approach an LLM is to say, “Please compose a program that will solve the problem.” You will not say, “I want you to solve the problem.” You will say, “Compose a program, and then I will run the program.” There are further tricks, such as giving the LLM a compiler to check that the program compiles, or a tool that lets you run the program a bit to check that the output makes sense.
It’s not about having the LLM solve the problem directly; it’s mediated. The LLM produces a program, and then something else runs it. The feedback is still textual: if you use a compiler, the compiler will attempt to compile the program, and if it fails, it produces an error message. LLMs love processing error messages and fixing the associated problems. We remain very much in the realm of text.
For supply chain, most situations will be mediated. We want the LLM to compose the program that will do what we’re trying to do. For example, with the earlier kind of problem, finding last year’s turnover in Belgium for clients above 1 million EUR, the LLM will not pull the data from the database itself. It will compose a SQL query to be run by your database. Again, mediation.
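The SQL mediation Joannes describes can be sketched with SQLite. The schema, the rows, and the “LLM-generated” query below are all invented for illustration; the point is only that the model emits a query and the database, not the model, touches the data:

```python
import sqlite3

# Tiny in-memory stand-in for the company's transactional database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (client TEXT, country TEXT, year INT, amount REAL)")
db.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)", [
    ("Acme",    "Belgium", 2023, 1_500_000.0),
    ("Globex",  "Belgium", 2023,   800_000.0),
    ("Initech", "Germany", 2023, 2_000_000.0),
])

# Hypothetical LLM output, given the schema and the question
# "turnover in Belgium last year for clients above 1 million EUR":
llm_generated_sql = """
    SELECT client, SUM(amount) AS turnover
    FROM sales
    WHERE country = 'Belgium' AND year = 2023
    GROUP BY client
    HAVING turnover > 1000000
"""

# The LLM never sees the rows; the database executes the query.
print(db.execute(llm_generated_sql).fetchall())
```

If the query fails, the database error message is itself text, which can be fed straight back to the model for correction, the same compile-and-fix loop described above for programs.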
What does that mean for enterprise software? Do you have, as part of your enterprise software environment, platforms that support your supply chain execution, at least the decision layer, with programmatic capability? The LLM will not take the raw transaction data to produce the output; it will take the problem statement, produce a program, and it’s very versatile in what sort of program it can produce. But then something in your environment must execute the program. What sort of programming environment can you provide to the LLM?
Most classic enterprise software provides no such environment whatsoever. It just has a database with a query language, but the only way to interact with, say, a big ERP that is supposed to let you optimize your inventory is to manually set the min and max stock levels or the safety stock parameters, product by product. The LLM can tell you the recipe you need to apply, but to apply it, you will have to go through the manual ERP settings. If the ERP provides an API, the LLM can compose a program that lets you do this at scale through the API, but that’s still very clunky compared to a natively programmatic solution. It is still mediated through the framework.
It requires profound changes: making the programmability of the solution a first-class citizen. Shameless plug: Lokad is a programmatic platform. We didn’t build it for LLMs; that was pretty much luck. We built it 10 years ago with this programmatic mindset at the core of the platform, as a first-class citizen. That was chance, not a visionary insight into what would happen a decade later with LLMs.
Conor: Thank you, Joannes. I’m mindful of everyone’s time, so as is custom, Rinat, I will pass back to you for a closing thought. Is there anything you want to tell everyone who’s watching?
Rinat: There were a couple of bubbles in history, like the dot-com bubble and the financial bubble. LLMs and AI might also be a bubble, or they might not. Even my mother knows about ChatGPT and how to use it, which is telling. I encourage everyone not to be afraid of our machine overlords, because Skynet will not arrive that easily. As someone who tries to stabilize these things in production, I can say it takes a lot of effort, and it does not become reliable easily. So first, don’t be afraid of LLMs, and second, embrace them. LLMs plus humans and businesses can create a lot more value, especially when complemented by specialized tools, like Lokad’s forecasting, that plug well into the environment.
Conor: Thank you, Rinat. Joannes, thank you very much for your time. Rinat, thank you very much for joining us again. And thank you all for watching. We’ll see you next time.