00:00:00 Intro; Bamboo Rose and retail PLM scope
00:02:21 Retail PLM problems: lifecycle, coordination, scale
00:04:15 Aerospace depth versus retail’s shallow massive assortments
00:09:45 From silos to adaptive, shared-context processes
00:12:56 Zero-shot tariff classification replaces manual workload
00:16:42 GenAI processes unstructured PLM documents at scale
00:21:38 LLMs orchestrate; emerging agent-to-agent ecosystems
00:24:26 Data sharing pragmatics; minimize PII liabilities
00:28:53 Logic/UI thin; data and metadata centralize
00:34:23 Automation collapses workflows born from labor division
00:37:48 Configure agent networks: goals, constraints, permissions
00:44:30 Toward auto-generated collections and supplier exchanges
00:49:55 AI impact: defend, extend, append
00:55:04 Quant optimization decides; LLMs handle docs and comms
00:59:08 Benefits: speed, accuracy; back-office automation accelerates

Summary

Rupert Schiessl (Bamboo Rose’s Chief Strategy & AI Officer) joins Lokad’s Joannes Vermorel (CEO) and Conor Doherty (Director of Marketing) for a discussion on how generative AI reshapes Product Lifecycle Management (PLM). Schiessl explains retail PLM’s classic siloed modules (planning→distribution) and argues AI/agentic systems can dissolve silos, adapt processes dynamically, and automate text-heavy, unstructured tasks (e.g., tariff classification, tech-pack creation). Vermorel contrasts deep, decades-long PLM (e.g., aerospace) with retail’s shallow-but-massive assortments, noting LLMs excel at orchestration, document generation, and pre/post-decision workflows, while quantitative optimization still handles pricing/assortment. Both foresee logic/UI layers thinning as data and metadata become central, with agent-to-agent ecosystems emerging and large swaths of back-office work automated.

Extended Summary

Conor Doherty (Lokad Director of Marketing) frames the conversation around “rethinking PLM in the age of AI,” asking Bamboo Rose’s Chief Strategy & AI Officer, Rupert Schiessl, to define PLM in retail and how AI changes it. Schiessl describes PLM as coordinating the product journey—from ideation and design through sourcing and distribution—across thousands to hundreds of thousands of items and many roles (designers, developers, sourcing analysts, supply chain). Historically, vendors decomposed complexity into siloed modules (planning, design, develop, source, POs, logistics). That worked when processes were stable, but recent volatility (pandemic, trade shifts) makes rigid, pre-defined workflows brittle. AI offers a way to dissolve silos, make modules “speak,” and adapt processes dynamically.

Joannes Vermorel (Lokad CEO) broadens the lens: PLM varies by vertical. In aerospace, each product is deep (terabytes of test/compliance data) and long-lived; in retail, each product is shallow but assortments are vast and fast-rotating. Retail’s challenge is assortment complexity, multi-sourcing, and agility. Vermorel argues silos limit optimization; modern AI enables decision automation and removes many mundane tasks once requiring armies of coordinators (e.g., crafting store-tag labels under strict character limits). LLMs are not ideal for numeric optimization (e.g., pricing), but they excel at text-heavy work and as orchestration layers around other algorithms.

A concrete win: tariff (HTS) classification. Schiessl explains that selecting duty codes from sprawling, frequently updated regulations has been manual; with AI, models ingest the newest rulebooks and perform “zero-shot classification,” replacing workflows that previously required labeled datasets and constant retraining. Vermorel notes zero-shot shifts the economics: you encode rules and context rather than amassing examples. Similar gains apply to unstructured inputs—PDFs from suppliers, Illustrator/SVG design files, catalogs, and RFP responses—where AI can extract, reconcile, and route information.

Looking ahead, both Schiessl and Vermorel anticipate agent-to-agent ecosystems: suppliers expose agents with shareable catalogs; retailers’ agents query them. This promises fewer brittle PDF pipelines but raises governance questions: what may an agent share, and how do you prevent leaks or errors? Vermorel downplays the “data as crown jewels” trope for most retailers (product catalogs are often public; personal data is a liability to minimize), while insisting on strong data hygiene.

On software architecture, Schiessl advances a provocation: “AI kills the logic layer.” In the classic stack—data (bottom), business logic (middle), UI (top)—LLMs can now generate context-aware logic and adaptive UIs on demand. Value concentrates in the data and metadata that describe it; vendors must harden data, enrich it with semantic/metadata layers, and let LLMs/agents compose flows dynamically. Vermorel agrees, adding that much enterprise “logic” merely coordinates division of labor. When automation takes over, layers of workflows, KPIs, and permissions shrink to health indicators of automations. Anti-spam’s evolution—from parameter jungles to invisible background service—illustrates this collapse.

Practically, Schiessl sees three tiers of AI impact: (1) Defend—automate existing steps for speed/accuracy (e.g., tariffs); (2) Extend—re-shape processes (connect design↔costing in real time; merge steps); (3) Append—create new capabilities (e.g., near-automatic tech-pack generation across BOMs, compliance, and sizes). Vermorel splits the PLM workload: use quantitative/stochastic optimization for assortment composition, then deploy LLMs to generate the extensive documentation and to automate supplier interactions before/after core decisions.

For non-technical stakeholders, Schiessl positions AI as systematic pain relief: fewer delays, better availability and compliance, improved supplier relationships, and cost/time savings—delivered step by step, not all at once. Vermorel’s closing prediction is blunt: AI will automate a large share of back-office, white-collar PLM work over the next decade, especially tasks that merely shuffle information between formats.

Full transcript

Conor Doherty: Welcome back. Joining me in studio today is Rupert Schiessl. He is the Chief Strategy and AI Officer at Bamboo Rose. Today, he’s going to talk to Joannes and me about rethinking Product Lifecycle Management in the age of AI. Before we get to that, you know the drill: subscribe to our LokadTV channel here on YouTube and consider following us on LinkedIn. We recently passed 25,000 followers, and I’d quite like to make that 30,000 as soon as possible. In all seriousness, we’re trying to build a community—your support matters to us. With that out of the way, I give you today’s conversation with Rupert Schiessl.

Conor Doherty: First question. In my introduction I introduced you as the Chief Strategy and AI Officer at Bamboo Rose. Who is Bamboo Rose, and what do you do there?

Rupert Schiessl: Bamboo Rose is a software vendor. We are selling a solution called total PLM, which helps our customers—mainly retailers—manage the whole lifecycle of products from planning and design to distribution. That’s what Bamboo Rose has been doing for 25 years now. We’re doing that for big brands like American Eagle, Urban Outfitters, Wolf, Walmart, etc. My role is to bring AI, in the broad sense, into this whole stack, because as we will discuss, AI is transforming a lot of what PLM vendors are doing, but more generally what software and software-as-a-service companies are doing.

There are a lot of opportunities to make the solutions better for our customers with AI, with GenAI, with agents. That’s my job: to bring the existing stack—the whole knowledge and intelligence packaged into our solution—together with AI and make our software AI-usable.

Conor Doherty: You mentioned retail specifically. When you’re selling the concept of product lifecycle management, what are the problems people are trying to solve, or how do you explain to them what the problems are exactly?

Rupert Schiessl: As the name says, you’re managing the whole lifecycle of a product—from its design and creation, and even before, “Should I create a product?”—to the distribution point, or even the recycling or the point where products become waste. There are a lot of different things happening during this whole cycle: a lot of people working together, a lot of customers, vendors, suppliers coming together, and our software has the mission to coordinate all this. Especially for big companies with a lot of products—thousands, tens of thousands, or companies like Walmart, hundreds of thousands—you can’t do that manually with an Excel sheet.

So you have to create processes, organize these processes, create access rights, and bring all these people together—the designers with the product developers, the sourcing analysts, the supply chain managers—to make them work together on this complex process in a coordinated and collaborative way.

Conor Doherty: Joannes, would you add anything to the definition of the problem statement of PLM in retail?

Joannes Vermorel: For retail, it makes a lot of sense to present it that way. PLM is a huge market, and depending on the vertical it can mean very different things. For example, aerospace: every single product is potentially a terabyte of information, with thousands of experiments done to prove that the thing is secure in every flight condition possible. You have all the records from experiments, all the proof, even simulations done in computers to prove your design is safe, sound, compliant, and everything. The thing is extremely recursive: a module has a list of submodules with submodules, and each has compliance proof and engineering. Those things are extremely long-lived, for decades.

At the other end of the spectrum, the challenge is different: the knowledge is relatively thin per product. For a retailer—even Walmart—I doubt that for most products they have more than a few dozen pages of documentation. Maybe some media assets, but overall it’s relatively shallow, and tens of thousands of products keep rotating. So the challenge isn’t a product with an immensely complex lifecycle spanning decades; it’s tens of thousands of products rotating quickly, and you want to keep things extremely agile. The same product can be sourced from half a dozen companies; you may want to switch source and ensure continuity even if you keep the product sourced from somebody else.

My take is that PLM is very much vertical-dependent, and the challenge for retail is managing assortment complexity through the roof—Walmart is clearly in the hundreds of thousands of products if we look at their portfolio.

Rupert Schiessl: I very much agree on the vertical. That’s really our approach within Bamboo Rose: very retail-focused, very strong on fashion, stronger and stronger on food where a lot of ingredients and legal constraints come in, and then general merchandise. Within fashion, it’s not aeronautics, but products can get complex: different sizes, different components. Apparel, shoes, or sports products can get quite intricate and require collaborative design and all the different steps that we merge within our software.

Conor Doherty: Before we get into how AI is disrupting PLM, historically how have companies in retail managed the problems you just described? What worked with that approach, and what was not so great?

Rupert Schiessl: Historically, the intuitive reaction to a complex process is to break it down into many processes—what became silos. That’s what has happened within PLM over the last 20–30 years. Within PLM you will have planning, design, develop, source, purchase orders, logistics, and so on. That’s typically how most software vendors in the PLM market work today: different modules, customers buy one or the other, modules work together, and different teams use the modules. What worked: it is a good approach for complex processes because the process is predefined, and once the solution is set up—sometimes after very long deployment projects—it reproduces the process it was designed for very well.

Now, in recent years with supply chain disruptions and so on, processes and products are changing very often, very fast. The processes the software was designed for are no longer the processes in place, and siloed, predefined modules are difficult to adapt or may become obsolete because they were designed for processes that aren’t there anymore. This is one of the best opportunities we see within Bamboo Rose: bringing AI in as a solution to adapt processes faster, disrupt silos, make elements speak to each other, share information upstream and downstream, and make processes more flexible.

Conor Doherty: Joannes, you’re a huge fan of silos in supply chain, right?

Joannes Vermorel: Huge fan. Silos define the boundaries of what you can optimize. When you have silos, it means entire classes of optimization or pathways aren’t even open to you. If you want to keep things manageable with a high degree of division of labor across dozens of people, you kind of need those silos; the alternative—everybody speaking to everybody—doesn’t scale beyond a team of five or maybe ten.

But with more modern technologies, there is an opportunity to rethink support for decision, decision automation, or plain automation. Many steps were extremely mundane but time-consuming, and there was no way to automate them. By that I mean everything involving text until very recently—like picking a label that would fit nicely within 80 characters for in-store tags. You had to carefully pick a name that fits the limit. Not massive value-add, but there was no alternative, and you burned hundreds or thousands of man-hours per year on a mundane problem. Now, a lot of that is gone, and people can focus on the more strategic problems.

Rupert Schiessl: We have a very similar use case with our customers around tariffs—selecting the right HTS number in the US. Many of our customers are US-based. That’s the tariff number you use to import a product into the US; it derives from the global Harmonized System, which still exists. They have to select the right number based on the product characteristics, and they have some flexibility to choose a better duty rate when different options are possible. Up to today that was, and still is, a fully manual process. Now AI can come in, read the most recent regulations, understand what the HTS numbers are conceived for, and bring this together. An entire basic process is completely replaced thanks to GenAI, which wasn’t possible before.

Joannes Vermorel: That’s called zero-shot classification. The technical term means you’re building a machine learning classifier without any prior examples. You have the specification of the law compiled and wrapped as part of the prompt, then a reasonable pre-processing to markdown-ify PDFs and whatever sources you have about the products. Then your classifier—an LLM—operates without specific training.

The concept is interesting with GenAI: you can suddenly build classifiers without collecting a database of examples first, which is what was done five years ago. Back then people would say, “We’ll use a random forest: first manually classify, let’s say, 1,000 examples, and then use this classifier to classify the rest.” But those 1,000 examples were costly and slow. The beauty with GenAI is that you can have zero-shot classification: spell out the rules and logic—which can be fuzzy and judgmental—and then have a halfway decent classifier that works without prior data.

That unlocks many use cases that previously weren’t economically viable. You could build your database for a thousand products for tariffs, but tariffs evolve rapidly, and it would devolve into, “Let’s do it manually anyway; that’s faster.” So yes, absolutely.
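The zero-shot recipe Vermorel describes—rules in the prompt, no labeled examples—can be sketched in a few lines. The `call_llm` parameter below is a hypothetical stand-in for whatever LLM endpoint you use; the prompt wording and the fake model are illustrative only, not a real Bamboo Rose or Lokad implementation.

```python
# Minimal sketch of zero-shot HTS classification: the regulation text plus the
# product description form the prompt; no training examples are collected.

def build_prompt(regulation_excerpt: str, product_description: str) -> str:
    """Assemble a zero-shot classification prompt: rules + item, no examples."""
    return (
        "You are a customs classification assistant.\n"
        "Using ONLY the tariff rules below, pick the single best HTS code.\n\n"
        f"--- TARIFF RULES ---\n{regulation_excerpt}\n\n"
        f"--- PRODUCT ---\n{product_description}\n\n"
        "Answer with the HTS code followed by a one-sentence justification."
    )

def classify(product_description: str, regulation_excerpt: str, call_llm) -> str:
    """Zero-shot classifier: the LLM is the model, the prompt is the 'training'."""
    return call_llm(build_prompt(regulation_excerpt, product_description))

# Usage with a trivial fake LLM, just to show the flow:
fake_llm = lambda prompt: "6109.10.00 - knit cotton t-shirt"
answer = classify("Men's 100% cotton knit t-shirt", "Chapter 61: knit apparel ...", fake_llm)
print(answer)
```

When the regulation changes, only `regulation_excerpt` is swapped out—exactly the economics shift described above: maintaining rules and context rather than re-labeling a training set.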

Conor Doherty: The example of tariffs is easy to grasp because it’s immediate. But the underlying logic—introducing AI into the workflow—exists independent of an enormous macro event. If you’re talking about maximizing the return on an investment for a decision, that’s true with fluctuating tariffs and without them. Rupert, how do you explain the value proposition of bringing AI into PLM in general?

Rupert Schiessl: There are probably two classes. One is the tariffs example: we keep the same process, but as Joannes explained, we can adapt the data and work more easily without collecting data sources, because we have pre-trained models able to understand data sources like humans do. For the tariff use case, we just push the PDF with the 4,000 pages of regulation—if it changes every day, just upload the new version—and the model can read it, which no human would be capable of doing. That’s transformative not only for tariffs, but for so many unstructured data sources: PDFs, images, designs, updated all the time.

Designers build Adobe Illustrator SVG files and change them—moving buttons on shoulders a little to the left creates a new version that has to be seen and reviewed by a product developer. A lot of PDFs come in from suppliers when they answer RFPs and RFIs; they don’t want to fill out forms, they want to send the information already available: catalogs, brochures. Up to today, sourcing managers had to review all this. Now that’s replaceable by GenAI.

The second transformation is the organization—the way processes and decisions are made within the software. Now we’re able to adapt within the different silos and modules to what happened before. Let’s say a supplier goes bankrupt or can’t deliver anymore: you have to adapt the plan or your assortment, or adapt the design—maybe it’s a button supplier or a specific material. This whole process took a lot of time; maybe in fashion it just wasn’t done because the lifecycle is too short. Now that is possible with AI: you can monitor the database all the time, have processes that adapt, automatically replan, recreate an assortment, change pricing, change designs, and interconnect the process dynamically.

Joannes Vermorel: LLMs aren’t super great at conducting very structured activities. If you want to go through a list of 10,000 products, you need a loop, and it’s going to be slow with an LLM. LLMs are good whenever text is involved. For example, pricing optimization: an LLM is suitable to help you write the code for a pricing strategy, but not to input characteristics and say, “Give me the price.” It’s not a good tool for that.

What I see down the road is agent-to-agent. Google just released a protocol specification for agent-to-agent a few weeks ago. The idea is that, just like companies expose a website, they could publicly expose an agent. What you feed the agent is whatever you’re willing to share with the greater world. If you’re a supplier, you’d have your catalog readily available; this agent could be queried by your clients to get information, and you maintain the database feeding the agent. Your clients could query these agents—designed for other agents—removing the problem of going through PDFs and media not super-suited for LLM processing. Whenever you have PDFs, you have to convert to text first; this removes layers of hurdles.
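The supplier-side agent Vermorel sketches can be illustrated with a toy handler. The request/response shape below is an assumption for illustration; the real agent-to-agent protocol (e.g., Google's A2A) defines richer message formats, but the core idea is the same: the supplier decides what is shareable, and client agents query structured data instead of parsing PDFs.

```python
# Toy supplier-side "catalog agent": answers structured queries against
# the data the supplier has explicitly chosen to expose. All SKUs and
# fields are made up for the example.

SHAREABLE_CATALOG = [  # only fields the supplier chose to publish
    {"sku": "BTN-204", "name": "4-hole shell button", "moq": 5000, "lead_days": 21},
    {"sku": "ZIP-110", "name": "10cm nylon zipper", "moq": 2000, "lead_days": 14},
]

def catalog_agent(query: dict) -> dict:
    """Answer a client agent's query using the shareable catalog only."""
    if query.get("action") == "search":
        term = query.get("term", "").lower()
        hits = [item for item in SHAREABLE_CATALOG if term in item["name"].lower()]
        return {"status": "ok", "results": hits}
    return {"status": "error", "reason": "unsupported action"}

# A client agent asking about zippers gets structured data back, no PDF parsing:
print(catalog_agent({"action": "search", "term": "zipper"}))
```

Because the agent only ever reads `SHAREABLE_CATALOG`, the governance question raised next—what an agent may share—reduces to curating that dataset, the database the supplier maintains to feed the agent.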

Rupert Schiessl: That’s definitely interesting: compliance of different agent networks of customers and suppliers. What’s very related—and already a question we’re getting—is: “I’m the supplier and I share my data with you, the customer. How do I make sure you won’t take all the information? How do I ensure you take only what I want to share?” That will become one of the most important issues in the next years. How can we control what the agents share? How can we make sure they aren’t making errors? Agents will speak to agents, that’s for sure, but how can we control what data goes out, especially out of the company? There will be huge opportunities, huge challenges, and probably huge incidents in these fields—hopefully not involving either of our companies.

Joannes Vermorel: There are many IT problems right now—ransomware, for example—much more critical and dire. My belief: for most companies, data is not nearly as critical as people think, unless you’re ASML with super-advanced physical processes, or SpaceX with rocket engines—then you have extremely sensitive assets. But 99% of companies have very little data that is truly critical. The data that is kind of critical is more like a liability than an asset. If you’re a retailer, you probably don’t want personal information of your clients; it’s a liability. If you leak it, it’s a major press problem; it won’t fundamentally boost your business.

So, the product catalog—anybody can scrape it from e-commerce. There is value in having it internally well organized; I’m not saying otherwise. I’m saying if this data is shared a little too much with peers, it’s not the end of the world. Data you don’t want to share—personal data from your clients—is a liability. At Lokad, for almost all our clients, we make sure we have no personal data whatsoever, because it’s a liability. Even in the event where Lokad would be breached—which has never happened but could—the idea is we only have information that wouldn’t be damaging from a PR point of view: commercial information, list of products, list of flows, stock levels. You don’t want your stock levels shared with competitors, but if they were published on Reddit, it wouldn’t be the end of the world for your company.

Conor Doherty: It might not be the end of the world, but in “The End of Software as We Know It,” that LinkedIn article you wrote, Rupert, you wrote that AI kills the logic layer. What was the key argument you were advancing, and how does AI kill the logic layer of PLM?

Rupert Schiessl: The title is a bit provocative, but it’s probably what’s happening. I’ve been in software for 17 years; as a software vendor, the thing we’re most proud of is the logic and business logic we build—reproducing customers’ business processes in an elegant, generic way so you can resolve many problems with a minimum of code. On top of that, you have a beautifully designed user interface to interact with this logic. On the bottom of the stack is data, interconnected to reflect the processes and business logic of the customers using the software. That was the traditional way.

Now, as Joannes said, LLMs aren’t going to produce pricing, optimization, or forecasting algorithms—those are mathematically optimized—but they are able to produce code and describe logical flows. These algorithms can adapt the logic and processes predefined by software vendors in a more dynamic, interactive, context-aware way—based on the current economic environment, which supplier is responding, or which user is using the software. The process might not be the same and can be adapted dynamically by the LLM, without a programmer anticipating every process. That’s a major change.

It also impacts the user interface. UIs were designed in a solid, stable way, like logic, to anticipate cases and user requirements. Designers aren’t seeing the same thing as finance. Now, with code-writing agents and logic agents—these GenAI algorithms—you can generate the UI depending on what comes out of the database, what’s the process, and who is the person in front of me. Put this together and logic is sort of disappearing; user interfaces are disappearing. The only thing that remains is data and the business logic in the data. The value of the data layer increases. We have to improve the ways we protect this data and create metadata layers that explain our data so the LLM understands what’s happening and can make it accessible to internal agents and to agents from other companies—customers and suppliers.

Joannes Vermorel: If you look at systems of record—enterprise software designed to have an electronic counterpart of something in your company: products and their lifecycles, stock levels, orders, etc.—the vast majority of the logic exists to support the division of labor. If you manage to automate, you don’t have to manage this division of labor. Suddenly you don’t need layers of UIs, logic, workflows, and whatnot for something automated. In many pieces of enterprise software, division of labor is way beyond 90% of the logic.

You create stages, workflows, supervision, KPIs, calls to action, etc. If you’re able to automate, do you need those granular stages, workflows, KPIs? All of that disappears. Maybe you collapse it into a technical indicator saying whether your automation is running in good conditions—green or not—and that’s it. To understand the effect of automation, think of anti-spam. In the 90s, corporate anti-spam software was extremely complex: tons of rules, supervision, hundreds of parameters. Nowadays, what happens with your anti-spam? Nothing. It exists, it works, you don’t care. There’s a spam folder you never look at. Logic disappears by unlocking automation for things that were not possible before.

Yes, with LLMs you can autocompose SQL queries to produce reports, saving some logic, but that’s small compared to removing artifacts that were only there to cope with having many people. If you have lots of people, you need admins with different rights, managers, visibility scopes, etc. Enterprise vendors love to introduce features catering to many people, but if you remove the need to have many people, most usefulness of those features is gone.

Rupert Schiessl: What’s also changing is that the whole process becomes more input-oriented. Instead of setting up the whole process, workflows, and shared responsibilities, you set boundaries, limits, constraints, and objectives. Then the new type of software auto-organizes—like a human organization would—with an orchestrator and teams working together because you told them to. That’s necessary in human organizations: rules and hierarchies.

That’s the main movement: changing how we software vendors have to work. We have to build software that allows customers to set up rules and organizations for these automation tools, instead of building the whole processes.

Conor Doherty: In the article, you talked about AI and that new rules or goal-oriented procedures will emerge—that the AI will teach itself. Could you explain a little more?

Rupert Schiessl: Instead of anticipating every step, you set up an organization of different agents working together—LLM-based AI tools—describing how they work together, who can talk to whom, boundaries of what they can do, what tools they can access, internal or not, their user rights, and what data they can access. Once that’s set up, you let them act—still very basic tasks for the moment, but getting more complex—and let them work together to resolve the tasks or objectives you set at the beginning.
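The configuration Schiessl describes—declaring who exists, what each agent may access, and who may talk to whom, rather than coding every process step—can be sketched as plain data. All names and fields below are illustrative, not a real Bamboo Rose schema.

```python
# Sketch of an agent-network configuration: boundaries, tools, and data
# rights are declared up front; the agents then self-organize within them.

from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    name: str
    tools: list = field(default_factory=list)        # tools the agent may call
    data_scopes: list = field(default_factory=list)  # data it may read
    may_talk_to: list = field(default_factory=list)  # allowed peer agents

network = [
    AgentSpec("tariff_agent", tools=["hts_classifier"],
              data_scopes=["products", "regulations"],
              may_talk_to=["orchestrator"]),
    AgentSpec("sourcing_agent", tools=["supplier_lookup"],
              data_scopes=["suppliers"],
              may_talk_to=["orchestrator", "tariff_agent"]),
]

def can_communicate(sender: AgentSpec, receiver: AgentSpec) -> bool:
    """Enforce the declared boundaries before any message passes."""
    return receiver.name in sender.may_talk_to

print(can_communicate(network[1], network[0]))  # sourcing -> tariff: True
```

The point of the sketch: the vendor ships the enforcement logic (`can_communicate` and its siblings), while the customer supplies the declarations—objectives and constraints in, process out.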

Conor Doherty: We contrasted the current vision of PLM with the old world of people-centric silos. How do you break those silos and get closer to what you just described?

Rupert Schiessl: At Bamboo Rose, we have six elements—the six silos from planning to distribution—and on top we bring what we call decision intelligence. We could call it AI; it’s the way we make decisions that are upstream- and downstream-compatible, collaboratively made across silos. Within decision intelligence we allow customers to set up rules and objectives they want to fix, and then the different processes run automatically.

For the moment we’re doing that on specific tasks. We talked about finding the right tariff number: before, a process with a few steps integrated; now you just say, “Find the right tariff number for this product,” and the agent runs through the steps, fetches the necessary product information, fetches the most recent legal information, runs the classifier to put it together, validates and explains it, and then populates the right HTS number into the field for the user with an explanation.

That moves toward more complex processes like tech-pack creation—a very important part of PLM: technically describing the bill of materials and points of measurement in fashion and apparel. Sometimes 50–100 pages for a single product: all the compliance and legal constraints, organic cotton or not, all sizes, then you send that to a supplier. Today, when customers want a new pullover, they either copy from a similar product and adapt it—which still takes time—or create a new product and start from scratch. That’s very time-consuming: all those 50–100 pages of technical information must be produced.

An agent network can go into recent products—some customers have hundreds of thousands of articles in the database—find similar product information, fetch the most recent legal constraints, and bring it together to write the tech pack. That’s automation of a complex process done fully or almost fully by an agent network—by AI. Our North Star is: the customer wants to bring products to market and to distribution centers. What happens between all the steps is work that has to be done, but work customers would delegate to an AI if it were able. The end vision of PLM is: “Generate the collection for next year and bring it to market,” and it works in an automated way. Maybe there will be startups building the one-employee unicorn—maybe fashion distribution can be one of these.

Conor Doherty: Joannes, any thoughts?

Joannes Vermorel: For applicability, we must distinguish composing an assortment—what garments, size ranges, colors, patterns, styles—from the documentation around it. LLMs aren’t well suited to compose assortments: you can’t feed a catalog of 20,000 variants into an LLM and expect meaningful balancing of color depth across product types. You need other classes of algorithms—classic or stochastic optimization—for composing the assortment itself. Once you have the high-level description of your ideal assortment, then for every product starts the process of gathering requirements and putting together the documents. That’s where LLMs really shine.

LLMs are super good at dealing with unstructured stuff—mostly text, a little bit images—but not at calculations. They can be used for calculation by generating code that you then execute—different layers. So LLMs can help you write the code to analyze and rebalance the assortment, and then help generate the thousands of pages of documentation across many products—a huge timesaver. Think of a very smart pre-templating logic that adapts the easy things out of the box; then humans step in for adjustments where the model lacks information. Sometimes making all relevant information available to the model is more costly than a human step, so there are trade-offs. But for the bulk of the work and automating most of the exchange between what the company wants and what the supplier can propose, there is massive paperwork that LLMs can automate so people are on the same page.

Rupert Schiessl: Thanks for precise wording. LLMs should be seen as orchestrators for other tools and algorithms already there. Within Bamboo Rose or within PLMs there are algorithms to create a BOM, analyze or search within a BOM. LLMs aren’t built for that, but they can know these algorithms exist, decide when to use them as tools, and surface information to bring it to the next step of the process.

Conor Doherty: If I start summarizing across all stages of PLM, Rupert, where do you see AI—LLMs, generative AI, or any algorithms—having the most significant positive impact?

Rupert Schiessl: I’d say three levels. First, defensive AI: take a process and automate the same process with AI, making it faster, more robust, saving time for employees—no change to the overall processes, just faster, more accurate, higher performance. Second, extend: transform processes, adapt them dynamically as we’ve described—merge processes and automate them, redesign processes using AI. More complex to set up, but more value.

Third, append: create something new thanks to AI—wasn’t possible before—on top of your process. Companies can invent new products, offers, services with AI. One can think immediately about pharmaceuticals: their whole transformation and product creation process can now be automated, allowing creation of new products with GenAI on much shorter timelines. For us at Bamboo Rose, we have a lot of automation features in the first level to roll out with immediate time and cost savings. We have processes that can be merged and transformed—tech-pack creation is a typical multi-people process. Another we rolled out recently: giving fashion designers immediate access to cost estimation. Before, designers sent to product developers for cost, got it back, made changes, sent it back. Now these processes are connected, and we recalculate the cost estimation based on design changes.

Of course, the last level—append—lets us transform how customers design and roll out products, and maybe gives them the possibility to create new products or better services.

Conor Doherty: Thank you, Rupert. Joannes, how is AI helping Lokad’s clients?

Joannes Vermorel: We have a slightly different take because the full robotization of supply chain decisions started more than a decade ago for us, and most of it doesn’t involve GenAI at all. Predictive optimization for flows of physical goods—that’s what we’re doing: deciding when to buy, where to send, where to stock, what to produce, where to dispatch, and adjusting prices. Those decisions are highly quantitative and have depended on numerical recipes for over a decade. They were already robotized without generative AI.

GenAI makes it easier to maintain code, and it facilitates tasks like cleaning up the product catalog when people don’t have a nicely organized PLM and the data is a bit of a mess. But for us, full automation started long ago. With GenAI now, the things before and after the quantitative decisions can be automated. Before: say the MOQ (minimum order quantity) you have from one supplier is out of date and you want fresh information. In the past, the process was manual—writing emails or using templates to collect information—and the replies aren’t super clean. With LLMs, that can be completely automated.
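The MOQ-refresh step Vermorel describes boils down to pulling a structured number out of a messy free-text supplier reply. In production an LLM would do the extraction; the regex below is only a stand-in to make the shape of the task concrete (function name and sample reply are hypothetical):

```python
import re

def extract_moq(reply_text: str):
    """Pull an MOQ figure out of a free-text supplier reply.

    A regex stands in for the LLM extraction step; real replies are far
    messier and would be sent to a model that returns structured output.
    Returns None when no figure is found.
    """
    m = re.search(r"(?:MOQ|minimum order)\D*([\d,]+)", reply_text, re.IGNORECASE)
    return int(m.group(1).replace(",", "")) if m else None

reply = "Hello, our current minimum order is 2,500 units per colorway."
moq = extract_moq(reply)  # 2500
```

The surrounding automation—sending the request email, chasing non-answers, writing the fresh MOQ back into the master data—is where the LLM earns its keep, since none of those steps fit a fixed pattern.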

After the decision is made: suppose you decide to attempt expediting an order already placed with a supplier. That means engaging with the supplier, and maybe some back and forth—“Yes, we can, but there’s an overhead.” The answer isn’t simple or binary. Post-process interactions with third parties can be automated with GenAI, even if the core decision—identifying candidates for expediting—does not involve GenAI and is a very analytical calculation.
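The "very analytical calculation" that precedes the supplier conversation can be illustrated with a toy rule: flag any open order whose stock runs out before the order arrives, plus a safety margin. This is a deliberately simplified stand-in, not Lokad's actual numerical recipe; all names and figures are invented.

```python
from dataclasses import dataclass

@dataclass
class OpenOrder:
    sku: str
    days_until_arrival: int
    days_of_stock_left: int

def expedite_candidates(orders, margin_days: int = 3):
    """Flag orders where stock will run out before the goods arrive,
    with a safety margin. Output feeds the (LLM-handled) supplier
    negotiation that follows."""
    return [o.sku for o in orders
            if o.days_of_stock_left + margin_days < o.days_until_arrival]

orders = [
    OpenOrder("SKU-A", days_until_arrival=20, days_of_stock_left=5),   # expedite
    OpenOrder("SKU-B", days_until_arrival=7,  days_of_stock_left=30),  # fine
]
candidates = expedite_candidates(orders)  # ["SKU-A"]
```

The split matches Vermorel's point: the candidate list comes from deterministic analytics, while the messy, non-binary conversation with the supplier is what the LLM automates.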

Conor Doherty: I’m mindful of time. As a closing thought for non-technical people: how do you present the value proposition of introducing AI into a PLM workflow so that 100% of listeners can grasp the value?

Rupert Schiessl: For our customers, the PLM process is often time-consuming, painful at different steps, and not optimal. Customers lose opportunities because some suppliers aren’t there anymore and can’t ship products. There are many pain points across the whole product journey. AI won’t resolve everything tomorrow, but it’s a huge opportunity to fix these pain points one by one: deliver more accuracy, better product availability, better product quality, better product compliance, and a better relationship with suppliers, because retailers will be less demanding of suppliers’ time.

There are benefits for all parties across the whole chain, and they arrive more quickly than before. Everything will run more smoothly. It will take time; our job as a software vendor is to bring this technology to our customers within the stack they already know—the stack that describes how the product lifecycle should be managed. Our customers will benefit from the cost savings, time savings, and better product quality they can build with our software.

Conor Doherty: Joannes, before we close, anything to add?

Joannes Vermorel: AI is coming for probably 90% of back-office white-collar workers. Many mundane operations in PLM will be automated away in the next decade. If a task doesn’t bring significant value-add—if it’s mostly shuffling information from one format to another—it will be automated in the coming decades. That will be a major transformation, because for those companies we’re talking about hordes of white-collar workers being automated away. That’s just the one thought for today.

Conor Doherty: Well, gentlemen, I don’t have any more questions. Rupert, thank you very much for your time and for joining us in the studio for the conversation. And to the rest of you, I say: get back to work.