Summary
A focused session on why corporate data science has often underdelivered in supply chains and how to fix it fast. We will explore the real purpose of data science teams, how to use them effectively, and how to boost their impact starting today.
Full transcript
Conor Doherty: This is Supply Chain Breakdown, and today we’ll be breaking down why we believe corporate data science has failed.
Another uncontroversial take here on Lokad. My name’s Conor; I’m Communications Director here at Lokad.
And to my left, as always, Lokad founder Joannes Vermorel. Now, before we get into the discussion, comment down below: do you immediately disagree with our take? Has corporate data science, overall, failed?
We’ll find out over the course of the discussion. Get your questions and comments in. Alexey is running the live chat today—say hello, Alexey.
And with that, Joannes, let’s get straight into it. So, top of the bill: corporate—sorry—data science has failed. This is a provocative statement. It’s very on‑brand, but is it an exaggeration?
Joannes Vermorel: From my own viewpoint, it is not. I have had the chance to discuss with, I think, about 200 companies—what they are doing supply‑chain‑wise and data‑wise—and so far, I would say the quasi‑totality of them do not have a single resounding success with data science.
More specifically, I’m talking about non‑tech companies—putting aside the big tech like Microsoft, Amazon, etc. I mean non‑tech companies operating sizable supply chains. And when I say “data science failed,” I mean the corporate team that operates under the name of the data science team.
Has this team delivered something that genuinely contributes to the profitability of the company? A litmus test for that is: if those companies were to liquidate this team overnight, would their profitability be substantially harmed—at the extreme, would they even go bankrupt?
If data science were really delivering significant value, then removing the team should deeply harm the company. My take is that for literally all the companies I’ve seen, we could remove the data science team and it would be barely an inconvenience.
Conor Doherty: Okay—but define the problem as you see it. How do you see a data science division? What do you see as the mandate—the remit—that you think they are failing at?
Joannes Vermorel: That’s the problem: data science is a means, not an end. If we take an analogy—end of the 19th century—this great new thing, electricity. Electricity is fantastic; you can do so many things with it. It’s an invention that will define the next century.
Now imagine a company that creates an “electricity department.” You can see that introducing an electricity department is wrong—not because electricity is bad, but because the electricity department as such is bad. That’s my point with data science.
I’m not saying that data science, as a sub‑field of computer science, has failed. Incredible things are being done with models, techniques, and perspectives to deal with data—this is a rocket ship, and GenAI is just the latest iteration of this rocket ship.
However, if you introduce a data science division, you end up with the same sort of nonsense as an electricity division in your company. This is a means, not an end. You don’t want electricity for the sake of electricity; you want what you can do with it—and it’s going to be extremely diffused.
It will have a completely different meaning for factories versus finance. Factory people would say, “Electricity is great; we can have engines moving heavy things.” Finance would say, “If we have a very fancy computing machine—that’s IBM—maybe it can do the arithmetic for us.” It’s electricity again, but completely different.
Same with data science. Corporate data science fails because if you isolate it in its own division, you end up with something that never really manages to deliver anything truly impactful—because they never really get the mandate to do that. They are not inside a division that already has a mission.
They are not inside finance; they are not inside marketing. And the cases where it works are when people operate within an existing department with a clear mandate.
Conor Doherty: There are two ways to frame this: negative and positive. Let’s start with what’s wrong, specifically. How do you currently see data science teams being deployed—and why is that wrong?
Joannes Vermorel: The archetype is: top management sees in the press that data science is a thing. It gets people very excited: “Get me some AI!” The board gets together and says, “We can’t miss out. We need this thing.”
They hire a bunch of very smart, technically inclined people, and those people say, “Look at all those shiny open‑source projects: pandas, PyTorch—you name it.” There are all those shiny toys. Let’s bring those open‑source projects in; they’re incredibly high quality and bring sophistication to the table.
Those tools have been instrumental in letting big tech achieve resounding successes. So people say, “We have all the ingredients: incredible open‑source projects and smart people—we’re going to emulate big tech.” The consequence: it doesn’t work out.
People build stuff. The typical pattern is an extremely fancy, promising prototype—in just a couple of weeks. Three weeks down the road: bam, very fancy prototype, impressive. A year later, it’s still not in production. Five years down the road, there is still nothing production‑grade out of this department—and certainly nothing game‑changing.
Yes, left and right you will have minor things being data‑driven, but in French you would say it’s the fifth wheel of the car: something not critically important.
Conor Doherty: You’ve mentioned “production,” and I do want to get back to pilot purgatory. But you also mentioned purpose, goal, direction, function—words that get into teleology. So again, what do you see as the goal of a data science team—why does it exist, if it exists at all?
Joannes Vermorel: That’s the problem—it exists for the same purpose an “electricity division” would exist in your company. Frame it that way and you can see the problem: it sounds off because it’s a means, not an end—even if you are an electricity provider.
Even EDF, a national electricity provider in France, doesn’t have an “electricity division.” The entire business is electricity. You need to think of what you want to achieve. For example, if marketing says, “We want to optimize spending for Google Ads,” then do Google Ads spending optimization.
They don’t “do data science”; they do that optimization. It turns out there is a lot of data involved. As soon as you embrace an operational perspective, it ceases to be called data science. That’s why I say corporate data science has failed: anytime I see a team still called “data science,” it reflects people who are lost. They have no purpose—and usually there is nothing of significance in production.
They will have a few widgets here and there, but if they’ve been around for a decade and we look outside big tech, I’ve never seen those teams holding the fate of the company in their hands. At best, it’s extremely secondary.
Conor Doherty: You’ve audited tech companies and been around smaller companies—not FAANG. You’ve presumably seen successful data science teams. What differentiates the good ones from corporate data science failing overall?
Joannes Vermorel: I’ve audited over 100 startups. It’s completely different when I audit a company that does fraud detection.
Obviously, that’s the goal: detecting fraud. Then they segment the types of fraud; they might have a team dealing with pig‑butchering scams, with specific heuristics and algorithms to detect them, address them, and report effectively to relevant authorities.
Data science in those tech companies works because it’s literally their raison d’être. They start with a problem they want to solve—say fraud detection, or anomaly detection in mechanical engineering. In order to address the problem, at some point you invoke sophisticated tools.
They don’t start by saying, “We have this nifty open‑source package; we should use it.” They start with a big, under‑addressed problem to solve. They see simple solutions fail, and after validating that, they bring out the big guns to crack it.
Often, they realize the problem is unsolved because everything available is inadequate. Open source is fantastic, but inadequate for their specific problem. Thus they roll their own technological solution and, as a by‑product, open‑source components that are sub‑segments of their solution.
Exactly like big tech does with grand problems—then they decide to open‑source parts of their solution. For example, Airflow—which came out of Airbnb—is a platform for scheduling and orchestrating workflows with dependencies; it was developed internally, and at some point it was just a piece of a bigger solution that got open‑sourced.
Typically, that’s the journey for those technologies. What’s wrong is thinking you can just grab those pieces and reuse them in a large company even if your journey is completely different. Those technological pieces are excellent but come from specific journeys.
Most of the time they won’t be appropriate for a large corporation whose problems are completely different in nature from those of big tech companies such as Meta. The vast majority of companies on Earth have nothing to do with Meta’s problems.
Conor Doherty: Anyone familiar with Lokad’s content knows you often disagree with how budget is spent in corporations. Is there something uniquely pernicious—uniquely negative—about how money is spent on data science, beyond mere waste?
Joannes Vermorel: No—again, lack of consequence. Due to the stereotype of data science teams being extremely isolated, there are almost no second‑order consequences. They don’t pollute the rest of the company; the damage is largely restricted to the money wasted on the team.
They’re so isolated they don’t even consume much bandwidth from top management; maybe one or two meetings a year. The good news is it’s a contained problem: wasteful spending that doesn’t go beyond that.
Conor Doherty: They’re so isolated—what do you mean?
Joannes Vermorel: You could imagine other phenomena—virtue‑signaling trends—that keep everybody busy, taxing mental bandwidth across the executive team. That’s very detrimental. Data science is not like that. It’s super isolated and not a cognitive load for top management.
For me, this is a long tradition; it repeats with changing buzzwords. Twenty‑five years ago, the keyword was “data mining.” Companies created data‑mining teams. Then “digitalization” teams, “innovation” teams, now “data science” teams—and soon, “generative AI” teams.
If you have a corporate team named after a means—like “electricity”—instead of an end—like “fraud detection”—this is wrong. You should not have a team named after a means; it should be named after the end.
Naming matters: it defines how people approach their jobs, the sort of people you hire, and how they craft their roadmap. If you have a “data science” team, they will come up with—guess what—a data science roadmap. If you say “fraud detection team,” the roadmap becomes, “How do we eliminate those frauds?”
At Lokad, we actually stopped hiring “data scientists” a decade ago. We now hire “supply chain scientists.” It may sound like a small twist, but at hiring we’re literally telling candidates: if you join us, your mission will be to get the supply chains of our clients to run as smoothly as humanly possible.
We’ll give you the best tools and training; you won’t be on your own. But ultimately, if you can do it with basic arithmetic and a few heuristics—great. We’re not here to publish papers on fancy algorithms. If an incredibly simple heuristic solves the problem, good job; maintenance becomes easier.
In contrast, when we hired “data scientists,” people would object: “We can’t solve this with a super‑dumb method—it’s not state‑of‑the‑art. I need deep learning for my résumé.” For us: no, you don’t. If something vastly simpler than deep learning solves it, you don’t need deep learning.
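To make the “simple heuristic first” point concrete, here is an illustrative sketch (not Lokad’s method; the demand series, window size, and error metric are all invented for the example): a moving‑average baseline is backtested before anything heavier is ever justified.

```python
# Illustrative only: a trivial moving-average baseline for demand forecasting.
# The methodological point: measure how well a "super-dumb" heuristic does
# before reaching for deep learning. All numbers below are fabricated.

def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def mean_absolute_error(series, window=3):
    """Backtest the baseline over a series; returns the average absolute error."""
    errors = []
    for t in range(window, len(series)):
        forecast = moving_average_forecast(series[:t], window)
        errors.append(abs(series[t] - forecast))
    return sum(errors) / len(errors)

demand = [12, 15, 14, 16, 18, 17, 19, 21, 20, 22]  # weekly units, fabricated
print(round(mean_absolute_error(demand), 2))  # baseline error to beat
```

If a candidate model cannot clearly beat this number on a backtest, the simpler heuristic wins—and maintenance stays cheap.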
Conor Doherty: Before I push on, we are hiring supply chain scientists. If you’re interested, send your CV to our recruitment manager.
Adding some context: sources like Gartner report only 48% of digital initiatives meet or exceed business outcome targets, and multiple surveys show AI/ML projects stall pre‑production. There’s clearly “pilot purgatory.” Not saying it’s a univariate problem—but in that multivariate problem, how much weight do you give to data science divisions in that phenomenon?
Joannes Vermorel: It’s overwhelming. Usually, when you go for a digital initiative that is to digitalize—to put a system of record in place—it works. For example, expenses managed through spreadsheets and emails: you set up an expense management system. Six months later, the app and practices are in place, and it works.
Systems of record are quasi‑zero‑risk initiatives—though sometimes vendors are exceedingly bad. On the contrary, in the realm of systems of intelligence, the quasi‑totality fails. If it originates from a data science team, my experience is that it fails all the time.
At Lokad, we’ve even had clients with side‑by‑side setups: Lokad generating supply‑chain decisions and the internal data science team that had been doing that for half a decade before us—still there, generating things not used, just ignored. Corporate complexity keeps them around.
Data science is extremely useful—like electricity. It’s a versatile tool wherever there are data—and nowadays that’s everywhere. Is it worthy of interest? Absolutely. But it does not warrant its own team. It needs to be in every team where data exist: marketing, finance, production, purchasing, planning, etc.
Conor Doherty: Let’s be constructive. Based on what you said, are you suggesting each team member upskill in data science; or each team should have a data science member; or a central team loans out members on a ticket basis? What world are you advising?
Joannes Vermorel: For every function in the company, there’s potential in leveraging data to accomplish what the function does—better and faster. Take a simpler example than supply chain: marketing spend on Google Ads.
Google Ads are complex: you can bid varying fees for thousands of keywords; track performance, cost per click, cost per outcome. It’s fairly technical. You can build this competency in‑house with true experts, or outsource completely to an agency running the optimization.
Both approaches are valid as long as genuine competency exists somewhere—internally or externally. You need people with true affinity for everything under the data science umbrella. You can’t bypass deep understanding—but it’s seen through the lens of the function you’re optimizing.
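The Google Ads example can be sketched in a few lines. This is a deliberately naive greedy allocation, assuming made‑up CPCs, conversion rates, and per‑keyword spend caps; a real team would face diminishing returns, auctions, and attribution issues. The point is that the optimization belongs to marketing, even though it is data‑heavy.

```python
# Hedged sketch of the kind of optimization a marketing team might own:
# spread a fixed daily budget across keywords by descending return per dollar.
# All figures (CPCs, conversion rates, caps) are invented for illustration.

def allocate_budget(keywords, budget):
    """Greedy allocation: fund the best-ROI keywords first, up to their caps.

    keywords: list of dicts with 'name', 'cpc', 'conv_rate', 'value', 'max_spend'
    Returns {name: dollars to spend}.
    """
    def roi(k):  # expected revenue per dollar spent on this keyword
        return k["conv_rate"] * k["value"] / k["cpc"]

    plan = {}
    remaining = budget
    for k in sorted(keywords, key=roi, reverse=True):
        spend = min(k["max_spend"], remaining)
        if spend > 0:
            plan[k["name"]] = spend
            remaining -= spend
    return plan

keywords = [
    {"name": "shoes",   "cpc": 0.50, "conv_rate": 0.02, "value": 80.0, "max_spend": 300.0},
    {"name": "boots",   "cpc": 0.80, "conv_rate": 0.05, "value": 60.0, "max_spend": 200.0},
    {"name": "sandals", "cpc": 0.40, "conv_rate": 0.01, "value": 50.0, "max_spend": 400.0},
]
print(allocate_budget(keywords, 450.0))
```

Whether this logic lives in‑house or at an agency, the competency has to exist somewhere close to the marketing function.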
Conor Doherty: A question I got privately—devil’s advocate to Joannes: isn’t “failure” too harsh if analytics still inform meetings and reports? If leadership feels better informed, doesn’t that add value?
Joannes Vermorel: That’s exactly the sort of case where data science is isolated and doesn’t have negative consequences—except here it’s worse: you’re distracting executives with feel‑good reports. If what you want is descriptive statistics for top management, that’s the role of BI.
You already have a Business Intelligence team; you don’t need to double that budget for data science. By the way, BI divisions have problems too. Marketing should carry responsibility for crafting their own indicators; they shouldn’t outsource that to data science.
If the only outcome is metrics that make management feel good—best case, it’s redundant with BI. Merge it back into BI; you don’t need a separate division. My criterion is hard: would the company be measurably harmed, financially, if the data science team were gone?
Improving management morale is nice, but I’m concerned with P&L—how many extra dollars we bring to the company. Producing thousands or millions of numbers a day is super easy and entertaining; producing ten numbers a day worth reading is extremely difficult.
Are you producing those ten numbers? My experience: no. They produce walls of metrics, yes—but not the sharply valuable ones which, if removed, would damage the company.
Conor Doherty: A lot of those metrics come from above. Marketing is a department; sales is a department. They aren’t responsible for indicators thrust on them. And if you already have BI producing descriptive statistics…
Joannes Vermorel: Exactly—best case it’s redundant. Merge it into BI; no need for a separate division.
Conor Doherty: One more from a private message: “I can’t just fire my data science team. How can I improve things on day one?” Realistically, what can be done?
Joannes Vermorel: I’ve done this for some fairly large e‑commerce players. I suggested splitting the team and redispatching under other divisions. If you have half a dozen people: put two in finance, two in marketing, two in planning. Tell those managers they’re now in charge of these competent people and their productive use.
If you manage this unit and want to be a hero, convince upper management to split and disperse the competency across divisions. The best approach I’ve seen—but it only works for very large companies—is to transform the central team into coaches and mentors for data science within each department.
Forget about delivering a working prototype for marketing; five of you will coach marketing so they can do interesting things with data; coach sales likewise. This works in companies €5B and above—large enough to support a team just to evangelize. It should be transient—12, 18, 24 months, no more than two years—then dissolve.
Otherwise, you end up with a semi‑hidden bureaucracy wasting money.
Conor Doherty: Questions from the chat. From Neil Knight: Do you think data scientists are ignored because they’re internal—or because they’re not useful? I find consultants are often listened to because they can come to the same conclusions management does (McKinsey, Bain, Accenture, etc.). Your thoughts?
Joannes Vermorel: There’s a French proverb: nobody is a prophet in their own country. Being an outsider helps, but to the credit of consultants, it’s not just that. One thing they get right is going hard on genuine, important problems.
We can disagree on technical skills for resolution, but when it comes to zooming into what matters dearly to top management, they’re good. That’s what data science isn’t doing. Because they’re isolated, they can’t tackle problems with big payback—those require big transformations.
I’ve seen data science teams pick pet projects that are inconsequential. We do a back‑of‑the‑envelope and agree there’s a problem twenty times bigger right next to it. They say, “Yes, but that problem is touchy; we’d need approvals across layers; we might ruffle feathers.”
That’s where good consultants win: they focus on what matters, not pet projects we’re afraid to touch. Even if what they do is crude, they focus on very real things. Data science, being on the side—not part of marketing, finance, etc.—never gets the authority to transform the company to leverage data.
Take fraud detection: when you detect a fraud, you need authority to say, “We will not serve this customer; reject the payment outright.” There will be false positives—honest customers harmed. The question is the balance of false positives and true positives.
If you tell the data science team, “As long as there is even one false positive per year, we can’t put this in production,” it will never go to production. This is a trade‑off. You need authority to say, “Overall it’s very good; the negatives are under control,” and then refine the methods.
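The trade‑off Joannes describes can be made explicit: choosing a fraud‑score threshold is an economic decision, not a quest for zero false positives. The sketch below (scores, labels, and dollar costs all fabricated) picks the threshold minimizing total expected cost given a price for each kind of mistake.

```python
# Minimal sketch of the false-positive / false-negative trade-off.
# Scores, labels, and costs are fabricated; real systems must calibrate these.

def expected_cost(scores, labels, threshold, cost_fp, cost_fn):
    """Total cost of blocking every transaction whose score >= threshold.

    labels: 1 = actual fraud, 0 = honest customer.
    cost_fp: cost of blocking an honest customer (lost sale, goodwill).
    cost_fn: cost of letting a fraud through (chargeback, stolen goods).
    """
    cost = 0.0
    for s, y in zip(scores, labels):
        blocked = s >= threshold
        if blocked and y == 0:
            cost += cost_fp   # false positive
        elif not blocked and y == 1:
            cost += cost_fn   # false negative
    return cost

def best_threshold(scores, labels, cost_fp, cost_fn):
    """Scan candidate thresholds and keep the cheapest one."""
    candidates = sorted(set(scores)) + [1.01]  # 1.01 means "block nothing"
    return min(candidates,
               key=lambda t: expected_cost(scores, labels, t, cost_fp, cost_fn))

scores = [0.05, 0.10, 0.30, 0.55, 0.60, 0.85, 0.90, 0.95]
labels = [0,    0,    0,    1,    0,    1,    1,    1]
print(best_threshold(scores, labels, cost_fp=20.0, cost_fn=200.0))
```

Note that the optimal threshold here knowingly accepts a false positive, because letting fraud through costs ten times more—exactly the kind of call that requires authority over the trade‑off, not just over the model.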
Conor Doherty: Thank you, Joannes. Pushing on to Amarinder: What is your view on the role of the product manager or product scientist? Is this a better means to deliver the end value you’re talking about?
Joannes Vermorel: Product managers are very much in the realm of systems of record. Product management is critical there because the sky is the limit in terms of features; you need a roadmap and triage, and to say “no” to avoid a monster app.
For systems of intelligence—unattended decision‑making processes like spam detection—you have false positives and false negatives to improve. You cannot improve that by just discussing with users or arbitrating features. Another example is search ranking—how do you improve Google’s results page? Very difficult; not about features.
Product management is important, but belongs to systems of record. Data science that’s truly impactful has to be about decisions—thus systems of intelligence. Product management or “management scientists” have a role to play, but it’s minor and from the systems‑of‑record side.
Conor Doherty: Foreshadowing for next week: we’ll dedicate the episode to systems of record, systems of reports, and systems of intelligence. Watch for the promo and attend.
Closing thought, Joannes. A call to action for people who agree with you: what’s my day‑one step—what do I do now to effect change?
Joannes Vermorel: Fundamentally: there is a lot you can do with your data. The vast majority of businesses are under‑using their data. The intuition behind data science comes from a good place: you have so much data not leveraged to make the business better.
At Lokad it’s predictive optimization of supply chains—that’s what we do—but there are many other things. Another good place is “mechanical sympathy”: people who embrace the technicality, so it’s not just educated guesses—people with real affinity for the challenge.
What is not good is taking that and saying, “I need to create a division.” That’s wrong—an anti‑pattern; a lazy way to approach the problem. Think of the “electricity division.” Electricity was going to be super important for all divisions—same for data science.
As a leader, challenge every single division to make the most of the data that exist in the company for their function. Every division leader needs to be responsible for making the most for his or her function. This will require data‑science‑type skills—internal, external, with or without consultants.
It’s a more difficult message because leaders will be challenged on something uncomfortable. Think electricity and factory floors: with light bulbs, you can operate at night. It changes everything. Will you do it? Tons of questions—but you need to figure out the answers within the function.
Bring people in, but you cannot escape the fact that technology transforms what you do in ways so deep that you can’t realistically outsource the problem to another division and call it done.
Conor Doherty: Final choice, Joannes: we can stop now, or answer a question from someone I know is a longtime fan. Let’s go for it. Joshua asks: from your experience running “supply chain as a service,” if most clients still have ERPs, spreadsheets, and policies that quietly make day‑to‑day calls, how much does Lokad really shift the focus of decision‑making? What does it take for the analytical layer to become the decision maker rather than just another adviser?
Joannes Vermorel: At Lokad we stand firm on delivering unattended decisions. We have numerical recipes that generate decisions unattended. As long as we need to tweak numbers, we keep iterating on the recipe so we don’t have to tweak anything; then we do maintenance so it continues running unattended.
It takes months of dual‑run, especially for large companies, to earn trust—production‑grade decisions delivered day after day. As we move on, everyone starts to realize we now have much more interesting questions to ask: for example, what is “quality of service”?
If you tell me quality of service is “service level,” I disagree—it’s not service level. For DoD, that would be “operational readiness.” For a given budget, what spectrum of operations is feasible versus unfeasible? These are difficult questions.
When we roll out this approach, people have time to think deeper about these questions. Crunching the data brings many elements to the discussion: entire new classes of questions can find answers in data.
Unlike a data science team trying to get answers on its own, here it’s operation‑driven: “We want to do this; we have a bottleneck.” Look at the data; analyze the bottleneck; get a better allocation; then the next bottleneck. If you isolate data science, you get a solution looking for a problem.
People craft a solution and then look for a problem that approximately matches. With an operational function—say, allocating dollars for aircraft parts—you have a concrete challenge. You iterate on better solutions for a real problem.
Conor Doherty: A follow‑up from Joshua: when that shift happens toward your perspective, how much policy change is typically required for a client to commit? Is it usually a small tweak, or a fundamental rethinking of how supply‑chain decisions are governed?
Joannes Vermorel: It’s the latter, but you don’t have to do everything in one day. For some industries we even have testimonials—transformations going on for a decade and still ongoing.
For complex organizations—maintenance of jetliners is extremely complex—it takes time. With Air France Industries, the transformation has been massive over the past decade and is still ongoing. We took scopes of decisions one at a time; I think we have over twenty different scopes there.
It typically took a couple of months for each scope to be rolled out and made production‑grade. This is public record—case studies and interviews with directors.
Conor Doherty: I hope that helped. We’re a bit out of time, but good to answer all the questions. Thank you, Joannes, for your time—and to everyone else, thank you for attending and for your questions.
Some people prefer to DM privately; they come through to me live, and as you see, I will pose them to Joannes in exactly the verbiage presented to me. If you have questions, DM us or post in the comments.
We will stay and answer them. Have a good evening. We’ll see you next week for the next episode. And… get back to work.