00:00:08 Overview of forecast value added as a management technique.
00:01:29 Explanation of forecast value added process and why it emerged.
00:02:06 Importance of metrics to determine accuracy of forecasting process.
00:03:37 Discussion of why multi-stage forecasts produced by multiple teams don't work.
00:07:55 The idea of multiple teams working together to improve accuracy is appealing but not supported by science.
00:08:01 Disadvantages of having the public vote on the next move of a chess champion.
00:10:31 The ineffectiveness of forecast value added in the supply chain.
00:12:17 The ineffectiveness of manual intervention in improving forecast accuracy.
00:14:50 Difficulty in trusting science over human judgment in forecasting.
00:15:55 Granularity of a salesperson’s understanding and analysis of their clients.
00:16:00 Criticism of the idea that a forecast should be a collaborative effort.
00:17:02 The recent resurgence of popularity of the forecast value added methodology.
00:17:23 The potential for financial gain from producing metrics and charging for upgrades.
00:19:10 The importance of determining if a consultant provides real value for the company or just busy work.
00:23:36 The need to identify radical incompetence, using forecast value added as a litmus test.


Forecast value added (FVA) is a flawed approach to supply chain forecasting that involves collaboration between various teams within an organization to improve accuracy. Joannes Vermorel, founder of Lokad, argues that FVA adds an extra layer of complexity to supply chain management with no clear benefit to the company, and it is not supported by scientific research. Vermorel suggests that businesses should focus on finding simpler, more effective solutions to their supply chain challenges and trust specialists in various fields. Companies should use FVA as a litmus test to detect radical incompetency in vendors or consulting agencies promoting collaborative forecasting.

Extended Summary

In this interview, host Kieran Chandler and Joannes Vermorel, founder of Lokad, discuss the concept of forecast value added (FVA) and its effectiveness in supply chain optimization. FVA emerged in the early 2000s as a technique to improve the accuracy of forecasts by identifying which steps in the forecasting process contribute positively or negatively to the final product. This approach typically involves various teams within an organization collaborating on the forecast, such as marketing, sales, and production.

The idea behind FVA is that by measuring the accuracy of each step in the process, one can identify the teams contributing positively or negatively to the forecast’s accuracy. This is done through backtesting, comparing the baseline forecast’s accuracy to the accuracy of forecasts with additional inputs from different teams.
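The per-stage measurement described above can be sketched in a few lines of Python. The stage names, demand figures, and the choice of MAPE as the accuracy metric are illustrative assumptions, not specifics from the interview:

```python
# Hypothetical FVA backtest: compare the accuracy of the forecast as it
# leaves each stage against the stage before it. Stage names, demand
# figures, and the use of MAPE are made-up assumptions for the sketch.

def mape(actuals, forecasts):
    """Mean absolute percentage error over one backtest period."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actuals = [120, 95, 110, 130]  # realized demand for the backtest period

# Forecasts as they leave each stage of the process, in order.
stages = {
    "statistical baseline":  [115, 100, 105, 125],
    "+ marketing overrides": [125, 100, 115, 140],
    "+ sales overrides":     [135, 110, 120, 150],
}

previous_error = None
for stage, forecast in stages.items():
    error = mape(actuals, forecast)
    if previous_error is None:
        print(f"{stage}: MAPE {error:.1%}")
    else:
        # Positive delta = value added; negative = value destroyed.
        delta = previous_error - error
        print(f"{stage}: MAPE {error:.1%} (value added: {delta:+.1%})")
    previous_error = error
```

In this made-up example, both override stages degrade accuracy relative to the baseline, which is the pattern the early FVA papers reported for forecasts touched by many hands.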

However, Vermorel points out that this approach does not work in practice and is not supported by scientific research. Statistical forecasting literature does not advocate for a multi-stage process where forecasts hop from division to division within a company. In fact, forecasting competitions have consistently shown that the winners do not rely on such techniques.

Despite its shortcomings, FVA has gained traction because it is an appealing solution that allows everyone to contribute and feel included in the process. Vermorel likens this to the idea of a group of people trying to help a chess champion make their next move, which would likely result in a distraction rather than an improvement in performance.

Forecast value added is a popular but flawed approach to supply chain forecasting. It involves collaboration between various teams within an organization, with the intent to improve accuracy. However, this method is not supported by scientific research, and empirical evidence suggests that it does not lead to better forecasts. The appeal of FVA may stem from the desire for collective participation and the satisfaction derived from being part of the decision-making process.

Chandler and Vermorel then discussed the flaws of forecast value added in more depth. Vermorel believes that it is a bureaucratic idea that adds an extra layer of complexity to supply chain management, slowing down processes and complicating operations, with no clear benefit to the company.

Vermorel argues that the premise of forecasting being a collaborative effort is incorrect, as science and past forecasting competitions do not support this notion. He emphasizes that the human mind is not well-equipped to deal with statistical noise, and manual interventions tend to decrease forecasting accuracy. He suggests that it is more efficient to focus on improving the numerical recipe underlying the forecast, rather than adjusting forecasts manually.
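Vermorel's point about statistical noise can be illustrated with a toy simulation (all numbers made up): when demand is a stable level plus pure noise, a plain trailing moving average beats a forecast that chases the latest observation, which serves here as a crude stand-in for manual overrides reacting to noise:

```python
# Toy simulation of the point above: demand is a stable level plus noise.
# A trailing moving average (the "numerical recipe") is compared with a
# forecast that chases the latest observation, mimicking a manual
# override that reacts to noise. All figures are made-up assumptions.
import random

random.seed(42)
level = 100
demand = [level + random.uniform(-20, 20) for _ in range(200)]

window = 8
ma_errors, chase_errors = [], []
for t in range(window, len(demand)):
    moving_avg = sum(demand[t - window:t]) / window
    chase = demand[t - 1]  # "the latest number must mean something"
    ma_errors.append(abs(demand[t] - moving_avg))
    chase_errors.append(abs(demand[t] - chase))

print(f"moving average MAE:  {sum(ma_errors) / len(ma_errors):.1f}")
print(f"chase-the-noise MAE: {sum(chase_errors) / len(chase_errors):.1f}")
```

The moving average wins because averaging shrinks the noise, while chasing the last point carries the full noise of two observations; no pattern-spotting can recover signal that is not there.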

He questions the ability of non-specialists to perform complex numerical tasks better than experts who have spent years developing reasonable numerical recipes. Vermorel points out that sales teams, for example, usually operate at a much coarser granularity than the one required for forecasting demand at the individual SKU level. Instead, their insights should be used to revise the numerical recipe, leading to more accurate forecasts.

Despite Vermorel’s concerns, he notes that forecast value-added has experienced a resurgence in popularity in recent years, with many consultants and software vendors promoting the methodology. However, he remains critical of this approach and believes that it is not the most effective way to improve forecasting accuracy in supply chain management.

Vermorel expresses his concerns about software vendors making money by offering complex solutions to supply chain problems that may not necessarily provide value. He suggests that many vendors use distractions to sell their products and services, making it difficult for clients to identify real value.

Vermorel emphasizes the importance of avoiding complexity for the sake of complexity. He points out that adding unnecessary dimensions to a process can actually make the problem quadratically more complex, which may not be beneficial for the company. Moreover, he believes that vendors often charge more for these complex solutions, including upgrades and consulting fees.

To differentiate between valuable consultants and those merely producing metrics, Vermorel suggests that companies should ask themselves whether the service or solution provided is actually adding value or just creating busy work. He emphasizes the importance of focusing on outward-looking solutions rather than inward, as adding extra layers of complexity within a process may not lead to better results.

In the context of supply chain management, Vermorel argues against the idea that forecasting should be a collaborative effort. He compares this to having electrical power in a building, which is not considered a team effort. He asserts that there is no reason for forecasting to be a collaborative process, and that companies should focus on finding simpler, more effective solutions to their supply chain challenges.

Vermorel emphasizes the importance of trusting specialists in various fields, such as electrical setups or supply chain forecasting. He points out that collaborative forecasting methods are often counterproductive, citing a series of papers demonstrating that manual intervention in forecasts can be harmful. This is due to humans' poor perception of randomness: while humans excel at identifying patterns, they struggle to understand randomness.

Vermorel contends that until a collaborative forecasting approach can be proven superior, it should not be trusted. He notes that leading researchers in the field, such as those participating in forecasting competitions, do not use collaborative methods. Vermorel suggests that businesses should be more skeptical of vendors and consulting agencies that promote such collaborative forecasting methods, as they may lack competence in the area.

The core message of the interview is that businesses should focus on identifying areas of radical incompetence in their supply chain management, rather than attempting to improve forecasts through collaboration. Vermorel recommends using forecast value added as a litmus test to detect radical incompetency in vendors or consulting agencies promoting collaborative forecasting. He compares this to detecting vendors pushing for methods based on astrology or artificial intelligence without understanding their true implications.

Vermorel advises businesses to be cautious when dealing with vendors and consulting agencies in the supply chain optimization field, using forecast value added as a tool to identify incompetence. Trusting specialists and avoiding collaborative forecasting methods can lead to more effective supply chain management.

Full Transcript

Kieran Chandler: Today we’re going to discuss just how well this works and why decomposing a forecast can actually lead to more difficult decisions. So, Joannes, what’s the idea today behind forecast value added?

Joannes Vermorel: Forecast value added is a process that emerged in the early 2000s, maybe the 90s. I haven't found earlier publications, but since it's not a very complicated idea, I suspect it was already practiced in the 90s, probably under various names and forms. Essentially, it's a process that aims to quantitatively improve the accuracy of the forecast by identifying whether certain steps taken while composing the final product, that is, the forecast, actually improve the accuracy.

To illustrate, we have a forecasting team that produces the baseline forecast. Then the marketing team steps in and adjusts this baseline based on their extra marketing insights, for example, what they intend to do in terms of campaigns. Then sales step in and add their own layer of corrections based on the extra sales insights they have. Then production steps in, and so on. We circle back to the forecasting team, which finally concludes with the plan, and that's essentially the sort of thing that is done as part of the S&OP process. The forecast value added process consists of establishing metrics for the delta introduced by every single step of the forecasting process, in terms of whether it improves or degrades the accuracy, and implicitly, the idea is to remove the contributions that end up degrading the forecasting accuracy.

Kieran Chandler: So why did this idea kind of originate? Because involving so many different stakeholders can definitely complicate things.

Joannes Vermorel: Yes, actually, at least the early papers in the 2000s indicate that producing forecasts with many hands usually ends up doing the opposite of the intent; it just degrades the forecasting accuracy. Their conclusion is that sticking with the naive baseline forecast, which is very frequently something like a moving average plus seasonality, ends up more accurate than what you get once you let a lot of different people tweak those forecasts up and down. But the logic says that we should try to filter out the bad contributions and keep the good ones, so we can have the best of both worlds: the initial accuracy, improved further thanks to extra contributions that are carefully filtered through the forecast value added process.
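As a rough sketch of the kind of naive baseline Vermorel mentions, here is a moving average combined with a multiplicative seasonality profile. The 12-month period, the window, and the demand pattern are made-up assumptions for illustration:

```python
# Sketch of a naive "moving average plus seasonality" baseline; the
# 12-month period, window, and demand pattern are made-up assumptions.

def seasonal_profile(history, period=12):
    """Average multiplicative factor for each position in the seasonal cycle."""
    level = sum(history) / len(history)
    factors = [0.0] * period
    counts = [0] * period
    for i, value in enumerate(history):
        factors[i % period] += value / level
        counts[i % period] += 1
    return [f / c for f, c in zip(factors, counts)]

def baseline_forecast(history, horizon, window=12, period=12):
    """Deseasonalized moving average of the trailing window, re-scaled
    by the seasonal profile for each future period."""
    profile = seasonal_profile(history, period)
    recent = history[-window:]
    level = sum(
        v / profile[(len(history) - window + i) % period]
        for i, v in enumerate(recent)
    ) / window
    start = len(history)
    return [level * profile[(start + h) % period] for h in range(horizon)]

# Two years of monthly demand with a clean seasonal shape.
pattern = [0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.2, 1.1, 1.0, 0.9, 0.8, 0.7]
history = [100 * s for s in pattern] * 2
print(baseline_forecast(history, horizon=3))  # roughly [80, 90, 100]
```

Despite its simplicity, this is the kind of baseline that, per the early FVA papers, frequently beats the same forecast after several rounds of manual corrections.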

Kieran Chandler: So how do you determine which teams are producing the good contributions and which are the teams that are kind of producing the bad ones?

Joannes Vermorel: Very simply, by doing a backtest.

What was the accuracy of the baseline forecast for the last quarter? What was the accuracy of the baseline forecast plus the contribution of the sales team? Then we have two different forecast setups: the baseline one and the one with the first layer of corrections, and we can see whether we have improved the forecast accuracy or not. We can then repeat this experiment for every single stage, comparing with the accuracy of whatever came just before, so we can isolate the results essentially.

Joannes Vermorel: It is very much akin to a backtest process, except that you add an extra level of granularity in the analysis. You want to analyze and compare the respective accuracy before and after changes, looking at the key steps of the forecasting process. And when I say key steps, I am defining steps from the organization’s perspective, where you have a forecast passed to a team, the team applies corrections, and then passes it to another team. The whole thing circulates inside the company until it circles back to the original forecasting team in charge of the entire forecasting process.

Kieran Chandler: It sort of makes sense from a logical perspective; you’re getting more and more people’s expertise, so it’s probably going to end up being more accurate. So what’s the big problem then?

Joannes Vermorel: The problem is that it simply doesn't work, and it's not supported by any scientific evidence. It's the sort of thing that looks intuitive and sounds good, but if we start looking at the statistical forecasting literature, no researcher is actually working with these sorts of techniques. Even when they have forecasting models with multiple stages, all those stages are integrated into one algorithmic setting, just one piece of software. The idea that you can refine a statistical forecast by having it hop from division to division inside the company is mostly lunacy. If we look at forecasting competitions that have been taking place for a long time, like the ones organized by Professor Makridakis, the people who ended up winning those competitions did not resort to a multi-stage forecasting process where the forecast would hop from specialist to specialist. In my view, it's the sort of thing that is very appealing because it makes everybody happy; everyone can contribute, and it creates a sense of positive emotion.

Joannes Vermorel: In this process, yes, but it’s not how it works. It’s the sort of thing like imagine you have a champion chess player, and you say, “Okay, now we are going to try to help this champion by having a big vote of people trying to help him make his next move.” The answer is no, it’s not going to make this champion any better. Chances are that it’s just going to be a complete distraction. If you want to have the best of the best, just let him play the game or let her play the game exactly the way the champions see fit, and that’s it.

Kieran Chandler: So why has this kind of idea gained so much traction then? Is it because it’s something that keeps lots of people happy so people feel like they’re contributing? I mean, why is it that people have heard of forecast value added?

Joannes Vermorel: I believe it’s a sort of very bureaucratic idea that tends to have a lot of attraction because it is very inward-looking. As soon as you start tackling the real problems, like risk management which forces you into upgrading your practice toward probabilistic forecasting, it becomes super complicated. Forecast value added is trivial. The level of math involved is junior high school. It’s the sort of thing that looks good, and nobody is going to be intellectually challenged by forecast value-added. It’s literally exceedingly simple.

Joannes Vermorel: It is very tempting to just add another layer of bureaucracy in your supply chain to let people take care of this process that looks good and sounds reasonable. It’s just going to keep everybody busy, and you’re going to be quite good at it. How could you possibly fail at something as simple and straightforward as forecast value added? The reality is you will fail in the sense that you will add an extra layer of complexity that is most likely going to slow down everything, complicate everything, and confuse everything. But your short-sighted KPIs are going to get a tiny bit better, and nobody has any clue whether it’s going to bring any dollar into the company. From the super narrow perspective of your very short-sighted KPIs, it might improve a bit.

Kieran Chandler: Is there not some sort of element of truth to it, though, that if we focused on specific core areas, we could see some vast improvements?

Joannes Vermorel: The first thing that we need to dismantle is that the premise of this proposition is completely wrong. The premise is that forecasting should be a collaborative effort. This is not the case. Science is absolutely not supporting this proposition, and all the forecasting competitions that took place over the last decades did not support this proposition either. The papers that I’m reading do not support this proposition either. So literally, we have a premise that is completely wrong. Forecasting is not better when done as a collaborative effort. Thus, the idea to have everybody on board and everybody making contributions is just wrong.

And fundamentally, at Lokad, we did benchmark the contribution of manual tweaks to the forecast, and essentially, they were invariably incorrect. They were invariably decreasing the forecasting accuracy. However, in order to achieve that, you first need a mindset where, when somebody sees a statistical forecast that is deeply wrong, you don't manually adjust the forecast; you fix the underlying numerical recipe. I'm assuming that we are talking about a reasonable numerical recipe for the forecast that has been battle-tested, so obvious things that were not properly factored into the forecast have already been addressed. For example, if you don't take your stockouts into account, you will confuse zero sales with zero demand, which is completely wrong.

Joannes Vermorel: Yes, you need to make your numerical recipe aware of the stockouts. I’m assuming that once you’ve solved all those problems that are more of a debugging nature, and once the model has been battle-tested and debugged, we did, at Lokad, on the request of many clients, a lot of benchmarks where we were looking at the forecast accuracy before and after manual intervention. And every single time, manual interventions were degrading the forecast accuracy. It turns out that the human mind is not very good at dealing with statistical noise. Statistical noise is not something that we perceive; we see patterns everywhere. It is actually very difficult to see the statistical noise for what it is, and thus, even simplistic statistical methods like moving average tend to outperform human judgment, although it’s barely better than moving average, and yet it is already better.
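The stockout issue Vermorel describes can be shown in a few lines (all figures made up): averaging raw sales counts stockout days as zero demand and biases the estimate downward, while masking those days removes the bias:

```python
# Toy illustration: treating stockout days as zero demand biases the
# demand estimate downward. Sales figures and stock flags are made up.

daily_sales = [10, 12, 0, 0, 11, 9, 0, 13]
in_stock    = [True, True, False, False, True, True, True, True]

# Naive estimate: every zero-sales day counts as zero demand.
naive_demand = sum(daily_sales) / len(daily_sales)

# Stockout-aware estimate: only days when the product was on the shelf
# are genuine demand observations (day 7 is a real zero-demand day).
observed = [s for s, ok in zip(daily_sales, in_stock) if ok]
aware_demand = sum(observed) / len(observed)

print(f"naive: {naive_demand:.2f}  stockout-aware: {aware_demand:.2f}")
```

Masking is only the simplest form of censored-demand handling, but it already shows why the fix belongs in the numerical recipe rather than in manual overrides of the resulting forecast.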

Kieran Chandler: So, what you’re advocating is this kind of single champion who would produce a forecast. And why is it, do you think, that people find it so difficult to put their trust in science? Why is it such a leap of faith?

Joannes Vermorel: It’s not a leap of faith; that’s the sort of thing science doesn’t ask you to actually believe. It’s about understanding what’s going on. The question is, why would people who are complete non-specialists be able to jump on a sizable dataset and perform a complex numerical task in their head or with random spreadsheets better than people who actually spend years trying to identify reasonable numerical recipes precisely to do that? What sort of magic would be involved? And if people say they have insights into the market, yes, but at which granularity? Let’s imagine for a typical company, you have something like 20,000 SKUs, and then you’re asking the people from the sales team – let’s say there are five people – you give them a spreadsheet with 20,000 SKUs and ask them to provide their insights.

Your inputs on whether those SKUs should go up or down. The sales team doesn't know. I mean, if you're part of the sales team, you're managing maybe half a dozen important VIP clients if you're in a B2B business. Those clients order thousands of products per quarter in varying quantities. This is not the granularity you operate at if you're a salesperson. Your granularity is that your client is an organization, and in this organization, you know a series of people. That's your granularity in terms of thinking and analysis. You know that this organization has traction, and that its people might be inclined to get closer to the sort of offering you're pushing, or on the contrary, to step back. But the detail of the 20,000-plus SKUs, they literally don't know, so they're just going to guess something and pretend that the job is done.

Joannes Vermorel: That’s the illusion. The problem is that if you have important insights, then why not explicitly revise the numerical recipe so that it can use those extra elements as input? The job will be done with much less effort and a lot more accuracy.

Kieran Chandler: So would you say this is a problem that still exists or would you say that people are coming around to this idea that it’s not just a single person responsible for the forecast?

Joannes Vermorel: Forecast value added has had a resurgence of popularity during the last couple of years. A lot of consultants, and even some fairly misguided software vendors, are now pushing for this methodology. To be honest, for vendors like Lokad, there is tons of money to be made. It's the sort of thing that is going to be a complete distraction. The good thing for a vendor with a complete distraction is that you can never fail. There was no gain to be made in the first place, but conversely, there is no obvious loss that could happen. So, you cannot ever fail. This is very good. A distraction is a mission where you cannot fail.

You can spend weeks or months generating an entire wall of metrics out of that, which gives you another layer of dimension, so that everything that you were doing before, you can add an extra dimension, which is how every single thing that you’re already doing benefits or is harmed by every single stage of the process. I mean, you’re making the problem quadratically more complex, not more complicated because it’s very simple, it’s just one extra dimension, but you’re making it vastly more complex. If you have a large company with ten different stages, you literally have ten times more metrics for every single thing that you were measuring before. And thus, you can charge accordingly. You can charge for upgrades, consulting fees, whatever.

Kieran Chandler: So how can you then differentiate between the consultants that are actually providing value and those consultants that are just producing these metrics for the sake of it?

Joannes Vermorel: I believe you simply need to ask yourself, are you doing something that is real? Are you doing something that has an actual impact?

Do you think that this collaborative approach to forecasting has intrinsic value for the company, or are you just doing busy work and keeping the bureaucracy busy?

Joannes Vermorel: As a litmus test, I mentioned earlier when discussing the bureaucratic core of supply chain, whether you are looking inward or outward. In this case, it’s very much the archetype of looking inward. If you take an existing forecasting process, decompose the inner parts, and add another layer of complexity inside it to make it better, it doesn’t work like that in the real world. You don’t get something better in terms of organization by looking inward and adding extra layers of artifacts. There is no added value gained with this abstraction. The simple idea should be to remove the premise that the forecast should be a collaborative effort; there’s no reason for it to be.

There are many things, like having electrical power in your building, which are not a team effort. You wouldn’t think that everyone needs to be on board to have electrical power in the building. It’s kind of obvious that if you want a decent electrical setup, you trust specialists who will do the job and make sure your building doesn’t burn due to an unsafe electrical setup. The idea that you can have a better electrical setup with a team effort is nonsense, and it turns out that the very same thing is true for forecasting.

When I say “believe the science,” I mean that there is a series of papers demonstrating that manual intervention on forecasts is harmful. These papers are about 20 years old, and their findings are not surprising because there is an entire field of psychology that tests humans for their perception of randomness. It turns out that the human mind is terrible at understanding randomness. We are very good at seeing patterns, but not at understanding randomness itself.

So, I believe that we have very good cause to say: until you can demonstrate that a collaborative forecast is superior, we should not trust you. And when we see that everybody who wins those competitions, or makes it to the top 100, is not using any kind of collaborative method, and that the leading researchers in this field are not using it either (you can read their books from cover to cover and you will not find anything that looks like that), it is fairly reasonable to assume that it's complete nonsense. There are plenty of things that look very good and feel very reasonable but are just completely wrong; after all, if you just look around, the Earth seems flat.

Joannes Vermorel: If we wrap things up today, the core message is much more radical. You need to identify, if you want to improve your supply chain, the spots of radical incompetence, people that have no clue whatsoever what they’re doing, literally Bozo the Clown. In business, those people can have, you know, an air of seriousness and whatnot. Forecast value added should be treated as a litmus test to detect radical incompetency. So if there is a forecasting vendor pushing that on their website, scratch this vendor as completely incompetent. If a consulting agency is pushing for that, you can scratch them as completely incompetent. That’s the sort of thing that is good because it’s a bit like the blockchain or the artificial intelligence, you know, that’s the same sort of test where if they’re pushing for artificial intelligence, okay, they have no clue whatsoever what they’re doing. We can scratch, move to the next. It’s just imagine that if a vendor was actually pushing for a method that relies on astrology, you would say, okay, those people are not credible, you’re out. I don’t even want to hear your reasoning. I know that there’s a 99.9% chance that you are a complete fraud. Well, that’s a good thing. Forecast value added gives you a litmus test to discard a company or consultancies that just prove themselves to be completely incompetent. So be grateful and just use that as a filter.

Kieran Chandler: Okay, I’ll have to leave it there, but a few companies to scratch off the list. So that’s everything for this week. Thanks very much for tuning in, and we’ll see you again in the next episode. Thanks for watching.