00:00:00 Introduction to Forecast Value Added (FVA)
00:00:41 Explanation and steps of FVA analysis
00:01:59 Backtesting and evaluating FVA accuracy
00:02:47 FVA and manual interventions
00:03:56 FVA’s presumption on the value of accuracy
00:04:37 FVA as a competence test for forecasting protocols

Learn more in Lokad’s full-length Forecast Value Added article.

Summary

Conor Doherty, Lokad’s technical writer, discusses Forecast Value Added (FVA), a diagnostic tool for the forecasting process. FVA extends the forecasting process with insights from various departments and measures whether each one improves forecasting accuracy. Doherty explains the steps of an FVA analysis and assesses its effectiveness. He notes that while FVA can demonstrate the value of insights, it presumes that greater forecasting accuracy is always beneficial, which is not always the case. He argues that the focus should instead be on reducing monetary error rather than on pursuing greater accuracy for its own sake. He concludes that FVA could be used as a one-time competence test, but that it does not validate the routine use of non-specialists for forecasting input.

Full Transcript

Forecast Value Added, or FVA, is a simple collaborative tool designed to evaluate each step in the forecasting process. Its ultimate goal is to eliminate steps that do not increase forecasting accuracy.

FVA achieves this by expanding the forecasting process to include insights from other departments, such as marketing and sales. Today, we will ask and answer three simple questions: How does one perform an FVA analysis? Does it work? And should you use it? Let’s get started.

A statistical forecast is generated and then passed between departments. Each department makes changes based on their expertise. These changes are later compared with each other, then with the actual demand, and finally with the naive forecast.

If the departments made the forecast more accurate, they added positive FVA. If they made it less accurate, they added negative FVA.
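To make that comparison concrete, here is a minimal sketch in Python. Everything in it is illustrative rather than part of any standard FVA tooling: the function names, the choice of MAPE as the metric, and the demand figures are all assumptions made for the sake of the example.

```python
# Minimal sketch of the core FVA comparison. Each stage of the process is
# assumed to be stored as a list of per-period forecasts; all names and
# figures here are illustrative, not part of any standard FVA tooling.

def mape(forecast, actual):
    """Mean Absolute Percentage Error, assuming nonzero actual demand."""
    return sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual)

def fva(step_forecast, baseline_forecast, actual):
    """Positive when the step lowered error versus the forecast it overrode."""
    return mape(baseline_forecast, actual) - mape(step_forecast, actual)

actual      = [100, 120, 90, 110]
naive       = [100, 100, 100, 100]  # e.g. last observed value carried forward
statistical = [105, 115, 95, 108]
consensus   = [110, 125, 85, 100]   # after the departments' manual overrides

print(fva(statistical, naive, actual))      # positive: the model added value
print(fva(consensus, statistical, actual))  # negative: the overrides destroyed value
```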

Generally, an FVA analysis looks something like this: Step one, define the contributors and the order in which they will contribute. Step two, generate a statistical forecast and then a naive one.

Step three, collect the insights; these will be applied to the statistical forecast. Step four, calculate the FVA for each contributor at each step of the process. And lastly, step five, optimize the forecasting process.

First, eliminate the touchpoints that decrease accuracy. Second, augment the touchpoints that increase accuracy. In practice, an FVA timeline looks something like this.

As you can see, the company’s statistical forecast goes through several overrides, including a consensus forecast stage. It is not uncommon for an FVA analysis to even include an executive phase where upper management validates the consensus forecast.

Once the company has the actual demand data, a backtest can be performed to determine how much accuracy was gained or lost at each stage. A sample backtest looks like this.

In this stairstep report, the positive or negative FVA of each override can be compared with every other override. Here, the evaluation metric is MAPE, Mean Absolute Percentage Error.

For example, the statistical forecast lowered error by 5% compared to the naive forecast, hence it contributed positive FVA. However, the consensus forecast overall contributed significant negative FVA.
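For illustration, the sketch below reproduces the logic of such a stairstep report in Python. The stage names and demand figures are invented; the only assumption is that each stage's forecast is recorded in the order it was produced.

```python
# Hypothetical stairstep report: MAPE per stage, plus FVA versus the
# previous stage and versus the naive forecast. Stage names and demand
# figures are invented for illustration.

actual = [100, 120, 90, 110]
stages = [
    ("naive",       [100, 100, 100, 100]),
    ("statistical", [105, 115, 95, 108]),
    ("sales",       [110, 118, 95, 100]),
    ("consensus",   [110, 125, 85, 100]),
]

def mape(forecast, actual):
    return sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual)

naive_error = mape(stages[0][1], actual)
previous_error = None
for name, forecast in stages:
    error = mape(forecast, actual)
    vs_prev = "" if previous_error is None else f"{previous_error - error:+.1%}"
    vs_naive = f"{naive_error - error:+.1%}"
    print(f"{name:<12} MAPE={error:5.1%}  vs prev={vs_prev:<7} vs naive={vs_naive}")
    previous_error = error
```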

I can only cover three major points today. For a much longer analysis of FVA, please consult our FVA article. The link is in the description below.

Point number one, FVA is predicated on the notion that multiple interventions, even consensus interventions, can add positive value. Furthermore, FVA assumes that this value is distributed throughout the company.

However, Makridakis et al. indicate that the best forecasting models leverage advancements in machine learning technology; in other words, they limit human involvement.

During the recent M5 competition, in which contestants had to forecast demand for Walmart, the largest retailer in the world by turnover, the winning model was developed by a student with very little sales or even forecasting experience.

This indicates that we might overestimate the role of market insight in demand forecasting. Point number two, to its credit, FVA does demonstrate just how flawed human overrides are.

FVA has the ability to show people, with cold hard numbers, that their insights do not increase forecasting accuracy. For that, it certainly has one-off utility. However, as a recurring practice, it suffers from a very large limitation.

This brings me very neatly to point number three. FVA presumes that greater forecasting accuracy is worth pursuing, when in fact there are myriad situations in which greater accuracy comes at considerable cost, both direct and indirect.

A forecast could be 5% more accurate yet, through its associated costs, result in significantly lower profits. As such, forecast success ought to be predicated solely on reducing euros or dollars of error rather than on pursuing greater accuracy in and of itself.
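To illustrate that point with a toy example, the sketch below scores two forecasts both by MAPE and in euros of error. The per-unit costs are invented for the sake of the example (carrying an unsold unit costs 1 EUR, a missed sale forfeits 3 EUR of margin), and under them the more accurate forecast turns out to be the more expensive one.

```python
# Sketch of judging forecasts in euros of error instead of percentage
# accuracy. The asymmetric per-unit costs are invented for illustration:
# a leftover unit costs 1 EUR to carry, a missed sale forfeits 3 EUR.

def euros_of_error(forecast, actual, holding_cost=1.0, stockout_cost=3.0):
    """Cost in euros when stock is purchased to match the forecast."""
    return sum(
        (f - a) * holding_cost if f >= a else (a - f) * stockout_cost
        for f, a in zip(forecast, actual)
    )

def mape(forecast, actual):
    return sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual)

actual   = [100, 120, 90, 110]
hedged   = [110, 130, 100, 120]  # biased high on purpose, never misses a sale
accurate = [95, 128, 85, 115]    # lower MAPE, but misses low more often

print(mape(hedged, actual), euros_of_error(hedged, actual))      # ~9.6% MAPE, 40 EUR
print(mape(accurate, actual), euros_of_error(accurate, actual))  # ~5.4% MAPE, 43 EUR
```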

So, should you use it? One could use FVA as a one-time competence or incompetence test of one’s current forecasting protocols. However, this does not validate the idea of routinely turning to non-specialists for manual input on the forecasting process.