
The convolution power is a relatively advanced mathematical operation. In supply chain, it can be used to scale probabilistic demand forecasts up or down: the convolution power makes it possible to perform linear-like numeric adjustments on probabilistic forecasts. It can be interpreted as the probabilistic counterpart of the linear adjustments performed on "classic" forecasts - i.e. periodic forecasts regressed against the mean or the median.

`cumsub(G.Item, G.Stock, G.Quantity, G.Rank)`

- `G.Item`: the item identifier; all lines that share the same value belong to the same item.
- `G.Stock`: the initial stock for the item; all lines that belong to the same item must have the same `G.Stock` value.
- `G.Quantity`: the quantity of the item required for the purchase of the grid line.
- `G.Rank`: the rank of the grid line; each line is identified by its `(G.Item, G.Rank)` pair, and all bundles are ordered by increasing rank.

The function `cumsub()` explores all bundles by increasing rank, keeping track of the remaining stock for each item. Initially, this stock is defined by the `G.Stock` vector. For each bundle, the function determines whether there is enough remaining stock to purchase `G.Quantity`. If that is the case, the function decrements the stock for each item and writes to each grid line the remaining stock for that item. If there is not enough stock to serve the entire bundle - usually because one of the items has run out - then the function does not update the remaining stocks, and instead stores for each grid line the value `-(S+1)` (where `S` is the remaining stock for that item at that point). This indicates that the grid line is not purchased (test if `G.S < 0`), whether that specific line caused the bundle not to be purchased (test if `G.Quantity + G.S + 1 > 0`), and by how much (`G.Missing = G.Quantity + G.S + 1`).

`forex(value, Origin, Destination, date)`

Converts the amount `value` expressed in the currency `Origin` into the equivalent amount in the currency `Destination`, according to the historical rates at the specified date. The currencies should be encoded with their canonical three-letter codes. Lokad supports about 30 currencies, leveraging the data provided by the European Central Bank. Rates are updated on a daily basis. See also `isCurrency()` to test the validity of your currency code.

`hash(value)`

`isCurrency(currencyCode)`

Returns `true` if the text entry passed as argument is a currency code recognized by the `forex()` function.

`mkuid(X, offset)`

The value of `X` is ignored, but the UID (unique identifier) is generated as a scalar in the table associated with `X`. The `offset` is an optional scalar that represents the starting suffix for the UID. The generated strings are numbers in the format `PPPPPPPAAA`, with `P` a page number (which does not start with 0) that is always strictly increasing, and `A` an incremented counter that starts at `offset` (or at 0 if no offset parameter is provided). `P` has at least 7 digits, `A` at least 3. The UIDs offer three properties. (1) All UIDs can be parsed as numbers, and those numbers will be different. Keep in mind, however, that UIDs have at least 10 digits, and likely more if each call needs to generate more than 1000 of them. (2) A UID generated at time T is strictly inferior (in alphabetical order) to a UID generated at a later time T' > T. (3) If all calls generate similar numbers of UIDs (fewer than 999, or between 1000 and 9999, etc.), then the previous property also holds for the numeric order between UIDs.

`solve.moq(...)`

`pricebrk(D, P, Prices.MinQ, Prices.P, Stock, StockP)`

`priopack(V, MaxV, JT, B) by [Group] sort [Order]`

`Order` contains the ranks of the lines to be packed. `V` is the volume of each line. `Group` is the equivalence class of the suppliers, with the bin packing computed separately for each group. `MaxV` is the maximal volume capacity; its value is homogeneous to `V`, and it is assumed to be constant across the equivalence class `Group`. `JT` is the jumping threshold; its value is homogeneous to `V`, and it is typically expected to be a small multiple of the `MaxV` value. `B` is an optional argument.

`smudge(values, present) by [Group] sort [Order]`

Takes a vector of `values` and a boolean vector indicating where the valid values are present. It returns a full vector of valid values, completed by spreading the valid values into the non-valid ones. More precisely, the output vector is built by looking at every line, group by group (if there is a `Group` argument) and following the ascending `Order`, and by replacing any non-valid value with the last valid value seen, or with a default value if no valid value has been seen yet in the group.

`stockrwd.m(D, AM), stockrwd.s(D), stockrwd.c(D, AC)`
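The bundle-exploration logic of `cumsub()` described above can be sketched outside Envision. Below is a minimal Python rendition (an illustration, not Lokad's implementation), under the assumption that all grid lines sharing the same rank form one bundle:

```python
def cumsub(items, stocks, quantities, ranks):
    """Sketch of the cumsub() semantics: parallel lists describe grid lines.

    Lines sharing a rank form a bundle (assumed); bundles are processed by
    increasing rank. For each line, returns the remaining stock after the
    bundle is purchased, or -(S + 1) when the bundle cannot be served
    (S being the remaining stock of that line's item at that point).
    """
    # initial stock per item (identical on every line of the item)
    remaining = {}
    for item, stock in zip(items, stocks):
        remaining.setdefault(item, stock)

    # group line indices by bundle rank
    bundles = {}
    for i, r in enumerate(ranks):
        bundles.setdefault(r, []).append(i)

    result = [0] * len(items)
    for r in sorted(bundles):
        lines = bundles[r]
        # the bundle is purchased only if every line can be served
        feasible = all(remaining[items[i]] >= quantities[i] for i in lines)
        for i in lines:
            if feasible:
                remaining[items[i]] -= quantities[i]
                result[i] = remaining[items[i]]
            else:
                # negative marker; lines with quantity + result + 1 > 0
                # are the ones that caused the failure
                result[i] = -(remaining[items[i]] + 1)
    return result

res = cumsub(["a", "a", "b"], [5, 5, 3], [2, 4, 3], [1, 2, 2])
print(res)  # -> [3, -4, -4]
```

In this example the second bundle (rank 2) fails because item `a` has only 3 units left while 4 are requested; the test `quantity + result + 1 > 0` singles out the `a` line as the cause.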

Probabilistic demand forecasts are particularly suitable for optimizing decisions while taking supply chain risks into account. However, unlike classic forecasts where the demand is expressed as a definite quantity associated with a specific period of time, probabilistic forecasts involve distributions of probabilities.

<center>[image||{UP}/Resources/project-timeline.svg]</center>

While distributions provide more insight about the future than single-point indicators, distributions are also more complex to manipulate. Such manipulations may be required to reflect market evolutions that cannot be inferred from the historical data. The convolution power is a mathematical operation that makes it possible to scale a distribution of probabilities in a pseudo-linear fashion.


For example, if a retailer knows that each promotion brings a 100% increase in sales, then all it takes to adjust a classic demand forecast - which ignores promotions - is to multiply the original number by 2. In the case of a probabilistic forecast (which also ignores promotions), it is not possible to naively multiply the distribution by 2, because the distribution represents probabilities and must keep summing to 1.
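The normalization issue can be seen numerically. A small sketch in Python (not Envision), using a truncated Poisson pmf: multiplying the pmf by 2 breaks normalization, whereas convolving the pmf with itself - the convolution power with exponent 2 - yields the distribution of a doubled demand that still sums to 1:

```python
import math
import numpy as np

def poisson_pmf(lam, n):
    """pmf of Poisson(lam) on 0..n-1; the tail mass beyond n is negligible here."""
    p = np.empty(n)
    p[0] = math.exp(-lam)
    for k in range(1, n):
        p[k] = p[k - 1] * lam / k
    return p

pmf = poisson_pmf(3.0, 40)

# Naive "times 2": the result is no longer a probability distribution.
print(round((2 * pmf).sum(), 6))   # -> 2.0

# Convolution power 2: the demand of two independent Poisson(3) sources,
# i.e. Poisson(6); the probabilities still sum to 1.
doubled = np.convolve(pmf, pmf)
print(round(doubled.sum(), 6))     # -> 1.0
mean = (np.arange(len(doubled)) * doubled).sum()
print(round(mean, 3))              # -> 6.0
```

Note that the convolution power scales the mean of the demand (from 3 to 6) while preserving the total probability mass, which is exactly what a linear adjustment of a classic forecast cannot express.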


In order to get anything out of Envision, Lokad needs to be fed with the historical data of your business. This data is expected to be provided in tabular file format, and Envision supports a wide spectrum of data formats such as CSV or Excel sheets. Your Lokad account is bundled with an online file repository, and the input data files are expected to be stored within your Lokad account.

Lokad already supports connectors for many well-known commerce management systems. If your system is already supported by Lokad, you only need to grant Lokad access to it; Lokad takes care of importing all the relevant data, and the corresponding tabular files are created directly within your account. These files are then ready for immediate consumption by Envision.

If Lokad does not yet support the software where your data is currently located, then it is still possible to perform an ad-hoc data extraction, typically by querying the database, and then importing the files into Lokad using FTP (File Transfer Protocol) or its variants like SFTP. Lokad also supports manual uploads through the web, but our experience indicates that if data extraction cannot be automated, re-uploading your most recent data every time tends to be fairly tedious.

In the illustration above, the lead time equals 4 days. This means that if Lokad computes a reorder point with this lead time and, say, a service level of 95%, the reorder point will be the minimal inventory value - as forecasted by Lokad - sufficient to cover the fluctuations of the future demand, so that 95% of the time the reorder point is higher than the demand (hence avoiding a stock-out).

The quantile forecast used to generate the reorder point interprets the lead time as the segment to cover starting from the end of the historical data. Indeed, Lokad defines the “present” as the end of the data. Hence, the forecast starts where the data stop.


For $a$, a non-negative real number, we re-define the convolution power as follows:

$$ x^{*a} = \mathcal{Z}^{-1} \Big\{ \mathcal{Z}\{x\}^a \Big\} $$

where $\mathcal{Z}$ is the Z-transform of the discrete distribution $x$, defined as:

$$ \mathcal{Z}\{x\} : z \to \sum_{k=-\infty}^{\infty} x[k] z^{-k} $$

and where $\mathcal{Z}\{x\}^a$ is the point-wise power of the Z-transform, defined as:

$$ \mathcal{Z}\{x\}^a : z \to \left( \sum_{k=-\infty}^{\infty} x[k] z^{-k} \right)^a $$

Finally, $\mathcal{Z}^{-1}$ is the inverse Z-transform:

$$ \mathcal{Z}^{-1} \{X(z) \}= \frac{1}{2 \pi j} \oint_{C} X(z) z^{n-1} dz $$

with $X(z) = \mathcal{Z}\{x\}(z)$, introduced for the sake of readability, and where $C$ is a counterclockwise closed path encircling the origin.

If $a$ is an integer, then the two definitions given above for the convolution power coincide.

In practice, the inverse Z-transform is not always defined. However, there are ways to generalize the notion of Z-transform inversion - somewhat similar to the matrix pseudo-inverse used in linear algebra. The details of this Z-transform pseudo-inverse go beyond the scope of the present document.

Through this Z-transform pseudo-inverse, the convolution power can be defined for all random variables of compact support, and for any non-negative real number used as the exponent.
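The definition above can be reproduced numerically by sampling the Z-transform on the unit circle with a discrete Fourier transform. The Python sketch below is an illustration, not Lokad's implementation; it exploits the fact that the Poisson family is closed under convolution power ($\text{poisson}(\lambda)^{*a} = \text{poisson}(a\lambda)$), and it assumes $\lambda < \pi$ so that the principal branch of the complex power is the intended one:

```python
import math
import numpy as np

def poisson_pmf(lam, n):
    """pmf of Poisson(lam) on 0..n-1; the tail beyond n is negligible here."""
    p = np.empty(n)
    p[0] = math.exp(-lam)
    for k in range(1, n):
        p[k] = p[k - 1] * lam / k
    return p

def conv_power(x, a):
    """x^{*a} = Z^{-1}{ Z{x}^a }, with the Z-transform sampled by a DFT.

    Valid for non-negative real exponents whenever the principal branch of
    the complex power is the right one (true for the Poisson case below).
    """
    F = np.fft.fft(x)
    return np.fft.ifft(F ** a).real

x = poisson_pmf(3.0, 128)
y = conv_power(x, 1.4)  # fractional convolution power

# poisson(3)^{*1.4} comes out as poisson(4.2)
print(np.allclose(y, poisson_pmf(4.2, 128), atol=1e-9))  # -> True
```

For exponents or distributions where the principal branch is not the right one, the pseudo-inversion mentioned above requires more care; this sketch only covers the well-behaved case.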

Envision supports the convolution power through the `^*` operator.

```
y := poisson(3) ^* 4.2 // fractional exponent
```

The script above illustrates how a Poisson distribution, obtained through the `poisson()` function, can be convoluted to the power of 4.2.

For $x$ a function $\mathbb{Z} \to \mathbb{R}$ and $y$ a function $\mathbb{N} \to \mathbb{R}$, we can define the convolution power of $x$ by $y$ with:

$$ x^{*y} = \sum_{k=0}^{\infty} y[k] x^{*k} $$

Envision also supports this alternative expression of the convolution power through the `^*` operator, as illustrated by the script below.

```
y := poisson(3) ^* exponential(0.05)
```

Here, the exponent is an exponential distribution obtained through the `exponential()` function.

Now, this company has the opportunity to buy a small competitor operating 5 aircraft that are homogeneous to our company's own fleet. Through this acquisition, the company gains extra aircraft and extra passengers. If we assume that all aircraft are statistically independent in their need for APUs, and that the competitor's aircraft have needs similar to those of the acquiring company, then the total demand for APUs of the merged entity can be revised as $X^{*\frac{100 + 5}{100}}=X^{*1.05}$.
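The distribution-exponent variant $x^{*y} = \sum_{k} y[k]\, x^{*k}$ can likewise be sketched numerically: it is a mixture of integer convolution powers weighted by $y$. A minimal Python rendition (illustrative, not Envision's implementation), with $y$ given as a finite pmf over the exponents:

```python
import math
import numpy as np

def poisson_pmf(lam, n):
    """pmf of Poisson(lam) on 0..n-1; the tail beyond n is negligible here."""
    p = np.empty(n)
    p[0] = math.exp(-lam)
    for k in range(1, n):
        p[k] = p[k - 1] * lam / k
    return p

def conv_power_by_dist(x, y):
    """Computes sum_k y[k] * x^{*k}, where y is a pmf over exponents 0..len(y)-1."""
    out = np.zeros((len(y) - 1) * (len(x) - 1) + 1)
    acc = np.array([1.0])  # x^{*0} is the Dirac mass at 0
    for w in y:
        out[: len(acc)] += w * acc
        acc = np.convolve(acc, x)  # next integer convolution power
    return out

x = poisson_pmf(3.0, 40)
# A point mass at exponent 2 must reproduce the plain convolution square.
print(np.allclose(conv_power_by_dist(x, [0.0, 0.0, 1.0]), np.convolve(x, x)))  # -> True
```

With a genuinely spread-out exponent distribution (such as the `exponential(0.05)` of the Envision script above, suitably discretized), this mixture captures a demand whose scaling factor is itself uncertain.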

- Convolution, Wikipedia
- Convolution power, Wikipedia
- Z-transform, Wikipedia
- Moore–Penrose pseudoinverse, Wikipedia