Demand Driven Material Requirements Planning (DDMRP)

By Joannes Vermorel, February 2020

Demand Driven Material Requirements Planning (DDMRP) is a quantitative method intended to optimize the supply chain performance of multi-echelon manufacturing businesses. The method revolves around the notions of ‘decoupling points’ and ‘stock buffers’, which are intended to mitigate the flaws of earlier methods implemented by most MRP (Material Requirements Planning) systems. The method delivers the quantities to be either bought or manufactured for any SKU (Stock-Keeping Unit) of a multi-level BOM (Bill of Materials).

Tyre production machine conveyor

The multi-level BOM flow optimization problem

A BOM (Bill of Materials) represents the assemblies, components and parts, and the quantity of each, needed to manufacture an end-product. A multi-level BOM is a recursive hierarchical perspective of the original BOM, where certain parts are further decomposed with BOMs of their own. From a formal viewpoint, a multi-level BOM is a weighted directed acyclic graph1 where vertices are SKUs, where edges indicate inclusion (i.e. is part of), and where weights represent the quantity required for the assembly - either the end product(s), or the intermediate product(s).
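
As an illustration, here is a minimal sketch of such a multi-level BOM encoded as a weighted directed acyclic graph, using a hypothetical bicycle example (the SKU names and quantities are made up); the recursion below performs the classic full explosion of a demand into component requirements:

```python
# A hypothetical multi-level BOM encoded as a weighted directed acyclic graph.
# Keys are parent SKUs; values map each component SKU to the quantity
# required to assemble one unit of the parent.
bom = {
    "bicycle": {"frame": 1, "wheel": 2},
    "wheel": {"rim": 1, "spoke": 36, "tire": 1},
    "frame": {"steel_tube": 4},
}

def full_explosion(sku: str, quantity: int, bom: dict) -> dict:
    """Recursively explode `quantity` units of `sku` into component requirements."""
    needs = {}
    for component, qty_per_unit in bom.get(sku, {}).items():
        required = quantity * qty_per_unit
        needs[component] = needs.get(component, 0) + required
        for c, q in full_explosion(component, required, bom).items():
            needs[c] = needs.get(c, 0) + q
    return needs

print(full_explosion("bicycle", 10, bom))
# {'frame': 10, 'steel_tube': 40, 'wheel': 20, 'rim': 20, 'spoke': 720, 'tire': 20}
```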

The problem addressed by DDMRP is the flow optimization within a multi-level BOM and consists of determining, at any point of time, (a) whether more raw materials should be sourced and how much, (b) whether more units of any SKU should be produced and how many.

Intuitively, this problem is difficult because there is no direct correlation between the quality of service of any intermediate SKU - usually measured through service levels - and the quality of service of the end product. Adding more stock to a given SKU only improves the end product’s quality of service if this SKU was, somehow, a bottleneck in the manufacturing flow.

In practice, the resolution of this flow optimization problem requires a series of further inputs, most commonly:

  • Order history from clients
  • Supplier lead times
  • Stock levels, on hand, in transit or on order
  • Manufacturing lead times and/or production throughputs
  • etc.

Moreover, real-world supply chains tend to exhibit further complications such as batch sizes (any kind of desirable multipliers imposed by either the supplier or the manufacturing process itself), shelf-lives (not just for perishable goods, but also chemicals and sensitive pieces of equipment), and imperfect substitutes (e.g. when a more expensive part can be used as a replacement if the less expensive one is unavailable). These complications require further data to be reflected by the model.

Limits of the classic MRP

The inception of DDMRP was motivated by the limitations associated with what could be referred to as the classic MRP perspective (simply referred to as the MRP perspective in the following), which was primarily developed in the 80’s. The MRP perspective is geared around the analysis of lead times and identifies the longest path (time-wise) in the BOM graph as the bottleneck associated with the manufacturing process of the end product.

In order to identify this bottleneck, the MRP offers two distinct numerical methods to assign a static lead time to every edge of the BOM graph, either:

  • manufacturing lead times, which are maximally optimistic and assume that inventory is always available everywhere (i.e. for every SKU), hence that lead times only depend on the throughput of the manufacturing processes.
  • cumulative lead times, which are maximally pessimistic and assume that inventory is never available, and thus that lead times only depend on the time to produce the first unit starting from a blank state, i.e. zero raw materials and zero intermediate products (see the sketch after this list).
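
As a minimal sketch of these two extremes, reusing the hypothetical bom dictionary introduced above and assuming a made-up per-SKU processing (or purchasing) lead time:

```python
# Hypothetical per-SKU lead time, in days, to produce (or purchase) one batch
# of the SKU once all of its components are available; reuses `bom` above.
own_lead_time = {
    "bicycle": 1, "wheel": 2, "frame": 3,
    "rim": 10, "spoke": 5, "tire": 7, "steel_tube": 15,
}

def manufacturing_lead_time(sku: str) -> int:
    # Optimistic view: every component is assumed to be in stock, so the lead
    # time reduces to the SKU's own processing time.
    return own_lead_time[sku]

def cumulative_lead_time(sku: str, bom: dict) -> int:
    # Pessimistic view: no inventory anywhere, so the lead time is the longest
    # path (time-wise) down to the raw materials.
    components = bom.get(sku, {})
    if not components:
        return own_lead_time[sku]
    return own_lead_time[sku] + max(cumulative_lead_time(c, bom) for c in components)

print(manufacturing_lead_time("bicycle"))    # 1
print(cumulative_lead_time("bicycle", bom))  # 1 + max(frame: 3+15, wheel: 2+10) = 19
```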

These two methods have a single key advantage in common: they are relatively straightforward to implement within a relational database, which was the architectural core of nearly all the MRPs engineered from the 80’s to the 2010’s.

However, these two methods are also overly simplistic and usually deliver nonsensical lead times. The authors of DDMRP point out that computing purchase or production orders based on deeply flawed lead time estimates ends up generating a mix of overstocks and stockouts, depending on whether the lead times turn out to be grossly over- or underestimated.

The numerical recipe of DDMRP

DDMRP’s numerical recipe is a mix of numerical heuristics coupled with human judgement calls - i.e. decisions delegated to supply chain experts. This recipe is intended to overcome the flaws associated with the classic MRP without resorting to “advanced” numerical algorithms. The recipe comes with four main ingredients, namely:

  • decoupling the lead times
  • the net flow equation
  • the decoupled explosion
  • the relative priority

By combining those four ingredients, a supply chain practitioner can compute the quantity to buy and to manufacture when facing a multi-level BOM situation. DDMRP’s authors argue that this method delivers superior supply chain performance - as measured in inventory turns or service levels - compared to the performance achieved by MRPs.

Decoupling the lead times

In order to remedy the naively extreme optimism / pessimism of the MRP perspective on lead times, DDMRP introduces a binary graph coloring2 scheme where certain vertices (i.e. SKUs) of the graph (i.e. the BOMs) are promoted as decoupling points. These vertices are then assumed to always hold serviceable inventory, and the DDMRP methodology ensures that this is indeed the case.

The choice of the decoupling points is essentially delegated to supply chain practitioners. As the decoupling points are intended to be stocked SKUs, practitioners should favor SKUs that make sense at a strategic level - for example because they are consumed by multiple end products and benefit from more steady consumption patterns than most end products.

Once the decoupling points are chosen, the DDMRP lead times associated with any vertex can be computed as the longest path (time-wise), starting from the vertex and reaching down, but truncating the path whenever a decoupling point is encountered.
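
A minimal sketch of this truncated longest-path computation, reusing the hypothetical bom and own_lead_time dictionaries from the earlier sketches, with a made-up choice of decoupling points:

```python
# Hypothetical choice of decoupling points (stocked SKUs assumed serviceable);
# reuses the `bom` and `own_lead_time` dictionaries from the sketches above.
decoupling_points = {"wheel", "steel_tube"}

def ddmrp_lead_time(sku: str, bom: dict) -> int:
    # Longest path (time-wise) reaching down from `sku`, truncated whenever a
    # decoupling point is encountered: its stock is assumed serviceable, so the
    # path does not extend below it.
    components = bom.get(sku, {})
    downstream = [
        0 if c in decoupling_points else ddmrp_lead_time(c, bom)
        for c in components
    ]
    return own_lead_time[sku] + max(downstream, default=0)

print(ddmrp_lead_time("bicycle", bom))  # 1 + max(frame: 3, wheel: 0) = 4
```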

With a careful selection of the decoupling points, the authors of DDMRP argue that the DDMRP methodology delivers shorter lead times. This proposition is not entirely correct: not because the lead times are actually longer, but because DDMRP proposes a new definition of what is being referred to as a lead time in the first place.

The Net Flow Equation

In order to compute the quantities associated with either purchase orders or manufacturing orders, DDMRP’s authors introduce a concept called the net flow, defined as follows:

On-Hand + On-Order - Qualified Sales Order Demand = Net Flow Position

This equation is defined at the SKU level. The net flow quantity is interpreted as the quantity of stock that is available to address the uncertain part of the demand.
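
Expressed as code, the net flow position is a one-liner; the figures below are purely illustrative:

```python
def net_flow_position(on_hand: float, on_order: float, qualified_demand: float) -> float:
    # Net Flow Position = On-Hand + On-Order - Qualified Sales Order Demand
    return on_hand + on_order - qualified_demand

print(net_flow_position(on_hand=120, on_order=80, qualified_demand=60))  # 140
```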

The net flow position is then compared to a buffer size; when it falls notably below its target buffer, an order is placed. We will get back to this mechanism in the ordering prioritization section below.

The DDMRP methodology offers some high-level guidance on how to size the buffers, typically expressing them in days of demand, and enforcing safe margins while respecting the DDMRP lead times - as defined above. In practice, sizing the buffers depends on the supply chain practitioners’ best judgement.
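
A minimal sketch of this buffer logic, assuming the buffer is expressed in days of demand and converted into units through an average daily demand; the 50% replenishment trigger below is an assumption made for illustration, not a prescription from the DDMRP materials:

```python
def buffer_in_units(days_of_demand: float, average_daily_demand: float) -> float:
    # Convert a buffer expressed in days of demand into units, using an
    # average daily demand estimated from recent history.
    return days_of_demand * average_daily_demand

def should_order(net_flow: float, buffer_units: float, trigger: float = 0.5) -> bool:
    # Place an order once the net flow position drops notably below the target
    # buffer; the 50% threshold here is purely illustrative.
    return net_flow < trigger * buffer_units

buffer = buffer_in_units(days_of_demand=15, average_daily_demand=8)  # 120 units
print(should_order(net_flow=50, buffer_units=buffer))                # True
```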

Through net flows, the authors of DDMRP emphasize that only the uncertain portion of the demand actually requires any kind of statistical analysis. Dealing with the future demand that is already known is a pure matter of adherence to a deterministic execution plan.

The decoupled explosion

DDMRP’s methodology both relies on and enforces the assumption that stock is always serviceable from any decoupling point. This assumption offers the possibility to partition the edges using the decoupling points (i.e. a subset of vertices) as frontiers between the partition subsets. This partitioning scheme is referred to as the decoupled explosion.

From a DDMRP perspective, when a client order is placed for an end product, the resulting demand is not recursively disaggregated down to its innermost components, but only down to the first decoupling points encountered.
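
A minimal sketch of the decoupled explosion, reusing the hypothetical bom and decoupling_points from the earlier sketches; compared to the full explosion of the first sketch, the recursion stops at the first decoupling point encountered:

```python
def decoupled_explosion(sku: str, quantity: int, bom: dict) -> dict:
    # Disaggregate a demand of `quantity` units of `sku` into component
    # requirements, stopping the recursion at the first decoupling point
    # encountered (its stock is assumed serviceable).
    needs = {}
    for component, qty_per_unit in bom.get(sku, {}).items():
        required = quantity * qty_per_unit
        needs[component] = needs.get(component, 0) + required
        if component not in decoupling_points:
            for c, q in decoupled_explosion(component, required, bom).items():
                needs[c] = needs.get(c, 0) + q
    return needs

print(decoupled_explosion("bicycle", 10, bom))
# {'frame': 10, 'steel_tube': 40, 'wheel': 20} - unlike the full explosion,
# rim, spoke and tire are never reached: 'wheel' truncates the recursion.
```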

The graph partitioning scheme of the decoupled explosion is exploited by the DDMRP methodology as a divide and conquer3 strategy. In particular, as the size of the subgraph can be kept small, DDMRP can be implemented on top of relational database systems, much like MRPs, even if those systems aren’t really suited for graph analytics.

Ordering prioritization

The final numerical step in the DDMRP recipe consists of computing the orders themselves, either purchase orders or manufacturing orders. The DDMRP methodology prioritizes all SKUs by their respective differences Buffer - Net Flow, with the largest values coming first. Orders are then generated by processing the list in this order, picking all the values that are positive and, typically, at least as great as the MOQ (when applicable).
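
A minimal sketch of this prioritization, with made-up per-SKU buffers, net flow positions and MOQs:

```python
# Hypothetical per-SKU state: target buffer, net flow position and MOQ.
skus = {
    "wheel":      {"buffer": 120, "net_flow": 40,  "moq": 50},
    "steel_tube": {"buffer": 300, "net_flow": 310, "moq": 100},
    "tire":       {"buffer": 80,  "net_flow": 35,  "moq": 20},
}

def prioritized_orders(skus: dict) -> list:
    # Rank SKUs by Buffer - Net Flow, largest gaps first, keeping only the
    # positive gaps that are at least as large as the MOQ (when applicable).
    gaps = [(s["buffer"] - s["net_flow"], name, s["moq"]) for name, s in skus.items()]
    gaps.sort(reverse=True)
    return [(name, gap) for gap, name, moq in gaps if gap > 0 and gap >= moq]

print(prioritized_orders(skus))
# [('wheel', 80), ('tire', 45)] - 'steel_tube' sits above its buffer and is skipped.
```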

The prioritization of DDMRP is one dimensional (scoring-wise) and driven by the internal adherence to its own methodology, that is, maintaining serviceable stocks for all decoupling points. The previous sections illustrated how this key property of the decoupling points was leveraged. The ordering prioritization clarifies how this property is enforced.

The ordering prioritization as proposed by the DDMRP authors is more granular than the recipes typically found in MRPs such as ABC analysis. It provides a mechanism to guide the attention of the supply chain practitioners toward the SKUs that need the most attention - at least according to DDMRP’s urgency criterion.

Criticisms of DDMRP

The authors of DDMRP are promoting4 the benefits5 of this methodology as a state-of-the-art practice to maximize supply chain performance. While DDMRP comes with a few “hidden” gems, detailed below, multiple criticisms can be made about this methodology; the most notable ones being, first, an incorrect baseline to assess both novelty and performance, and second, a formalism that doesn’t capture real-world complexity.

Hidden gems

While it may seem paradoxical, the strongest arguments in favor of DDMRP might not have been properly identified by its own authors, at least not in their 2019 publication. This apparent paradox is probably an unintended consequence of the limited formalism of DDMRP - detailed below.

As far as manufacturing supply chains are concerned, frequential moving averages are usually superior to temporal moving averages. Indeed, it is incorrect to state that DDMRP works without demand forecasts. The buffers are forecasts, except they are frequential forecasts (i.e. days of demand), rather than temporal ones (i.e. demand per day). As a rule of thumb, frequential forecasts are more robust whenever demand is either erratic and/or intermittent. This discovery can be traced back to J.D. Croston, who published “Forecasting and Stock Control for Intermittent Demands” in 1972. However, while Croston’s methods remain somewhat obscure, DDMRP popularized this perspective to the supply chain world at large.
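
The distinction between the two perspectives can be illustrated with a short sketch over a made-up intermittent demand history: the temporal perspective averages units per day, while the frequential perspective asks how many days of observed demand a given quantity of stock covers:

```python
# A made-up intermittent daily demand history (units sold per day).
history = [0, 0, 6, 0, 0, 0, 5, 0, 0, 7]

def temporal_forecast(history: list) -> float:
    # Temporal perspective: an average demand per day.
    return sum(history) / len(history)

def days_covered(stock: float, history: list) -> int:
    # Frequential perspective: how many days of observed demand (walking
    # backward through the history) does the given stock cover?
    days = 0
    for demand in reversed(history):
        if stock < demand:
            break
        stock -= demand
        days += 1
    return days

print(temporal_forecast(history))  # 1.8 units per day
print(days_covered(12, history))   # 12 units cover 7 days of observed demand
```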

Approximate prioritization is a robust decision-making mechanism in supply chain that prevents entire classes of problems, most notably systematic biases. Indeed, unlike SKU-wise approaches such as safety stocks, which can easily be numerically distorted by local supply chain artefacts (e.g. a stock-out), even a loose supply chain wide prioritization ensures that resources get directed first toward obvious bottlenecks. While the DDMRP authors are clearly aware that prioritization is beneficial as an attention mechanism, the insight isn’t brought to its logical conclusion: the prioritization should be economic, i.e. measured in dollars not in percentages.

Incorrect baseline

The prime criticism to be made against DDMRP is its incorrect baseline. MRPs, as implemented and sold in the four decades ranging from the early 80’s to the late 2010’s, have never really been engineered6 to plan, to forecast or to optimize anything. The name itself, MRP (Material Requirements Planning), is a misnomer. A better name would have been MRM (Material Requirement Management). These software products are built with a relational database at their core (i.e. a SQL database) and are primarily intended to keep track of the company’s assets, and perform all the clerical tasks associated with the most mundane operations, e.g. decrementing a stock level when a unit is picked.

As the relational core is largely at odds with any numerically intensive processing, such as most kinds of graph algorithms, it is unsurprising that the numerical recipes delivered by such products end-up being simplistic and dysfunctional, as illustrated by the two flavors of lead time estimations discussed above. Nevertheless, a vast catalog of literature exists in computer science on the predictive numerical optimization of supply chains. This literature was pioneered in the 50’s under the name of Operations research, and has been pursued ever since under different names, such as quantitative methods in supply chain management or simply supply chain optimization.

Both claims of novelty and superiority for DDMRP are incorrectly drawn from the false premise that MRPs are a relevant baseline for supply chain optimization purposes; i.e. improving upon MRP is an improvement in supply chain optimization. However, MRPs, like all software systems centrally engineered around relational databases, are simply unsuited for numerical optimization challenges.

Manufacturers stuck with the limitations of their MRP should not seek incremental improvements upon the MRP itself, as numerical optimization is fundamentally at odds with the MRP’s design, but rather take advantage of all the software tools and technologies that have actually been engineered for numerical performance in the first place.

Limited formalism

The DDMRP perspective is an odd mix of simple formulas and judgement calls. While DDMRP clearly operates within a specific mathematical framework - i.e. a weighted directed acyclic graph - and its mechanisms have well-known names - i.e. graph coloring, graph partitioning - those terms are absent from the DDMRP materials. While it can be argued that graph theory is too complex for the average supply chain practitioner, the lack of formalism forces the authors into lengthy explanations for numerical behaviors that could be described much more precisely and concisely.

More concerning, the lack of formalism isolates DDMRP from the vast body of computer science literature, which provides many insights on what can be done with known algorithms from fields that have been studied extensively beyond the requirements of supply chain management, namely graph theory, stochastic optimization and statistical learning. As a result, DDMRP frequently adopts simplistic perspectives - we’ll get back to this point below - which are not justified considering both known algorithms and present computing hardware capabilities.

Furthermore, the limited formalism of DDMRP leads to erroneous claims such as reduced lead times. Indeed, numerically, the lead times as computed by DDMRP are certainly shorter than most alternatives because, by construction, lead time paths are truncated whenever a decoupling point is encountered. Yet, a methodological error is made when asserting that with DDMRP, lead times are shorter. The correct proposition is that with DDMRP, lead times are measured differently. A proper quantitative assessment of the merits, lead-time wise, of DDMRP requires a formal notion of system-wide inertia to evaluate how quickly a supply chain governed by a formal policy would catch up when facing changes in market conditions.

Also, judgement calls are used extensively by DDMRP - i.e. key numerical decisions, such as the choice of decoupling points, are delegated to human experts. As a result, it is impractical, if not impossible, to benchmark a DDMRP practice against a competing, properly formalized, methodology, as performing the benchmark would require a prohibitive amount of manpower for any sizeable supply chain (i.e. thousands of SKUs or more).

Finally, relying on human inputs to tune a numerical optimization process is not a reasonable proposition considering the price point of modern computing resources. Meta-parameter tuning might be acceptable, but not a fine-grained intervention at every vertex of the graph. In particular, a casual observation of present day supply chains indicates that the need for human inputs is one of the biggest factors behind system-wide inertia. Adding another layer of manual tuning - the choice of decoupling points - isn’t an improvement in this regard.

Dismissive of real-world complexity

Modeling a supply chain is, by necessity, an approximation of the real world. Thus, all models are a tradeoff between precision, relevance and computational feasibility. Nevertheless, DDMRP is overly simplistic with regard to many factors that cannot reasonably be dismissed anymore considering present computing hardware.

The supply chain exists to serve the economic interests of the company. Putting it more bluntly, the company maximizes the dollars of returns generated through its interaction with the economy at large; yet DDMRP optimizes percentages of error against arguably arbitrary targets - its buffers. The prioritization as defined by DDMRP is looking inward: it steers the supply chain system toward a state that is consistent with the assumptions underlying the DDMRP model itself - i.e. stock availability at the decoupling points. However, there is no guarantee that this state is aligned with the financial interests of the company; it might even go against them. For example, when considering a brand producing many low-margin products that are close substitutes of one another, maintaining high service levels for a given SKU might not be a profitable option if competing SKUs (quasi-substitutes) already have an excess of inventory.

Furthermore, the prioritization scheme proposed by DDMRP is fundamentally one dimensional: the adherence to its own stock targets (the buffers). However, real supply chain decisions are nearly always many-dimensional problems. For example, after producing a batch of 1000 units, a manufacturer might usually put those 1000 units in a container for sea freight; yet if a stock-out is imminent down the supply chain, it might be profitable to have 100 units (out of the 1000) shipped by aircraft to mitigate the pending stock-out ahead of time. Here, the choice of transportation mode is an extra dimension to the supply chain prioritization challenge. In order to address this challenge, the prioritization method requires the capacity to integrate the economic drivers associated with the diverse options that are available to the company.
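
As a sketch of what an economic comparison of these two options could look like - all figures below are made up for illustration:

```python
# All figures below are made up for illustration.
units_at_risk = 100           # units that would be missed while the sea shipment is in transit
gross_margin_per_unit = 40.0  # dollars of margin lost per unit of stock-out
air_premium_per_unit = 25.0   # extra freight cost of air versus sea, per unit

stockout_cost_avoided = units_at_risk * gross_margin_per_unit  # 4000.0
extra_freight_cost = units_at_risk * air_premium_per_unit      # 2500.0

# A positive value indicates that shipping 100 units by air is the more
# profitable option in dollar terms.
print(stockout_cost_avoided - extra_freight_cost)  # 1500.0
```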

Other dimensions that need to be considered as part of the prioritization may include:

  • pricing adjustments, to increase or reduce demand (possibly through secondary sales channels)
  • build or buy, when substitutes can be found on the market (typically at a premium)
  • stock expiration dates (requiring in-depth insights on the stock composition)
  • return risks (when distribution partners have the option to return unsold goods).

Thus, while DDMRP is correct in stating that prioritization is a more flexible approach compared to binary all-or-nothing approaches as implemented by MRPs, the prioritization scheme proposed by DDMRP itself is rather incomplete.

Lokad’s take

DDMRP’s motto is “build for people not perfection”. At Lokad, we favor the classic IBM vision - “machines should work; people should think” - through the Quantitative Supply Chain Management (QSCM) perspective.

QSCM starts from the hypothesis that every mundane supply chain decision should be automated. This perspective emphasizes that competent supply chain practitioners are too rare and too expensive to spend their time generating routine stocking, purchasing or pricing decisions. All those decisions can and should be automated, so that practitioners can focus on improving the numerical recipe itself. From a financial perspective, QSCM turns those salaries from OPEX, where man-days are consumed to keep the system rolling, into CAPEX, where man-days are invested in the ongoing improvement of the system.

The DDMRP angle starts from the hypothesis that competent supply chain practitioners can be trained en masse, thus both lowering the cost for the employer and reducing the truck factor associated with the departure of any employee. DDMRP establishes a process to generate mundane supply chain decisions, but achieving full automation is mostly a non-goal, although DDMRP isn’t averse to automation whenever the opportunity arises.

Interestingly, whether the industry is steering towards the QSCM perspective or the DDMRP one should be observable to some extent. If the QSCM perspective gets adopted more broadly, then supply chain management teams will evolve to become more like other “talent” industries - e.g. finance with its quantitative traders - where a few exceptionally talented individuals drive the performance of large companies. Conversely, if the DDMRP perspective is adopted more broadly, then supply chain management teams will evolve to become more like successful franchises - e.g. Starbucks store managers - where teams are large and well-trained, with exceptional individuals having little effect on the system, but where a superior culture makes all the difference between companies.

Resources

  • Demand Driven Material Requirements Planning (DDMRP), Version 3, by Ptak and Smith, 2019
  • Orlicky’s Material Requirements Planning, 3rd edition, by Carol A. Ptak and Chad J. Smith, 2011

Notes


  1. In discrete mathematics, a graph is a set of vertices (also called nodes or points) and edges (also called links or lines). The graph is said to be directed if edges have orientations. The graph is said to be weighted if edges have a number - the weight - assigned to them. The graph is said to be acyclic if no cycle exists when following the edges according to their respective orientations. ↩︎

  2. A coloring scheme consists of assigning a categorical property to every vertex of the graph. In the case of DDMRP, there are only two options - decoupling point or not - i.e. only two colors. ↩︎

  3. In computer science, a divide-and-conquer algorithm works by recursively breaking down a problem into two or more related subproblems, until these become simple enough to be solved directly. This approach was pioneered by John von Neumann in 1945. ↩︎

  4. As of Feb 24th, 2020, the Demand Driven Institute™ is a for-profit organization that defines itself (sic) as The Global Authority on Demand Driven Education, Training, Certification & Compliance. Its business model revolves around selling training sessions and materials geared around DDMRP. ↩︎

  5. As of Feb 24th, 2020, the homepage of the Demand Driven Institute™ (demanddriveninstitute.com) gives the following figures as typical improvements: users consistently achieve 97-100% on time fill rate performance, lead time reductions in excess of 80% have been achieved in several industry segments, typical inventory reductions of 30-45% are achieved while improving customer service. ↩︎

  6. MRP vendors certainly made bold claims about the planning, forecasting and optimization capabilities of their product. Nevertheless, just like the Guide Michelin does not bother assessing whether cornflake brands could be eligible for a culinary star rating despite their magically delicious taglines, our assessment should be directed at parties who were focused mainly on delivering state-of-the-art supply chain performance. ↩︎