For most of my career, I have been told that “optimization” is the answer. If only we could formulate the right model, feed it with enough data, and let a powerful solver loose, the machine would calmly produce the best possible decisions.

Yet in real supply chains, this promise rarely materializes. Companies deploy sophisticated software, tune countless parameters, and still find themselves firefighting stockouts, excess inventory, and nervous planners who no longer trust the system. The mathematics inside the black box may be beautiful; the decisions on the warehouse floor are often not.

In Introduction to Supply Chain, particularly in the chapters devoted to decisions and numerical recipes, I argued that the real bottleneck is not the lack of solvers but the lack of an adequate frame around the optimization itself. In this essay I want to take this idea one step further and give it a name: holimization, and its verb, to holimize, formed by contracting “holistic optimization”.


When optimization is framed around the wrong question

Classical optimization, in its simplest form, starts from a very tempting posture:

“Tell me what you want to maximize or minimize, list the constraints, and I will find the best possible decision.”

On a whiteboard this sounds perfectly reasonable. In a supply chain, it hides almost everything that actually matters.

What exactly are we “maximizing”? Profit this month? Over a year? Growth over several years? Customer satisfaction in some not-quite-measurable sense? Resilience to disruption? A blend of all of the above?

What exactly are the constraints? The formal ones, such as capacity and lead times, or also the undocumented ones, such as “this warehouse team cannot realistically handle more than 5,000 inbound cartons a day” or “this machine cannot be reset twice in the same week”?

And how should we measure harm? Is a stockout worse than a clearance sale? How much worse, in money terms, and for which products?

When we deploy a classical optimization engine inside a supply chain, we are forced to answer these questions by encoding them—explicitly or implicitly—into an objective function and a collection of constraints. Once we have done that, the solver will indeed find “optimal” decisions relative to that encoding. Unfortunately, if the encoding is even slightly misaligned with reality, we simply get the wrong decisions faster.

I have seen many variations of the same story. A retailer rolls out a new replenishment system that proudly maximizes “service level” subject to inventory and logistics costs. Stockouts go down on paper, but stores now hold too many slow-moving sizes and colors that customers do not want. A manufacturer installs an advanced planning system that optimizes production schedules against a simplified model of capacity; the resulting plan looks elegant in the interface, but the plant spends its time improvising workarounds because the model never captured critical bottlenecks on the shop floor.

In each of these cases, the optimization itself works. The computer does exactly what we asked it to do. The problem is that we asked the wrong question.

Naming the missing discipline

The notion I want to introduce is this:

Holimization (n.) – The discipline of holistic optimization for complex, evolving systems, where the primary design object is the optimization frame itself: the objectives, constraints, data semantics, and decision grammar. Holimization treats optimization as an iterative, experimental process in which the objective function, the model, and the instrumentation co-evolve under real‑world feedback.

To holimize (v.) – To orchestrate such a process: repeatedly framing, instrumenting, optimizing, and refactoring the decision system so that it remains aligned with economic reality and organizational intent.

Optimization, in the narrow sense, focuses on the inner loop: given a fixed objective and a fixed representation of the world, find decisions that extremize this objective. Holimization focuses on the outer loop: how we design that objective, how we represent the world, how we instrument the system, and how all of this changes over time.

In other words, optimization answers: “What is best, assuming the world looks like this?” Holimization asks: “Are we looking at the world in a way that lets ‘best’ make sense at all?”

This is not an abstract philosophical point. In a messy, evolving environment such as a supply chain, the question we are optimizing keeps changing under our feet. Markets shift, assortments change, channels appear and disappear, regulations evolve, and internal priorities move. If our optimization frame remains fixed while the world moves around it, the divergence between what is “optimal” on the screen and what is sensible in reality grows steadily over time.

Holimization is the name I propose for the discipline of treating the frame itself as a first-class object of work.

What it means to holimize a supply chain

Let me make this concrete with supply chain examples, as this is where I have spent most of my time.

Imagine a fashion retailer that wants to “improve service” in stores. If we take optimization in the narrow sense, we might begin by defining service as the probability of not being out of stock when a customer comes in, and we might set a target such as “95% service level”. We would then encode stockouts as a cost, overstocks as another cost, and let an optimization engine balance the two.
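To make the narrow framing concrete, here is a minimal sketch of that per-SKU cost balance: a single stock level chosen to minimize expected stockout cost plus expected holding cost, with every number invented for illustration. This is not any particular vendor's engine, just the textbook shape of the question as posed.

```python
import numpy as np

rng = np.random.default_rng(42)

def best_stock_level(demand_samples, stockout_cost, holding_cost, max_units=30):
    """Pick the stock level minimizing expected stockout + holding cost
    for a single SKU, treated in isolation (the narrow framing)."""
    levels = np.arange(max_units + 1)
    expected_costs = [
        stockout_cost * np.maximum(demand_samples - q, 0).mean()
        + holding_cost * np.maximum(q - demand_samples, 0).mean()
        for q in levels
    ]
    return int(levels[np.argmin(expected_costs)])

# Invented numbers: weekly demand for one size of one product.
demand = rng.poisson(4.0, size=20_000)
q = best_stock_level(demand, stockout_cost=25.0, holding_cost=3.0)
```

The solver answers exactly the question as posed; whether "service" should be a per-SKU cost balance at all is the question it cannot ask.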

This leads to decisions that are numerically tidy but aesthetically odd. Stores end up with a technically sufficient number of units, yet most of them are in the wrong sizes or all in the same safe colors. From the perspective of the model, everything is fine: the service target is met and costs are within bounds. From the perspective of the customer walking into the shop, the collection looks lifeless.

If we holimize instead, we accept that “service” is not yet properly formulated. We turn the deployment of the decision system into an experiment on our own assumptions.

We start by putting a real decision recipe in place, powered by data and optimization, but we surround it with instrumentation designed to surface “insane” decisions: assortments that any human buyer would immediately judge as harmful, even if the system cannot yet express why. We monitor stores where the model insists on sending an implausible mix of sizes or absurd quantities of clearance items. We listen carefully to planners who complain that the system keeps starving certain stores of variety.

Each time we encounter such insanity, we trace it back. Perhaps our objective function has no way to reward assortment diversity, only unit availability. Perhaps we never captured the practical limit on how often a store can be reset. Perhaps our historical data hides substitution effects that make certain stockouts much less harmful than others.

We then change the frame. We introduce a quantified notion of diversity into the objective, with a financial value attached to it. We replace a hard constraint with a cost, or the other way around. We add a new class of decisions to the recipe, such as when to refresh a display or how to stagger deliveries for fragile stores. We extend the instrumentation to visualize these new aspects so that the next round of insanity, if it occurs, will be easier to diagnose.
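As an illustration of how changing the frame changes the answer without touching the solver, the toy sketch below brute-forces the allocation of a few units across sizes, first with an availability-only objective, then with a financial value attached to assortment diversity. All figures (the demand rates, the margin, the diversity value) are invented for the example.

```python
import math
from itertools import product

def expected_sales(q, lam):
    """E[min(q, D)] for Poisson demand D: sum over k < q of P(D > k)."""
    cdf, pmf, total = 0.0, math.exp(-lam), 0.0
    for k in range(q):
        cdf += pmf           # F(k) once pmf(k) is accumulated
        total += 1.0 - cdf   # P(D > k)
        pmf *= lam / (k + 1)
    return total

def best_allocation(demand_rates, total_units, margin, diversity_value):
    """Brute-force the split of units across sizes that maximizes expected
    margin, plus an optional financial bonus per size actually stocked."""
    best, best_score = None, -1.0
    for alloc in product(range(total_units + 1), repeat=len(demand_rates)):
        if sum(alloc) != total_units:
            continue
        score = sum(margin * expected_sales(q, lam)
                    for q, lam in zip(alloc, demand_rates))
        score += diversity_value * sum(1 for q in alloc if q > 0)
        if score > best_score:
            best, best_score = alloc, score
    return best

rates = [0.2, 5.0, 1.0, 0.15]  # S, M, L, XL: demand concentrated on M
alloc_narrow = best_allocation(rates, 8, margin=20.0, diversity_value=0.0)
alloc_holistic = best_allocation(rates, 8, margin=20.0, diversity_value=6.0)
```

With the availability-only objective, the fringe sizes are abandoned entirely; once diversity carries a euro value, the same brute-force search stocks all four sizes. The optimization machinery is identical; only the frame moved.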

We have holimized the situation: instead of blaming the solver or ignoring the system, we used the very failures of the decisions as signals that our frame was incomplete and needed to evolve.

A similar story plays out in maintenance and repair operations. Suppose you are managing spare parts for aircraft engines. The naïve optimization approach is to treat each part independently: estimate how often it will be needed, assign a cost to stockouts and a cost to holding inventory, and let the machine find the best reorder point.

In practice, this is not how engines are repaired. The real harm occurs when a repair is delayed because one critical part is missing, even if you are drowning in inventory of dozens of other parts. The cost is not a stockout on a line in a spreadsheet; it is days of downtime for an aircraft.

Holimization forces us to reframe the objective around something closer to reality: for example, the reduction of expected delay days per engine per unit of budget. It pushes us to instrument the process with views that make it obvious which parts are repeatedly responsible for blocking repairs. When the system suggests stocking a large quantity of a part that never actually gates repairs, that is an insanity signal. When it starves a part that repeatedly delays engines, that is another.
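A crude sketch of that reframed objective, with all parts and figures invented: instead of per-part reorder points, the budget is spent greedily on whichever extra unit avoids the most expected engine-delay days per euro. The delay model here is deliberately simplistic (any shortage costs a fixed number of days), a placeholder for whatever faithful harm model the real process would converge toward.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def delay_days_avoided(stock, lam, delay_days):
    """Marginal expected delay avoided by stocking one more unit: the
    repair is gated only when demand exceeds stock, so going from s to
    s+1 units helps exactly when demand hits s+1."""
    return poisson_pmf(stock + 1, lam) * delay_days

def spend_budget(parts, budget):
    """Greedily buy the unit with the best delay-days avoided per euro."""
    stock = {name: 0 for name in parts}
    while True:
        best, best_ratio = None, 0.0
        for name, (lam, delay, cost) in parts.items():
            if cost > budget:
                continue
            ratio = delay_days_avoided(stock[name], lam, delay) / cost
            if ratio > best_ratio:
                best, best_ratio = name, ratio
        if best is None:
            return stock
        stock[best] += 1
        budget -= parts[best][2]

# Invented figures: (demand over repair window, delay days if missing, unit cost).
parts = {
    "seal_kit": (2.0, 30.0, 100.0),   # cheap, frequently gates repairs
    "gearbox":  (0.05, 5.0, 2000.0),  # expensive, almost never gates
}
stock = spend_budget(parts, budget=1000.0)
```

Under this frame, the budget flows to the part that actually blocks engines; a frame built on per-part stockout counts would have no way to express that asymmetry.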

We do not treat these failures as embarrassing bugs to hide. We treat them as our best source of information about where our frame is wrong. Then we adjust the way we measure harm, the links between events and costs, and the way we structure decisions so that future optimizations are conducted in a more faithful space.

Holimization, then, is not about getting rid of optimization. It is about surrounding optimization with a disciplined practice of learning from its mistakes.

The hidden work around the optimization engine

If you walk into a large company that is “doing optimization”, most of the effort you will see is not in designing algorithms. It is in making sense of data, dealing with exceptions, and reconciling what the system says with what reality allows.

Holimization makes that hidden work explicit and structured.

A great deal of the difficulty lies in data semantics. Operational systems contain a bewildering number of fields, codes, and historical quirks. When a decision engine takes this data at face value, it inevitably interprets some of it incorrectly. A limit that was once meaningful might have become obsolete. A field that appears to be “lead time” may in fact be some blend of transit and administrative delays. A flag that looks like “on promotion” may be used inconsistently across regions.

Without a holimization mindset, these issues are discovered by accident, often after damage has been done. With holimization, we assume from the beginning that our interpretation of the data is provisional. We build tests, comparisons, and sanity checks that attempt to falsify our reading of the data. When the system proposes a decision that would overload a dock, we do not chalk it up to “bad luck”; we take it as evidence that a constraint is missing from our world view.
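Such falsification attempts can be mundane. As one invented example of the pattern, a check that confronts a declared "lead_time" field with what the purchase-order history actually shows, flagging SKUs where the two disagree:

```python
from statistics import median

def suspicious_lead_times(declared, observed_days, tolerance_days=3):
    """Flag SKUs where the declared lead-time field disagrees with the
    order-to-receipt deltas observed in the historical data."""
    flags = []
    for sku, days in observed_days.items():
        if abs(median(days) - declared[sku]) > tolerance_days:
            flags.append(sku)
    return flags

# Invented records: declared lead times vs. observed order-to-receipt days.
declared = {"SKU-A": 7, "SKU-B": 14}
observed = {"SKU-A": [6, 7, 8, 7], "SKU-B": [25, 28, 24, 27]}
flagged = suspicious_lead_times(declared, observed)
```

Every flag is a small falsification of our reading of the data, caught before it silently distorts thousands of decisions downstream.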

Another large component is instrumentation. An optimization engine by itself is blind: it takes in data, emits decisions, and has no opinion about whether these decisions are sensible. Holimization requires a layer of visibility designed not only to track key performance indicators but also to highlight where the system behaves in ways that humans find absurd.

This can take many forms: time-travel views that let us replay decisions against past data; microscopes on a single product or site, showing how the decision evolved over time; dashboards that highlight outliers rather than averages. The aim is always the same: to turn “insane” decisions into a structured feedback channel, not a source of frustration.
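The "outliers rather than averages" point can be sketched mechanically: a robust flagger based on the median and the median absolute deviation surfaces the one absurd decision that an average would dilute. The numbers below are invented.

```python
from statistics import median

def robust_outliers(values, threshold=3.5):
    """Flag values far from the median in MAD units, so dashboards
    surface the absurd decisions instead of averaging them away."""
    m = median(values)
    mad = median(abs(v - m) for v in values)
    if mad == 0:
        return []
    return [v for v in values if abs(v - m) / mad > threshold]

# Invented replenishment quantities for one product across stores;
# one store receives an absurd push that a mean would barely register.
qties = [4, 5, 6, 5, 4, 6, 5, 240]
flagged = robust_outliers(qties)
```

The mean of these quantities is about 34, alarming no one in particular; the robust view points straight at the single insane line.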

Finally, there is the speed and safety of iteration. Holimization is experimental by nature. We need to try out new frames and revised objectives without putting the whole business at risk every time. That implies technical capabilities—versioning of decision recipes, controlled rollouts, shadow modes—but also organizational ones: clear responsibilities, a culture that accepts experiments as necessary, and management that understands the difference between a stable production rule and a probing test.

All of this is work. It is, however, work that we are already doing informally whenever we complain that “the system doesn’t get it.” Holimization is an attempt to give that work a proper name and a proper method.

Why a new word matters

You might reasonably ask: why coin a new term at all? Why not simply talk about “good optimization practice” or “experimental modeling”?

My experience is that without a distinct name, the outer loop gets swallowed by the inner one. Once the word “optimization” is on the table, attention gravitates toward algorithms, solvers, and performance benchmarks. The conversation shifts to whether a particular method converges faster or scales better, and away from the more uncomfortable question of whether we are optimizing the right thing in the first place.

By contrast, “holimization” carries, in its very construction, the reminder that optimization is only one ingredient in a broader discipline. It says that we care about the whole arc from reality to data to decision and back to reality again. It suggests that our primary artifact is not the solver, but the evolving frame in which the solver operates.

For my own company, Lokad, this naming also clarifies what we are trying to build. We are not, at heart, a provider of one more optimization engine. We are trying to provide a platform where companies can holimize their supply chains: a place where data can be reshaped, where objectives can be expressed in financial terms, where decisions can be automated yet remain intelligible, and where every failure of the system is treated as a precious learning opportunity about how the frame should evolve.

The word is new, but the need it captures is not. Supply chains, and many other complex systems, have been quietly holimizing themselves for years through trial and error, spreadsheet patches, and endless meetings. My hope is that by giving this process a name and a clearer shape, we can make it more deliberate, more rigorous, and ultimately more effective.

Optimization, as mathematics, is not going away; if anything, it will keep improving. Holimization is the invitation to step one level higher, to treat our models, objectives, and constraints not as sacred, but as hypotheses to be tested against the world. Only then can “optimal” stop being a formal label in a report and start resembling the decisions we actually want on the ground.