Most people who hear that I work on “quantitative supply chain” assume I am doing operations research. In a sense they are right: I care deeply about mathematical models, optimization, and using data to support decisions. Yet over the last two decades, while working with companies from fashion to aerospace, I have drifted fairly far from what is now the mainstream culture of operations research.

Colorful decision engine linking models to global logistics.

In my recent book Introduction to Supply Chain, I tried to put in one place the way I now think about flows of goods, uncertainty, and decisions. This essay is a more pointed companion: I would like to explain how my perspective evolved, why I see supply chain first as applied economics rather than applied mathematics, and where I feel current operations research both helps and misleads practitioners.

What operations research promised

Historically, operations research had a wonderfully pragmatic mission. During World War II, scientists and engineers were assembled into ad‑hoc teams to help radar operators, convoy planners, and bomber commanders make better decisions. Morse and Kimball’s early textbook defined the field as “a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control.” The important words here are “operations” and “decisions.” The goal was not to prove theorems; it was to change what fleets and factories actually did on Monday morning.

This spirit survived into the early post‑war decades. Operations research was still described as a scientific, quantitative approach to decision‑making, where we build models of real situations and use them to guide action. Modern summaries by organizations like IFORS still emphasize this view: operations research applies data analysis, mathematical modeling and optimization to help managers choose among alternatives. On paper, this is almost exactly what I want for supply chain.

But the way these ideas are usually translated into practice today feels very different.

How the mainstream looks from a supply chain operator’s desk

When I meet people trained in operations research, we typically agree on the grammar of a problem. There are decisions to make (what to buy, produce, move, price), constraints to respect (capacity, regulations, lead times), and an objective to optimize. In textbooks and many consulting engagements, this becomes a familiar pattern: define decision variables, write down constraints, choose an objective, and feed everything to a general‑purpose solver. Linear and mixed‑integer programming are the central tools; simulations and heuristics orbit around them.

There is nothing inherently wrong with this pattern. It shines in problems like network design or strategic capacity planning, where you truly want to pick a configuration once and live with it for years. It also makes sense in tightly constrained industrial settings where the real physics are well understood and the uncertainty is modest.

The trouble starts when this same way of thinking is carried over, almost unchanged, into highly dynamic, uncertain supply chains: e‑commerce, fashion, spare parts, consumer electronics, grocery. In those environments, I routinely see three failures.

First, the objective function is rarely expressed in money. We optimize cost subject to a target service level, or we maximize a service score subject to capacity, or we minimize forecast error as if that were valuable by itself. What is often missing is a single monetary ledger where stockouts, overstock, handling, capital, and all other pains and gains are expressed in comparable units. Without such a ledger, people argue about “trade‑offs” forever, but no one can truly arbitrate between them.

Second, uncertainty is handled as an afterthought. Forecasts are produced as single numbers, and any recognition of variability is folded into safety factors or buffers chosen more by habit than by calibration. Yet in most of the companies I see, the profit or loss of a season is determined by relatively rare events: an unexpectedly hot trend, a strike at a key supplier, a spell of bad weather at the wrong time. Collapsing uncertainty into a single “most likely” value and a thin layer of safety is a polite way of pretending this reality does not exist.

Third, time is treated as a planning horizon rather than a sequence of opportunities to reconsider. We build a monthly or quarterly plan, run a big optimization overnight, and then treat the result as a script to follow. The fact that tomorrow morning we will know more than we know today, and that we could in principle re‑optimize, is acknowledged but not systematically exploited.

From a distance, all of this still looks like operations research. From the perspective of a supply chain operator, it feels oddly detached from the problems that actually make and lose money.

Why I frame supply chain as applied economics

My own background is in mathematics and computer science, and for many years I tried to attack supply chain problems with the standard toolkit: forecasts, safety stocks, cost minimization, service constraints, clever algorithms. Gradually, client after client, it became obvious that I was solving the wrong problem.

What supply chain practitioners really struggle with is not a lack of models; it is a lack of a clear economic lens. They sit on resources that are scarce and versatile: inventory that can go to many places, machines that can produce many things, transportation that can serve multiple channels. Every day they are confronted with more possible actions than they can enumerate. They operate under deep uncertainty, especially about demand and lead times. And they are judged, in the end, in units of currency.

Once you accept this, supply chain starts to look much less like a branch of applied mathematics and much more like a branch of applied economics. The central question becomes: given what we know and what we can reasonably expect, which actions today are likely to create more economic value than they destroy?

This view has practical consequences.

I try to express every important trade‑off in monetary terms, even if the prices are approximate at first. The pain of lost sales, the cost of obsolescence, the value of freshness, the burden of working capital, the nuisance of congestion at a dock: all of these can be translated into per‑unit or per‑time prices. Once they are on a single scale, we can let the math do its job and rank candidate decisions by expected contribution.
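As a minimal sketch of what this ranking can look like, assume three SKUs and invented per‑unit prices (the margin, obsolescence, and holding figures below are illustrative, not drawn from any real client):

```python
# Ranking candidate purchase decisions on a single monetary scale.
# All prices and probabilities below are illustrative assumptions.

GROSS_MARGIN = 12.0   # reward, per unit, if the marginal unit sells
OBSOLESCENCE = 5.0    # expected write-off, per unit, if it never sells
HOLDING_COST = 0.4    # carrying cost, per unit, over the period

def expected_contribution(p_sell: float) -> float:
    """Expected monetary contribution of buying one more unit,
    given the probability that this marginal unit sells."""
    return p_sell * GROSS_MARGIN - (1.0 - p_sell) * OBSOLESCENCE - HOLDING_COST

# Candidate decisions: (SKU, probability that the marginal unit sells).
candidates = [("sku-A", 0.9), ("sku-B", 0.6), ("sku-C", 0.2)]

ranked = sorted(candidates, key=lambda c: expected_contribution(c[1]), reverse=True)
for sku, p in ranked:
    print(f"{sku}: {expected_contribution(p):+.2f}")
```

Once every pain and gain sits in the same function, the ranking is no longer a matter of opinion: sku-C's marginal unit is revealed as value-destroying, and no service-level argument can hide it.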

I also insist on modeling uncertainty explicitly wherever it matters. Instead of treating demand as a point forecast plus a safety factor, I want a full distribution over possible future outcomes. The same goes for lead times, returns, supplier reliability, sometimes even prices. This does not need to be esoteric. Simple, well‑calibrated probabilistic models already change decisions dramatically, because they let us see where the tails of the distribution – the rarer but costly scenarios – sit.
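A toy illustration, with a deliberately bimodal demand (quiet most of the time, occasionally a surge) and invented per‑unit costs, shows how the economically sound stock level can land somewhere a point forecast never would:

```python
# Sketch with made-up numbers: the tail of the distribution, not the
# point forecast, should drive the stocking decision.
import random

random.seed(42)

def sample_demand() -> int:
    """Hypothetical demand: usually quiet, occasionally a surge."""
    if random.random() < 0.1:            # rare hot-trend scenario
        return random.randint(80, 120)
    return random.randint(5, 15)

samples = sorted(sample_demand() for _ in range(10_000))

# The point forecast (mean) lands between the two regimes.
mean = sum(samples) / len(samples)

# Newsvendor logic: stock up to the critical fractile implied by the
# per-unit costs of understock vs overstock, read off the distribution.
understock, overstock = 19.0, 1.0        # in money, per unit
fractile = understock / (understock + overstock)   # 0.95
stock = samples[int(fractile * len(samples))]

print(f"mean ~ {mean:.1f}, economically optimal stock ~ {stock}")
```

Here the mean hovers near 19 units, a quantity that matches neither regime, while the cost structure points at the surge scenario: the tail, rare but expensive to miss, is exactly where the decision is made.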

Finally, I think of supply chain decisions as repeated bets rather than one‑off plans. Every day, new information arrives: sales, delays, disruptions, opportunities. The right question is not “What is the optimal plan for the next quarter?” but “Given what we know right now, and given a reasonable view of how the future might unfold, which commitments should we make today, and which should we postpone until we know more?” The ability to say “not yet” and leave resources uncommitted is itself a valuable option.
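A tiny two‑scenario sketch, with invented costs, makes the value of "not yet" tangible: committing under uncertainty carries an expected mismatch cost that can dwarf the premium paid for waiting until demand is observed.

```python
# Sketch (numbers invented): the monetary value of postponing a commitment.
# Demand will be either low or high, equally likely. We can commit a
# quantity today, or pay a delay premium and commit after observing.
UNDERSTOCK, OVERSTOCK, DELAY_PREMIUM = 10.0, 4.0, 30.0
SCENARIOS = [20, 100]                    # equally likely demand levels

def expected_cost(q: int) -> float:
    """Expected mismatch cost of committing quantity q today."""
    return sum(
        0.5 * (UNDERSTOCK * max(d - q, 0) + OVERSTOCK * max(q - d, 0))
        for d in SCENARIOS
    )

# Commit today: best single quantity against both scenarios.
q_now = min(range(0, 121), key=expected_cost)
cost_now = expected_cost(q_now)

# Postpone: once demand is observed we match it exactly;
# only the delay premium remains.
cost_wait = DELAY_PREMIUM

print(f"commit now: q={q_now}, expected cost {cost_now:.0f}")
print(f"wait: expected cost {cost_wait:.0f}")
```

The uncommitted resource behaves like an option, and the comparison prices that option explicitly instead of burying it in a planning calendar.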

Once supply chain is framed this way, operations research is still present, but in a different role.

From solvers to decision engines

Mainstream operations research often takes the solver as the center of gravity. We formulate a mathematical program, send it to a general‑purpose solver, and judge our success partly by the size and complexity of the instances we can tackle. The more intricate the constraints and the more sophisticated the algorithm, the more successful we feel.

In my daily work, I find it more fruitful to treat the solver as one component in a larger “decision engine.” This engine has several responsibilities.

It must ingest messy, inconsistent enterprise data and turn it into a coherent view of the world: what products exist, where they are, what the lead times look like, what the current commitments are. It must produce probabilistic views of relevant uncertainties: demand, supply, returns, transportation times. It must maintain a monetary ledger of all important costs and benefits, with clear ownership for each price. And it must produce concrete, machine‑readable decisions: purchase orders, transfers, production orders, pricing changes.
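A skeletal sketch of that shape, with hypothetical names and a single toy decision rule (nothing here is the API of any real system), might look like this:

```python
# Minimal skeleton of a "decision engine": a coherent world view, a
# probabilistic demand model, and a monetary ledger go in; concrete,
# machine-readable decisions come out. Illustrative only.
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Decision:
    kind: str                      # "purchase", "transfer", ...
    sku: str
    quantity: int
    expected_gain: float           # in money, kept for traceability

def run_engine(
    stock: dict,                                     # coherent view: sku -> on-hand
    demand_model: Callable[[str], Iterable[float]],  # probabilistic: samples per sku
    prices: dict,                                    # monetary ledger: name -> price
) -> List[Decision]:
    decisions = []
    for sku, on_hand in stock.items():
        samples = list(demand_model(sku))
        p_stockout = sum(d > on_hand for d in samples) / len(samples)
        # One more unit avoids a stockout with probability p_stockout.
        gain = p_stockout * prices["stockout"] - prices["holding"]
        if gain > 0:               # act only when the ledger says so
            decisions.append(Decision("purchase", sku, 1, round(gain, 2)))
    return sorted(decisions, key=lambda d: d.expected_gain, reverse=True)
```

The point of the skeleton is the division of labor: the optimization step is a few lines, while the world view, the distribution, and the ledger carry most of the weight.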

Inside this engine, we absolutely use optimization algorithms, including classical ones. But they are no longer the hero of the story. Equally important are the choices about what to price, what to treat as a hard constraint, what to treat as a soft penalty, how often to recompute, how to attribute the consequences of decisions over time, and when to refuse to act because the uncertainty is too great.

Seen this way, the interesting design questions are closer to economics and software architecture than to pure algorithmics. How do we ensure that every important trade‑off inside the system is expressed in money? How do we make sure that the engine can be falsified by experience, that bad decisions are traceable back to assumptions we can examine and revise? How do we make it cheap to run experiments – challenger engines against incumbent ones – so that we can learn what really works in a given business?

These are not questions operations research ignores, but they are not central to the way the field's institutions present it today.

A constructive disagreement: sequential decisions

In recent years, Warren Powell has been advocating “Sequential Decision Analytics,” a framework that tries to unify the many strands of stochastic optimization, reinforcement learning, and control theory under a single umbrella for decisions over time. I have written separately about where I agree with this approach and where I part ways.

Broadly speaking, we share the conviction that most interesting business problems are sequential: you make a decision, the world moves, you observe, and then you decide again. Where I diverge is in my emphasis on pricing (valuations) as the primary tool for compressing the future into something tractable.

In supply chain, you can often “buy out” the complexity of long‑term consequences by choosing appropriate shadow prices today. For example, if serving a customer today depletes inventory that might be more valuable to another customer tomorrow, that tension should appear as a price on holding inventory, or on consuming it, not as a gigantic scenario tree stretching months ahead. Of course, those prices are imperfect. But the discipline of expressing them in money forces useful conversations: who is willing to accept what trade‑off, and why?
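As a toy sketch of how such a price arbitrates, assume a handful of orders with invented margins competing for a scarce unit of stock; the shadow price stands in for whatever future use the scenario tree would have enumerated:

```python
# Sketch (all numbers invented): a shadow price on consuming a unit of
# inventory replaces the scenario tree of its possible future uses.
def allocate(stock: int, orders: list, shadow_price: float) -> list:
    """orders: (customer, margin) pairs. Serve best margins first, but
    only while the margin beats the opportunity cost of the unit."""
    served = []
    for customer, margin in sorted(orders, key=lambda o: -o[1]):
        if stock == 0 or margin <= shadow_price:
            break
        served.append(customer)
        stock -= 1
    return served

orders = [("retail", 4.0), ("key-account", 9.0), ("bargain", 1.5)]
print(allocate(stock=2, orders=orders, shadow_price=3.0))
```

The bargain order is refused even when stock remains, because its margin does not cover what the unit is expected to be worth elsewhere; disputing that refusal means disputing the price, which is precisely the conversation worth having.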

Sequential decision analytics, as I understand it, tends to begin with a rich model of states, actions, transitions, and objectives, and then searches for policies within that structure. My practice begins one step earlier: with an argument about what should be priced, what should be left as a true constraint, and how long we will hold a given decision accountable for its consequences. Once these choices are made, the sequential nature of the problem often becomes much more manageable.

I see this not as a rejection of the broader framework, but as a particular, supply‑chain‑driven stance within it.

Where this leaves operations research

From the outside, it may seem that I am opposed to operations research. I am not. I still see enormous value in its toolbox and in its historical ambition to support real decisions. I simply believe that, in many supply chain contexts, the field has become overly attached to models that are too deterministic, too static, and too disconnected from the actual economics of the businesses they are supposed to help.

If we return to the original aspiration – to provide a quantitative basis for decisions about operations – then I think we must do three things differently.

We must treat money, not abstract KPIs, as the primary language of trade‑offs. We must take uncertainty seriously, not as an error term but as a first‑class input. And we must accept that most of our interesting problems are not one‑shot optimizations but ongoing sequences of bets, where the ability to reconsider and adapt is as important as any single solution we compute today.

In that sense, my work is not an escape from operations research but an attempt to reconnect it with the messy, uncertain, and deeply economic reality of modern supply chains.