Over the last two decades, I have watched “supply chain management” accumulate buzzwords faster than results. We speak of digital twins, control towers, integrated business planning, demand sensing, resilience, sustainability. Yet if you look carefully at balance sheets and income statements, many companies have not progressed much in terms of how they convert working capital, capacity, and complexity into economic returns.

In my book Introduction to Supply Chain, I tried to reframe supply chain management as a rigorous economic discipline, focused on how companies allocate scarce resources under uncertainty. The book goes into more detail than I can here, but the central idea is simple: whenever we decide what to buy, make, move, or price, we are placing small economic bets with uncertain outcomes. A modern supply chain should be judged by the quality of these bets, and by the long-run financial consequences they create.

[Image: abstract illustration of supply chain decisions viewed as economic bets]

This essay lays out that “bets” perspective and explores what follows from it: how we define performance, how we treat forecasts and plans, what kind of technology we actually need, and what roles humans should play once the dust settles.

Supply chain as a portfolio of bets

The daily life of a supply chain is deceptively mundane. Someone decides to buy one more unit of this item for that warehouse, to be received on that date. Someone brings a production run forward, delays it, or cancels it. Someone adjusts tomorrow’s price for a given SKU in a specific channel.

Each of these decisions consumes something scarce: cash, capacity, shelf space, human attention, goodwill with customers or suppliers. Each decision also creates exposure to a range of possible futures. The unit might sell on time at full price, sell late at a discount, or never sell at all and become obsolete. A production run might fill a profitable gap, or it might tie up capacity that would have been more valuable elsewhere.

On their own, most of these bets are small. In aggregate, they define the risk profile of the firm and its economic outcome. What we call “supply chain performance” is simply the long-run financial result of millions of such bets made under uncertainty.

This is why, to my mind, supply chain is fundamentally an economic discipline. Its subject matter is the allocation of scarce resources under uncertainty. Its unit of account, whether we like it or not, is money. Money is not everything, but it is the unit in which the firm settles its obligations and measures survival. If we are serious about improving supply chains, we must be serious about the economics of these bets.

What exactly are we trying to optimize?

If we strip away the jargon, most companies claim to be optimizing “performance”. But performance is a slippery word.

One natural lens is comparative and long-term. You can look at a peer group over a decade and ask: which companies have actually improved their position in terms of growth, operating margin, inventory turns, and return on capital? Some analysts call this the “effective frontier”: a multi-dimensional frontier beyond which performance cannot easily be pushed without trading one metric against another. A company that grows quickly but bleeds margin is not excellent. A company that shrinks its inventory but also its market share is not excellent. Excellence sits where these metrics are jointly improved or at least well balanced.

This lens is useful because it forces us to confront trade-offs. It is not enough to hit an internal service-level KPI if you are quietly eroding margin or bloating inventory in the process. Over time, the scoreboard is merciless.

My own lens is unit-based and marginal. I focus on the risk-adjusted return of the marginal decision. If I buy one more unit of product X to place in location Y for week Z, given what I know now, what is the expected financial payoff? How much profit does this additional unit bring on average, once we account for:

  • the chance it sells on time at full price,
  • the chance it sells late at a discount,
  • the chance it never sells at all and becomes obsolete?

How does this compare to placing that same unit of working capital into another product, another location, or simply not investing it?
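
To make the arithmetic concrete, here is a minimal sketch of that marginal calculation in Python. Every figure in it, the probabilities, the margins, the unit cost, is an illustrative assumption, not data from any real assortment.

```python
# Expected payoff of buying one more unit of an item, given three
# mutually exclusive outcomes for that specific unit. All figures
# below are illustrative assumptions.

def expected_marginal_payoff(
    p_full_price: float,        # chance the unit sells on time at full price
    p_discounted: float,        # chance it sells late at a discount
    margin_full: float,         # profit per unit at full price
    margin_discounted: float,   # profit per unit when discounted
    unit_cost: float,           # cash written off if the unit becomes obsolete
) -> float:
    p_obsolete = 1.0 - p_full_price - p_discounted
    return (p_full_price * margin_full
            + p_discounted * margin_discounted
            - p_obsolete * unit_cost)

# One more unit of product X in location Y for week Z:
payoff = expected_marginal_payoff(
    p_full_price=0.70, p_discounted=0.20,
    margin_full=12.0, margin_discounted=2.0, unit_cost=15.0)
print(f"Expected payoff of the marginal unit: {payoff:+.2f}")  # +7.30

# The bet is worth placing only if this payoff beats the best
# alternative use of the same working capital: another SKU,
# another location, or simply not spending the cash.
```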

To reason about this, we need a common scale. All the trade-offs that clutter supply chain discussions—service levels, utilization, transportation costs, obsolescence, promotions—need to be expressed in consistent financial terms. A shortage is not “bad” in the abstract; it is costly in terms of lost margin, lost future business, and damage to reputation. Excess stock is not simply “waste”; it is an option that might still pay off, or might rot. Capacity that appears idle on a dashboard might be valuable as a buffer against volatility that is not yet in the historical data.

The effective frontier and the marginal risk-adjusted return are two ways to talk about the same underlying phenomenon. One looks at the integral: the long-run, multi-year trajectory of the company relative to peers. The other looks at the derivative: the incremental effect of the next decision. In practice, one cannot have a good integral with a bad derivative for very long. Persistent excellence on the frontier ultimately requires that the daily decisions, across thousands of items and locations, be economically sensible given uncertainty.

Forecasts, plans, and the illusion of certainty

Traditional planning processes usually start with a forecast. In many organizations, that forecast is a single number per period and per item: the “most likely” quantity to be sold. That number becomes the anchor for production plans, purchasing plans, transfer plans, capacity reservations, and so on. Deviations are treated as errors to be explained away after the fact.

This practice enforces a comfortable illusion of certainty. We take a messy, uncertain future and compress it into a single number—“expected demand” for a given period. We then build safety stocks and deterministic plans around that number, as if error were a nuisance at the margins rather than the main event.

In reality, the information we most need is precisely what this approach discards: the range of plausible futures and their probabilities. For any given item, the questions that matter are:

  • What is the chance that sales next month will be half the usual level?
  • Twice the usual level? Three times?
  • What do the tails look like? Are they fat, skewed, multi-modal?
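
As a sketch of what answering these questions can look like, the snippet below derives crude empirical tail probabilities from a fabricated sales history. Any real implementation would pool data across items or fit a proper probabilistic model; twelve observations give only rough tails.

```python
# Turning a demand history into answers about plausible futures.
# The monthly sales history below is fabricated for illustration.
from statistics import mean

monthly_sales = [12, 38, 55, 61, 40, 90, 35, 47, 120, 44, 20, 58]
usual = mean(monthly_sales)  # ~51.7 units per month

def prob_at_most(history, threshold):
    return sum(1 for s in history if s <= threshold) / len(history)

def prob_at_least(history, threshold):
    return sum(1 for s in history if s >= threshold) / len(history)

print(f"P(sales <= half usual):  {prob_at_most(monthly_sales, usual / 2):.2f}")
print(f"P(sales >= twice usual): {prob_at_least(monthly_sales, 2 * usual):.2f}")
print(f"P(sales >= 3x usual):    {prob_at_least(monthly_sales, 3 * usual):.2f}")
```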

Once we accept this, the idea of a single “consensus plan” becomes less compelling. Instead of asking “What is the forecast?” and then negotiating a plan around it, we should be asking:

“Given this distribution of possible futures and these financial consequences of stockouts, excess, and delay, what decisions make sense?”

The same demand distribution can justify very different inventory or production choices depending on:

  • the margin structure,
  • the availability of substitutes,
  • the lead times involved,
  • the cost of capacity and changeovers,
  • contractual constraints and penalties.

In my own work, I argue that forecasts should be distributions, not points. The question is not “What is the sales forecast for next month?” but “What does the probability distribution of possible sales look like?” Once we have such distributions, the plan stops being a single “consensus number” negotiated in meetings and becomes a series of decisions computed by algorithms that weigh costs and opportunities under those distributions.
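
One way to see this is through classic newsvendor logic: the purchase quantity is a quantile of the demand distribution, and which quantile is set entirely by the economics. The sketch below uses fabricated demand scenarios and illustrative cost figures; the same distribution yields very different quantities once the costs change.

```python
# Same demand distribution, different economics, different decision.
# Newsvendor logic: order up to the demand quantile given by the
# critical ratio Cu / (Cu + Co), where Cu is the cost of missing a
# sale (underage) and Co the cost of an unsold unit (overage).
import random

random.seed(42)
# Illustrative demand scenarios, e.g. drawn from a fitted model.
demand_scenarios = sorted(random.gammavariate(4, 25) for _ in range(10_000))

def order_quantity(cost_underage: float, cost_overage: float) -> float:
    critical_ratio = cost_underage / (cost_underage + cost_overage)
    index = int(critical_ratio * (len(demand_scenarios) - 1))
    return demand_scenarios[index]

# High-margin item, cheap to overstock: buy deep into the tail.
print(round(order_quantity(cost_underage=20.0, cost_overage=2.0)))
# Low-margin item, costly to overstock: buy conservatively.
print(round(order_quantity(cost_underage=2.0, cost_overage=20.0)))
```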

Technology: architecture versus engine

Most companies have inherited a technology stack that was built primarily for transactional efficiency: recording orders, shipments, invoices, inventory movements. These systems integrate data across functions, but they do not necessarily help companies make better decisions. Adding more dashboards on top of such a stack does not fix the underlying problem. It gives you more ways to see what is going on, but not much help in deciding what to do.

There is a lot of discussion today about architectures that are more “outside-in”: integrating external demand signals, building richer taxonomies, providing near real-time visibility of inventory and capacity, and offering more flexible analytical tools. All of this is useful. However, I believe it misses the central capability.

What is lacking is not just another layer of integration or another dashboard, but a decision engine.

By this I mean a piece of software that, on a recurring basis:

  1. takes in all relevant data and current constraints,
  2. applies an explicit economic model of costs and opportunities, and
  3. proposes or directly issues concrete decisions:
    • which purchase orders to place,
    • which production orders to schedule,
    • which transfers to execute,
    • which prices to adjust.

Such an engine must be:

  • Programmable by people who understand the business.
  • Auditable, in the sense that it can explain, after the fact, why a particular decision was taken given the data and valuations at the time.
  • Fast and scalable enough to handle millions of decisions in the time windows imposed by physical lead times.
  • Probabilistic, able to work with distributions rather than point forecasts.
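
To fix ideas, here is a deliberately tiny sketch of one pass of such an engine: score candidate purchase lines by expected return per unit of cash, release the best ones under a budget, and keep an audit trail. Every name and number in it is a placeholder assumption, not a reference implementation.

```python
# A toy decision-engine pass: rank candidate purchase lines by
# risk-adjusted return per unit of cash, release the best ones under
# a working-capital budget, and record why each decision was made
# (auditability). All data below is fabricated for illustration.
from dataclasses import dataclass

@dataclass
class Candidate:
    sku: str
    location: str
    unit_cost: float
    expected_payoff: float  # per unit, from a probabilistic model

def run_engine(candidates: list[Candidate], cash_budget: float):
    decisions, log = [], []
    # Best expected return per dollar of working capital first.
    ranked = sorted(candidates,
                    key=lambda c: c.expected_payoff / c.unit_cost,
                    reverse=True)
    for c in ranked:
        if c.expected_payoff <= 0 or c.unit_cost > cash_budget:
            log.append((c.sku, c.location, "rejected"))
            continue
        cash_budget -= c.unit_cost
        decisions.append(c)
        log.append((c.sku, c.location,
                    f"accepted: {c.expected_payoff / c.unit_cost:.2f}/$"))
    return decisions, log

candidates = [
    Candidate("X-001", "paris",  unit_cost=15.0, expected_payoff=7.3),
    Candidate("X-002", "berlin", unit_cost=40.0, expected_payoff=3.1),
    Candidate("X-003", "paris",  unit_cost=9.0,  expected_payoff=-0.5),
]
orders, audit_trail = run_engine(candidates, cash_budget=50.0)
for entry in audit_trail:
    print(entry)
```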

“Outside-in” architectures are useful because they provide better inputs to such an engine. Without the engine, however, they risk turning into more sophisticated reporting systems. You will see the problem more clearly, in more colors and with more latency metrics, but you will still depend on armies of planners pushing numbers around in spreadsheets, trying to manually reconcile conflicting objectives.

My emphasis, therefore, is on putting the decision engine at the center, with architecture in service to it.

Organization, governance, and the role of S&OP

Much of the organizational thinking around supply chain has crystallized around sales and operations planning (S&OP) and its derivatives. These processes are meant to cut across silos and align functions. They are where trade-offs are negotiated and where finance, sales, operations, and supply chain supposedly converge on a single plan.

I share the diagnosis that silos are a major source of value destruction. When each function optimizes its own metrics—service level here, utilization there, forecast accuracy elsewhere—the overall system suffers. People spend enormous effort resolving conflicts between plans that were never designed to be compatible.

Where I diverge from traditional S&OP thinking is on how central the planning meeting should remain in the long term.

In my view, if we do our job properly on the technology side, the bulk of operational planning should be delegated to the decision engine described above. This engine is fed by the most up-to-date data and the current economic valuations (for example, the relative cost of stockout versus excess for a given item, or the value of a day of lead-time reduction for a given lane). It recomputes optimal decisions as conditions change, far more often and more consistently than any human process can.

What remains for S&OP or integrated business planning is not planning but governance.

Instead of spending their time adjusting quantities in a spreadsheet, executives should spend their time adjusting the rules of the game:

  • the financial valuations,
  • the constraints,
  • the risk appetite,
  • the structural assumptions embodied in the decision logic.
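
In practice, these rules can be as concrete as a small set of versioned parameters that the engine reads on every run. The sketch below is one possible shape; the field names and values are illustrative assumptions.

```python
# "Rules of the game" as explicit, reviewable inputs to the engine.
# Governance adjusts these values; the engine recomputes the plan.
# All names and figures are illustrative assumptions.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class EconomicPolicy:
    stockout_cost_per_unit: float      # lost margin plus goodwill estimate
    holding_cost_per_unit_day: float
    obsolescence_writeoff_rate: float  # yearly, as a fraction of unit cost
    max_days_of_cover: int             # structural constraint

winter_policy = EconomicPolicy(
    stockout_cost_per_unit=18.0,
    holding_cost_per_unit_day=0.03,
    obsolescence_writeoff_rate=0.25,
    max_days_of_cover=90,
)

# An S&OP review changes the rules, not the quantities:
spring_policy = replace(
    winter_policy,
    stockout_cost_per_unit=12.0,       # less goodwill at stake off-season
    obsolescence_writeoff_rate=0.40,   # season-end risk rises
    max_days_of_cover=60,
)
```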

They should examine how the engine’s decisions translate into realized outcomes and use this feedback to refine the economic parameters and structural assumptions. The main questions become:

  • Are we comfortable with the current trade-off between service and inventory on this product family?
  • Are we pricing risk appropriately in this market?
  • Are our lead-time assumptions realistic, given recent disruptions?

This is a subtle but profound shift. It turns S&OP from a collective attempt to hand-craft a single “right” plan into a periodic review of how well an automated decision system is performing, given the firm’s objectives. The human focus moves from micromanaging quantities to calibrating incentives and constraints.

How do we know what we know?

Supply chain is an awkward field from an epistemological standpoint. Experiments are expensive, environments are noisy, and the number of variables is daunting. It is easy to confuse plausible stories with robust knowledge.

Historically, many techniques that are still widely taught and implemented owe their survival less to their empirical performance than to their computational convenience. Safety stock formulas based on heroic assumptions, linearized models of clearly non-linear phenomena, simplified planning hierarchies that mirror organizational charts more than economic reality—these artifacts were understandable when computation was scarce and expensive. It is harder to justify them now.

I also worry about incentive structures. Software vendors, consultants, academics, and internal stakeholders all have reasons to prefer narratives that justify large projects, complex frameworks, or incremental adjustments. There is comparatively little incentive to prove that a cherished method is systematically losing money in practice.

The response, in my view, is to bring supply chain closer to applied economics with a strong empirical and computational component. We should:

  • formulate our assumptions explicitly,
  • encode them in algorithms,
  • confront them with reality through the firm’s own financial results, and
  • be willing to retire policies that destroy value, regardless of how elegant or widely taught they may be.
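
Operationally, “confront them with reality” can be as simple as replaying competing policies against realized demand and comparing them in money rather than in forecast-accuracy points. A minimal sketch, with fabricated figures throughout:

```python
# Retiring policies that lose money: replay competing ordering
# policies against realized demand and score them in currency.
# The demand series and cost parameters are illustrative assumptions.

MARGIN = 12.0    # profit per unit sold
HOLDING = 0.5    # cost per unit left over at period end

def replay(order_quantities, realized_demand):
    """Financial result of a sequence of ordering decisions."""
    profit = 0.0
    for ordered, demand in zip(order_quantities, realized_demand):
        sold = min(ordered, demand)
        profit += sold * MARGIN - (ordered - sold) * HOLDING
    return profit

realized = [52, 31, 88, 47, 12, 95]   # what actually happened
policy_a = [60, 60, 60, 60, 60, 60]   # a flat "safety stock" rule
policy_b = [55, 35, 80, 50, 20, 90]   # a demand-responsive policy
print(f"policy A: {replay(policy_a, realized):,.1f}")
print(f"policy B: {replay(policy_b, realized):,.1f}")
# Whichever policy systematically earns less gets retired,
# no matter how elegant or widely taught it is.
```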

There are no timeless “best practices” waiting to be implemented. There are only practices that work in context, for a while, until the environment or the competitive landscape changes.

Toward a more honest practice

If you are an executive or practitioner trying to navigate these ideas, it may help to think in terms of layers:

  • At the strategic and diagnostic layer, you care about your trajectory versus peers on growth, margin, inventory turns, and return on capital. Are you actually moving toward an effective frontier, or simply rearranging internal KPIs?
  • At the operational and computational layer, you care about whether the millions of daily decisions—buying, making, moving, pricing—are good bets, given the uncertainty you face and the financial trade-offs you have accepted.
  • At the governance layer, you care about whether the rules of the game encoded in your decision engine reflect your actual strategy and risk appetite, and whether they are updated as the world changes.

These layers are not alternatives. They are different vantage points on the same animal.

From where I stand, the heart of the matter can be stated in one sentence:

Supply chain is, at its core, an economic discipline that should be practiced through software as the art of placing good bets under uncertainty.

Everything else—processes, architectures, dashboards, even maturity models—should be judged by the extent to which they help or hinder that central task.