Over the last two decades, I have watched “supply chain management” accumulate buzzwords faster than results. We speak of digital twins, control towers, integrated business planning, demand sensing, resilience, sustainability. Yet if you look carefully at balance sheets and income statements, many companies have not progressed much in terms of how they convert working capital, capacity, and complexity into economic returns.

Among the voices trying to make sense of this stagnation, some have argued for market-driven and outside-in value networks, and have spent considerable effort measuring performance over long periods of time. My own work looks at the same economic reality from a different vantage point. The purpose of this essay is to clarify that perspective.

[Image: abstract illustration of supply chain decisions viewed as economic bets]

In my book Introduction to Supply Chain, particularly in its opening chapters, I tried to reframe supply chain management as a rigorous economic discipline, focused on how companies allocate scarce resources under uncertainty. The book goes into more detail than I can here, but the central idea is simple: whenever we decide what to buy, make, move, or price, we are placing small economic bets with uncertain outcomes. A modern supply chain should be judged by the quality of these bets, and by the long-run financial consequences they create.

One influential line of work starts from a different place. It looks at long time series of financial metrics across peer groups and asks: which companies have actually improved their position in terms of growth, margin, inventory turns, and asset utilization? Some refer to this as an “effective frontier.” I find this lens useful. Where we differ is less in the goal and more in the mechanism we believe can get us there.

Two vantage points on the same problem

One common description presents supply chains as market-driven value networks. The emphasis is on sensing markets, outside-in. Instead of treating orders from the next node in the chain as “demand,” the argument is that we must read the real market: point-of-sale data, channel inventory, promotions, social signals, supplier constraints, macro shocks. The supply chain is then a set of connected processes that translate these signals into coordinated responses: planning, sourcing, making, delivering.

My own vantage point is narrower in appearance, but sharp by design. I focus on the moment of decision. Should we buy one more unit of this item for that warehouse, to be received on that date? Should we advance production of this batch, delay it, or cancel it? Should we lower the price of this SKU for this channel tomorrow, or keep it as is? Each of these decisions consumes something scarce: cash, capacity, shelf space, human attention, goodwill with customers or suppliers. It also creates exposure to a range of possible futures.

From this angle, a supply chain is a machine for turning uncertainty into decisions, and decisions into financial outcomes. I am less interested in the elegance of the process chart and more interested in the quality of the next decision and the one after that, at scale.

The outside-in vantage point looks at the landscape from 10,000 meters: how the company moves relative to peers on a multi-dimensional performance frontier. I stand closer to the ground and ask whether the millions of tiny bets that make up daily operations make economic sense, given the uncertainty we actually face. These vantage points are not contradictory. They simply zoom in on different levels of the same system.

What exactly are we trying to optimize?

If we strip away the jargon, these different perspectives are all talking about performance. But they choose different lenses to define it.

One lens is explicitly comparative and multi-year. It cares about how a firm performs versus its direct competitors on revenue growth, operating margin, inventory turns, and sometimes cash-to-cash cycles or asset utilization. A company that grows quickly but bleeds margin is not excellent. A company that shrinks its inventory but also its market share is not excellent. Excellence sits on an effective frontier where these metrics are jointly improved or at least well balanced.

My own lens is unit-based and marginal. I focus on the risk-adjusted return of the marginal decision. If I buy one more unit of product X to place in location Y for week Z, given what I know now, what is the expected financial payoff? How much profit does this additional unit bring on average, once we account for the chance it sells on time, the chance it sells late at a discount, the chance it never sells at all and becomes obsolete? How does this compare to placing that same unit of working capital into another product, another location, or simply not investing it?
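To make this concrete, here is a minimal sketch in Python of that marginal reasoning. The demand distribution, margin, and salvage loss are purely hypothetical placeholders; in practice these figures would come from a probabilistic forecast and from the firm's own cost accounting.

```python
# Minimal sketch: expected payoff of buying one more unit of a SKU, given a
# discrete probability distribution over demand for the period.
# All figures are hypothetical, for illustration only.

unit_margin = 12.0    # profit if the extra unit sells at full price
salvage_loss = -7.0   # loss if it ends up cleared at a discount or written off

# P(demand = k) over the period, e.g. taken from a probabilistic forecast.
demand_pmf = {0: 0.10, 1: 0.20, 2: 0.30, 3: 0.25, 4: 0.15}

def marginal_payoff(units_already_committed: int) -> float:
    """Expected profit of the next unit, given how many are already bought."""
    n = units_already_committed
    p_sells = sum(p for k, p in demand_pmf.items() if k > n)
    return p_sells * unit_margin + (1.0 - p_sells) * salvage_loss

for n in range(5):
    print(f"unit #{n + 1}: expected payoff = {marginal_payoff(n):+.2f}")
```

The point of the exercise is that the expected payoff of each additional unit declines as the commitment grows, which is exactly what makes the question "where does the next unit of working capital earn the most?" answerable in financial terms.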

To reason about this, we need a common scale. Money is not everything, but it is the unit in which the firm settles its obligations and measures survival. So I insist on translating all the trade-offs that clutter supply chain discussions into consistent financial terms. A shortage is not “bad” in the abstract; it is costly in terms of lost margin, lost future business, and damage to reputation. Excess stock is not simply “waste”; it is an option that might still pay off, or might rot. Capacity that appears idle on a dashboard might be valuable as a buffer against volatility that is not yet in the historical data.

The effective frontier and the marginal, risk-adjusted return are two ways to talk about the same underlying phenomenon. One vantage point looks at the integral: the long-run, multi-year trajectory of the company. I look at the derivative: the incremental effect of the next decision. In practice, one cannot have a good integral with a bad derivative for very long. Persistent excellence on the frontier ultimately requires that the daily decisions, across thousands of items and locations, be economically sensible given uncertainty.

Forecasts, plans, and the illusion of certainty

Some of the most persistent critics of “inside-out” thinking have pointed out that companies treat their own orders and historical shipments as if they were a faithful representation of demand. These signals are both late and biased: orders are shaped by promotions, allocation rules, stockouts upstream, poor data integration, and a host of other distortions. In that alternative view, a modern supply chain should be “outside-in”: it should start from real market and supply signals, then orchestrate the response.

I agree with the critique of inside-out planning, but I come at it from a probabilistic angle. Forecasts, as they are commonly practiced, enforce a comfortable illusion of certainty. We take a messy, uncertain future and compress it into a single number: “expected demand” for a given period. We then build safety stocks and deterministic plans around that number, as if error were a nuisance at the margins rather than the main event.

This way of working discards precisely the information we most need: the range of plausible futures and their probabilities. In my own work, I argue that forecasts should be distributions, not points. The question is not “What is the sales forecast for next month?” but “What does the probability distribution of possible sales look like?” What are the odds of selling nothing? Of selling twice the usual volume? What do the tails look like?
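As an illustration, the short sketch below treats weekly demand as a distribution rather than a point and asks exactly those questions of it. The negative binomial model and all of its parameters are hypothetical stand-ins for the output of a real probabilistic forecasting model.

```python
# Minimal sketch: a demand forecast expressed as a distribution, not a point.
# The negative binomial below, and its parameters, are hypothetical.

import numpy as np

rng = np.random.default_rng(42)
samples = rng.negative_binomial(n=3, p=0.3, size=100_000)  # weekly demand draws

typical = np.median(samples)
print(f"median weekly demand (the usual point forecast): {typical:.0f}")
print(f"P(selling nothing):                    {np.mean(samples == 0):.1%}")
print(f"P(selling at least twice the median):  {np.mean(samples >= 2 * typical):.1%}")
print(f"95th percentile (upper tail):          {np.percentile(samples, 95):.0f}")
```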

Once we have such distributions, the plan stops being a single “consensus number” negotiated in meetings and becomes a series of decisions computed by algorithms that weigh costs and opportunities under those distributions. The same demand distribution can justify very different inventory or production decisions depending on the financial consequences of stockouts versus excess, the lead times involved, and the availability of substitutes.
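The sketch below extends the marginal logic from earlier: the demand distribution stays fixed, and only the economics change. A simple stopping rule, keep adding units while the next unit's expected payoff is positive, yields different order quantities for different stockout and excess economics. All figures are, again, hypothetical.

```python
# Sketch: one demand distribution, two economic contexts, two buy decisions.
# Figures are hypothetical, for illustration only.

demand_pmf = {0: 0.05, 1: 0.10, 2: 0.15, 3: 0.20, 4: 0.20,
              5: 0.15, 6: 0.08, 7: 0.05, 8: 0.02}

def order_quantity(unit_margin: float, unit_excess_cost: float) -> int:
    """Add units while the expected payoff of the next unit stays positive."""
    q = 0
    while True:
        p_sells = sum(p for k, p in demand_pmf.items() if k > q)
        if p_sells * unit_margin - (1.0 - p_sells) * unit_excess_cost <= 0:
            return q
        q += 1

# High margin, cheap to hold: a stockout hurts more than leftovers, buy deep.
print(order_quantity(unit_margin=25.0, unit_excess_cost=5.0))   # -> 5
# Low margin, perishable: leftovers hurt more than a missed sale, buy shallow.
print(order_quantity(unit_margin=3.0, unit_excess_cost=10.0))   # -> 2
```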

Here again, these critiques are reacting against the same failure: pretending that uncertain, non-linear behavior can be captured in a single column in a spreadsheet. One line of thought pushes for richer, earlier signals and process redesigns that shift planning outside-in. I push for probabilistic models that force us to confront uncertainty explicitly and for decision systems that can digest those models at scale.

In a healthy practice, these two concerns should meet: you want good signals and a realistic representation of uncertainty, with outside-in flows feeding probabilistic, economically grounded decisions.

Technology: architecture versus engine

Many observers emphasize the limitations of the technology stack that most companies have inherited. These stacks were built primarily for transactional efficiency: recording orders, shipments, invoices, and so on. They integrate data across functions, but they do not necessarily help companies make better decisions. The usual proposed remedy is to redesign the architecture around the flows of demand and supply information: external data layers, better taxonomies, near real-time inventory visibility, and more flexible analytical tools.

I agree that the inherited stack is a big part of the problem. However, I place the emphasis elsewhere. The core missing capability, in my view, is not another layer of integration or another dashboard, but a decision engine.

By this I mean a piece of software that, every day, takes in all relevant data, all current constraints, and a set of economic valuations, and then proposes or directly issues concrete decisions: which purchase orders to place, which production orders to schedule, which transfers to execute, which prices to adjust. This engine must be programmable, auditable, and fast enough to handle millions of such decisions in a reasonable time. It must also be able to explain, after the fact, why a particular decision was taken, given the data and valuations at the time.
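To give a flavor of what I mean, here is a deliberately simplified sketch of a single pass of such an engine: candidate decisions arrive already scored with a risk-adjusted expected return, and the engine spends a scarce resource (here, cash) on the best ones while recording a rationale for later audit. Every name and number is hypothetical; a real engine would draw its inputs from the company's systems and handle far more constraints.

```python
# Simplified sketch of one daily pass of a decision engine.
# All names and figures are hypothetical.

from dataclasses import dataclass

@dataclass
class Candidate:
    sku: str
    location: str
    quantity: int
    cash_required: float
    expected_return: float   # risk-adjusted expected profit, in currency units
    rationale: str           # kept so the decision can be audited after the fact

def daily_pass(candidates: list[Candidate], cash_budget: float) -> list[Candidate]:
    """Rank candidates by return on the cash they consume and approve them
    greedily until the budget runs out, skipping value-destroying options."""
    approved, remaining = [], cash_budget
    ranked = sorted(candidates,
                    key=lambda c: c.expected_return / c.cash_required,
                    reverse=True)
    for c in ranked:
        if c.expected_return <= 0 or c.cash_required > remaining:
            continue
        approved.append(c)
        remaining -= c.cash_required
    return approved

decisions = daily_pass(
    [
        Candidate("SKU-001", "WH-PARIS", 40, 800.0, 310.0,
                  "P(stockout within lead time) = 0.35, margin 12.5/unit"),
        Candidate("SKU-207", "WH-PARIS", 15, 600.0, 45.0,
                  "slow mover, obsolescence risk priced in"),
        Candidate("SKU-118", "WH-LYON", 60, 900.0, -20.0,
                  "negative expected return: excess risk dominates"),
    ],
    cash_budget=1_200.0,
)
for d in decisions:
    print(d.sku, d.location, d.quantity, "->", d.rationale)
```

The rationale attached to each candidate is what makes the "explain, after the fact, why a particular decision was taken" requirement tractable: the audit trail is produced at decision time, not reconstructed later.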

Outside-in architectures are useful because they provide better inputs to such an engine. But without the engine, they risk turning into more sophisticated reporting systems. You will see the problem more clearly, in more colors and with more latency metrics, but you will still depend on armies of planners pushing numbers around in spreadsheets, trying to reconcile conflicting objectives by hand.

It is not controversial to argue that technology should serve better modeling and better decisions, not just better integration. An emphasis on architecture highlights where data should flow and how processes should be organized. My emphasis on the engine highlights what ultimately needs to happen with that data: a large number of economically sensible decisions under uncertainty. These are complementary concerns, but I would personally place the engine at the center, with architecture in service to it.

Organization, governance, and the role of S&OP

Much contemporary writing revolves around sales and operations planning and its evolution. There are maturity models in which S&OP progresses from simple feasibility checks to profit-focused planning, then to demand-driven and finally market-driven orchestration. In these stories, S&OP is the main horizontal process that cuts across silos and aligns functions. It is where trade-offs are negotiated and where the outside-in perspective is brought to the table.

I share the diagnosis that silos are a major source of value destruction. When each function optimizes its own metrics—service level here, utilization there, forecast accuracy elsewhere—the overall system suffers. People spend enormous effort resolving conflicts between plans that were never designed to be compatible.

Where I diverge is on how central S&OP, as a planning meeting, should remain in the long term. In my view, if we do our job properly on the technology side, the bulk of operational planning should be delegated to the decision engine I described earlier. This engine is fed by the most up-to-date data and the current economic valuations (for example, the relative cost of stockout versus excess for a given item, or the value of a day of lead time reduction for a given lane). It recomputes optimal decisions as conditions change, far more often and more consistently than any human process can.

What remains for S&OP or integrated business planning is governance rather than planning. Instead of spending their time adjusting quantities in a spreadsheet, executives should spend their time adjusting the rules of the game: the financial valuations, the constraints, the risk appetite. They should examine how the engine’s decisions translate into realized outcomes and use this feedback to refine the economic parameters and structural assumptions.

This is a subtle but profound shift. It turns S&OP from a collective attempt to hand-craft a single “right” plan into a periodic review of how well an automated decision system is performing, given the firm’s objectives. The human focus moves from micromanaging quantities to calibrating incentives and constraints.

Maturity models of this kind can still be useful in this context, particularly as a diagnostic tool for where a company stands culturally and organizationally. But I would argue that the end state is less about more sophisticated planning meetings and more about better economic governance of automated decision systems.

How do we know what we know?

Supply chain is an awkward field from an epistemological standpoint. Experiments are expensive, environments are noisy, and the number of variables is daunting. It is easy to confuse plausible stories with robust knowledge.

Some researchers, for example Lora Cecere, have invested significant effort in grounding their views in financial data. Instead of relying on self-reported surveys or consulting anecdotes, they have reconstructed histories of companies’ performance using public financial statements and looked for patterns over time. This does not prove causality, but it imposes a discipline: the practices we celebrate as “best” should at least correlate with long-term improvements in growth, margins, and inventory turns.

My own skepticism takes a different form. I worry about the survival of techniques whose main virtue was once computational convenience—safety stock formulas based on heroic assumptions, linearized models of clearly non-linear phenomena, simplified planning hierarchies that reflect organizational charts more than economic reality. Many of these artifacts persisted because they were easy to compute on paper or early computers. Today we have far more computational power, yet we still cling to them.

I also worry about incentive structures. Software vendors, consultants, academics, and internal stakeholders all have reasons to prefer narratives that justify large projects, complex frameworks, or incremental adjustments. There is comparatively little incentive to prove that a cherished method is systematically losing money in practice.

The response, in my view, is to bring supply chain closer to applied economics with a strong empirical and computational component. We should formulate our assumptions explicitly, encode them in algorithms, and confront them with reality through the firm’s own financial results. When a policy systematically destroys value in a particular context, we should retire it, regardless of how elegant or widely taught it may be.

On this point, these perspectives converge. There is a shared rejection of the idea that there are timeless “best practices” waiting to be implemented. There are only practices that work in context, for a while, until the environment or the competitive landscape changes.

Toward a synthesis

If you are an executive or practitioner trying to navigate these ideas, it may help to think in terms of layers.

The outside-in, financially grounded work is invaluable at the strategic and diagnostic layer. It helps you ask the right questions: Where are we on the effective frontier relative to peers? Are we growing, profitable, and capital-efficient, or are we trading one dimension off against the others? Do our processes remain inside-out, dominated by the inertia of ERP transactions and functional silos, or have we genuinely moved toward market-driven, outside-in flows?

My own work is more focused on the operational and computational layer. I want you to be able to answer questions like: Given our current understanding of demand and supply uncertainty, and given our financial valuations, are the decisions we take every day actually maximizing our risk-adjusted return on scarce resources? Can we make those decisions consistently and at scale through software, rather than manually, while retaining the ability to audit and improve the underlying logic?

These layers are not alternatives. In an ideal world, a company would use the strategic lens to define what excellence looks like and to measure progress over years, while using an economic, probabilistic decision engine to drive day-to-day operations toward that target. The outside-in architecture would feed the engine with rich, timely signals. Governance forums would focus on calibrating economic parameters rather than editing quantities. And the notion of “best practice” would be replaced by a more humble, empirical approach: what works, here and now, in this specific network, as revealed by actual financial outcomes.

In that sense, any apparent confrontation between these views is not a clash between incompatible theories. It is a conversation about where to place the emphasis: on architecture and process, or on algorithms and economics; on long-term trajectories, or on marginal decisions. Both perspectives are necessary. But if I had to summarize my own position in one sentence, it would be this:

Supply chain is, at its core, an economic discipline that should be practiced through software as the art of placing good bets under uncertainty.

Everything else—processes, architectures, dashboards, even maturity models—should be judged by the extent to which they help or hinder that central task.