When I published Introduction to Supply Chain, many readers asked me to expand on one idea that runs quietly through the book: the push toward unattended decision‑making in supply chains. Not “more dashboards,” not “more alerts,” but software that makes and executes everyday decisions without waiting for a human to click “approve.”

In this essay I want to clarify what I mean by unattended decisions, why I think they are economically unavoidable, and how this stance collides with the mainstream supply chain playbook built around plans, forecasts, and meetings.

The real job of a supply chain: placing bets all day long

If you strip away the jargon, a supply chain is a machine for making bets.

Every purchase order, every allocation between warehouses, every price change is a small wager: we sacrifice one option in order to pursue another, under uncertainty, in the hope of improving long‑run profit. We never know the future; we commit resources anyway.

Seen from that angle, the daily work of a supply chain is not “maintaining a plan,” it is choosing among options. The more SKUs, lanes, and customers you have, the more of those choices appear. Even modest retailers find themselves making tens of thousands of such micro‑bets every day; large networks make millions.

When those decisions are taken one by one by people hunched over spreadsheets, the limiting factor is no longer trucks, ships, or warehouse space. It is human attention. It is the number of hours in a planner’s week.

Unattended decision‑making is my attempt to take that bottleneck seriously.

What I mean by “unattended decisions”

By unattended decisions, I mean something very concrete:

Software ingests the company’s records and relevant external signals; it computes proposed actions; and those actions are executed automatically under normal circumstances. No planner has to retype the quantity. No manager has to sign the requisition. The software writes the order back into the system of records directly.

If conditions go out of bounds—data corruption, weird market shock, contradictory constraints—the software stops and raises its hand. But stopping is the exception, not the rule. In the default case, the decisions go out without human gatekeeping.

The ambition here is not to “help planners decide faster.” The ambition is to reduce the number of decisions that require a human at all.

That requires two things to be made painfully explicit.

First, we must decide what “good” means in coins. An extra unit in stock buys some protection against stockouts, but ties up working capital, takes shelf space, and may increase write‑offs later. Each of these effects has a monetary impact. If we cannot express the trade‑off in money, the machine cannot be expected to choose well.

Second, we must define the space of admissible moves: which suppliers, which lead times, which transport modes, which routes, which minimums and maximums, which regulatory and physical limits. The machine cannot respect constraints we never bothered to state.

Once we have a clear economic yardstick and a clear description of what is allowed, the case for unattended decisions becomes straightforward. Recurrent decisions that share the same structure should be encoded once as a calculation and then executed by software, at scale, every day.

In other words: if a decision is frequent, structured, and governed by stable economics, then having a human re‑decide it every morning is waste.
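The two requirements above, an economic yardstick in money and an explicit space of admissible moves, can be made concrete in a few lines. The sketch below is illustrative only: the effect names and figures are invented assumptions, not a recommended model.

```python
from dataclasses import dataclass

@dataclass
class UnitEconomics:
    """Monetary effects of stocking one extra unit of an SKU (same currency throughout)."""
    gross_margin: float       # profit if the unit sells
    fill_probability: float   # chance the unit is actually needed before write-off
    holding_cost: float       # capital and shelf space tied up while it waits
    writeoff_cost: float      # loss if the unit never sells

def marginal_unit_score(e: UnitEconomics) -> float:
    """Expected monetary value of adding one more unit to stock.
    Positive: the bet is worth placing. Negative: stop ordering."""
    upside = e.fill_probability * e.gross_margin
    downside = e.holding_cost + (1.0 - e.fill_probability) * e.writeoff_cost
    return upside - downside

def admissible(qty: int, moq: int, max_qty: int) -> bool:
    """The machine cannot respect constraints we never stated: encode them."""
    return moq <= qty <= max_qty

# A fast mover: high chance of selling, modest carrying cost.
fast = UnitEconomics(gross_margin=12.0, fill_probability=0.9,
                     holding_cost=0.8, writeoff_cost=5.0)
# A slow mover: same margin, but the unit will probably rot on the shelf.
slow = UnitEconomics(gross_margin=12.0, fill_probability=0.2,
                     holding_cost=0.8, writeoff_cost=5.0)

print(round(marginal_unit_score(fast), 2))   # positive: order more
print(round(marginal_unit_score(slow), 2))   # negative: stop
```

Once every candidate order quantity gets a monetary score and a feasibility check, "re‑deciding it every morning" reduces to re‑running this calculation, which is exactly what software is for.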

A simple thought experiment

Imagine a single system that sees every demand signal, every stock position, every lead time update, every cost; and which then issues—without human intervention—every purchase order, transfer order, picking wave, and price change.

In such a world, there is nothing left to “align” in Sales & Operations Planning meetings; the meeting disappears because the machine already reconciles demand, supply, and finance in each choice it makes. There is no need for separate “inventory policies” or “service level targets” either; those notions are implicit in the economic calculation.

This thought experiment is not science fiction. Many digital businesses already behave this way in specific domains: ad auctions, credit scoring, real‑time pricing. In those fields, fully automated decision engines have been standard for years, because human reaction times and human memory simply cannot keep up.

Supply chain lags behind, mostly for historical reasons, not because it is inherently less automatable.

How the mainstream sees supply chain

To understand the clash, we need to look briefly at how the discipline describes itself.

Professional bodies such as CSCMP define supply chain management as the planning and management of all activities involved in sourcing, procurement, conversion, and logistics, together with coordination and collaboration with channel partners. ASCM uses similar language and provides a dictionary precisely to standardize this vocabulary.

Frameworks like the SCOR model organize this activity into a set of processes: Plan, Source, Make (or Transform), Deliver (sometimes split into Order and Fulfill), Return, and Enable or Orchestrate. These processes come with extensive libraries of metrics and best practices.

On top of that, the dominant management ritual is Sales & Operations Planning and its later cousin, Integrated Business Planning. The idea, in brief, is to build a single, consensus forecast of demand and use it as the backbone for aligning production, procurement, logistics, and finance over a rolling horizon.

If you attend an S&OP meeting in a large company today, you will almost certainly see:

  • Slides full of time‑series forecasts by family, region, or SKU.
  • Target service levels and inventory turns.
  • Gap analyses versus budget.
  • A calendar of pre‑meetings and executive reviews designed to get everyone to “one set of numbers”.

Those practices are not crazy. They are an attempt to impose order on a complex organization. But they embody a particular vision of what the problem is.

In that vision, the central artefact is the plan: a bundle of time‑series projected into the future. The job of managers is to bring reality “in line” with that plan or to keep revising the plan until the numbers look acceptable again. Automation, in this setting, mostly means nicer dashboards, smoother workflows, and more consistent parameterization of traditional formulas.

My view diverges sharply at exactly this point.

From plans to wagers

The mainstream picture starts from the plan and works backward. My picture starts from the wager and works forward.

In the plan‑centric world, the question is: “How do we get all functions to agree on the same future?” In the wager‑centric world, the question is: “Given what we know, where should we place the next marginal euro, pallet, or hour of capacity?”

A plan is, at best, a side‑effect of answering that second question; it is not a primary object. If tomorrow’s options change, the plan should change with them. The goal is not to respect the plan; the goal is to make profitable bets under uncertainty, day after day.

This sounds abstract, so let me contrast the two views along a few axes.

1. The role of the forecast

In mainstream practice, the forecast is the main control signal. S&OP and IBP place enormous weight on building a single time‑series forecast by month or week and then reconciling everyone to that curve. Accuracy metrics such as MAPE and bias become central performance indicators.

In my experience, this has two problems.

First, aggregating demand into neat time buckets hides exactly the behaviours that matter most: lumpy sales, promotions, cannibalisation between products, erratic lead times, correlated shocks. A smooth line gives comfort, not truth.

Second, the forecast quietly substitutes for the decision. Instead of asking, “Should we bring in another container of this item at this price, given our constraints?” we ask, “What is demand next month?” and then let an old replenishment formula convert that answer into orders. If the formula is economically naive—and most are—the fact that we improved the forecast accuracy by two points tells us nothing about profit.

In an unattended, wager‑centric world, I still need views of the future, but they are not limited to demand time series. I need probabilistic estimates of many things: basket composition, lead times, returns, channel mix, the impact of price changes. And I need them only insofar as they help me compare options in money.

The focus shifts from “Is my demand forecast accurate?” to “Given all the uncertainties, which option has the best expected financial outcome?”
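That shift can be shown in miniature. The sketch below compares order quantities by expected profit across many sampled futures instead of asking for one accurate demand number; the demand distribution, prices, and salvage value are invented assumptions chosen to illustrate lumpy demand.

```python
import random

def expected_profit(order_qty, demand_samples, unit_cost, unit_price, salvage):
    """Average profit of an order quantity across many plausible futures.
    Unsold units are salvaged; unmet demand is simply lost margin."""
    total = 0.0
    for d in demand_samples:
        sold = min(order_qty, d)
        unsold = order_qty - sold
        total += sold * unit_price + unsold * salvage - order_qty * unit_cost
    return total / len(demand_samples)

random.seed(42)
# Lumpy demand: mostly quiet days, occasional spikes (a shape a smooth line hides).
demand = [random.choice([0, 0, 1, 2, 8]) for _ in range(10_000)]

options = range(0, 11)
best = max(options, key=lambda q: expected_profit(q, demand, unit_cost=4.0,
                                                  unit_price=10.0, salvage=1.0))
print(best)
```

Note that no single "accurate" forecast appears anywhere: the decision comes straight from comparing options in money under uncertainty, and ordering for the spike would be ruinous on the average day.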

2. What we automate

Mainstream tools are usually described as “decision support.” Planning systems, control towers, and IBP platforms aggregate data, show KPIs, highlight exceptions, and sometimes suggest actions, but they rarely execute anything without human confirmation.

The human is deliberately kept “in the loop” on almost every decision. From a governance perspective this feels safe, but economically it is expensive. A planner who must approve a hundred replenishment suggestions a day will not be able to think deeply about any of them; they will skim, accept most, tweak a few, and hope nothing explodes.

By contrast, unattended decision‑making aims to remove the human from the loop wherever the logic is repetitive and the economics are well understood.

The software reads the records, evaluates the options, and commits. If there is a tie, or something out of pattern, it stops and asks for help. The fact that a decision is automated does not mean it is arbitrary; it simply means the reasoning has been captured once instead of improvised afresh each day.
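The commit‑or‑escalate pattern described above can be sketched in a few lines. The function name, bounds, and tie margin are hypothetical placeholders, not a prescription.

```python
def decide(options, score, lower_bound=0.0, tie_margin=0.05):
    """Evaluate options, commit to the best one, or stop and ask for help.
    `score` maps an option to its expected monetary outcome."""
    ranked = sorted(options, key=score, reverse=True)
    best = ranked[0]
    runner_up = ranked[1] if len(ranked) > 1 else None
    if score(best) < lower_bound:
        return ("escalate", "no option clears the economic bar")
    if runner_up is not None and score(best) - score(runner_up) < tie_margin:
        return ("escalate", "near-tie: human judgement requested")
    return ("commit", best)

# Routine case: one option clearly dominates, so it is committed without review.
print(decide(["order_100", "order_50"], {"order_100": 40.0, "order_50": 12.0}.get))
# Out-of-pattern case: a near-tie, so the engine raises its hand.
print(decide(["route_a", "route_b"], {"route_a": 10.00, "route_b": 9.99}.get))
```

The point of the skeleton is that the escalation rule is itself explicit logic, versioned alongside the scoring, rather than a planner's habit.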

The analogy with aviation is useful. For a modern aircraft, the default is autopilot during cruise, not manual control. The pilot is there to handle take‑off, landing, and abnormal situations. No one considers this a loss of prestige for the pilot; it is a recognition that a machine is better at maintaining a stable trajectory for hours on end.

Supply chain has its own “cruise phase”: the countless, recurrent choices that are boring precisely because they follow familiar patterns. These are the ones that should be unattended.

3. Architecture: where does the decision live?

The SCOR model and most ERP suites assume that planning and execution live in and around the same transactional core. Orders are stored there, parameters are stored there, and embedded logic turns both into recommended actions.

The result, in practice, is that business logic gets scattered across configuration tables, batch jobs, custom reports, and spreadsheet exports. When something goes wrong, it is hard to know why a given decision was taken. When you want to improve the logic, you must hunt down all the places where the old rules are hiding.

For unattended decision‑making to work, I prefer a sharper separation.

The system of records remains the single source of truth for transactions and master data. Analytical systems can continue to tell stories about the past. But the decision logic—the part that turns data into concrete commitments—lives in a dedicated layer that is easier to reason about, to version, to test, and to roll back.

I sometimes call this layer a “decision engine,” but the label matters less than the discipline. The key is to treat the logic that commits money, space, and time as a first‑class artefact, not as a fog of parameters swirling inside various tools.

When this is done properly, every automated decision can be traced back to a readable piece of logic and a specific snapshot of data. That is the opposite of a black box.
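One way to make that traceability concrete is to bind every automated decision to the exact logic version and data snapshot that produced it. This is a minimal sketch with hypothetical field names; in practice the snapshot would be an immutable reference into the data pipeline, not an inline dict.

```python
import hashlib
import json
from datetime import datetime, timezone

def decision_record(action, logic_version, data_snapshot):
    """Bind an automated decision to the logic and data that produced it."""
    snapshot_hash = hashlib.sha256(
        json.dumps(data_snapshot, sort_keys=True).encode()
    ).hexdigest()
    return {
        "action": action,
        "logic_version": logic_version,   # e.g. a git tag of the decision code
        "snapshot_hash": snapshot_hash,   # which data the logic actually saw
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }

rec = decision_record(
    action={"sku": "A-42", "po_qty": 120},
    logic_version="replenishment-v3.1.0",
    data_snapshot={"stock": 35, "lead_time_days": 12, "daily_demand_est": 9.5},
)
print(rec["logic_version"], rec["snapshot_hash"][:8])
```

With records of this shape, "why did we order 120 units of A-42 on that day?" becomes a lookup, not an archaeology project.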

4. Governance and incentives

Mainstream governance often equates importance with headcount and meeting calendars. A manager who runs a large planning team and chairs a major S&OP process is seen as strategically central. Vendors reinforce this by selling licences per user and celebrating “user adoption” as a key success metric.

Unattended decision‑making flips the prestige gradient. The best compliment you can pay a team is that millions of correct decisions are taken every week with almost no one watching. The focus of governance becomes the quality of the decision logic and its impact on profit and risk, not the number of people touching the system.

This is not just philosophy; it affects contracts. If a vendor is paid per seat, they have little incentive to automate away those seats. If a team is rewarded for maintaining a sprawling ritual of meetings, they will unconsciously protect that ritual.

If, instead, we reward uplift in unattended decision quality—fewer stockouts at the same inventory, better utilization of capacity, better margins given the same risk—then both internal and external actors are pushed in the right direction.

“Isn’t this dangerous?” – common objections

Whenever I argue for unattended decisions, three objections appear.

The first is the fear of losing control. Managers worry about delegating decisions to software. My answer is that most large organizations already delegate decisions to software; they just do it implicitly through the formulas and parameters baked into existing tools. When a planner relies on a replenishment rule they barely understand, they are not truly “in control.” They are merely a human front end on a legacy algorithm.

By surfacing the logic, by expressing the economics explicitly, and by versioning the code, we actually gain control. We can test alternative policies side by side. We can replay last year under a different rule set. We can see exactly which change caused which outcome.

The second objection is the fear of fragility. What happens if the model is wrong? Here, again, the comparison with the mainstream is instructive. A company that runs on fixed safety stock formulas and rough service‑level targets is already exposed to model error; it is just hidden under layers of habit. Unattended decision‑making must be paired with mechanisms for detecting misbehaviour quickly: halting rules, monitoring of economic outcomes, and the ability to fall back to a simpler policy while we investigate.
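A halting rule with a fallback policy can be as simple as the sketch below. The threshold, window, and policy shapes are invented assumptions; real monitoring would track several economic outcomes, not one.

```python
def monitored_policy(smart_policy, fallback_policy, recent_outcomes,
                     loss_threshold=-1000.0, window=7):
    """Run the sophisticated policy, but fall back to a simple one when the
    measured economic outcome drifts out of bounds.
    `recent_outcomes` is the realized daily profit attributed to the policy."""
    window_loss = sum(recent_outcomes[-window:])
    if window_loss < loss_threshold:
        # Halting rule triggered: misbehaviour detected, investigate offline.
        return fallback_policy, "fallback"
    return smart_policy, "smart"

smart = lambda state: max(0, state["target"] - state["stock"])   # optimized order-up-to
simple = lambda state: state["avg_daily_demand"] * 3             # crude 3-days-of-cover rule

policy, mode = monitored_policy(smart, simple,
                                recent_outcomes=[200, 150, -900, -800])
print(mode)  # recent losses breach the threshold: "fallback"
```

The crucial property is symmetry of scrutiny: the automated policy is watched explicitly, whereas the fixed safety stock formula it replaces was watched by no one.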

The third objection is about people. Does this vision make planners obsolete?

It certainly changes the work. In an unattended world, there is less demand for people to adjust orders by hand and more demand for people who can help encode the right economics and constraints, who can challenge the data, who can run experiments and interpret the results. The centre of gravity moves from repetitive micro‑decisions to designing and maintaining the decision framework itself.

For organizations willing to make that shift, the human work becomes more interesting, not less.

What an unattended day looks like

Let me paint a modest picture.

In a retailer, overnight, a decision engine reads yesterday’s sales, current stocks, inbound shipments, and updated supplier lead times. It knows the cost of capital, the penalties for late deliveries, the markdown patterns at the end of the season. It proposes the day’s purchase orders and transfer orders. For the vast majority of SKUs and locations, the economics are routine; the orders are emitted automatically into the ERP.

A small fraction of cases look odd: a supplier that suddenly doubled its lead time, a product whose demand has exploded beyond any historical pattern, a constraint conflict on a key warehouse. The engine does not try to be clever there; it stops and logs a dossier. Human experts inspect those cases in the morning, decide what to do, and, if needed, adjust the logic for next time.
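The triage step described above, routine cases emitted automatically, odd ones stopped and written up, can be sketched as follows. The specific thresholds (lead time doubled, demand beyond three times the historical maximum) are illustrative assumptions, not recommendations.

```python
def triage(case, history):
    """Separate routine cases from out-of-pattern ones.
    Routine: auto-emit the order. Odd: stop and log a dossier for the morning review."""
    flags = []
    if case["lead_time_days"] > 2 * history["typical_lead_time_days"]:
        flags.append("supplier lead time doubled")
    if case["daily_demand"] > 3 * history["max_daily_demand"]:
        flags.append("demand beyond any historical pattern")
    if flags:
        return {"status": "dossier", "sku": case["sku"], "reasons": flags}
    return {"status": "auto_emit", "sku": case["sku"]}

routine = triage({"sku": "B-07", "lead_time_days": 10, "daily_demand": 4},
                 {"typical_lead_time_days": 9, "max_daily_demand": 6})
odd = triage({"sku": "C-11", "lead_time_days": 25, "daily_demand": 30},
             {"typical_lead_time_days": 9, "max_daily_demand": 6})
print(routine["status"], odd["status"])  # auto_emit dossier
```

The dossier is the interface between machine and expert: it carries the reasons for stopping, so the human starts from a diagnosis rather than a blank screen.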

In a spare‑parts network, the engine continuously re‑evaluates which parts are worth stocking where, given failure rates, repair times, equipment criticality, and holding costs. It changes stocking policies without ceremony as conditions evolve, because the underlying economic calculation has changed. No one convenes a quarterly review to adjust “ABC classes” by hand.

In transportation, routing and consolidation are treated the same way. The system knows the cost curves for different carriers, modes, and service levels. It allocates shipments to lanes based on total cost and service impact, not a hierarchy of rules written five years ago in a workshop.

None of this requires mystical artificial intelligence. It requires precise data, honest economics, and the will to let the software place the everyday bets.

Why this confrontation matters

It might be tempting to see unattended decision‑making as a niche preference in software architecture, or as just one more “approach” among many. I do not see it that way.

The mainstream, plan‑centric view and the unattended, wager‑centric view are answering different questions.

The plan‑centric view asks: “How do we bring people into alignment around a view of the future?” It is understandably obsessed with consensus, meetings, and process maturity.

The wager‑centric view asks: “Given the uncertainty we face, how can we allocate scarce resources today in a way that improves long‑run profit?” It is obsessed with economics, with the flow of coins through the ledger, and with encoding that reasoning in software.

Both views care about service levels, costs, and risk. Both care about collaboration. But only one is designed to survive in a world where the volume and speed of decisions will continue to grow, while human attention remains finite.

In my book, I argue that supply chain is best seen as a branch of applied economics: a discipline whose job is to allocate scarce resources under variability. If that is true, then the natural endgame is clear. Wherever the economics are understood and the patterns are stable, we should let machines decide unattended. Wherever the economics are murky or the world has just changed, we should invest human effort to clarify the trade‑offs and update the logic.

The future does not belong to those with the prettiest plans. It belongs to those who can turn better reasoning into better decisions, at scale, without needing a roomful of people to re‑type the numbers every morning.