Introduction to Supply Chain
Joannès Vermorel
September 23, 2025
Contents
1 Primer 1
1.1 Decisions . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Optionality . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Physical goods . . . . . . . . . . . . . . . . . . . . 5
1.4 Variability . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 Mastery . . . . . . . . . . . . . . . . . . . . . . . . 9
1.6 Parting thoughts . . . . . . . . . . . . . . . . . . . 12
2 History 15
2.1 Early 1800s to pre-war 1900s . . . . . . . . . . . . . 16
2.2 WWII and post-war developments . . . . . . . . . . 17
2.3 1980s to early 2000s . . . . . . . . . . . . . . . . . . 20
2.4 Present day considerations . . . . . . . . . . . . . . 23
2.5 Mainstream perception . . . . . . . . . . . . . . . . 25
2.6 Looking ahead . . . . . . . . . . . . . . . . . . . . . 27
3 Epistemology 29
3.1 Economics . . . . . . . . . . . . . . . . . . . . . . . 30
3.2 Mathematics . . . . . . . . . . . . . . . . . . . . . . 34
3.2.1 Falsifiability . . . . . . . . . . . . . . . . . . 36
3.2.2 The incentives of academia . . . . . . . . . . 39
3.2.3 The pitfalls of safety stocks . . . . . . . . . 42
3.3 Sociology . . . . . . . . . . . . . . . . . . . . . . . 44
3.3.1 The pitfalls of S&OP . . . . . . . . . . . . . 48
3.4 Economic history . . . . . . . . . . . . . . . . . . . 51
3.5 Auxiliary sciences . . . . . . . . . . . . . . . . . . . 52
3.6 Epistemic corruption . . . . . . . . . . . . . . . . . 56
3.6.1 Case studies . . . . . . . . . . . . . . . . . . 58
3.7 Negative knowledge . . . . . . . . . . . . . . . . . . 61
4 Economics 67
4.1 Essential economic principles . . . . . . . . . . . . . 68
4.2 Markets and supply chains . . . . . . . . . . . . . . 69
4.3 The value of supply chains . . . . . . . . . . . . . . 72
4.3.1 Standardization . . . . . . . . . . . . . . . . 73
4.3.2 Pooling . . . . . . . . . . . . . . . . . . . . 75
4.3.3 Batching . . . . . . . . . . . . . . . . . . . . 77
4.3.4 Versatility . . . . . . . . . . . . . . . . . . . 79
4.3.5 From forces to policies . . . . . . . . . . . . 81
4.4 The goal of supply chain . . . . . . . . . . . . . . . 83
4.4.1 Profit and loss . . . . . . . . . . . . . . . . . 84
4.4.2 Objective valuations . . . . . . . . . . . . . 85
4.4.3 Rate of return . . . . . . . . . . . . . . . . . 87
4.4.4 Cronyism and regulatory capture . . . . . . 98
4.4.5 Supra-economic goals . . . . . . . . . . . . . 99
4.4.6 Solutions and trade-offs . . . . . . . . . . . 111
4.5 Valuation concerns . . . . . . . . . . . . . . . . . . 114
4.5.1 The economic model . . . . . . . . . . . . . 115
4.5.2 Implicit valuations . . . . . . . . . . . . . . 116
4.5.3 Spending and revenue attribution . . . . . . 119
4.5.4 Projected spending and revenues . . . . . . 122
5 Information 131
5.1 Data, information, and knowledge . . . . . . . . . . 133
5.1.1 Data storage . . . . . . . . . . . . . . . . . . 134
5.1.2 Information theory . . . . . . . . . . . . . . 137
5.1.3 Knowledge . . . . . . . . . . . . . . . . . . . 140
5.2 Classes of enterprise software . . . . . . . . . . . . 144
5.2.1 The place of software . . . . . . . . . . . . . 146
5.3 Systems of records . . . . . . . . . . . . . . . . . . 148
5.3.1 The origin of enterprise software . . . . . . . 149
5.3.2 The CRUD design . . . . . . . . . . . . . . 150
5.3.3 Technological trajectory . . . . . . . . . . . 152
5.3.4 Functional overlaps . . . . . . . . . . . . . . 156
5.3.5 Functional limits . . . . . . . . . . . . . . . 159
5.3.6 Vendor’s conundrum . . . . . . . . . . . . . 162
5.4 Data . . . . . . . . . . . . . . . . . . . . . . . . . . 165
5.4.1 Origin of the data . . . . . . . . . . . . . . . 166
5.4.2 Semantics of the data . . . . . . . . . . . . . 168
5.4.3 Shenanigans in data . . . . . . . . . . . . . 172
5.4.4 Shadow IT . . . . . . . . . . . . . . . . . . . 174
5.4.5 Derived data . . . . . . . . . . . . . . . . . 176
5.4.6 External data . . . . . . . . . . . . . . . . . 180
5.5 Mundane knowledge . . . . . . . . . . . . . . . . . 187
5.6 Bad data . . . . . . . . . . . . . . . . . . . . . . . . 189
6 Intelligence 193
6.1 Systems of intelligence . . . . . . . . . . . . . . . . 194
6.1.1 A productive asset . . . . . . . . . . . . . . 196
6.1.2 Muddy emergence . . . . . . . . . . . . . . . 197
6.1.3 Culture and incentives . . . . . . . . . . . . 199
6.1.4 Conflicting design requirements . . . . . . . 201
6.1.5 No problem without solution . . . . . . . . . 205
6.2 General intelligence . . . . . . . . . . . . . . . . . . 207
6.2.1 General intelligence problems . . . . . . . . 209
6.2.2 Clerical jobs in supply chain . . . . . . . . . 212
6.2.3 Large language models . . . . . . . . . . . . 213
6.2.4 Misconceptions about intelligence . . . . . . 218
6.3 Specialized intelligence . . . . . . . . . . . . . . . . 224
6.3.1 Algorithms . . . . . . . . . . . . . . . . . . 225
6.3.2 Operations Research . . . . . . . . . . . . . 227
6.3.3 Statistical learning . . . . . . . . . . . . . . 229
6.3.4 Deep learning . . . . . . . . . . . . . . . . . 232
6.3.5 Generative Era . . . . . . . . . . . . . . . . 242
7 The Future 247
7.1 Visions . . . . . . . . . . . . . . . . . . . . . . . . . 248
7.1.1 Values . . . . . . . . . . . . . . . . . . . . . 251
7.1.2 Time . . . . . . . . . . . . . . . . . . . . . . 252
7.2 The teleological vision . . . . . . . . . . . . . . . . 254
7.2.1 Historical emergence . . . . . . . . . . . . . 255
7.2.2 Time-series paradigm . . . . . . . . . . . . . 257
7.2.3 Planning through time-series . . . . . . . . . 260
7.2.4 Limits of planning . . . . . . . . . . . . . . 261
7.2.5 Bureaucratic appeal . . . . . . . . . . . . . 265
7.3 The Rugged Vision . . . . . . . . . . . . . . . . . . 268
7.3.1 Irreducible uncertainty . . . . . . . . . . . . 270
7.3.2 Opportunistic allocations . . . . . . . . . . . 272
7.3.3 Asymmetry of time . . . . . . . . . . . . . . 275
7.3.4 Superiority of the rugged vision . . . . . . . 278
7.4 Predictability of human affairs . . . . . . . . . . . . 279
7.4.1 Momentum and cyclicities . . . . . . . . . . 280
7.4.2 The dead end . . . . . . . . . . . . . . . . . 284
7.4.3 The law of small numbers . . . . . . . . . . 286
7.4.4 Probabilistic forecasts . . . . . . . . . . . . 288
7.4.5 Fat tails . . . . . . . . . . . . . . . . . . . . 294
7.4.6 The price of knowledge . . . . . . . . . . . . 297
7.4.7 High-dimensional forecasts . . . . . . . . . . 299
7.4.8 Functional forecasts . . . . . . . . . . . . . . 302
7.4.9 Stochastic simulations . . . . . . . . . . . . 305
8 Decisions 309
8.1 Thinking the decision . . . . . . . . . . . . . . . . . 311
8.1.1 Surfacing the frame . . . . . . . . . . . . . . 311
8.1.2 Banishing artifacts . . . . . . . . . . . . . . 312
8.1.3 System economics . . . . . . . . . . . . . . . 315
8.1.4 Embracing the margin . . . . . . . . . . . . 316
8.2 Why automate . . . . . . . . . . . . . . . . . . . . 318
8.2.1 A productive asset . . . . . . . . . . . . . . 319
8.2.2 Manual overrides . . . . . . . . . . . . . . . 320
8.2.3 Culture and unrest . . . . . . . . . . . . . . 321
8.2.4 Human exceptionalism . . . . . . . . . . . . 322
8.2.5 From management to machinery . . . . . . . 322
8.2.6 Where to start . . . . . . . . . . . . . . . . 325
8.3 Deciding is optimizing . . . . . . . . . . . . . . . . 327
8.3.1 The optimization paradigm . . . . . . . . . 327
8.3.2 On optimality . . . . . . . . . . . . . . . . . 329
8.3.3 The solver within . . . . . . . . . . . . . . . 330
8.3.4 The economics of compute . . . . . . . . . . 333
8.4 Deciding under uncertainty . . . . . . . . . . . . . . 334
8.4.1 Fragile decisions . . . . . . . . . . . . . . . . 336
8.4.2 Unenforceable constraints . . . . . . . . . . 339
8.4.3 Exploration vs. exploitation . . . . . . . . . 341
8.4.4 Nurturing options . . . . . . . . . . . . . . . 344
8.5 Sequential decisions . . . . . . . . . . . . . . . . . . 345
8.5.1 Window of responsibility . . . . . . . . . . . 347
8.5.2 Permutation invariance . . . . . . . . . . . . 351
8.5.3 Postponement principle . . . . . . . . . . . . 354
8.5.4 Policy search . . . . . . . . . . . . . . . . . 357
8.6 Changing course . . . . . . . . . . . . . . . . . . . . 358
8.6.1 The load of a decision . . . . . . . . . . . . 360
8.6.2 The half-life of a decision . . . . . . . . . . . 361
9 Engineering 363
9.1 Why program . . . . . . . . . . . . . . . . . . . . . 365
9.2 Experimental optimization . . . . . . . . . . . . . . 367
9.2.1 Cartesianism . . . . . . . . . . . . . . . . . 369
9.2.2 Mundane issues . . . . . . . . . . . . . . . . 370
9.2.3 Insane decisions . . . . . . . . . . . . . . . . 372
9.2.4 Iterative falsification . . . . . . . . . . . . . 373
9.2.5 Instrumentation . . . . . . . . . . . . . . . . 374
9.2.6 Continuous improvement . . . . . . . . . . . 375
9.3 Whiteboxing . . . . . . . . . . . . . . . . . . . . . . 376
9.4 Fermi-ization of the elusive . . . . . . . . . . . . . . 379
9.5 On programmability . . . . . . . . . . . . . . . . . 382
9.5.1 Programming paradigms . . . . . . . . . . . 385
9.5.2 Spreadsheets . . . . . . . . . . . . . . . . . 393
9.5.3 Generic languages . . . . . . . . . . . . . . . 395
10 Deployment 399
10.1 Selecting a vendor . . . . . . . . . . . . . . . . . . . 401
10.1.1 Request For Proposals . . . . . . . . . . . . 402
10.1.2 Adversarial market research . . . . . . . . . 406
10.2 Scoping the deployment . . . . . . . . . . . . . . . 411
10.2.1 The numerical recipe . . . . . . . . . . . . . 413
10.2.2 The data pipeline . . . . . . . . . . . . . . . 416
10.2.3 Roles in the initiative . . . . . . . . . . . . . 418
10.3 Onboarding . . . . . . . . . . . . . . . . . . . . . . 420
10.3.1 Timeline overview . . . . . . . . . . . . . . . 421
10.3.2 Iterative deployment . . . . . . . . . . . . . 422
10.3.3 Progressive rollout . . . . . . . . . . . . . . 424
10.3.4 Long-term maintenance . . . . . . . . . . . 428
10.4 Organizational changes . . . . . . . . . . . . . . . . 432
10.4.1 Planner vs. Flow Manager . . . . . . . . . . 434
10.4.2 The Supply Chain Scientist . . . . . . . . . 439
10.4.3 The greater company . . . . . . . . . . . . . 443
11 Stagnation 449
11.1 The paradox of success . . . . . . . . . . . . . . . . 450
11.2 Real-world complexity . . . . . . . . . . . . . . . . 452
11.2.1 Operational vignette . . . . . . . . . . . . . 452
11.2.2 Local maximum . . . . . . . . . . . . . . . . 453
11.3 Why the equilibrium holds . . . . . . . . . . . . . . 454
11.4 Who sustains it . . . . . . . . . . . . . . . . . . . . 456
11.4.1 Software vendors . . . . . . . . . . . . . . . 459
11.4.2 Consultants . . . . . . . . . . . . . . . . . . 461
11.4.3 Corporate data science . . . . . . . . . . . . 463
11.5 The organizational trap . . . . . . . . . . . . . . . . 465
11.5.1 Second-class status . . . . . . . . . . . . . . 465
11.5.2 Bureaucracy and ticket latency . . . . . . . 466
11.5.3 Subordination to IT . . . . . . . . . . . . . 466
11.5.4 Patch accretion . . . . . . . . . . . . . . . . 467
11.5.5 Seam and ownership . . . . . . . . . . . . . 467
11.6 Rejection filters . . . . . . . . . . . . . . . . . . . . 468
11.6.1 Trivia . . . . . . . . . . . . . . . . . . . . . 469
11.6.2 Authorities . . . . . . . . . . . . . . . . . . 470
11.6.3 Puzzles . . . . . . . . . . . . . . . . . . . . . 470
11.6.4 Using the filters . . . . . . . . . . . . . . . . 471
11.7 Looking onward . . . . . . . . . . . . . . . . . . . . 472
11.7.1 What must stop . . . . . . . . . . . . . . . . 473
11.7.2 Towards a better way . . . . . . . . . . . . . 474
A Technical notes 477
A.1 Aperiodic rate of return . . . . . . . . . . . . . . . 477
A.2 Discounted cash flows . . . . . . . . . . . . . . . . . 479
A.3 Informational entropy . . . . . . . . . . . . . . . . . 481
A.4 Topological sort . . . . . . . . . . . . . . . . . . . . 483
A.5 Four-layer perceptron . . . . . . . . . . . . . . . . . 486
A.6 Mathematical optimization . . . . . . . . . . . . . . 488
Chapter 1
Primer
Between a bad economist and a good one lies the whole
distinction: the former stops at the visible effect; the latter
reckons both with the effect that is seen and with those
that must be foreseen. This difference is vast [...] the bad
economist pursues a small present good that will be followed
by a great future harm, whereas the true economist seeks a
great future good at the risk of a slight present hardship.
What is Seen and What is Not Seen (1850), Frédéric Bastiat
Supply chain is an abstraction—an intellectual tool that co-
ordinates the efforts of many to deliver physical goods cheaply
and reliably. This instrument was devised to tame the incredible
complexity that our industrialized civilization entails. Supply chain
cannot be seen or touched. Factories, warehouses, vessels, and
trucks—visible as they are—are not the supply chain. The supply
chain lies in the web of expectations binding those assets together.
Supply chain is an intent, not a thing.
Over the last two decades, automation has become a central
concern for supply chain. If the 20th century was the century of
mechanizing physical labor, the 21st is decidedly about mechanizing
intellectual labor. The ambient complexity of most large supply
chains already vastly exceeds what can be handled even by the
most diligent managers. As such, supply chain must be approached
and understood in ways that maximally leverage what modern
computers offer.
Approaching supply chain is an arduous task. It is at once
abstract—demanding reflection on concepts like time or human
nature—and concrete, requiring attention to mundane details such
as a container’s maximum tonnage or national holidays. Most diffi-
cult of all, as Bastiat rightly noted, is the ability to envision what
cannot be seen—only foreseen. Indeed, the connection between
economics and supply chain runs deep.
We therefore need a precise definition of supply chain—a foun-
dation that will guide the chapters ahead.
Definition (Supply chain).
Mastery of optionality under variability in managing the
flow of physical goods.
Supply chain is both a field of study and a corporate prac-
tice. The field of study is descriptive: it describes the flow of
physical goods by uncovering its intrinsic principles and properties.
The goal is to develop a predictive theory that assesses both the
immediate and long-term consequences of every factor influenc-
ing the flow. Some levers, such as issuing a purchase order to a
supplier, are within the company’s control, whereas others, like
receiving a purchase order from a client, are not. Ultimately, these
consequences shape the company’s long-term profitability.
The corporate practice is prescriptive. It aims to design the
organization and methods best suited to guide the flow of physical
goods within—and potentially beyond—the company. The goal is
to pinpoint the perspectives, tools, and practices generally consid-
ered desirable for enhancing supply chains. Conversely, practices
generally detrimental to profitability should be recognized as such.
Naturally, the prescriptive approach is grounded in descriptive
analysis: understanding decision consequences should guide the
company toward better choices.
1.1 Decisions
A decision allocates a scarce resource to a specific purpose, thereby
excluding alternatives that were previously available. For exam-
ple, if a company spends money to purchase one additional unit
of a product, that money can no longer be used for any alter-
native use. Over time, the company will need funds for new
purchases—possibly recovered by disposing of or repurposing the
previously acquired product. For now, the money remains com-
mitted as a result of the decision. Supply-chain decisions are the
subset of economic decisions concerned with the flow of goods;
many economic choices involve no such flow at all.
All resources, not just money, are scarce. Inventory is another
fundamental resource in supply-chain settings. For example, if
a unit in the warehouse is dispatched to one store, it cannot be
dispatched to any other. Eventually, the company might restock
the same product in the warehouse or even return the original unit
from the store. However, at least for now, that inventory unit is
committed as a result of the decision.
Pricing decisions also play a critical role in supply chain. Set-
ting a product’s price creates a market condition for its exchange,
effectively precluding other pricing options for that sale. Although
market demand always involves uncertainty, the chosen price com-
municates the company’s valuation of its inventory. Over time,
companies may adjust their pricing strategies for subsequent sales
in response to market feedback.
Any deliberate action that alters what moves, where, when, or
how much is a supply-chain decision. Raising a purchase order,
scheduling a truck, allocating an inbound container to a warehouse
rather than a cross-dock, or discounting a slow-mover all reshape
the material flow within operational lead times; they therefore fall
squarely under the remit of supply chain.
Conversely, initiatives whose influence is only remote—funding
basic R&D, commissioning an advertising campaign, scouting a
new geography—may eventually affect the flow but do so through
several intermediate steps. Their relevance is strategic rather than
operational, and thus they sit outside the scope addressed here. In
practice, the boundary is crisp: among the hundreds of levers a
planner manipulates every week, only a handful resist immediate
classification, and for those rare cases the label—supply chain or
not—has negligible impact on profitability.
What usually obstructs this recognition is not analytical diffi-
culty but corporate tradition. Over decades, companies have la-
beled certain levers—pricing, merchandising, assortment choices—as
the exclusive province of marketing or sales and have thereby exiled
them from supply-chain conversations. Yet these very levers mold
demand and, by extension, every downstream movement of goods.
In the pages that follow, they will be treated where logic places
them: squarely within the perimeter of supply chain.
1.2 Optionality
A decision is made among competing options. However, having
multiple available options is not assured for a company. For
example, if a company has identified only one supplier for a product,
its choices are limited to selecting the order date and quantity from
that supplier. If that supplier proves too unreliable or expensive,
the company may be forced to stop ordering, likely frustrating
both the business and its clients. Thus, fostering favorable options
is crucial. Optionality is vital because any forecasts a company
produces inherently involve irreducible uncertainty, regardless of
the technology used.
Supply chain options are limited by the capacity—and, by
extension, the cost—required to establish them. Management
attention—and organizational attention in general—is another
scarce resource that must be allocated carefully. Having multiple
reliable suppliers is preferable, but it requires identifying, qualifying,
and contracting them to enable timely ordering. Depending on the
circumstances, this attention may be better invested in deepening
the relationship with the current—and possibly best—supplier.
Viewing unused options as mere overhead to be trimmed by
management is naive and misguided. There may be cases where
a company overinvests in optionality, yielding diminishing re-
turns when additional options fail to deliver proportional ben-
efits. Nonetheless, such overinvestments are infrequent, even if
maintaining options may seem costly to outsiders.
Numerous options are self-evident, yet for practical purposes
many are ruled out. This happens whenever options cannot be
easily assessed. For instance, a retail network with many stores
may recognize occasional opportunities to rebalance inventory by
selecting an optimal route for a truck to load and unload goods
sequentially. Yet, if such rebalancing must be planned by hand—or
with generic tools ill-suited to the task—the combinatorial explo-
sion swiftly overwhelms people and spreadsheets. In the absence
of dedicated optimization software able to sift through millions of
routing permutations in seconds, this class of options is quietly
dismissed as “too complex” and never reaches the decision table.
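A minimal sketch, in Python, makes the combinatorial explosion concrete (the stop counts below are illustrative and not taken from the text): the number of visit orders for a single truck grows factorially with the number of stops, which is why spreadsheets and generic tools give up long before a dedicated routing optimizer would.

import math

def route_permutations(stops: int) -> int:
    # Number of distinct visit orders for one truck serving `stops` stores.
    return math.factorial(stops)

for stops in (5, 10, 15, 20):
    print(f"{stops} stops -> {route_permutations(stops):,} candidate routes")
# 5 -> 120; 10 -> 3,628,800; 15 -> 1,307,674,368,000; 20 -> 2,432,902,008,176,640,000

Real optimizers do not enumerate these routes one by one; the point is only that any manual or spreadsheet-based enumeration is hopeless beyond a handful of stops.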
More often, options are assessed only intermittently, largely
due to the company’s limited attention or limited tooling. For
example, a company might place orders with a supplier only once
a week—not because weekly cycles have inherent limitations or
benefits, but simply because composing daily orders would be too
time-consuming. A daily assessment does not mean an order is
placed every day; it means the option to order is reviewed daily,
even if no order is made on most days.
1.3 Physical goods
Supply chain is about the flow of physical goods. Remove the goods,
and what remains is little more than finance and trading. The craft
unfolds in three recurring moves: first, money is converted into
goods—stock units purchased from suppliers; next, those goods are
processed—inspected, stored, transported, or transformed; finally,
the goods are turned back into money, whether through direct sale
or by being consumed in producing further products and services.
The process is meant to be repeated indefinitely, though never
in the exact same way, as every aspect of the flow changes over
time. Only the existence of the flow remains constant, enduring as
long as the company exists.
Physical goods impose constraints that cannot be eluded. Every
action involving them—purchasing, moving, transforming—burns
cash, yet every non-action—keeping, waiting, storing—levies an
implicit toll just the same. Money is spent whether the goods flow
or stand still; the only leverage lies in choosing the mix of actions
and inactions that minimizes total cost while preserving optionality.
These realities remain invisible when goods are reduced to electronic
markers in modern trading systems. The high-frequency trader
dealing in copper need not care where the metal rests, how it is
protected from oxidation or theft, or whether it fits the gross-weight
limit of a twenty-foot container. The supply-chain practitioner,
conversely, must grapple with each of these constraints, and the
market rewards companies that master both the explicit costs of
action and the implicit costs of inaction.
Handling physical goods is constrained by matter, energy, space,
and time. Crates cannot tunnel through walls, pallets cannot
out-lift their forklifts, and a vessel that sailed from Shanghai
yesterday will not berth in Rotterdam tomorrow. By contrast, the
informational layer that steers those movements is, for practical
purposes, elastic: a commodity laptop can explore millions of
permutations per second, and a modest cloud cluster can tame
billions.
Elastic does not mean trivial. Turning raw data into actionable
insight is fiendishly difficult—noise, ambiguity, and sheer volume
conspire to blur the picture. Yet every minute a planner spends
waiting for a report to refresh, every replenishment batch delayed
by “system limitations”, flags a design failure. The bottleneck
should reside in steel and diesel, not in an SQL query.
Timing makes the point vivid. An algorithm that computes
procurement recommendations for a thousand SKUs in fifty millisec-
onds is instantaneous next to the fortnight needed to manufacture
the goods and the six weeks needed to float them across an ocean.
Relative to atoms, bits move at light speed; any decision latency
that looms large beside physical lead time is unnecessary waste.
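Using the figures just cited, the disparity can be made explicit: two weeks of manufacturing plus six weeks at sea amount to roughly 56 days, against 50 milliseconds of computation,

$$\frac{56 \times 86{,}400\,\text{s}}{0.05\,\text{s}} \approx 10^{8},$$

that is, about eight orders of magnitude between the digital step and the physical one.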
Scale tells the same story. Enumerating ten million cartons
costs pennies in storage and milliseconds of CPU. Housing those
cartons demands hectares of racking, fleets of conveyors, and armies
of operators. When a project is declared “blocked by data volumes”,
the remedy is never more forklifts; it is always better data modeling
and leaner software.
In short, the limiting factor of a well-run supply chain ought
to be reality itself—cubic meters, kilowatts, calendar days. When-
ever the choke point sits in the digital cockpit—dashboards that
stall, optimizers that time-out, planners who “cannot see” the op-
tions—management has located a low-hanging opportunity. Elim-
inating decision friction is not cosmetic; it is the primary lever
through which modern computing squeezes more returns from the
same scarce resources.
1.4 Variability
Every supply-chain decision is a wager on the state of the firm and
its market at the moment the goods finally move. Because shifting
or transforming atoms always takes time—sometimes hours, often
days—the decision must reach into a future that is fundamentally
uncertain. That future may mirror today’s assumptions, or it may
diverge sharply, turning a careful plan into a loss or, with equal
ease, into an unexpected windfall.
For example, when a company places an order with an overseas
supplier, it must navigate multiple uncertainties. Production and
transportation delays are inherently unpredictable, as neither the
manufacturer nor the carrier is entirely reliable. Furthermore, the
quantity of goods received might be diminished due to random
transit damage. Finally, customers’ future willingness to pay
for these goods remains uncertain—further complicated by the
company’s lack of influence over competitors’ pricing strategies
during and after delivery.
A brief comparison between the isolated manufacturing process
and supply chain as a whole clarifies the nature of variability. On
the factory floor, any deviation from the nominal thickness of a
steel sheet, the purity of a reagent, or the timing of a pick-and-
place arm is a defect. Variability is an enemy that can always be
beaten—at a price—and the price keeps falling as sensors, actuators,
and algorithms push control ever closer to the atomic scale; the
semiconductor industry is proof enough. Given sufficient capital,
the residual spread around the target mean can be driven arbitrarily
low. In supply chain, the situation is inverted: demand, lead
times, supplier reliability, and competitive moves are produced by
countless independent agents whose behavior cannot be engineered
away. No amount of money can purchase a deterministic future.
The only rational stance is to expect variability, cultivate options,
and design decisions that turn uncertainty into profit rather than
loss.
Indeed, supply chains are exposed to a broader world on a
scale far beyond that of a closed manufacturing environment. No
company can shield itself entirely from external forces—be they
from clients, suppliers, regulators, the military, or competitors. In
addition to man-made factors, natural events—such as snowstorms,
floods, forest fires, tsunamis, earthquakes, and pandemics—still dis-
rupt supply chains, even though the human toll such catastrophes
exact has, thankfully, declined markedly over the last century.
Unlike in a manufacturing process, where variability is invari-
ably a net negative, supply chain variability can often be trans-
formed into a competitive advantage. Consider a bottled-water
producer that intentionally builds filling capacity far above its
average demand. Most of the time the extra lines sit idle; yet
when a heatwave strikes—an episode that recurs only a handful
of summers per decade—the company can surge output overnight
while competitors ration stock. The transient spike in volume,
coupled with the premium consumers willingly pay under swelter-
ing temperatures, more than repays the dormant steel. Surplus
capacity here is an option bought in advance precisely to monetize
extreme weather—a textbook illustration of variability turned into
profit.
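A back-of-the-envelope condition captures the wager; the symbols are introduced here for illustration, as the text supplies no figures. The dormant lines pay for themselves whenever the expected heatwave profit exceeds their annual carrying cost:

$$p \cdot Q_{\text{surge}} \cdot m_{\text{heatwave}} > C_{\text{idle}},$$

where $p$ is the yearly probability of a heatwave, $Q_{\text{surge}}$ the extra volume sold during one, $m_{\text{heatwave}}$ the unit margin at heatwave prices, and $C_{\text{idle}}$ the annual cost of keeping the spare capacity in place.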
In short, whenever variability brings risks, it simultaneously
opens up opportunities. In fact, risks and opportunities reveal
more about an observer’s vested interests than about their intrinsic
nature. Tour operators organizing trips to Mount Etna have a
vested interest in the volcano’s continued activity. For them, unlike
the people of ancient Pompeii, an active volcano is not a disaster
but a crucial source of income.
1.5 Mastery
Mastering a supply chain means nurturing favorable options while
committing just enough resources to remain profitable today and
resilient tomorrow. Such mastery is never the feat of a lone planner;
it crystallizes through the coordinated work of many disciplines
acting in concert.
Even in well-aligned organizations, five stubborn traits keep
supply chains hard to tame: intangibility, mediation, complexity,
connectivity, and wickedness. The pages that follow examine each
trait in turn and show how it can be transformed from a hindrance
into a lever of competitive advantage.
All visible assets—factories, trucks, pallets, even people—matter
to the supply chain only through the intent that animates them.
Supply chain is thus fundamentally intangible: a set of expectations
about future exchanges of goods, money, and service.
Consider a single bottle of milk resting on a supermarket shelf.
Tangible though it is, its presence is justified only by a pair of
overlapping wagers. The retailer bets that a customer will buy
the bottle before it spoils; the customer bets that the retailer will
keep the shelf stocked. These bets remain silent when they succeed
and become painfully audible only when they fail—when the milk
expires or the shelf is bare.
This intangibility is the first obstacle to mastery. Expectations
never appear on a balance sheet; they must be inferred from their
symptoms—stockouts, overstocks, lead-time drift, margin erosion.
Like economists, supply-chain practitioners advance by surfacing
the latent assumptions buried in every MOQ, every replenishment
cycle, every promotional calendar. Mastery begins only once those
assumptions are made explicit, quantified, and challenged.
Modern supply chains cannot be observed directly; every glimpse
is mediated by software records. What planners read as on-hand
inventory, open purchase orders, or transit quantities is only a dig-
ital representation, assembled by an applicative landscape pieced
together over decades. Crucially, this landscape did not arise
as a scientific experiment designed to capture the most relevant
data—contrary to what many academic treatments imply—but as
a patchwork of tools built to keep operations running day after
day. Consequently, the landscape dictates both the visibility and
the vocabulary of the flow; the atoms themselves remain one step
removed.
Practitioners often worry about numerical inaccuracies—an
errant cycle count, a late barcode scan—yet such errors are minor
and usually contained. The real hazard is semantic drift. A
field that meant confirmed_order yesterday may, after a vendor
patch or an improvised user workaround, capture something quite
different tomorrow while its label stays unchanged. Meanings
mutate faster than counts, and distortions multiply as data hops
between systems. Each record is therefore a hypothesis about
reality, not reality itself, and must be revalidated continuously.
The complexity of supply chains far surpasses the capacity of
any single mind, regardless of the effort invested. Even modest
firms now juggle hundreds of thousands of SKUs (stock-keeping
units). Part of this proliferation is accidental—legacy regulations,
haphazard processes, badly patched software. Yet an ever-larger
share is intentional: intricacy deliberately introduced because it
augments the value proposition.
E-commerce makes the point vivid. Delivering a toothbrush
to a studio apartment entails last-mile routing, parcel tracking,
and reverse logistics—steps absent from a pallet drop at a hyper-
market. The extra layers are undeniably complex, yet they spare
customers the trip and therefore command a premium. In aviation,
engine makers no longer sell hardware; they sell “power by the
hour”, aligning perfectly with what airlines value while shifting
maintenance variability onto the manufacturer’s MRO network.
Likewise, a fashion brand that broadens its assortment from 1,000
to 10,000 SKUs complicates buying, stocking, and markdowns, but
the wider choice enlarges the addressable audience and cements
loyalty.
Such essential complexity earns its keep because it creates
customer value that can be monetized—provided it is profitably
tamed. What remains—compliance quirks, phantom SKUs, policy
relics—is accidental complexity and deserves to be excised. Dis-
tinguishing the two, suppressing the accidental, and harnessing
the essential is one of the grand challenges of supply chain in the
21st century.
Supply chains operate as systems in the sense articulated by
Russell Ackoff: the performance of the whole emerges from the inter-
actions of its parts, not from the isolated excellence of each. They
are therefore inherently interconnected. Every decision—however
small—ripples through the entire network. Consider a single unit of
inventory resting in a regional warehouse. The moment a planner
allocates it to Store A, all other stores are instantly deprived of
that option. A local relief—preventing a stockout at Store A—may
simultaneously seed future shortages at Stores B, C and D. This
zero-sum tension is structural, for every product, supplier, truck,
and minute of managerial attention draws upon the same finite
pools of resources.
These couplings are magnified by economic forces. Customers
expect complementary items to be available together; carriers
quote full-truck rates, not pallet rates; a conveyor belt amortizes its
capital only when volume is consolidated. Attempting to “optimize”
one node in isolation merely displaces cost or risk elsewhere in the
chain. Such mechanical, divide-and-conquer reasoning—excellent
for machines—fails for supply chains because it ignores the feedback
loops that bind decisions across time and space.
Mastery therefore demands systems thinking: evaluating every
lever through its influence on the entire flow, both now and in
the future. Only a company that sees warehouse allocation, truck
scheduling, replenishment rules, and pricing as facets of the same
decision space can transform interdependency from a handicap
into an advantage.
The problems that supply chains face are wicked—not in the
moral sense, but in the technical sense coined by Rittel and Webber
to describe challenges that mutate as soon as one intervenes. A
wicked problem has no crisp formulation, no final solution, and
every “fix” reshapes the very system it tries to mend.
A concrete illustration makes this plain. Suppose a company
announces a bonus tied to “94% forecast accuracy”. Once planners’
livelihoods depend on that figure, the fastest route is to shorten
horizons, lump SKUs together, or pad safety stocks until the
arithmetic behaves. Working capital soars, service slides, yet the
dashboard glows green. The measure became the target and thus
ceased to be a good measure—Goodhart’s Law laid bare.
The paradox runs deep. Supply-chain management needs ex-
plicit, numerical targets so optimizers—often software immune to
personal agendas—can pursue them relentlessly. Yet the moment
a target is announced, it broadcasts fresh incentives for employees,
suppliers, and even competitors to game the rules. No prescription
remains optimal for long, and any rule that is obvious, static, or
predictable will be undermined as agents adapt. Wickedness there-
fore overturns naive academic hopes of tidy “problem + resolution”
pairs: mastery lies in continually revisiting both the objectives and
the algorithms that chase them.
1.6 Parting thoughts
The deceptively compact definition of supply chain hides a radical
prospect. Once every lever that moves goods is captured as data,
decisions no longer wait for a planner to click “OK”; they can
be enacted, audited, and refined by algorithms that run day and
night at negligible marginal cost. Automation—understood as
unattended, software-driven decision making—is poised to do for
white-collar schedulers what the assembly line did for manual labor.
Far from being mere labor arbitrage, such automation unlocks
productivity that spreadsheet-ridden organizations can scarcely
imagine. Many companies now employ more analysts grooming
spreadsheets than operators handling cartons, yet a single decision
engine can outpace both, slashing latency and error while freeing
scarce talent to tackle novel, higher-order challenges.
More importantly, software can roam decision spaces that hu-
mans never will. Continuous repricing during viral demand spikes,
mid-voyage rerouting of ocean containers, or continent-wide inven-
tory balancing every hour becomes tangible once computation—not
human patience—is the bottleneck. Just as e-commerce reinvented
retail rather than polishing telesales, unattended optimization will
reshape supply chains instead of merely accelerating yesterday’s
procedures.
The quantitative supply-chain literature amassed since the UNI-
VAC I (1951) overflows with intimidating algebra and petabyte-
hungry algorithms, yet it still fails to harness modern computing
to deliver tangible supply chain results. Its sophistication is mis-
directed: it optimizes what is easy to tally rather than what is
economically decisive, clings to brittle point forecasts, and col-
lapses when confronted with the combinatorial sprawl of real-world
options. By missing the target, it turns today’s near-free gigaflops
into background noise while planners continue to wrestle with
spreadsheets. The qualitative prescriptions on leadership have
fared no better: libraries are full of supply-chain “handbooks” that
few leaders read and from which fewer still would benefit.
A striking symptom of this vacuum is the prevalence of self-
taught practitioners in large companies. Society rightfully tolerates
no self-taught surgeons—the gap between amateurs and trained
experts is too vast to risk human life. Yet when billions in inventory
are at stake, enterprises routinely rely on planners who have pieced
together their craft from tribal lore and blog posts. The comparison
is stark, and it underscores how underdeveloped the discipline
remains.
Mainstream methods have not merely underdelivered; in many
cases they have actively misled, promoting illusory precision and
spreadsheet heroics in place of scalable science. Their shortcom-
ings—and the path to move beyond them—are examined head-on
in the final chapter of this book.
The chapters ahead assemble the scientific building blocks
needed to encode supply-chain mastery into software and to unlock
the unattended optimization that tomorrow’s leaders will take
for granted. Replacing folklore with a precise, computer-ready
science is the prerequisite for turning variability and optionality
from recurring headaches into repeatable gains.
Chapter 2
History
Definitions of logistics, supply chain, and supply chain management
(SCM) diverge: some restrict “supply chain” to the physical flow of
goods and reserve “SCM” for coordination; others treat the terms
as synonyms. Vendors, consultants, and practitioners twist the
terms further to flatter org charts and marketing pitches; a brief
historical detour helps explain the resulting fragmentation.
Supply-chain ideas sit between two poles: Euclidean crispness
and AI’s shape-shifting jargon. Labels change every decade; the
underlying task—mastering optionality in the flow of physical
goods—does not. Claims of expertise in logistics, operations re-
search, supply chain, or enterprise optimization often reveal more
about the claimant’s vantage point than about the field.
This chapter serves a dual purpose. First, it retraces how the
loose, ever-shifting label supply chain emerged from earlier notions
such as logistics and operations research, showing that today’s
meaning is still provisional. Second, it illustrates the adversarial
mindset that will thread through the rest of the book. Below, each
buzzword is paired with the incentives that birthed it. Vendors
trumpet novelty to sell licenses; consultants repackage yesterday’s
ideas to sell slides; executives, anxious not to fall behind, adopt the
language before testing the substance. Sustaining this skeptical
posture protects practitioners from costly mirages while still letting
authentic innovations surface.
2.1 Early 1800s to pre-war 1900s
Long before “logistics” or “supply chain” were coined, the activity
itself was the lifeblood of every city-state. Rome relied on African
grain fleets, Amsterdam on Baltic timber convoys, and Edo on
rice barges timed to favorable tides. Each case posed the same
enduring puzzle: bridging, day after day, the spatial gap between
production and consumption amid uncertainty. The challenge en-
dures; the jargon and analytical tools change. The first systematic
mathematical attempts surfaced only in the late 18th century—a
story we now turn to.
If the modern supply chain era means pursuing operational im-
provements analytically, it begins in that period. In 1781, French
mathematician Gaspard Monge published his Memoir on the The-
ory of Embankments and Rubble (Mémoire sur la théorie des déblais
et des remblais), proposing to minimize the labor required to move
soil for leveling terrain with both elevations and depressions.
The term logistics, once used interchangeably with supply
chain, took shape in a military context during the first half of the
19th century. The earliest attested use—logistique—appears in the
Summary of the Art of War (Précis de l’Art de la Guerre) (1830) by
the Swiss officer Antoine-Henri Jomini¹. Logistics meant moving
resources—people, equipment, materials—to the right place at
the right time. It refined the longstanding concept of military
stewardship dating to antiquity.
¹ Jomini derives it from the French logis, “lodgings”.
In 1840, the British Post Office adopted the “Uniform Penny
Post”, an early civilian demonstration of what we would now call
operations research. Mathematician-engineer Charles Babbage
compiled cost tables showing distance contributed only marginally
to postal expenses; the real drivers were collection, sorting, and
final delivery. Armed with these quantitative insights, Parlia-
ment accepted a nationwide flat rate of one penny per letter, a
decision that radically expanded the volume of mail and, by exten-
sion, the reach of commerce. Babbage’s study prefigured modern
supply-chain analytics by replacing folklore with measurement and
optimization. His later pursuit of programmable calculating ma-
chines—culminating in the Difference Engine—aimed to automate
exactly the statistical tabulations his postal work required.
In 1888, economist Francis Ysidro Edgeworth, of Irish and Span-
ish descent, articulated an insight later known as “the newsvendor
problem”. His idea, first introduced in his Mathematical Theory
of Banking, described the challenge of determining the optimal
inventory level when excess stock must be disposed of at the end
of a period, as is done daily with unsold newspapers. The contri-
bution blended statistical risk analysis with operational problem-
solving—a perspective that would not become mainstream for
roughly a century.
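In modern notation, Edgeworth’s insight is usually written as the critical-fractile rule: order the quantity $Q^{*}$ such that

$$F(Q^{*}) = \frac{c_u}{c_u + c_o},$$

where $F$ is the cumulative distribution of demand, $c_u$ the unit cost of a lost sale, and $c_o$ the unit cost of an unsold item.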
The rise of a stock-owning middle class in the United States at
the end of the 19th century, in turn, gave birth to the early economic
forecasters. Among these pioneers, Roger Babson and Irving Fisher
rose to prominence at the start of the 20th century. They promoted
quantitative methods for allocating resources to individuals and
corporations—a spirit echoed by many modern enterprise software
offerings. They also pioneered statistical time-series forecasting
techniques.
The evolution of economics in the first half of the 20th century
spurred numerous methods for solving small operational puzzles.
Several of the attempted solutions led to theories and formulas
that would become enduring supply chain shibboleths. For exam-
ple, the economic order quantity (EOQ) formula—a supply-chain
staple—was introduced in 1913 by Ford Harris and popularized in
the 1930s as Wilson’s formula.
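In its now-standard textbook form, the formula balances fixed ordering costs against holding costs:

$$Q^{*} = \sqrt{\frac{2DK}{h}},$$

where $D$ is the annual demand, $K$ the fixed cost per order, and $h$ the annual holding cost per unit.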
2.2 WWII and post-war developments
At the outbreak of the Second World War, the field returned to its
military roots. Britain assembled nearly 1,000 men for what became
operations research, defined by the British Army as “a scientific
method of providing executive departments with a quantitative
basis for decisions regarding the operations under their control”.
The effort delivered resounding results: fewer artillery rounds to
down enemy aircraft and higher survivability for British ships and
planes.
Buoyed by its wartime prestige, operations research rapidly
expanded in academic circles. The advent of general-purpose pro-
grammable computing hardware enabled the community to explore
methods that would have been utterly impractical to compute by
hand. Hardware limits kept most 1950s–60s research on static or
stationary problems—those assumed unchanged over time. For
example, determining the location of warehouses to service a re-
tail store network, if we assume the demand to be constant, is
a stationary problem. In contrast, optimizing warehouse alloca-
tion processes and dispatch routes is dynamic, as these problems
are sensitive to fluctuations in demand, lead times, traffic, driver
availability, truck maintenance, and more. Dynamic problems
outstripped the hardware of the day and were largely set aside.
Although operations research initially captivated corporate
managers, its appeal began to fade in the 1970s. By the 1980s,
wartime prestige had faded and corporate interests had shifted².
The 1970s brought novel advances, including computing hardware
capable of leveraging barcodes³. Coupled with Codd’s 1970 rela-
tional database, barcodes would reshape supply chains and their
digital landscape in subsequent decades. This convergence yielded
the inventory management system—an early precursor to modern
business software.
² By the 2020s, operations research (OR)—in Americanized spelling—remains
productive, but mostly in designing solvers for mathematical optimization, with
little connection to business or supply chain practice.
³ Patented in 1952 by Norman Woodland and Bernard Silver.
In software circles, “management” then meant the routine tasks
of lower-level managers. The so-called “management system” was
designed to replace the manager entirely. Although their work was
largely clerical, these managers belonged to an educated workforce
adept at keeping meticulous, accurate records. Moreover, these
white-collar workers held positions of authority and prestige that
surpassed those of blue-collar workers responsible for physically
moving goods.
With the emergence of management systems, software vendors
soon recognized that supply chain management encompasses two
distinct facets: bookkeeping and decision-making. Bookkeeping
involves electronically tracking stock levels and handling routine
clerical tasks. Decision-making involves determining the optimal
inventory orders and allocations for the company. Vendors initially
aimed to automate the managerial role; bookkeeping yielded readily,
but decision-making proved vastly more complex—well beyond
1970s hardware.
Nevertheless, “bookkeeping” lacked the allure of “management”.
Software vendors opted for the more marketable term—even at
the cost of clarity—because it bolstered their product image. Sub-
sequently, the software industry widely adopted the term “manage-
ment” in naming systems: WMS (warehouse management system),
CRM (customer relationship management), PLM (product lifecycle
management), etc.
The ubiquity of “management” eventually diluted the prestige
of the title “manager”. In the 2020s, the title “manager” frequently
implies managing only oneself—as seen in roles like project man-
ager, office manager, or account manager. Conversely, employees
with genuine hierarchical authority typically benefit from job titles
that are more specific than “manager”. Instead, these employees
might be assigned titles like “lead” or “leader”.
Thus, “supply chain management” denotes the activities of
managers in the traditional corporate sense—individuals with
formal authority. In contrast, “supply chain” is broader, naming
both the practice and the academic field.
2.3 1980s to early 2000s
By the 1980s, software vendors expanded their scope beyond in-
ventory to encompass orders, payments, salaries, and more. Dis-
tributed software was not yet economically viable for vendors.
Thus, if a company was to acquire one system to bookkeep inven-
tory and another to bookkeep salaries, the later integration of the
two systems was usually deemed impractical. Vendors therefore
consolidated functions into a single mainframe monolith.
From post-war fame, logistics had slipped behind finance and
marketing in prestige. A blunt aphorism circulating in large
consumer-goods companies at the time summed up the prevailing
hierarchy: “Smart people go to marketing, reliable people to pro-
duction, and those lacking both qualities to logistics.” Although
clearly colored by amicable rivalry between corporate vocations,
the saying captured a widespread perception of diminished status
within the logistics function.
One explanation is that, until the 1980s, logistics was largely
empirical. The initial promise of operations research to transform
logistics into a modern analytical science had faded. In 1979, Rus-
sell Ackoff, an American pioneer of the corporate era of operations
research, summarized the situation in his paper The Future of
Operational Research is Past:
Operations Research is dead even though it has yet to be
buried. I also think there is little chance for its resurrection
because there is so little understanding of the reasons for
its demise [...] The life of OR [Operations Research] has
been a short one. It was born here late in the 1930s. By the
mid-60s it had gained widespread acceptance in academic,
scientific, and managerial circles. In my opinion this gain
was accompanied by a loss of its pioneering spirit, its sense
of mission and its innovativeness. Survival, stability and
respectability took precedence over development, and its
decline began.
In the 1980s, decision-making stagnated while bookkeeping
flourished; by decade’s end, many companies had digitized their
transactional operations. Electronic records enabled new ways
to monitor operations; simple totals and averages, recalculated
continuously, began to reshape corporate practice.
Having made modest early-20th-century debuts, consultancy
firms became corporate giants by the early 21st century. Among
many causes, the spread of electronic records in the 1980s signifi-
cantly boosted management consulting.
Despite the mundane observation that the sole occupation of
consultants appears to be wrestling with spreadsheets and slides⁴,
the corporate rationale is that consultants are in the business of
selling impact. Those electronic records created numerous oppor-
tunities to enhance a variety of processes without resorting to any
grand level of analytical sophistication.
⁴ Also invented in the 1980s.
Consulting’s perennial challenge is differentiation. As consult-
ing comes with low entry barriers, it is difficult to maintain an
offering that remains distinct from what the competition is offering.
Thus, it is hardly surprising that influencing the evolution of cor-
porate language is highly appealing to consultants. A now-classic
move is to coin—or at least adopt—new terms every few years to
refresh the offering and brand. The authority gained from being
ostensibly linked with a novel, trendy buzzword creates a highly
desirable competitive edge, at least until competitors inevitably
replicate it.
Supply Chain Management (SCM) was coined by the British
logistician-turned-consultant Keith Oliver in 1982. Sales and Oper-
ations Planning (S&OP) was coined in 1985 by American software
engineer-turned-consultant Richard Ling. Both perspectives re-
volve around the idea of seeking end-to-end alignment of the various
corporate functions. Both practices would have remained abstract
without the increasing availability of digital corporate records.
Backed by software vendors, consultant-advocated practices
spread and demanded new skill sets. Logistics directors became
responsible for producing time-series forecasts and tuning service-
level parameters, as exemplified by the safety stock formula. These
new, more analytical responsibilities marked a significant departure
from traditional logistics directors, who were primarily field leaders
capable of managing large teams of warehouse workers and truck
drivers (no small feat).
By the 1990s, large vendors integrated a bewildering array of
bookkeeping processes into software monoliths. For clarity, these
products should have been named ERM (‘Enterprise Resource
Management’); however, this terminology never caught on. By
then, computing hardware had advanced so much that the decision-
making aspect was no longer considered intractable. Enterprise
clients soon realized that, despite the claims, these tools were
limited to bookkeeping.
Nevertheless, many vendors began—or resumed—investing in
the decision-making aspects of supply chain. These vendors wanted
to avoid confusion with bookkeeping competitors, so they differen-
tiated their offerings. For this reason, vendors, assisted by market
analysts, coined the term ERP (Enterprise Resource Planning).
Within enterprise software, “planning” became the rallying cry
signaling decision-making capability. Indeed, with a master plan
that drills down to the most granular level—thanks to improved
computer hardware—every decision becomes, in theory, merely a
matter of clerical execution⁵. In practice, planning modules com-
bined point time-series forecasting with deterministic inventory
optimization. The intent was to deliver a technological solution
that would replace the lower-level manager entirely in one fell
swoop, rather than gradually.
⁵ This insight is incorrect, deeply so, but we will return to it at a later time.
Soon, competing vendors labeled their products ERPs regard-
less of how little planning they actually included. Granted, en-
terprise products are inherently complex, aiming to cover a wide
array of scenarios; opacity often extends to clients and even to
vendors themselves. As a result, vendors enjoyed considerable free-
dom in choosing which capabilities to showcase in their marketing
materials. One might even argue that a simple moving average
sales forecast qualifies as ‘planning’, thereby justifying the ERP
label. Many vendors took advantage of this semantic legerdemain.
From the 1990s onward, new vendors focused exclusively on
decision-making, positioning themselves as an analytical layer
above the transactional layer offered by ERPs (which arguably
should have been called ERMs). In the supply chain context,
these vendors broadly fell into two camps: the algorithmic and the
forensic. Both camps leveraged the same electronic records found
in preexisting business systems.
The algorithmic camp focused on time-series forecasting tech-
niques as well as the inventory optimization techniques underlying
the planning paradigm. Because ERPs already claimed planning,
marketing extra “planning” was difficult; emerging vendors, aided
by analysts, coined “Advanced Planning System” (APS). In other
words, “yes, your ERP has planning already, but our complemen-
tary solution has advanced planning.”
The forensic camp concentrated on producing business reports
designed to help managers make better decisions. ERPs already
featured “business reporting”, so software vendors—again with
analyst help—popularized “business intelligence” (BI) for this
class of tools⁶. Technically, BI’s multidimensional cubes marked
a notable departure from Codd’s relational model two decades
earlier.
⁶ Here “intelligence” means collecting and presenting data; by any reasonable
standard those tools are not intelligent.
2.4 Present day considerations
In 2025, the descendants of both camps (algorithmic and forensic)
remain active. Within the algorithmic camp, integrated busi-
ness planning is promoted with an emphasis on machine learn-
ing—pitched as a superior descendant of earlier statistical tech-
niques. In the forensic camp, some promote concepts such as “sup-
ply chain digital twins” or “supply chain control towers,” which
are aimed at providing insights to company leadership.
All this shows that creatively rebranding existing products with
24 CHAPTER 2. HISTORY
the latest buzzwords—a practice traceable to the 1970s—remains
prevalent. The case of “planning” capabilities was merely a minor
skirmish in the history of enterprise software. Computing hardware
and software have advanced at an impressive pace over the past
70 years. Due to continuous advances in hardware, technological
obsolescence arguably poses the greatest threat to enterprise soft-
ware vendors today. Newer hardware makes once-impossible feats
feasible—but typically demands complete software re-engineering
to exploit its capabilities. Given this reality, it is far easier—and
more cost-effective—to rebrand one’s offering than to completely
re-engineer it to fully leverage the latest technological develop-
ments.
Software vendors encounter a real-world example of the classic
prisoner’s dilemma. Collectively, vendors would benefit from more
honest and transparent communication about their products. Mak-
ing promises that cannot be delivered inevitably leads to costly
problems for both parties. Yet, if one vendor exaggerates its claims
more than competitors, it gains a strategic market edge by making
its solution more appealing to clients. Overpromising and underde-
livering are thus unsurprising consequences of enterprise software’s
dynamics.
The lexical inflation soon reached the org chart itself. Supply
chain directors emerged in the early 2000s—sometimes as a simple
rebranding of the incumbent logistics director, sometimes as a
freshly minted role meant to champion cross-functional analytics.
By the 2020s, logistics directors seldom engaged in sophisticated
predictive analytics, whereas supply chain directors rarely handled
day-to-day warehouse operations.
Although advanced analytics—machine learning and mathe-
matical optimization—have been available for about three decades,
adoption in supply-chain divisions remains the exception; spread-
sheets still dominate decision-making.
Today, a logistician handles warehousing and transportation.
When logistics is outsourced, providers are known as “third-party
logistics providers” (3PLs), which receive an electronic order for
each stock movement. Ironically, although logistics was originally
founded on the principle of moving resources at the right time
and place, 3PLs profit from inefficiencies—charging for inventory
movement and for storage duration. Thus, the greater the number
of movements and the longer the storage durations, the higher the
revenue earned by the logistician.
2.5 Mainstream perception
Over the last decade, the term “supply chain” has become increas-
ingly common in mainstream media. In news coverage, a “supply
chain” refers to a network of specialized companies that cooper-
ate—sometimes very closely—to deliver sophisticated products,
such as semiconductors or automobiles. News about a “supply
chain” typically emerges from negative events—such as a catas-
trophic port fire—that not only affect one entity but also ripple
through a network of interdependent companies, amplifying the
original impact.
Journalists frequently ask what should be done to prevent such
problems from recurring. While
legitimate, this perspective overlooks the absence of any central
authority governing those “supply chains”. Indeed, in this context,
a supply chain is composed of numerous uncoordinated compa-
nies. Furthermore, introducing an authority—such as through
government intervention—would likely worsen the situation, as
such an authority would lack the necessary information to allo-
cate resources effectively. As Friedrich Hayek noted in The Use
of Knowledge in Society (1945), relevant knowledge is widely dis-
persed among countless individuals in disparate companies. No
single authority can ever fully harness this decentralized informa-
tion, and any attempt to impose a “captain” inevitably leads to
resource misallocation.
Such newsworthy failures are emergent properties of the econ-
omy’s division of labor. Any division of labor carries inherent
fragility; if a highly specialized segment falters, its repercussions
spread throughout the economy. The only alternative to this
fragility is to renounce the division of labor, an unreasonable
proposition. Indeed, it would dramatically increase the cost of
physical goods consumed every day, from rolls of toilet paper to
smartphones. The tremendous benefits of the division of labor have
been well-understood for centuries, with landmark formalizations
such as David Ricardo’s 19th-century law of comparative advantage.
Yet the division of labor characterizes markets as well: people
trade precisely because specialization benefits both sides. The
"supply chains" observed by the media are merely groups of tightly
dependent companies that emerge due to the competitive advan-
tages conferred by their respective specialization.
Supply chains are also conflated with outsourcing, perceived as
“stealing” jobs from wealthier Westonia to benefit poorer Ruritania.
Yet, in modern settings, gaining access to cheap labor overseas
is rarely more than a secondary concern. If supply chains were
solely about accessing the cheapest labor, then Africa would be a
prime supply chain partner for advanced nations. Yet, the opposite
is true: advanced nations primarily trade between themselves.
The least-developed countries—often with the cheapest labor—are
largely absent from global trade.
In practice, infrastructure, technology, and capital requirements
usually overshadow pure labor considerations. For instance, the
containerization boom of the 1950s–60s drastically reduced mar-
itime shipping costs, but also demanded massive port upgrades
that only countries with robust capital investments and stable
regulatory frameworks could afford. More recently, high-value
electronics require advanced robotics, rigorous quality standards,
reliable power, and specialized engineering talent—complexities
that dwarf the advantage of marginally lower wages. As a result,
countries with strong logistics networks, dependable legal systems,
and a critical mass of technical know-how benefit from large trade
volumes with similarly developed nations, whereas regions lacking
these assets often remain largely disconnected from modern supply
chain opportunities.
2.6 Looking ahead
Supply-chain terminology remains in flux, and vendors are more
active than ever. Some consultancies advocate the “agile”, the
“dynamic”, or the “holistic” supply chain, or call for “supply-chain
design”. Others forgo the “supply chain” label altogether, as in
flowcasting or
Demand Driven Material Requirements Planning (DDMRP). All
these paradigms address similar problems, although their methods
and techniques differ significantly.
Given that over a million scientific papers and more than ten
thousand books have been published in the last 70 years, the field
of ‘supply chain’ is generally regarded as mature. Yet companies
still rely on spreadsheet heuristics that bear little resemblance to
mainstream supply-chain theories.
One could blame practitioner ignorance of the vast literature;
the charge is hardly tenable. Most large companies with an inter-
national footprint operate under fierce competitive pressure. Given
how little mainstream theory has evolved over the last half-century,
claiming persistent corporate ignorance strains credulity.
A simpler proposition is that mainstream theory is inadequate:
practice deviates because the theory is flawed and the anticipated
profits fail to materialize. In other words, despite its vast literature,
supply chain remains in a pre-scientific stage where knowledge fails
to yield consistent and predictable results.
Over seventy years, bookkeeping layers leaped forward—barcode
scans, real-time databases, cloud ledgers—yet the rules deciding
what to buy, make, move, or stock still resemble mid-century text-
books. Progress has been spectacular on the transaction side; on
the decision side it has stalled. The root is not technical but
epistemic: the field lacks a robust way to separate durable insight
from fashionable doctrine.
Explaining—and ultimately closing—this gap requires a pause
in the historical narrative. The next chapter, therefore, shifts
from chronology to epistemology. Before prescribing remedies, we
will examine how claims are generated, who benefits from which
narratives, and what tests ideas must pass before governing a single
pallet. Only once these standards are explicit can progress in
decision-making—interrupted for five decades—resume in earnest.
Chapter 3
Epistemology
Improving a supply chain begins with improving the knowledge
that steers it. This chapter therefore shifts from the mechanics
of moving goods to the epistemic question that precedes every
recommendation: How do we decide that an idea about supply chain
merits belief, funding, and deployment? Answering this question is
not idle philosophy; it is the only safeguard against the fashionable
doctrines, glossy slides, and “proven” algorithms that routinely
consume budgets while leaving performance unchanged.
Supply-chain knowledge is unusually error-prone. Uncertainty
blurs the feedback loop between decision and outcome; incentives
reward confident storytelling over careful testing; and the sheer
physical latency of global flows makes controlled experiments rare.
As a result, spreadsheets still dominate day-to-day practice while
journals and books overflow with theories that claim—on paper—to
be superior. The tension is epistemic, not technical: we lack a
robust method for filtering durable insight from transient hype.
This chapter therefore pursues three objectives:
1. Classification. We locate supply chain on the map of
knowledge—as an applied branch of economics—and show why
mistaking it for mathematics, sociology, or mere record-keeping
leads to systematic failure.

2. Validation. We examine what counts as evidence in a domain
where randomized trials are scarce, counterfactuals dominate, and
success is measured in profit rather than elegance or consensus.

3. Corruption. We analyze how actors—academics, consultants,
software vendors—despite good intentions, unwittingly distort the
knowledge pipeline, and outline safeguards that preserve integrity
amid adversarial incentives.
Throughout this book, the singular “supply chain” denotes
the discipline, whereas the plural “supply chains” refers to the
concrete flows run by companies. The discipline’s role is to arm
any given chain—not an idealized archetype—with decisions that
raise long-run profit under uncertainty. Conversely, extending the
term to whole markets drifts into general economics and blurs the
practical lens adopted here.
By the end of the chapter, the reader will possess a yardstick
for judging every subsequent claim—algorithmic, organizational,
or technological: Does this idea allocate scarce resources more
profitably when variability and adversarial incentives are accounted
for? Establishing that yardstick is the prerequisite for the analyses
that follow.
3.1 Economics
Supply chain must be understood as an applied branch of eco-
nomics, not a free-standing discipline. Overlooking this hierarchy
is perhaps the most frequent—and most severe—epistemic error in
the literature. This single error explains much of why mainstream
supply-chain knowledge proves barren in practice.
Regrettably, economics itself is frequently misunderstood, even
though its core ideas are quite simple. The British economist Lionel
Robbins gave what is typically regarded as the classic definition of
economics:
Definition (Economics). The study of the use of scarce resources
which have alternative usesᵃ.

ᵃ An Essay on the Nature and Significance of Economic Science (1935),
Lionel Robbins.
Robbins’ sentence is deceptively dense; each word carries weight.
Resources encompass far more than cash. Factory hours, pal-
let space, truck capacity, managerial attention, even goodwill
with a supplier all qualify. Whatever can be employed to satisfy
a want—or withheld to satisfy another—enters the economist’s
ledger.
Scarcity is the universal constraint. Desires outrun means in
every society, whether a medieval manor starved for arable acres, a
planned economy constrained by steel quotas, or a venture-backed
startup racing against its burn rate. Scarcity is not a quirk of
capitalism; it is the human condition.
Alternative uses encode the bite of opportunity cost. A ton
of copper rolled into power cables cannot also be stamped into
cartridge cases; an hour of a planner’s focus spent chasing a late
shipment cannot refine next season’s assortment. Every allocation
vetoes its rivals, and that foregone benefit is the hidden price of
every choice.
Because these three elements—resources, scarcity, and alterna-
tive uses—persist under feudalism, socialism, or capitalism alike,
economics remains valid regardless of the social order. The mecha-
nisms that resolve scarcity differ (prices, queues, decrees), but the
underlying calculus of trade-offs does not. Supply-chain practice
must therefore rest on economic bedrock, no matter how the firm
is owned or how markets are organized.
Set against our definition of supply chain, the relationship
becomes clear:
Supply chain is the mastery of optionality under variability
in managing the flow of physical goods.
Supply chain fundamentally parallels economics in its treatment
of scarcity. Here, “optionality” names the alternative uses the
company must first cultivate and then exploit. Each supply chain
decision utilizes a scarce resource with alternative potential. Every
item bought contends with all other possible purchases, and every
item manufactured competes with alternatives that could have
drawn on the same materials. Similarly, each site receiving an
extra dispatch unit vies with all other sites for that single item
from the warehouse.
In summary, if we focus solely on allocating scarce resources,
supply chain would be indistinguishable from economics. However,
supply chain diverges from economics in several important ways.
First, “mastery” denotes the profit-seeking stance of companies.
Supply chain is an applied science; unlike the broader field of
economics, it aspires to be prescriptive rather than merely descrip-
tive. While economics explains why some individuals are wealthier
than others, it does not claim to directly guide individuals toward
greater wealth—except indirectly through a better understanding
of market mechanisms.
Furthermore, the scope of supply chain is restricted to the
flow of physical goods—a small subset of economics. For example,
intellectual-property questions—patents and copyrights shaping
production choices—belong to economics, except where they con-
strain production despite adequate physical resources.
Moreover, the scope narrows further with the element of vari-
ability; without variability, an issue would cease to be a supply
chain concern. For instance, consider a warehouse process that
can be nearly perfectly automated, setting a strict upper bound
on pick-and-pack time—provided the arrival rate stays below a
threshold. In such cases, supply chain accepts the automation as a
given, without scrutinizing the economic decisions that enabled it.
Properly classifying supply chain is crucial, because the dis-
cipline is routinely approached through misguided lenses—sterile
mathematics, clerical bookkeeping, or managerial platitudes—while
its genuine foundation is, and always has been, economics.
Contrary to popular belief, economics is far from a “dismal
science”. Granted, it cannot achieve the numerical certainty of
most natural sciences; the free will of participants precludes that.
That said, many economic principles can be considered as solidly
established as any scientific theory can be.
Consider the well-documented case of rent controls. When
a government caps rents below the market-clearing level, several
predictable outcomes follow. Property owners lose the incentive
to maintain or improve existing units, as they cannot recoup
those costs through higher rents. Developers become reluctant to
build, and long-term shortages ensue. Shortages lengthen waiting
lists, reduce choice, and degrade housing quality. Such effects are
observed time and again across different cities and eras, illustrating
how economic principles remain remarkably consistent, regardless
of cultural and temporal differences.
A nearly identical mechanism appears inside individual firms
whenever a product is sold below its market-clearing price. Imagine
a consumer-electronics manufacturer launching a new console at
a price intentionally set 20% below the market-clearing level in
the hope of “delighting the customer”. Demand instantly over-
whelms the short-term production capacity. Retail channels are
wiped out, waiting lists form, and—most revealingly—gray-market
intermediaries (scalpers) purchase the scarce consoles at list price
only to resell them online at the actual market price. The margin
the company voluntarily relinquishes is captured in full by these
middlemen, while end-customers still pay the higher amount. The
firm forfeits revenue, frustrates loyal clients, and tarnishes its brand
by appearing disorganized.
Unless the company can raise output fast enough to meet the
unexpected surge, the economically sound remedy is to remove the
self-imposed price ceiling and let prices rise until demand equals
capacity¹. Any other policy merely subsidizes an ecosystem of
arbitrageurs whose sole contribution is to pocket the difference
between the constrained price and the true clearing price.

¹ The 2020–2021 shortage of next-generation game consoles illustrated
this dynamic: resale prices on major platforms frequently reached double
the manufacturer’s suggested retail price.
The laws of economics are no easier to evade than the laws of
electromagnetism. Failing to characterize supply chain as applied
economics yields, at best, theories and practices rightly ignored by
practitioners, and at worst, decisions that harm firms.
3.2 Mathematics
In academic circles, a prevalent mistake is to regard supply chain
as a branch of mathematics. The book Fundamentals of Supply
Chain Theory (2019) by Snyder and Shen is a canonical exponent
of this misguided path. Over the past seven decades, despite over
a million published papers², academic supply chain literature has
contributed little to real-world practice. Supply chains remain,
with few exceptions, governed by spreadsheets. A company is more
likely to profit from buying lottery tickets than from adopting any
“provably optimal” inventory optimization technique. Nonetheless,
thousands of researchers seem to have devoted their careers to this
approach. It is astonishing that the community has persisted in
such a massive error for so long.
When supply chain is treated as a branch of mathematics, it
is reduced to a series of mathematical puzzles. The newsvendor
model—where a newspaper vendor decides how many copies to
stock under uncertain demand, with unsold copies becoming worth-
less—is one of the oldest and most notable examples. Similarly,
the economic order quantity, defined as the order quantity that
minimizes total holding and ordering costs under constant future
demand, stands as another classic example. Solving these puz-
zles typically involves analytical methods, probability theory, or
algorithms.
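For concreteness, both puzzles admit closed-form answers—stated here
in generic textbook notation rather than that of any particular author.
The newsvendor stocks up to the critical fractile of the demand
distribution $F$, while the economic order quantity balances fixed
ordering costs against holding costs:

\[
q^{*} = F^{-1}\!\left(\frac{c_u}{c_u + c_o}\right),
\qquad
Q^{*} = \sqrt{\frac{2\,K\,D}{h}},
\]

where $c_u$ and $c_o$ are the per-unit costs of under- and over-stocking,
$D$ the constant demand rate, $K$ the fixed cost per order, and $h$ the
holding cost per unit per period.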
In this paradigm, the supply chain puzzle begins with a formal
characterization of the physical flow of goods—the products, sites,
demand and lead-time equations, and constraints. From this
formalization, authors derive a mathematical solution—ideally
with provable properties or, at minimum, a heuristic validated on
(often synthetic) data.

² As of June 2024, Google Scholar returns 827,000 papers for “inventory
optimization techniques”. While there are certainly duplicates and false
positives, one million published papers is the correct order of magnitude.
Internal consistency is not enough; survival in contact with
reality is. What matters is whether a numerical recipe survives
contact with trucks, lead times, and customers. The criterion we
adopt—and its consequences for this book—are stated below under
Falsifiability.
Put plainly, a theory that survives solely by refusing to ex-
pose itself to reality tells us nothing about how trucks roll, how
customers buy, or how pallets wait. Internal consistency guaran-
tees correctness only inside the walls erected by the assumptions;
the moment those walls meet the first promotional spike or the
first customs delay, the “optimal” policy reverts to an untestable
conjecture.
When treated as a branch of mathematics, supply chain puzzles
remain detached from real-world conditions. For example, under
a few assumptions—stationary demand, constant lead time, non-
perishable goods, etc.—an inventory replenishment policy can be
proved optimal. No real-world supply chain can contradict such
a result: it is impossible to devise an experiment to challenge it.
The solution is based solely on internal consistency derived from
the problem’s assumptions.
Proponents of this approach would undoubtedly argue that the
puzzles were not selected at random. On the contrary, these puzzles
have been carefully crafted to accurately reflect the challenges of
real-world supply chains. Moreover, many papers claim support
from “real-world” datasets and report rigorous tests against them.
However, practice offers a clear and forceful rebuttal of this
proposition. Companies neither use these supposedly “optimal”
methods to manage their supply chains nor adopt any of their
variants, not even as a source of inspiration. If these algorithms
were genuinely valuable, the community would see shocks akin
to those in software, where superior techniques—transactional
databases, cloud computing, deep learning—are rapidly embraced.
The claim that companies are unaware of these “modern” supply
chain methods does not hold up under scrutiny either. Since the
late 1970s, enterprise vendors have packaged thousands of such
methods. Yet, they are invariably discarded, as supply-chain
practitioners swiftly revert to their spreadsheets.
Applying Occam’s razor, the simplest explanation is that these
mathematically neat “puzzles” fail to produce any repeatable,
profit-relevant gain once confronted with the messiness of day-
to-day operations. Because they neither cut total cost nor raise
service levels in a way that outweighs their implementation bur-
den, practitioners quite rationally set them aside and return to
tools—even spreadsheets—that have proven their worth.
Finally, much of the apparent “progress” in this branch of
mathematics may be illusory. Research is just as vulnerable to
fashion as any other branch of knowledge. In the 1960s, explicit
analytical resolutions for supply chain puzzles were in vogue; in
the 1990s, parametric time-series models gained popularity; and in
the 2020s, Monte Carlo simulators are the trend. Authors focusing
on current scientific fads do not necessarily contribute to genuine
progress—or its absence—in the field. Moreover, earlier trends
are labeled “outdated”, regardless of what those papers may have
contributed to real-world supply chains.
3.2.1 Falsifiability
Having established that the mathematical posture by itself can-
not steer a supply chain, we need a positive criterion for sorting
knowledge that deserves to govern a pallet from what does not.
A decisive perspective on complex theories emerged in the
first half of the 20th century with the work of Karl Popper³. In
his lifelong attempt to characterize science, Popper introduced
the criterion of empirical falsification. This criterion is central to
the present book; an anecdote from his life sheds light on it.

³ See “The Logic of Scientific Discovery” (1934) by Karl Popper, a
landmark in the history of science.
In the opening chapter of his book “Conjectures and Refuta-
tions” (1963), Popper explains how his youthful experiences in
Vienna around 1919 shaped his idea of falsifiability as the hallmark
of science. He notes that whereas Einstein’s relativity made bold
predictions—and risked being refuted by observation—Marxism
(and also Freudian psychoanalysis) was perpetually “saved” by ad
hoc explanations whenever contrary evidence appeared. Physicists
testing Einstein’s theory welcomed, and even sought out, the pos-
sibility of falsification, using observations—e.g., starlight bending
around the sun during a solar eclipse—to see if the theory held up.
Physicists sought maximal exposure to reality, pursuing the angles
most likely to tear a theory apart.
In contrast, Marxists would “immunize” their doctrine from
refutation by reinterpreting events to fit the theory. For example,
Marxist theory predicted the proletarian revolution would occur
first where factory workers were most numerous—i.e., Great Britain.
Yet the revolution occurred in Tsarist Russia in 1917, one of
Europe’s least industrialized countries. Instead of rejecting the
theory as disproved, Marxists revised the interpretation to explain
why the revolution would happen first in the least industrialized
country.
Obviously, physicists were the ones with the correct intellectual
stance. Their intellectual inclination would end up delivering
two monuments of our present understanding of the universe,
namely general relativity and quantum physics—two theories that
turned out to be extremely capable of making predictions of empirical
significance. Conversely, Marxist theory would keep being
endlessly refuted by the dozens of countries that implemented reforms
inspired by this doctrine and reaped dismal economic results,
baffling their Marxist economists in the process⁴.
⁴ In contrast, Austrian economists were entirely unsurprised by those
outcomes. It was exactly what their theory predicted.

These observations led Popper to articulate falsifiability. It
highlights the radical asymmetry, in science, between truth and
falsehood. No theory can ever be proven true; it can only be
disproved. Indeed, to prove a theory exhaustively—say, general
relativity—one would have to verify that every celestial body,
everywhere and always, follows the laws it states. This is obviously
a wholly impossible feat. At best we can gather a long series of
measurements in agreement with the theory, but beyond that, we
can only extrapolate the validity of the theory to the rest of the
universe. However, the converse is much more accessible: it only
takes a single pair of celestial bodies⁵ contradicting the theory in
order to prove that the theory is wrong.
Thus, at best, we can only attempt to disprove a theory. How-
ever, for such an attempt to ever take place, the theory must expose
itself to an empirical validation process that could, in principle, fail.
Popper referred to such theories as falsifiable. It follows that the
longer a theory survives relentless experimental attack, the more
confidence it earns. In contrast, an unfalsifiable theory cannot
be disproved; such theories do not belong to scientific knowledge.
Falsifiability is now widely recognized as one of the central tenets
of science.
Circling back to supply chain, falsifiability reveals what is wrong
with the purely mathematical posture: it is immune to refutation.
It belongs to applied mathematics, not to the natural sciences. As
a result, the correctness of the numerical recipes derived from this
perspective is shallow. It merely reflects consistency with an initial
bundle of assumptions. Those recipes remain entirely hostage to
the correctness of their premises—and to even more mundane
issues besides.
Falsifiability matters because supply chain is judged in profit,
not in elegance. A recommendation that cannot be shown false
when shelves stay empty or when inventory bloats is not operational
science—it is mathematical storytelling. Mathematical rigor is
indispensable, provided the premises invite empirical challenge.
⁵ The “anomalous” orbit of Mercury around the sun was instrumental in
rejecting the old Newtonian mechanics in favor of the relativity theory of
Einstein. In fact, there was no anomaly, merely an incorrect theory. Once
relativity was adopted, the anomaly vanished.
3.2.2 The incentives of academia
Before examining concrete illustrations of the oversized assump-
tions that cripple these models, we must ask why academia keeps
championing them. The answer lies in the incentives that shape
scholarly careers: journals reward novelty and formal elegance—not
operational impact—so research naturally gravitates toward puz-
zles that are publishable rather than practicable.
In The Future of Operational Research is Past (1979), Russell
Ackoff identifies a series of issues undermining his field. In contem-
porary terms, “operational research” is synonymous with “supply
chain”, and Ackoff’s critiques remain entirely relevant today. Those
problems include:
1. Solving made-up problems;
2. Taking the objective for granted;
3. Ignoring system properties;
4. Assuming the future is knowable;
5. Being inward-looking, and increasingly so;
6. Rationalism passing for reason.
More than four decades later, it may seem astonishing that academia
still overlooks those foundational issues, especially given Ackoff’s
significant reputation. Yet this behavior becomes less surprising
when one considers the incentives.
Academics are primarily assessed on two tightly coupled axes
of evaluation. First comes the volume of papers produced and
the impact those papers achieve inside the scholarly echo cham-
ber—impact here meaning little more than how often other papers
cite them. Whether the cited idea ever trims a procurement bill,
shortens a lead time, or prevents a stockout does not register in
this metric.
The second axis is teaching performance, which in practice re-
duces to the lecturer’s aptitude for transmitting that same citation-
oriented material to the next cohort of would-be scholars. This twin
yardstick creates a self-referential loop. Intellectuals are judged
solely by how well they persuade other intellectuals; classrooms are
staffed and syllabi written by those who have already mastered the
art of persuading peers. Empirical effectiveness—the capacity of an
idea to survive contact with forklifts, tariffs, or customers—never
enters the equation.
Consequently, mathematical problems fitting the supply-chain
theme offer an inexhaustible stream of publishable statements and
resolutions. Adding constraints, modifying demand equations,
altering network topology, or incorporating additional terms into
objective functions produces nearly endless variants. Each variant
serves as ideal material for an academic publication. Moreover,
the specific details from a business willing to share even modest
datasets can provide a unique—and thus novel—perspective on
the subject. This inevitably fosters the illusion that resolving an
isolated puzzle holds significance for the domain at large.
Thus, academia relentlessly treats supply chain as a branch
of mathematics because it is the most expedient way, in terms
of time invested, to get papers published. Furthermore, these
papers exhibit the fashionable hallmarks of “great science”—such
as equations, algorithms, and quantitative results. By categorizing
supply chain research as applied mathematics, authors avoid the
rigorous falsification challenges that natural sciences routinely
confront.
In classroom teaching, because promotion hinges on student
evaluations and pass rates, instructors have every incentive to
present material that mirrors what their colleagues will set in
examinations—again privileging continuity over confrontation with
reality.
Also, mathematics is unusually easy to teach—assuming the
speaker understands the material. Unlike in the natural sciences,
no dubious compromises are necessary to fit the presentation into
the allotted time. The speaker can simply tune the number and
complexity of puzzles to fit the clock.
Furthermore, when it comes to grading, mathematical puzzles
are maximally convenient. Their number and complexity can be
precisely adjusted to fit the exam context, enabling nearly flawless,
objective grading with minimal effort. Unfortunately, this results
in puzzles that bear little resemblance to the messy, ambiguous,
and complex realities of actual supply chains. Intellectuals teach
future intellectuals the material authored by past intellectuals,
with no external milestone to break the circle.
No matter how poorly this approach—treating supply chain as
applied mathematics—performs in practice, it still addresses the
enduring concerns of academic career seekers. Thus, as Ackoff illus-
trates, merely proving that this research approach is misguided is
insufficient; the underlying detrimental incentives within academia
must be addressed—a task entirely beyond the scope of this book.
Let’s clarify that there is nothing inherently wrong with using
the supply chain theme in mathematics—or in fields such as anal-
ysis, algorithms, and statistics—if a mathematical problem is best
illustrated by a supply chain metaphor. For instance, consider the
famous Seven Bridges of Königsberg problem, solved by Leonhard
Euler in 1736. The challenge was to devise a route through the
city that crossed each bridge exactly once. Euler proved that no
solution exists, introduced the concept of graphs, and thereby
paved the way for the field of topology.
Under these circumstances, a paper is worthy only if it possesses
intrinsic mathematical merit: Does it reveal a hidden truth or
structure? Is the exposition particularly elegant? Does the method
pave the way to solve other, seemingly unrelated, mathematical
problems? Otherwise, the work is deemed worthless. Simply
using a supply chain theme cannot confer merit on work that
fundamentally belongs to the realm of mathematics.
Any reader familiar with the literature will know that supply
chain papers meeting this standard are exceedingly rare.
3.2.3 The pitfalls of safety stocks
Demonstrating the shortcomings of the vast literature that treats
supply chain as mathematics is daunting; with roughly a million
papers to challenge, the task borders on insurmountable—a vivid
illustration of Brandolini’s law, which notes that refuting nonsense
usually demands an order of magnitude more effort than producing
it.
To keep the discussion manageable, we focus on one widely
accepted concept—safety stocks—to illustrate the general short-
comings of approaching supply chain as mathematics.
Safety stock is the buffer required to achieve a desired service
level for a given SKU (stock-keeping unit). Here, the service level
is defined as the frequency with which a stockout is avoided during
a given period. The model assumes demand during lead time is
normally distributed and lead time is constant. These assumptions
yield a closed-form formula for the required stock.
[Figure: probability curve of the lead demand, annotated “Working
stock = 5”, “90 % service”, “Safety stock = 1.81”, “10 % risk”.]

Figure 3.1: Under the safety stock model, the reorder point is set as the
sum of the working stock (mean demand during the lead time) plus the
safety stock (scaled deviation of the demand during the lead time).
Since the derivation is of historical interest only, it is not reproduced
here. Suffice it to say, the model yields an explicit formula that
can be computed in a spreadsheet with minimal effort from
both the programmer and the computer.
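For reference, the closed form in question—under the normal-demand,
constant-lead-time assumptions stated above—is, in generic textbook
notation:

\[
\text{reorder point}
\;=\;
\underbrace{\mu_d\,L}_{\text{working stock}}
\;+\;
\underbrace{z\,\sigma_d\sqrt{L}}_{\text{safety stock}},
\qquad
z = \Phi^{-1}(\text{service level}),
\]

where $\mu_d$ and $\sigma_d$ denote the mean and standard deviation of
demand per period, $L$ the constant lead time, and $\Phi^{-1}$ the inverse
of the standard normal distribution.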
Yet safety stocks are fundamentally flawed—“hazardous stocks”
would be apter—because the mischaracterization yields poor out-
comes. As a silver lining, safety stocks serve as a simple litmus
test to identify gross incompetence in the field of supply chain.
Safety stocks present three major issues, each of which is suffi-
cient to preclude satisfactory outcomes when applied in a real-world
supply chain. The first two are categorical errors, while the third
is technical.
The first—and most damaging—flaw is methodological isola-
tionism. Safety-stock formulas examine every SKU as if it were
the only claimant on the balance sheet, striving to compute an
“optimal” quantity for that item in a vacuum. Yet each extra unit
ties up cash that could have funded stock for another product
or been deployed to any more profitable use. Indeed, all SKUs
compete for the same finite pool of working capital, and any model
that ignores this rivalry is blind to the very trade-offs it is supposed
to orchestrate.
Outside micro-companies, no supply chain deals with a single
SKU; firms typically manage hundreds, if not thousands. Moreover,
replenishing one SKU is fundamentally different from managing
replenishment across many SKUs. The two problems are radically
distinct. Safety stocks fail even to acknowledge that SKUs cannot
be treated in isolation; their very definition sidesteps the problem
that must be solved: the allocation of money for inventory.
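To make the rivalry concrete, consider a deliberately simplified
sketch—a toy illustration, not a prescription of this book, with every
cost and margin below purely hypothetical. Three SKUs compete for one
purchasing budget, each successive unit yielding a smaller expected
margin; a greedy allocation by margin per dollar of capital shows that
whatever one SKU receives is exactly what the others forgo:

    # Toy illustration: SKUs competing for a single purchasing budget.
    # All costs and margins are hypothetical.
    import heapq

    unit_cost = {"A": 20.0, "B": 5.0, "C": 50.0}
    # Expected margin of the 1st, 2nd, 3rd, ... unit purchased, per SKU.
    marginal_margin = {
        "A": [12.0, 9.0, 4.0, 1.0],
        "B": [3.0, 2.5, 2.0, 1.5, 1.0],
        "C": [40.0, 15.0, 5.0],
    }
    budget = 120.0

    # Greedy allocation: repeatedly fund the single unit offering the best
    # expected margin per dollar of capital, across all SKUs, until the
    # shared budget is exhausted.
    heap = [(-margins[0] / unit_cost[sku], sku, 0)
            for sku, margins in marginal_margin.items()]
    heapq.heapify(heap)
    allocation = {sku: 0 for sku in unit_cost}

    while heap:
        neg_ratio, sku, idx = heapq.heappop(heap)
        if unit_cost[sku] > budget:
            continue  # this unit is no longer affordable
        budget -= unit_cost[sku]
        allocation[sku] += 1
        nxt = idx + 1
        if nxt < len(marginal_margin[sku]):
            heapq.heappush(
                heap, (-marginal_margin[sku][nxt] / unit_cost[sku], sku, nxt))

    print(allocation)
    # -> {'A': 2, 'B': 5, 'C': 1}: the shared budget, not any per-SKU
    #    target, determines the final mix; every unit funded displaced
    #    the next-best alternative use of the same cash.

Per-SKU service-level targets, computed in isolation, know nothing of
this competition.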
Second, even if we momentarily accept treating each SKU in
isolation, the safety-stock recipe remains hopelessly incomplete. It
hinges on picking a “target service level”—a percentage plucked
from thin air—and then treating this number as if it possessed a
built-in link to profit. The formula never explains why 95% should
outperform 92% or 97%; it simply delegates the most difficult
judgment call to managerial fiat and proceeds as if the matter were
settled.
Proponents sometimes reply that a higher service level will
“generally” boost profitability, but correlation is not causation.
Set the level too high and dead stock accumulates the moment
demand shifts; set it too low and frustrated customers defect to
competitors. Because the mathematics permits any target between
0% and 100%, the practitioner ends where they began: facing the
real economic question of how much capital to commit to which
items. The safety-stock apparatus does nothing to answer it.
The third—and final—flaw is statistical naïveté. Real-world
demand is almost never bell-shaped: it is sparse for long stretches,
then jumps without warning, and its extremes occur far more
often than a normal law would allow. Lead times show similar
unruliness, clustering around distinct modes—“on-time”, “customs
delay”, “supplier stockout”, and so on—rather than hugging a
single, symmetric peak. These patterns are the norm, not outliers,
and any model that ignores them starts from the wrong coordinates.
Textbooks and software cling to the normal curve because
it keeps the algebra tidy, not because it matches the data. In
principle one might swap in zero-inflated, heavy-tailed, or mixture
distributions to repair the theory, but doing so strips away its lone
virtue: simplicity. Once the arithmetic becomes opaque, the safety-
stock recipe loses its only practical appeal while still inheriting all
the earlier categorical mistakes. If a method is neither realistic nor
simple, nothing remains to recommend it.
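The mismatch is easy to verify numerically. The sketch below uses a
purely synthetic, intermittent demand stream (any resemblance to a real
SKU is accidental) and compares the 95% quantile implied by a fitted
normal distribution with the quantile actually observed in the data:

    # Sketch: intermittent demand vs. the normal approximation.
    # The demand stream below is synthetic and illustrative only.
    import random
    import statistics
    from statistics import NormalDist

    random.seed(42)
    # 80% of periods see zero demand; the rest see a burst of 1 to 20 units.
    demand = [0 if random.random() < 0.8 else random.randint(1, 20)
              for _ in range(10_000)]

    mu = statistics.mean(demand)
    sigma = statistics.stdev(demand)

    # Quantile at a 95% "service level": fitted normal vs. the data itself.
    normal_q95 = NormalDist(mu, sigma).inv_cdf(0.95)
    empirical_q95 = sorted(demand)[int(0.95 * len(demand))]

    print(f"fitted normal 95% quantile: {normal_q95:.1f}")
    print(f"empirical 95% quantile:     {empirical_q95}")
    # The two figures differ materially; the bell curve is simply the
    # wrong model for sparse, bursty demand.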
The vacuum left by mathematically sterile academia is, alas,
quickly filled. Into it step consultants and business-book authors
who swing the pendulum to the opposite extreme: abandoning
numbers for narratives. Where the academic fallacy treats supply
chain as a puzzle bereft of incentives, the consultant fallacy treats
it as a story bereft of quantification. Both camps ignore the
economic core, merely from different ends of the spectrum. The
next section examines how this sociological turn, though wrapped
in inspirational prose, proves no more reliable than the equations
it claims to supplement.
3.3 Sociology
If the academic path sterilizes supply chain into abstract puzzles,
the consulting path swings to the opposite extreme: it recasts
the discipline as a catalog of sociological observations and lead-
ership bromides. Business consultants have therefore dominated
supply-chain publishing for the last four decades, framing every
problem in terms of culture, teamwork, and “best practices”. Their
output is vast—well over 10,000 titles appear under “Supply Chain
Management”⁶. Dynamic Supply Chains (2015) by John Gattorna
is a canonical example of the genre—and of its epistemic weakness:
persuasive anecdotes replace testable evidence, leaving the reader
with stories rather than substantiated knowledge.

⁶ Amazon.com search performed in 2025; the overwhelming majority of
results are authored by consultants rather than academics.
This literature belongs squarely to the self-help genre. Its
intended readership is the corporate executive—or the manager
who hopes to become one—who believes that mastering a new buz-
zword will both lift company performance and accelerate personal
promotion. The consultant-author, for his part, adopts the posture
of a guru whose insight can be rented by the day.
Each book then unfolds along two predictable axes. The strat-
egy axis offers a handful of slogans—typically three to five—that
promise to explain how one should “think” about the supply chain;
the discipline is sliced into neat quadrants, waves, or maturity
levels. The organization axis converts those slogans into a checklist
of rituals: steering committees, cross-functional workshops, respon-
sibility assignment matrices, and the like, all presented as turnkey
pathways to execution.
To sustain momentum, the narrative is laced with borrowed
authority: sound bites from Fortune 500 executives, quotations
from Navy SEALs about resilience, and blow-by-blow retellings of
last-minute sports victories. Besides briefly enlivening the prose, a
sports anecdote—or an athlete’s quote—adds no evidential weight
to supply-chain arguments. Celebrity in athletics or entertainment
has no bearing on the allocation of pallets and purchase orders.
Such cameos create the illusion of battle-testing; in fact they deliver
color, not proof.
The genre relies on a familiar first trick: buzzword inflation.
Terms such as “agile”, “digital twin”, or “AI-driven” are repeated
until they sound inevitable, yet their definitions remain elas-
tic enough to fit any circumstance. Behind the shimmer, one
finds only evergreen truisms—“the world is changing, you must
adapt”—rebranded as revelations. Taxonomies that slice firms into
“leaders” and “laggards,” or “innovators” and “followers”, cloak
tautologies in flow-chart form and create the illusion of depth
where none exists.
The second trick is manufactured urgency. Authors spotlight a
handful of allegedly neglected levers—supplier intimacy, customer
centricity, sustainability, culture—and intone that immediate es-
calation to the C-suite is vital. This “act-now-or-perish” framing
is impossible to falsify in a boardroom setting. It deftly diverts
attention (and budget) toward the consultant’s favored initiative
without ever weighing the opportunity cost imposed on all the
other constraints a functioning supply chain must honor.
The final trick is borrowed authority. Case studies name-drop
admired brands, parade selective success metrics, and sprinkle
quotations from athletes or Special-Forces veterans. The implied
syllogism is specious: “Company X is famous; Company X endorses
this framework; therefore, the framework creates fame.” Correlation
is quietly traded for causation, and the failures that would break
the spell are screened out of view.
These rhetorical devices are not harmless entertainment; they
crowd out the quantitative reasoning that genuine improvement
requires. By wrapping platitudes in prestige and urgency, the
literature flatters executives into feeling decisive while leaving the
actual flow of goods untouched. Strip away the logos and the
motivational anecdotes, and nothing remains that can survive
analytical scrutiny.
In other words, the consulting canon treats supply-chain dys-
function as a human-relations problem whose resolution lies in
reshaping group behavior. Its favored variables are roles, rituals,
and narratives; its instruments are workshops, charters, and incen-
tive grids. This viewpoint is textbook organizational sociology: it
studies how collectives maintain cohesion, how status is negotiated,
and how norms are enforced, then prescribes ceremonies meant to
realign those social constructs. Pallets, containers, and lead times
appear only as props in a narrative whose real protagonists are
departments vying for influence. Whether tonnage actually flows
more smoothly is deemed an emergent by-product of better social
harmony. This literature is therefore sociological in method and
aim—even if it rarely advertises itself as such.
Even if this literature contained genuine insights—which it sel-
dom does—treating supply chain chiefly as a sociological endeavor
leads improvement efforts astray. Consider a thought experiment:
imagine a near-future platform in which a single autonomous artifi-
cial intelligence (AI) ingests every demand signal, lead-time update,
and cost curve, then issues—without human intervention—every
purchase order, price change, and routing instruction. In that
world there is nothing left to “align” or “empower”; the entire scaf-
folding of committees, workshops, and incentive grids evaporates
overnight. Yet the economic substance of the discipline endures:
scarce resources must still be allocated under variability. The
AI would lean on principles of optionality, opportunity cost, and
risk—core economics—not on any theory of inter-departmental
politics. The thought experiment clarifies the hierarchy: orga-
nizational constructs are temporary workarounds we erect only
because people, slower and scarcer than CPUs, must coordinate.
Automation is already eroding this sociological wrapper—pricing
bots, routing optimizers, and allocation engines now perform tasks
that once filled entire departments—just as a laptop replaced the
actuarial tables that formerly justified platoons of clerks. Until
full automation arrives, humans will stay in the loop. The com-
pass that guides them must remain economic and operational, not
sociological.
Granted, some supply chain challenges still resist software.
Once the necessary steps are explicit, it is acceptable to rely on
human intelligence as a stopgap until state-of-the-art methods can
automate them. Where workload exceeds one person’s capacity,
add people and organize them accordingly; even then, sociology
should be a last resort. Indeed, to insist that an intellectual task
“requires human intelligence” is to admit: “I have no clear idea
how to solve this problem, but if push comes to shove, someone
will figure something out.”
For more than two centuries, it has been a recurring mis-
take to treat human involvement as a virtue in itself. The Lud-
dites—British weavers and textile workers who opposed mecha-
nized looms and knitting frames in the early 19th century—are
the canonical example of this mistake. In supply chains, human
involvement in the daily micromanagement of millions of SKUs is
not inherently desirable. The absence of mechanization imposes
predictable limits—long processing delays, reduced reactivity, and
mediocre allocations—because operators cannot weigh all relevant
quantitative factors.
Taken together, these consultant-driven tricks explain why
glossy frameworks so often leave containers, prices, and purchase
orders exactly where they started. They turn resource allocation
into group therapy, mistaking harmony for profit. To see the
hard cost of confusing sociology with supply-chain mastery, we
now examine the flagship doctrine of this literature—Sales and
Operations Planning.
3.3.1 The pitfalls of S&OP
Sales and Operations Planning (S&OP) has been heralded for
four decades as the cure for functional silos, yet it epitomizes the
sociological fallacy just exposed. Its elaborate calendar of meet-
ings seeks consensus around a single—highly uncertain—forecast
and then asks every department to pledge allegiance. Renamed
variants such as integrated business planning (IBP) inherit the
same weakness: they treat pricing, purchasing, and capacity as a
corporate ritual rather than an economic optimization. Because
S&OP is both influential and symptomatic, it will serve as the lone
exhibit in the critique that follows, though the argument applies
equally to all its look-alikes.
Since the rise of the giant corporation in the 19th century,
specialization has supplied most of what we label “economies of
scale”. By narrowing their focus, teams deepen expertise, lift
productivity, and make outcomes more predictable. Yet every
new specialty opens another seam that must be stitched back to
the whole, and the stitching grows combinatorially harder as the
organizational tapestry expands.
When coordination lags, the misfires are dramatic. Marketing
can stoke demand that plants cannot meet, squandering advertising
budgets and disappointing customers. Manufacturing can flood
warehouses with units the market will not absorb, forcing fire-sale
discounts or outright write-offs once carrying costs eclipse any
plausible margin.
Sales and Operations Planning (S&OP) was devised in the early
1980s as a ritualized cease-fire between the factions just described.
Its promise was simple: convene every function around a single
calendar checkpoint, agree on one forward view of demand, and
cascade that view into synchronized supply, capacity, and financial
plans. In theory, the meeting guarantees that the people who
create demand (sales, marketing, merchandising) and the people
who fulfill it (procurement, manufacturing, logistics) leave the
room committed to the same numbers and the same constraints.
The framework gained momentum in the 2000s when ERP ven-
dors and consultancies began selling S&OP dashboards, maturity
models, and benchmarking surveys. Despite the glossy tooling, the
mechanics remain spartan: a rolling 18- to 24-month horizon, a
consensus forecast sometimes styled “one set of numbers”, and a
monthly sequence of pre-reads and workshops that massage those
numbers until every vice president can sign off.
In historical perspective, S&OP is less a breakthrough than a
modern sequel to Alfred Sloan’s executive production conferences
at General Motors in the 1920s. The vocabulary has changed, but
the intent is identical: impose an institutional routine that keeps
functional empires from drifting apart when variability threatens
to pry them loose.
Despite its longevity, S&OP collapses under close inspection.
Four structural defects undermine the practice and show why,
from an epistemic standpoint, it cannot be trusted to steer scarce
resources.
1. The forecast illusion. S&OP begins by assuming that a single,
time-series forecast of demand not only exists but is precise enough
to govern capacity, pricing, and inventory months in advance. That
premise treats an uncertain future as a known constant, turning the
hardest part of the problem into a spreadsheet input. By disguising
uncertainty as certainty, the ritual severs the feedback loop that
would otherwise test whether those numbers ever deserved belief.

2. The decision vacuum. Having conjured a consensus demand forecast,
S&OP then hands off every quantitative decision—procurement lots,
production schedules, routing plans—to whatever heuristics each
functional silo already uses. The framework contributes no economic
criterion for choosing one allocation over another; it merely
notarizes choices made elsewhere. Epistemically, this is bookkeeping
masquerading as optimization.

3. Ritual overreach. The monthly cadence of pre-meetings, alignment
workshops, and executive reviews is presented as the cure for
miscoordination, yet the cure scales with headcount, not with the
physics of the flow. Every extra SKU, lane, or promotion demands
another slide deck, not a sharper algorithm. The complexity is
accidental—created by meetings—rather than essential—created by
atoms and lead times.

4. Arbitrary taxonomies. S&OP partitions time into operational,
tactical, and strategic buckets and reduces reality to a two-axis
“demand vs. supply” matrix. These partitions are rhetorical
conveniences, not empirical discoveries; they can be rearranged at
will without changing economic outcomes. A theory whose parameters
are free to float cannot be falsified and therefore cannot
accumulate reliable knowledge.
Together these defects explain why companies that adopt S&OP
often return to homegrown spreadsheets: the practice offers pro-
cess theater without predictive power. A meeting can enforce
temporary consensus, but it cannot reveal which decision maxi-
mizes long-run profit under variability and adversarial incentives.
Until a framework exposes its assumptions to empirical challenge
and grounds its prescriptions in opportunity cost, it belongs to
organizational folklore, not to the science of supply chain.
We now turn to economic history to examine how similar
epistemic missteps have played out in the past—and what lessons
they hold for the present.
3.4 Economic history
Economic history catalogs what actually happened—the prices
paid, the goods moved, the crises endured. It is indispensable for
context, yet, as Ludwig von Mises stressed in Human Action (1949),
facts do not interpret themselves. Only a prior economic theory
can assign meaning by tracing cause, effect, and counterfactuals.
In short, economics must precede economic history; we first need
a logical framework before we can usefully retell the past.
The same ordering governs supply chain. Dashboards brimming
with lead-time histograms, service-level charts, and inventory turns
are nothing more than chronicles. Without a theory of opportunity
cost and optionality, an analyst cannot tell whether a 20% stock
reduction was brilliance or negligence. Data become knowledge
only when filtered through an economic lens; otherwise, the exercise
is bookkeeping masquerading as insight.
Yet in day-to-day practice, the distinction between chronicle
and explanation is routinely blurred. Analysts fill dashboards,
vendors publish benchmark reports, and consultants parade before-
and-after case studies—all of which recount what did happen while
remaining silent about what could have happened under different
choices. The record tells a story; it never diagnoses the mechanics
that generated that story.
Consider a retailer that advertises a 20% inventory reduction.
Absent a theory of opportunity cost, the headline is meaningless:
was working capital truly released with no loss of service, or were
shelf-outs simply postponed to a quarter yet to be reported? The
ledger contains no rows for the sales that never materialized, the
assortment expansions never attempted, or the price experiments
never run. History records only the path taken, never the adjacent
ones left unexplored.
Benchmarks magnify the problem. Ranking companies by
“days of supply” or “inventory turns” invites imitation yet ignores
strategic asymmetries such as assortment breadth, service promises,
or pricing power. Copying the metric without reproducing the
context is cargo-cult optimization: the ritual is preserved, the
economics are not.
The lesson echoes the critiques leveled at mathematics and
sociology: economic history is an auxiliary source of insight, not
a foundation. Facts collected in arrears are indispensable, but
they must be filtered through a forward-looking economic lens
that weighs counterfactuals and opportunity costs. Without that
lens, history becomes numerate storytelling—comforting to read,
powerless to steer the next container. Mastery therefore starts
with theory, tests that theory against experience, and only then
lets the ledger guide future allocations.
3.5 Auxiliary sciences
Auxiliary sciences are the disciplines that supply chain routinely
relies on for facts, proofs, or tools—statistics for uncertainty, com-
puter science for automation, empirical psychology for human bias,
aeronautical engineering for fuel efficiency, and so on. The phrase
comes from historiography, where since the sixteenth century fields
such as paleography or dendrochronology have been called auxil-
iary because they supply evidence without writing history. The
hierarchy is epistemic, not pejorative: supply chain is subordinate
to whatever durable truths those sciences establish—if statisticians
refine the meaning of a confidence interval or aircraft engineers
demonstrate a cheaper flight profile, the practitioner must adapt
at once. What matters is that the knowledge is reliable, not the
route by which it was obtained.
Subordination, however, does not imply fusion. Supply chain is
not a multidisciplinary stew of mathematics, software engineering,
psychology, and logistics. It remains a distinct applied branch of
economics whose sole mandate is to allocate scarce resources under
variability. Auxiliary sciences are consulted only insofar as their
findings help to fulfill that mandate; their internal debates, jargon,
and prestige contests can be safely ignored. A practitioner needs
just enough familiarity to choose the right tool and to notice when
it fails—no more, no less.
The required familiarity depends on packaging. When an
auxiliary result is embedded in a robust artifact—a barcode scanner,
an optimization engine, an integrated circuit—the supply-chain
expert can treat it as a black box and concentrate on the economic
trade-offs. When the packaging is thin—as is still the case with
most behavioral-economics research—the expert must open the
box, grasp the limits, and compensate manually.
Historians offer the analogy. Dendrochronology can date a
shield to the summer of 1519 and thereby refute any claim that
it was wielded at the 1515 battle of Marignano, yet no historian
confuses the study of tree rings with the writing of history. Supply
chain should cultivate the same discipline: exploit every auxiliary
science ruthlessly, but never surrender the steering wheel.
The subordination principle is clear when microbiology sets
non-negotiable boundaries. Laboratory work shows that most
food-borne pathogens enter exponential growth once product tem-
perature rises above 5 °C for more than a few hours. Cold-chain
guidelines therefore mandate that chilled meat leave the plant below
4 °C and never exceed 7 °C before it reaches the customer. Whether
planners can recite the growth curves of Listeria is irrelevant; once
microbiologists fix these thresholds, the supply-chain expert must
decide whether to pay for reefer containers, run denser delivery
routes, or shorten the distribution radius—but the temperature
constraint itself is not up for debate.
Materials science offers the same lesson. Compression tests re-
veal that a standard 7 mm double-wall corrugated carton collapses
once the top load exceeds roughly 8 kN. A distribution center
that stacks such cartons six pallets high has therefore capped its
per-carton weight at about 18 kg—unless it upgrades to triple-wall
packaging, lowers the stack height, or invests in racking. The
auxiliary science supplies the boundary conditions; supply chain
weighs the economic trade-offs that live inside those boundaries.
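The implied arithmetic can be spelled out in a short sketch. The 8 kN collapse load comes from the figures above; the count of cartons pressing on the bottom carton is a hypothetical stacking pattern, chosen only to illustrate how the roughly 18 kg cap falls out of the numbers.

    # Minimal sketch: per-carton weight cap implied by a carton's collapse load.
    # The 8 kN figure comes from the text; the 45 cartons assumed to press on the
    # bottom carton of a six-pallet stack are a hypothetical stacking pattern.
    G = 9.81  # standard gravity, m/s^2

    def max_carton_weight_kg(collapse_load_n: float, cartons_above: int) -> float:
        """Heaviest admissible carton so the bottom one stays below its collapse load."""
        return collapse_load_n / G / cartons_above

    print(round(max_carton_weight_kg(8_000, 45), 1))  # roughly 18 kg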
Supply chain defers to its auxiliary sciences because those fields
possess faster and sharper ways to separate truth from error. In
physics or chemistry a benchtop experiment, run before lunch,
can annihilate a bad hypothesis at negligible cost; results are
quick, repeatable, and decisive. Supply-chain claims, like those in
economics, can be vetted only in the messy world of plants, trucks,
and stores. Field trials stretch over months, burn real money, and
rarely deliver a crisp yes-or-no verdict. Because its feedback loop
is slower and noisier, supply chain must borrow the rigor of its
auxiliaries while accepting that its own knowledge will accrue more
gradually.
Nonetheless, supply chain does not inherit the objectives of
its auxiliary sciences; it merely buys their findings as raw inputs.
Aircraft engineers may prove that inserting a mid-route refuel-
ing stop trims fuel burn by eight percent on an ultra-long-haul
flight, yet the practitioner must still weigh that saving against
longer transit time, extra landing fees, crew overtime, and possible
customer churn. The criterion is economic payoff, not technical
elegance: every resource—jet fuel, hours, capital, goodwill—carries
an opportunity cost that must be priced before a decision is made.
Auxiliary sciences deliver facts; supply chain assigns value.
Supply chain may appear multidisciplinary because its daily
practice borrows statistics, computer science, mechanical engineer-
ing, and even behavioral psychology. This impression confuses
tools with objectives. As a science, supply chain is a branch of
applied economics whose single mandate is to decide how scarce
resources should flow through space and time. Auxiliary sciences
merely furnish measurements, algorithms, or machines that serve
that mandate; they do not redefine it. By contrast, the natural sci-
ences bleed into one another precisely because atoms know nothing
of the departmental walls that separate physics from chemistry or
biology—their borders are pedagogical conveniences. Supply chain
is therefore not multidisciplinary in the epistemic sense—only in
the practical sense that it consumes many imported results.
Whether a practitioner must dive into an auxiliary field depends
on two levers. Economic leverage measures how much profit hinges
on the ancillary knowledge; packaging quality measures how much of
that knowledge comes embedded in reliable artifacts. The higher
the leverage and the poorer the packaging, the more personal
mastery is required.
Economic leverage is the profit sensitivity tied to an auxiliary
science. If tweaking a clean-room’s airflow rate saves only a token
amount per batch, leverage is low and the setting can remain a
fixed procedure. By contrast, shortening the residence time of a
continuous chemical reactor by just five minutes can lift hourly
throughput by six percent, yet it also depresses conversion yield and
shifts the slate of co-products: the primary grade sells at a premium,
while the secondary and tertiary streams trade at a discount or
must be reprocessed. The extra volume, the downgraded yield,
and the altered co-product mix ripple through inventories, contract
allocations, and price tiers. Because the trade-off is large and the
control logic hides inside dense process diagrams, planners must
understand the reaction kinetics well enough to weigh capacity
against margin before authorizing the change. Deep expertise is
therefore required only where high leverage meets thin packaging;
elsewhere the auxiliary field may safely remain a sealed module.
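A minimal sketch of that capacity-versus-margin comparison follows. Only the six-percent throughput gain is taken from the example above; the feed rate, yields, prices, and feed cost are hypothetical placeholders.

    # Minimal sketch of the capacity-versus-margin trade-off described above.
    # Only the +6% throughput figure comes from the text; every other number is
    # a hypothetical placeholder.
    def hourly_margin(feed_t_per_h, yields, prices_per_t, feed_cost_per_t):
        """Margin per hour: revenue from each co-product stream minus feed cost."""
        revenue = sum(feed_t_per_h * y * prices_per_t[grade] for grade, y in yields.items())
        return revenue - feed_t_per_h * feed_cost_per_t

    prices = {"primary": 900.0, "secondary": 500.0, "tertiary": 200.0}  # $/t, assumed

    # Baseline: longer residence time, better conversion toward the premium grade.
    base = hourly_margin(10.0, {"primary": 0.80, "secondary": 0.15, "tertiary": 0.05},
                         prices, feed_cost_per_t=450.0)
    # Shortened residence time: +6% throughput, yields shifted toward lower grades.
    fast = hourly_margin(10.6, {"primary": 0.74, "secondary": 0.19, "tertiary": 0.07},
                         prices, feed_cost_per_t=450.0)

    print(round(base, 1), round(fast, 1))  # the higher hourly margin wins

Under these made-up numbers the faster setting actually loses margin despite the extra volume, which is precisely the comparison the planner must be able to run before authorizing the change.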
Conversely, integrated circuits illustrate superb packaging.
They embody a century of quantum mechanics behind a five-
volt interface; planners exploit the computational power without
ever studying band theory. Jet engines hide thermodynamics and
materials science just as neatly; a fuel-burn table is all a network
optimizer needs. At the opposite end lies empirical psychology.
Findings such as anchoring, loss aversion, and hyperbolic discount-
ing reach managers mostly as prose. Because the packaging is
thin while the leverage—pricing, promotion, negotiation—is high,
supply-chain leaders must grasp the concepts themselves or risk
repeating the biases the field documents.
A practical rule of thumb follows: deep expertise is needed only
where high leverage meets thin packaging. When one of those levers
is absent, the auxiliary science can stay a sealed module.
In short, auxiliary sciences push the frontiers while supply-
chain practice decides when those frontiers matter. Whenever
statistics yields a tighter interval, materials science a lighter pallet,
or computer science a faster solver, the economic leverage of the
corresponding decision shifts—and so must managerial attention.
Conversely, once a breakthrough is sealed inside a reliable artifact,
it can be treated as infrastructure and mentally offloaded. The
craft is to keep a live map of which pieces still demand scrutiny
and which have crossed the “black-box” threshold. Neglect that
map and a vacuum appears, soon filled by exaggerated vendor
pitches and consultant folklore—a pattern examined next under
the heading of epistemic corruption.
3.6 Epistemic corruption
When a knowledge system importantly loses integrity, ceas-
ing to provide the kinds of trusted knowledge expected of
it, we can label this epistemic corruption. Epistemic cor-
ruption often occurs because the system has been co-opted
for interests at odds with some of the central goals thought
to lie behind it.

Epistemic Corruption, the Pharmaceutical Industry, and the Body of Medical Science (2021), Sergio Sismondo.
Much of what passes for supply chain knowledge today is
corrupt—it fails to improve supply chains even as it claims to do
so. Yet no grand conspiracies, evil overlords, or bribes are at work.
Authors who produce severely biased supply chain material are
rarely dishonest—more often than not, they are unaware of the
problem. The bulk of the epistemic corruption plaguing supply
chain stems from structural perverse incentives. The remainder
of this chapter addresses enduring misconceptions whose damage
extends far beyond supply chain.
As an applied branch of economics, supply chain knowledge
promises wealth—making companies richer, sometimes wealthy.
Every supply chain practitioner or in-house team is driven to
develop superior knowledge to outcompete rivals. Consequently,
they have little incentive to publicize their discoveries; if they
uncover a counterintuitive yet effective heuristic for inventory
optimization, companies prefer to protect it as a trade secret.
The most vocal purveyors of supply chain “truths” are those
who monetize advice—academics, consultants, and software ven-
dors—each operating under incentives that skew what they choose
to publish.
First, academia: funded by private or public sources, professors
vie for students and recognition through publications—a classic
case of “publish or perish”. Academic authors seldom have the
opportunity to verify their supply chain claims in practice; even
when they engage with companies, their careers remain largely
insulated from real-world outcomes. In fact, the tendency to view
supply chain as a collection of mathematical puzzles underscores
the influence of structural incentives.
Imagining and solving mathematical puzzles requires only limited
ingenuity, especially when both the new puzzle and its resolution
are mere variants of known instances. Since the validity of the
resolution is purely a matter of internal consistency (as in any
mathematical proof), the author is relieved of the burden of seeking
real-world validation. Using real-world data in synthetic bench-
marks is little more than a smokescreen; it still does not show what
outcomes deployment would have produced.
Second, consultants, whose clients are corporate managers. Un-
like academics, consultants face the real-world consequences of
their theories, and this liability often becomes an asset. When a
manager doubts a project, a consultant is brought in as a buffer:
success earns the manager credit, while failure falls on the consul-
tant. Aware of this dynamic, managers rarely dismiss consultants
with mediocre records. More broadly, consultants typically avoid
offering firm recommendations, as their role is fundamentally to
never say “no” to a client.
Third, software vendors, including the present author. Engi-
neering enterprise software is a slow and messy process. The final
product is usually opaque and convoluted—were it otherwise, it
would be marketed as a self-service product for individuals and
small businesses. The clients of software vendors are also managers;
however, the managers who select the vendors are typically not the
ones who use the product. Managers often consult subordinates
to make a more informed decision, and it is here that adversarial
incentives enter. Most software products are intended, at least in
part, to improve productivity, which often involves reducing head-
count. Relying on subordinate opinions systematically favors the
vendor that best preserves the status quo (and thus headcount).
Since managers often lack the time to acquire the expertise to
assess a product, they defer much of the evaluation to third-party
authorities.
Because of this dynamic, the case study has become the canon-
ical promotional instrument in enterprise software. Since so much
of the field’s “evidence” now flows through this channel, it warrants
close examination in the next section.
3.6.1 Case studies
Of the channels through which these adversarial incentives distort
evidence, one dominates: the corporate case study. Case studies are
elaborate infomercials intended for a corporate audience—nothing
more, nothing less. Yet many companies operate under the dan-
gerous delusion that case studies are truthful, especially when the
cited client is reachable; the reality is otherwise. At best, a case
study clarifies what the vendor is trying to sell; it cannot assess
how much the quoted client benefited, let alone what the vendor
would deliver to another company. Let us dispel this delusion.
A “software” case study suffers from two intrinsic, indepen-
dently fatal flaws: perverse incentives and selection bias.
There is no doubt the vendor is heavily incentivized to embel-
lish—magnifying benefits and suppressing issues that undermine
returns. The client’s voice is meant to provide a reality check; in
practice, the client is just as incentivized as the vendor to embel-
lish. Indeed, an enterprise sale requires one or more executives to
champion the option. Those champions are effectively vouching
for the vendor⁷. Moreover, their claims will be backed by highly
convincing metrics. Supply chains are complex enough to guaran-
tee that, no matter how dysfunctional the initiative, a shortlist of
favorable metrics can be identified. If needed, metrics can even
be quite blatantly forged. No one except those with intimate knowledge of the initiative, and certainly no outsider, will be in a position to contradict the case study's claims. Corporate case studies are,
by design, not reproducible.
In the presence of such an egregious conflict of interest magnified
by a complete lack of accountability, bias is inevitable. Even
individuals of sound character, known for their honesty in daily
affairs, can succumb to distorted incentives when reputations or
livelihoods are on the line. This is not to impugn their moral
fiber, but rather to recognize the powerful sway that professional
pressures, peer perception, and personal ambition can exert on
otherwise admirable people. When a project’s success translates
into career advancement, speaking invitations, or intra-company
prestige, even the most ethical may find it nearly impossible to
be neutral in their assessments. The conflict of interest is simply
baked in.
For those reasons, calling the client to “verify” a case study is
entirely ineffective. Barring the rare case of outright fabrication, the
managers involved will gladly confirm everything that was written—
exactly as their incentives predict. Worse, the “human touch” lends
undue credibility to the claims, simply because most corporate
managers make a favorable impression on whoever reaches out to
them⁸.

⁷ Two millennia ago, Caesar employed this exact approach with his Commentarii de Bello Gallico (Gallic War), going as far as claiming the Romans suffered no deaths in very large battles; a feat considered radically implausible by historians.
Even granting, implausibly, that the client is truthful, the case
study—as a method—is still terminally flawed by selection bias.
Vendors serve many clients, and case studies are not picked at
random. The vendor naturally picks the most favorable cases. This
vendor-driven selection guarantees massive bias.
Indeed, a vendor may be incompetent—most initiatives quietly
folded as failures—yet still display successes. All it takes to achieve
the optics of success in enterprise software is to take over a mess
left by another incompetent vendor.
Incompetent vendors abound, and companies that pick them
demonstrate a lack of judgment. It is therefore unsurprising that
a company that once picked an incompetent vendor ends up pick-
ing another similarly incompetent vendor once the first failure is
discreetly acknowledged. Yet, for this second round, a few lessons
will have been learned, and some severe mistakes will be avoided.
Thus the second vendor, despite being just as incompetent as the
first, will most likely deliver something comparatively better than
the first. Even the crippled can clear the bar if it is set sufficiently
low. Out of this relative success, a case study glorifying the second
vendor’s competence will follow.
It is futile to lament the biases of authors in the field of supply
chain, just as it is futile to hope that one day an “unbiased” author
will appear. There is no such thing as a neutral observer; putting
any supply chain theory to the test requires the cooperation of
a profit-driven company. Remaining neutral is tantamount to
remaining incompetent. There is no solution to be found in the
ivory tower of pure research. Adversarial incentives, and the biases
they produce, should be considered an intrinsic feature of supply
chain.
This means the methods used to produce supply chain knowl-
edge should be assessed for their capacity to mitigate, rather than
amplify, those biases. Ideally, those methods should be designed and directed to this very purpose.

⁸ Projecting a great deal of confidence, competence, and sympathy is second nature for those who rise through the ranks of large corporations. They are essentially prerequisites for corporate careers.
3.7 Negative knowledge
Having surveyed how supply chain “knowledge” goes wrong, we
need a practical countermeasure. The most reliable first step is
negative knowledge: a curated register of what predictably fails,
kept close at hand and enforced as a veto against attractive but
ruinous schemes. The brief that follows explains why this register
matters and how to use it.
If positive knowledge steers the mind in the right directions,
negative knowledge steers it away from the wrong ones. Nega-
tive knowledge is largely absent from modern education systems.
Teaching is seen as an exercise in imparting “valid” knowledge
to students. Even within higher education circles, lectures where
professors venture into topics such as “widespread delusions” or
“what sounds good but doesn’t work” are few and far between.
As a result, negative knowledge is largely absent from the supply
chain literature, and supply chain is unremarkable in this regard.
Supply chain is, however, remarkably suited for negative knowl-
edge. Failed supply chain initiatives are plentiful, yet too often
brushed aside when they should be treated as a precious source of
negative knowledge. Any company that has run a sizable supply
chain since the 1980s and still operates on spreadsheets rather than
genuinely automated systems has almost certainly endured half
a dozen such fiascos. The irony is as old as the late 1970s, when
software vendors first pledged complete automation, raising expec-
tations they have yet to fulfill. Instead of letting these mishaps
fade into memory, a company should examine each one in detail,
producing post-mortems that upper management safeguards like
crown jewels. By dissecting what went wrong—whether it was
an inflated sales pitch, a muddled chain of command, or a stub-
born legacy process—the organization learns to spot early signs of
trouble and avoid repeating the same pattern a few years later.
A firm might, for example, embark on a grand plan to imple-
ment a fully automated inventory system across dozens of ware-
houses. The initiative can fail for any number of reasons: newly
introduced software contradicting decades-old procedures, conflict-
ing demands from different departments, or a vendor’s technology
proving inadequate once the shiny demos give way to real-world
conditions. Without a proper post-mortem, the firm risks sleep-
walking into another, eerily similar venture three years later. A
carefully documented failure, however painful to revisit, strength-
ens the company by exposing the pitfalls that slick promises or
sloppy leadership create. Those who do not learn from their failed
initiatives are doomed to bankroll the same mistakes again—and
in supply chain, the costs only compound as the years go by.
When a supply chain fiasco comes to light, it deserves special
attention and greater credence precisely because no one has an
incentive to expose it. The parties involved almost always keep
their failures hidden, so when a calamity becomes public, it is
usually too big or too costly to be swept under the rug. This
negative knowledge should be safeguarded and studied, as details
of a failure do not spread easily and are nearly impossible to find
in books or academic papers. A manager hoping to avoid the same
expensive lesson should seize these rare disclosures and examine
them carefully, for each one reveals not just a bad outcome but
the bad idea that led to it.
Negative knowledge endures because its foundations do not rest
on passing quirks of the market. When a method or a paradigm
is defective at its core, no change in circumstances will ever fix
it. Supply chain abounds in such misfires. Stack ranking⁹, for one, might seem straightforward; but in a field that demands close cooperation among teams, it corrodes the very trust those teams need to function. Likewise, imposing a normal distribution to model demand may look neat on paper, but supply chain data rarely follow a tidy bell curve. The demand shocks and rare events that shape real-world operations are exactly what this model overlooks. These flaws are not products of the moment, and it is wholly unreasonable to imagine that, with enough tweaking, stack ranking or the normal distribution could suddenly become suitable. Their failings are built in, and supply chain leaders waste precious time and resources whenever they reach for these tools in the hope of a different result.

⁹ Stack ranking, also called forced ranking or “rank-and-yank”, is a performance review method where employees are placed on a bell curve—from top performers at one end to low performers at the other—compelling managers to “reward the best” and “punish the worst,” regardless of the absolute performance level of each individual.
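Returning to the bell-curve objection: a minimal sketch on a synthetic, purely illustrative demand series shows how a normal fit understates the probability of the large spikes that actually drive stockouts.

    # Minimal sketch of why a normal fit understates tail risk on spiky demand.
    # The demand series is synthetic and purely illustrative.
    import random, statistics
    from math import erf, sqrt

    random.seed(42)
    # Hypothetical daily demand: quiet days near 20 units, rare promotional spikes near 200.
    demand = [random.gauss(20, 5) if random.random() > 0.03 else random.gauss(200, 30)
              for _ in range(2000)]

    mu, sigma = statistics.mean(demand), statistics.pstdev(demand)
    threshold = 150  # a demand level only the spikes ever reach

    normal_tail = 1 - 0.5 * (1 + erf((threshold - mu) / (sigma * sqrt(2))))
    observed_tail = sum(d > threshold for d in demand) / len(demand)

    # The fitted normal typically reports a tail probability orders of magnitude
    # below the frequency actually observed in the series.
    print(f"normal model : P(demand > {threshold}) = {normal_tail:.5f}")
    print(f"observed     : P(demand > {threshold}) = {observed_tail:.5f}")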
Negative knowledge is often far easier to grasp than its positive
counterpart. One can spend days poring over how a columnar
database functions, but it is much simpler—and more practical—
to remember that such databases are unfit for systems of record.
Any firm that runs a highly transactional workflow through a
columnar architecture quickly discovers that the workload runs painfully slowly and burns through computing resources, driving up hosting expenses.
A company might endure this lesson only once before scrapping the
entire setup, never repeating the misstep. In that sense, negative
knowledge spares a manager from mastering every nuance of why
a technology fails; it is enough to know it will fail, and to steer
clear of that dead end altogether.
Conversely, negative knowledge is particularly suitable for sup-
ply chain because gathering trustworthy positive knowledge is so
difficult.
Experiments in supply chain are deceptively difficult to conduct.
No large company wants to gamble with the performance of a vast
network whose failures can ripple through inventory, transportation,
and customer satisfaction all at once. Even a seemingly modest
trial can disrupt entire segments of the operation yet offer only
a fleeting sense of whether it truly helped or merely shifted the
burden onto another part of the chain. A retailer that experiments
with smaller, more frequent deliveries might reduce local stockouts
for a handful of stores, only to find that shipping costs surge
and cause complications for distribution centers already juggling
complex schedules. This is the recurring curse of supply chain:
problems often get nudged around rather than eliminated.
Moreover, any experiment in supply chain, like an experiment
in economics, is notoriously hard to replicate. No two moments in
time offer identical conditions, and market pressures, organizational
constraints, and the human factor can never be reproduced in
full. What worked last year under a certain demand pattern
might fail spectacularly when consumer tastes shift, or a new
competitor enters the market. Confounding factors always lurk
in the background, obscuring cause and effect. When a company
declares an experiment “successful,” it is usually interpreting local
data through a particular lens—often unaware that broader realities
have already changed. Instead of guiding us toward conclusive
truths, such trials mostly remind us how fragile and slippery those
truths can be.
So-called “best practices” cannot be taken at face value. Profes-
sors, consultants, and software vendors each push their own agenda,
and none are aligned with what a company genuinely needs. Profes-
sors focus on theories that lend themselves to research papers and
classroom grading, neither of which has much bearing on whether
those theories deliver real-world results. Consultants, for their part,
clamor for originality so they can stand out among their peers,
yet novelty alone proves nothing. By spotlighting fringe issues
and framing them as urgent, they divert top management from far
more pressing concerns. Meanwhile, software vendors relentlessly
expand their feature sets to tick every box in an RFP, yet seldom
acknowledge how these added layers push maintainability to the
brink.
Even when a bright idea does find practical footing, it can
lose its edge the moment too many adopt it. A so-called best
practice, once widely diffused across an industry, no longer confers
an advantage. Rivals adapt, eroding whatever gains the practice
once yielded. This effect is not peculiar to supply chain, but it
looms especially large here because so many competing players—
vendors, consultants, even academic circles—are motivated to
brand a new technique as a universal solution. Real progress lies
elsewhere, for there is no single method so robust that it survives
unchanged under the glare of mass adoption. What counts as a
best practice depends on rivals not doing it, and the moment they
catch up, its value vanishes.
The map of epistemic pitfalls is now complete: fashionable
mathematics, story-driven sociology, corrupted evidence pipelines,
and the perverse incentives that keep all three alive. Escaping this
maze requires a reference frame that does not shift when fashions
do, and a guardrail that keeps us from steering back into the same
ditches. Economics supplies the frame; negative knowledge supplies
the guardrail. Only by translating every supply-chain claim into
opportunity cost, profit, and loss—while actively rejecting patterns
that predictably destroy value—can we compare alternatives on
neutral ground and expose wishful thinking for what it is. With
these standards in place, the next chapter leaves epistemology for
first principles, revisiting the elementary laws of economics and
showing how they anchor every subsequent decision rule in this
book.
Chapter 4
Economics
Supply chain is an applied branch of economics, itself a branch
of praxeology, the theory of human action. Supply chains emerge
from people acting in concert at scale to improve their material
conditions.
In Chapter 1, we defined supply chain as the mastery of op-
tionality under variability. Optionality reframes the “alternative
uses” of general economics through a corporate lens: every deci-
sion allocates a scarce resource to one option while excluding the
rest. Economics is not a distant backdrop—it is the grammar of
supply-chain practice.
Economics unfortunately remains a science that is largely disregarded, when not ignored outright. Across legacy and social media, many
self-described “economists” speak not as scientists but as political
ideologues¹.
Conflicts of interest worsen matters: many economists draw
their salaries, directly or indirectly, from the very states whose
policies they comment on. Whether consciously or not, such
funding narrows the range of views that can be voiced without
professional risk². When ideology masquerades as neutral analysis, the public's distrust only deepens.

¹ As already observed by Thomas Sowell in Knowledge and Decisions (1980) and Intellectuals and Society (2010), as well as by Ludwig von Mises in Human Action (1949), political commentary from economists frequently stems from ideology rather than genuine economic analysis. The author is hardly the first to point out this phenomenon.

² One of the most basic aspects of managing a conflict of interest is that the person subject to the conflicting position doesn't get to decide whether his views are biased or not. The conflict of interest is an a priori cause to assume severe bias. Moreover, considering the considerable role that 21st-century states play in the economy at large, proposing that an economist funded by the state apparatus isn't massively conflicted is quite an extravagant proposition.
4.1 Essential economic principles
For readers unfamiliar with economics, we recommend Basic Eco-
nomics (2000) by Thomas Sowell. While not a prerequisite, its
insights are foundational to the study and practice of supply chain.
Like Sowell’s other work, it is exceptionally clear and accessible
to a non-technical audience. Above all, this book clarifies that
economics is strictly neutral with regard to politics and other ide-
ologies. The general laws of economics cannot be evaded: they
apply equally no matter how society organizes itself; whether it be
capitalism, socialism or feudalism. Laws such as supply and de-
mand, diminishing returns, and Ricardian comparative advantage
are fundamental to supply chain.
These fundamental laws govern every exchange that a supply
chain orchestrates.
Law of supply and demand. In a free market, the price of
a good rises when demand outstrips supply and falls when supply
exceeds demand. The price signal is not an arbitrary convention; it
is the device that rations scarcity. When inbound ocean containers
were in short supply during the 2021 post-lockdown rebound,
freight rates quadrupled. Shippers that valued the containers the
most accepted the premium and shipped on time; lower-value loads
waited until capacity returned. The mechanism can seem brutal,
yet it silently assigns a scarce resource to its most valuable use
while encouraging carriers to invest in extra capacity.
Law of diminishing returns. If one factor of production is
increased while all others are held constant, the additional output
generated by each extra unit eventually declines. Double the
number of pickers in a fulfillment center and parcel throughput
nearly doubles—until congestion at the packing stations begins to
bind. The same curve appears when adding buffer stock: the first
pallet slashes the risk of a stockout, the tenth pallet offers only
a sliver of extra protection. Locating the knee of this curve is a
central engineering problem of supply chain.
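Here is a minimal sketch of that knee, assuming, purely for illustration, Poisson demand over the replenishment cycle and buffer pallets of ten units each: every additional pallet shaves less stockout risk than the previous one.

    # Minimal sketch of diminishing returns on buffer stock, assuming Poisson demand.
    # All figures (mean demand of 100 units per cycle, pallets of 10 units) are
    # hypothetical; the shape of the curve is the point.
    from math import exp

    def stockout_probability(stock_units: int, mean_demand: float) -> float:
        """P(Poisson demand > stock), via the cumulative sum of the pmf."""
        pmf, cdf = exp(-mean_demand), 0.0
        for k in range(stock_units + 1):
            cdf += pmf
            pmf *= mean_demand / (k + 1)
        return 1.0 - cdf

    for pallets in range(8):
        risk = stockout_probability(100 + pallets * 10, mean_demand=100.0)
        print(f"{pallets} buffer pallet(s): stockout risk {risk:.4f}")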
Ricardian law of comparative advantage. Even when
one party is absolutely more efficient at producing every good,
both parties gain from trade when each specializes in what it does
relatively best. A Swiss micromachining workshop that makes
high-precision gears and an Indonesian plant that weaves cotton
shirts both prosper by exchanging output, despite wage and skill
gaps. Supply chains furnish the infrastructure—shipping lanes,
warehouses, information systems, and contracts—that turns com-
parative advantage into tangible surplus.
These three laws are illustrative, not exhaustive. Plenty of fur-
ther concepts—opportunity cost, time preference, entrepreneurial
profit and loss—enter the picture, yet this trio is enough to show
why every stocking, routing, or pricing choice ultimately reduces
to the same question: how best to direct scarce resources toward
their highest-valued use under uncertainty.
4.2 Markets and supply chains
The remarkable improvement in humanity’s material conditions
over the last three centuries can largely be attributed to the free-
market economy, which has financed the development of nearly
all the sciences and technologies we now enjoy. In developed
countries—now covering an ever larger share of humanity—real
prices of food and manufactured goods are a fraction of their levels
a century ago.
The mastery of supply chain is one of the many innovations
driven by free markets to deliver more with fewer resources. While
supply chains rely on complementary technologies, their presence
typically signals mass accessibility; without them, goods can still
be produced, but at costs affordable only to a privileged few.
The superior material conditions in developed countries largely
result from efficient supply chain operations. While advances in sci-
ence and technology allow us to achieve extraordinary feats—such
as sustaining life in orbit—there is a vast difference between isolated
technological stunts and the packaged, reliable miracles accessible
to the masses. This gap is largely due to robust supply chains,
which serve as the backbone of our industrial civilization.
A revealing counterexample comes from the Soviet Union. Un-
der central planning, almost every large manufacturer was forced
into extreme vertical integration because reliable partners simply
did not exist. A refrigerator plant in Leningrad ran its own timber
yard for crates, its own glass foundry for shelves, and even a dairy
farm to feed the company canteen. What looked on paper like envi-
able self-sufficiency was, in fact, the symptom of an economy where
trust, specialization, and price signals had been extinguished.
Lacking markets that rewarded punctuality and quality, “part-
ners” were merely other ministries obliged to deliver whatever they
could, whenever they could. Factories therefore hedged by dupli-
cating upstream capacity in-house and by hoarding months—or
sometimes years—of input stock. Finished goods fared no bet-
ter: they often waited weeks for railcars, themselves hostage to
the chronic wagon shortages of Gosplan’s schedules. Lead times
that would have bankrupted a Western firm were accepted as
routine, and the waste showed in every metric. Ton-kilometers of
freight per unit of output were several times higher than in market
economies, while specific energy consumption for steel, textiles, or
bread dwarfed Western benchmarks.
This episode matters for present-day managers: supply-chain
performance is not chiefly a matter of clever heuristics or larger
warehouses. It is first and foremost a matter of incentives coupled
with adequate economic calculation. When prices move freely and
contracts are enforced, specialization flourishes and long, delicate
networks become not only feasible but vastly more efficient than
monolithic giants. Remove those incentives or that calculation, and
the networks collapse inward until every plant becomes a fortress
hoarding its own raw materials. Complexity does not disappear; it
merely resurfaces as waste.
Before the 1950s, air travel was an extravagant luxury reserved
for the military and the wealthy. A series of innovations then
made flying safe and affordable, rendering it commonplace. A
key innovation was podded engines—aircraft engines mounted in external pods rather than buried inside the wing or fuselage—a concept showcased by the Sud Aviation Caravelle in 1955 with its rear-mounted pods, and by the Boeing 707 in 1957 with under-wing pods. These pods decoupled engine maintenance from that of the airframe, substantially lowering operating costs and helping to democratize air travel.
Currently, the space industry has yet to benefit from modern
supply chain practices, mainly due to technological limitations.
A handful of wealthy entrepreneurs have bought flights to orbit,
though the prices are still prohibitive for most. After decades
of stagnation, SpaceX is advancing technologies—most notably
reusable rockets—that could eventually support a space supply
chain. Once these technologies are mastered, established supply
chain principles will lower costs substantially.
A historical example of this dynamic is the rapid adoption
of containerization in maritime shipping during the 1950s and
1960s. The standardized intermodal container offered a break-
through that drastically reduced handling costs and transit times.
Once this technology was established—requiring suitably equipped
ports, cranes, and ships—a new generation of supply chains formed
around it. Ports that failed to upgrade and integrate container
standards lagged behind, while those that embraced containeriza-
tion became major global trade hubs. Likewise, whenever a new
technology matures—whether it be reusable rockets or advanced
robotics—innovators quickly organize their supply chains to exploit
these advances, leveraging both modularity and standardization
to achieve unprecedented scale and cost efficiency.
More generally, technological advances expand the range of
available options, and supply chain decides how best to leverage
them for maximum economic efficiency.
4.3 The value of supply chains
Supply chains emerged from the market economy because they are
efficient. Dating the first true supply chain is a vain exercise: the phenomenon
emerged gradually and can be traced back to antiquity. Moreover,
supply chains emerged without visionary planners deliberately
steering companies toward what we now recognize as supply chains.
Companies simply followed empirical efficiency, and it led them to
the supply chains that exist today. While modern companies can
now be engineered with the explicit intent of bringing a superior
supply chain to market, this perspective is fairly recent—and
largely posterior to the supply chains themselves.
Supply chains exist because they unlock a handful of structural
mechanisms that confer durable competitive advantages. The
pages ahead examine these mechanisms and show how supply-
chain design can generate more value than sheer size ever could.
These mechanisms differ somewhat from the traditional concept
of economies of scale, generally understood as achieving greater
efficiency with increasing size. That efficiency typically arises from
the mix of fixed and variable costs in production. Functions such as
engineering, marketing, strategic management, and compliance are
largely fixed regardless of output, whereas acquiring, transporting,
and processing goods scale with volume.
Economies of scale represent a basic form of flow optimization:
increasing volume reduces marginal cost. In everyday operations,
however, supply chain professionals rarely have levers to raise
volume enough for additional economies of scale; step-changes in
flow belong to M&A (mergers and acquisitions), not supply chain.
Beyond economies of scale, additional value stems from how a
company shapes its flow through four recurring forces: standardiza-
tion (taming innate variability), pooling (consolidating inventory
to buffer randomness), batching (grouping movements to cut unit-
handling costs), and versatility (designing assets to serve several
purposes). These forces pull on different levers—information, space,
time, and capital—yet converge on the same goal: delivering more
service from the same scarce resources. The next sections examine
each force in turn, starting with standardization.
4.3.1 Standardization
Variability is the natural state of physical things. No two trees are
exactly alike; the same holds for stones, cows, people, and parcels
of land. Boundaries often blur: do fallen fruits still count as part
of the tree? Does the ownership of a piece of land extend from the
depths of the earth to the stars above? Our actions, too, exhibit
inherent variability. Even arranging an apple admits countless
variants; no two placements are exactly the same. The apple may
fall on the floor, be picked up again, and put back on the shelf.
Does this still count as the same action? Even if the floor is dirty?
Yet in industrial civilization we take for granted that, within the
flow of physical goods, almost every “thing” and “action” can be
neatly identified and counted. This is no accident; it is the result
of centuries spent making the flow manageable—a process called
standardization.
Standardization is a loose collection of practices that make
the flow of goods manageable within firms and across markets.
Indeed, inherent variability in all things complicates everything.
Every item purchased or sold varies in its characteristics and,
consequently, often in price. Every manufacturing process must be
continuously adjusted to accommodate the minute differences in
its ever-changing inputs. These complications consume managerial
attention—a costly input—and blunt the company’s capacity to
achieve economies of scale. They also impose cognitive costs on
customers, who must judge whether an item suits its intended use.
To a present-day reader, this may seem abstract. The vast
majority of goods are standardized—mass-produced within nar-
row physical tolerances. Variations between units are expected
to be imperceptible; two units of the same product should be
visually indistinguishable. When such a feat is achieved, we, the customers, instinctively attribute any perceptible difference to a defect of some kind³. Even store-bought fruits and vegetables are now carefully
calibrated, reducing variability to a fraction of their natural differ-
ences at harvest. In industry, quality chiefly means the absence of
unintended variation.
We also expect high modularity and interoperability among
assets. Containers flow seamlessly from ships to trucks, thanks
to dedicated container cranes. A limited set of tire sizes covers
nearly all trucks. Standard pallet patterns optimize container
space, and forklifts—built for pallets—move them efficiently. At
every stage, barcodes are printed and later scanned—often by
another company. Ultimately, modularity and interoperability
stem from careful standardization, which simplifies entire classes
of operations.
Yet, just a century ago, the situation was radically different.
Most mass-produced items still exhibited perceptible variation.
Interoperability was local, not global, and modularity largely con-
fined to individual companies. After World War II, standards
expanded in scope and depth alongside modern supply chains. The
movement had forceful advocates, notably W. Edwards Deming,
whose quality methods made standardization a managerial priority.
Standardization is a critical requirement for massifying the
flow: without it, many processes are infeasible, brands are exposed
to defects, managers chase outliers, and most economies of scale
remain inaccessible.
As a corollary, the flow has become predominantly discrete:
itemized units—counted one by one—now travel through the net-
work instead of anonymous tons or liters. Bulk modes persist for
sand, crude, and grain, but for anything with a hint of value, the
default is a barcoded, scan-ready unit.
³ Some products are mass produced in a way that ensures that every unit is visually unique, typically through a pseudo-random coloring or patterning technique. However, those variations are intentional and carefully controlled. These products are not an exception to the standardization principle. On the contrary, they reflect an even tighter mastery of production processes, as they rely on an intentional variability without compromising any other product qualities.
This discreteness supercharges optionality. A railcar of ethanol
presents one choice—ship or hold—whereas a pallet of 120 power
drills presents 120 separate choices: every drill can be split, rerouted,
repriced, kitted, or postponed. Each SKU thus functions as an
option ticket whose value depends on where, when, and to whom
the option is ultimately exercised.
The upside is obvious: fine granularity lets a company redi-
rect stock to emerging pockets of demand, arbitrage tiny price
spreads, and delay commitment until the latest possible moment.
The downside is equally clear: the decision space explodes. A
planner who once scheduled two containers a month must now
orchestrate thousands of micro-moves a day. When cognitive or
computational bandwidth is lacking, management falls back on
coarse rules—MOQs, fixed cycles, blanket allocations—thereby
erasing much of the flexibility it paid for. Discreteness is thus a
double-edged sword: it multiplies opportunities while magnifying
the need for automation capable of scanning and ranking them.
We will return to this tension when discussing decision-making
processes.
Handling individual units is not cost-free—the carton, the
barcode, the extra touches all add pennies—but for anything
above the cheapest commodities, those pennies are dwarfed by the
upside of precise allocation. In the pages that follow, every unit
will be treated as a negotiable option, and the question will always
be which exercise maximizes long-run profit. With this discrete
foundation in place, we can now turn to the second structural force:
pooling.
4.3.2 Pooling
Pooling concentrates inventory in one place—or under one corpo-
rate entity—to serve multiple demand streams. If the overhead
from concentration is negligible, pooling delivers the same service
with less capital than multiple independent inventories. In practice,
pooling is profitable when the inventory cost savings exceed the
additional service overhead. For example, concentrating inventory
in one place typically raises transportation costs relative to serving
goods from multiple locations.
Pooling mitigates supply-chain variability by the law of large
numbers: a central pool can flexibly absorb uneven surges across
segments, reducing the total stock needed to maintain service levels.
Imagine one table holding champagne for the entire reception rather
than many smaller tables. Pooling resources reduces the risk of
any one table running dry, ensuring that everyone is served. Of
course, pooling can hinder last-mile service if it places goods too
far from consumption.
Consider a company weighing a single large distribution center
against multiple smaller warehouses. A single center pools demand
efficiently and reduces total stock, but it also adds overhead—longer
average hauls and more handling at the hub. Managers must weigh
these logistics costs against working-capital savings. If regional
product demands are closely correlated—due to seasonality or
nationwide campaigns—a single warehouse delivers less benefit.
Correlated demand and customer flexibility to switch stores or
channels erode theoretical pooling gains. Side-by-side simulation
or a pilot usually tests whether inventory savings outweigh added
expenses and delays, ensuring a positive net value.
In theory, pooling can dramatically reduce working capital; in
practice, frictions pare back the gains and add overheads. Even so,
pooling remains attractive in many contexts.
Putting transportation overheads aside, the gains are lower for
two reasons. First, the streams of demand are never independent;
on the contrary, those streams should be expected to be quite cor-
related. Markets are connected; while offsetting surges and drops
can occur, the typical pattern is co-movement, not cancellation.
Correlation has many sources, often the company’s own actions.
For example, a nationwide advertisement raises demand across
many locations simultaneously. If streams are perfectly correlated,
pooling requires as much working capital as decentralization for a
given service level.
Second, when demand surges, a customer facing a local stockout
may seek service from another location, incurring the associated
overheads. This is common in dense retail networks where multiple
stores compete within the same city. Moreover, letting customers
check availability online amplifies the effect, nudging them—up
to a point—to cooperate with the company’s stocking policies. If
customers fully accept the overhead of alternative service loca-
tions, pooling brings no benefit: the same inventory reduction was
achievable without consolidation.
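The first reason, correlated streams, can be quantified with a minimal sketch. The figures (ten locations, a demand spread of 100 units each, a service factor of 1.65) are hypothetical; the point is that the pooled requirement converges to the decentralized one as correlation approaches one.

    # Minimal sketch of how demand correlation erodes pooling gains. Safety stock is
    # modeled as z times the demand spread; all figures are hypothetical.
    from math import sqrt

    def safety_stock(n_locations: int, sigma: float, rho: float, z: float = 1.65):
        decentralized = n_locations * z * sigma
        # Variance of the sum of n equally correlated streams: n*sigma^2*(1+(n-1)*rho)
        pooled = z * sigma * sqrt(n_locations * (1 + (n_locations - 1) * rho))
        return decentralized, pooled

    for rho in (0.0, 0.3, 0.7, 1.0):
        dec, pool = safety_stock(10, 100.0, rho)
        print(f"correlation {rho:.1f}: decentralized {dec:.0f} units, pooled {pool:.0f} units")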
If the future were perfectly anticipated, pooling would confer
no benefit; it would only add overhead from suboptimal place-
ment. Pooling is widespread precisely because future uncertainty
is irreducible.
In terms of optionality, pooling reduces the number of options
but neither their frequency nor their diversity. Thus, pooling
somewhat simplifies the supply chain challenge. However, firms
rarely adopt pooling for the simplification alone. Empirically,
concerns about dead stock and shelf space dominate the selection
of locations suitable for storing a given product.
4.3.3 Batching
If pooling concentrates inventory in space, batching concentrates
the flow in time. Batching processes items in larger lots rather
than singly, lowering per-unit processing cost but reducing the
ability to serve small quantities.
Batching is ubiquitous in logistics: once the flow exceeds a
threshold, units are repackaged into larger lots. A typical progres-
sion is: group N units into a box; bundle N boxes into a pallet layer; combine N layers into a full pallet; finally pack N pallets into a container. Bundling straightforwardly reduces handling
costs, both within a facility (intralogistics) and between facilities.
Bundles simplify handling and counting, and their packaging often
adds physical protection.
Batching is also common in manufacturing: it typically means
running a large lot of a single product before switching or pausing.
This approach minimizes setup time and other non-productive work
(e.g., recalibration) and also facilitates post-production logistics.
In process industries (e.g., chemicals), scale effects such as higher
yields push toward enormous plants and correspondingly large
batches.
Batching increases the inherent variability of the flow and
amplifies the effects of external market fluctuations. Internally,
variability rises because batching ties the fate of many units to-
gether. If 100 units are shipped individually by various couriers,
it would be surprising for all to be lost. However, if all 100 units
travel on one truck, the (small) traffic accident risk now applies to
all at once. Formally, batching raises variance in the flow.
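The courier-versus-truck example can be made precise with a short sketch; the one-percent loss probability per shipment is an assumed figure. Expected losses are identical, but batching concentrates the risk and multiplies the variance a hundredfold.

    # Minimal sketch of the "100 units on one truck" point above. The 1% loss
    # probability per shipment is a hypothetical figure.
    p, units = 0.01, 100

    # 100 independent parcels: number of lost units follows Binomial(100, p).
    mean_split, var_split = units * p, units * p * (1 - p)

    # One truck carrying all 100 units: either nothing is lost, or everything is.
    mean_batch, var_batch = units * p, units ** 2 * p * (1 - p)

    print(f"split   : expected loss {mean_split:.1f} units, variance {var_split:.2f}")
    print(f"batched : expected loss {mean_batch:.1f} units, variance {var_batch:.2f}")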
Batching amplifies external variability. The most frequent
mechanism is a longer flow cycle. Batching reduces the frequency
of acquisition, transport, and production. Colloquially, this ex-
tended cycle is called “longer lead time”. In batching, lead time is
the period the company must wait before launching the next batch.
Larger or less frequent batches tie up more working capital in inven-
tory because items must be purchased or produced well in advance.
They also diminish flexibility: misjudged demand turns oversized
batches into overstocks or prolonged stockouts, both costly to cor-
rect. A longer cycle forces the company to operate longer before it can decide again, so a misjudgment of demand, whether an overestimate or an underestimate, compounds into a larger overstock or a longer stockout before any correction becomes possible.
Batching lowers optionality by reducing both the frequency
and the number of decisions within a period (e.g., a month). In
the absence of adequate automation, batching is often employed
primarily to simplify the supply chain. But this trade-off leaves
the company with fewer options and less flexibility to adapt to
market variation. In practice, companies too often opt for large
batches because they lack the capacity or managerial resources to
handle more frequent, detailed decisions. This reflects a human or
organizational limitation, not a physical necessity.
With proper automation, the overhead of frequent decisions
about physical goods becomes minimal, often negligible. As au-
tomation lowers the cost of orchestration, the risk–reward of smaller
batches improves, reducing both inventory costs and lead times.
Indeed, while modern computers operate on a nanosecond scale,
physical goods are limited to timescales of tenths of a second (0.1 s), a disparity that accommodates the slower pace of physical processes without causing damage. Consequently, to a modern computer, the flow of physical goods appears almost frozen⁴.

⁴ Enterprise software products are frequently sluggish and unresponsive. At the time of this writing, it remains frequent for an operation triggered by a user to take many seconds to complete. However, considering that even entry-level computing systems now feature gigabytes of memory, gigaflops of processing, and gigabits per second of bandwidth, such events must almost always be attributed to the incompetence of certain software vendors, not to any inherent limitations of the computing hardware with respect to the complexity of the operation requested by the user.
4.3.4 Versatility
Versatility denotes capital goods—durable assets such as machines,
vehicles, or facilities that enable the flow of physical goods—that
can serve multiple distinct functions or even an entire spectrum
of operations. Yet at any moment a capital good can serve only
one purpose; thus the company must assign each capital good a
specific purpose at that time.
For example, a warehouse designed to maintain a controlled
room temperature (e.g., 15°C–25°C) might also be equipped with
devices capable of supporting a cool chain (e.g., 12°C–14°C). Even
if the cold-chain capability remains unused at first, it can be a
sound investment to accommodate future market fluctuations.
Versatility is deliberate: it enhances the company’s agility and
expands its range of options. It always entails overhead associated
with its wider capability set. All else equal, a versatile capital
good costs more than a specialized one because of its broader
capabilities. In practice, these overheads are usually offset by
modularization—and its frequent companion, standardization.
Modularization breaks a process into smaller, interchangeable
units that can be easily reconfigured for different tasks. Once
standardized, companies aggressively compete to produce these
units. This typically yields significant economies of scale, leaving
companies little choice but to adopt modular capital goods to stay
competitive. A downside is increased overall flow complexity as
operations are subdivided merely to fit widely available modules.
Modern logistics exemplifies modularization in its barcodes,
loading docks, and shipping containers. Without the intense stan-
dardization of the 20th century, the benefits of modularization
would be far smaller. A company can print a barcode and expect
another to read it because they share a standard. A truck can be
loaded at one site and unloaded at another because dock heights
match the truck’s design.
Modern manufacturing has likewise transformed. Companies
now leverage standardized units for most processes, resorting to
custom-made capital goods only for steps that offer a unique com-
petitive edge. Often, the entire manufacturing setup consists of
standardized components, so a company’s distinctiveness stems
from its unique selection and arrangement of these units. More-
over, the steady progress of programmable controllers—in various
forms—has rendered these components not only modular but also
highly versatile.
Every versatile capital good requires its own allocation schedule.
At any given moment, the company must decide whether to reassign
an asset to a different purpose. For some assets, this decision is
straightforward and embedded in practice—for example, truck
fleets typically operate on fixed routes. Conversely, some assets’
versatility remains untapped; a facility might produce a narrow
range for years even as market conditions fluctuate.
The versatility of capital goods vastly inflates the number of
options available to the company. Not only can alternative goods be
served with the same apparatus, but the same service (transport or
transformation) can be performed many ways, as versatile capital
goods often overlap in capability. For instance, box trucks and
semi-trailers are nearly interchangeable in transporting goods; only
the heaviest loads require a semi-trailer.
Yet each asset carries distinct acquisition and operating costs.
Furthermore, every asset, whether labeled perishable or not, will
in time perish through wear and tear. Consequently, much is at
stake in deploying the right asset at the right time for each task
to achieve the outcome while minimizing costs.
4.3.5 From forces to policies
Viewed through the economist’s lens, these forces—standardization,
pooling, batching, and versatility—are four ways to reshape the
frontier where scarce resources meet alternative uses.
Each force corresponds to an elementary economic trade-off.
Standardization lowers marginal cost at the price of variety; pooling
frees working capital by accepting longer first-mile hauls; batching
swaps inventory risk for lower handling cost; and versatility converts
capex into real options whose value materializes only if the decision
system can pick the right moment to exercise them. Together they
expand the option space far faster than managerial attention can
track.
Faced with a combinatorial explosion of options, companies
often adopt policies expressed as a simple allocation rule. For in-
stance, a company with two warehouses and a fleet of trucks might
decide that one truck should make a daily round trip between the
facilities to rebalance inventory. This policy is adopted with the
implicit intent to use the rigid schedule for inventory rebalanc-
ing. The goods to be transported are unspecified, but the tacit
understanding is that managers will use the daily allotted capacity
to keep inventory balanced. However, this “one truck” policy is
simplistic; some days, two trucks would be preferable, while on
others, minimal inventory imbalances would justify postponing the
round trip. Such rule-based policies may be implemented through
software to notify the right employees at the right time, but a clerk
could have otherwise supervised them with little effort.
However, governing the supply chain through simple rule-based
policies is largely vestigial. It reflects constraints that ceased to
be relevant with the advent of digital supply chains. Sticking
to simple rule-based policies was essential when a clerk was in
charge of executing the policy. However, if the policy is delegated
to a computer, then it can be made much more dynamic while
keeping the processing overhead negligible. If this added complexity
improves the supply chain, a far superior allocation of resources
follows. Revisiting the example, a more advanced policy should
not only consider the amount of inventory to rebalance, but also
compare the benefits of rebalancing with the alternative uses of the
trucks, such as serving customer orders. If trucks are the bottleneck
and the inventory imbalances between the two warehouses are
tolerable, all trucks should be allotted to customer deliveries.
Unlike pooling and batching, which reduce the number of
options, the versatility of capital goods inflates it. There is no
a priori reason to think that all those options are economically
equivalent. On the contrary, many options are poor, and one is
best under a given economic model of the supply chain. Software
supported by modern computing hardware makes it relatively
straightforward to search through an extremely large number of
options. The fine print of this process is unimportant here; what
matters is that versatility is profoundly consequential for supply
chains.
Because every policy is ultimately an allocation rule, its merit
is economic, not procedural: it must raise the risk-adjusted rate of
return of the resources it steers. Static heuristics were defensible
when humans were the computational bottleneck, but once a
computer owns the rule, the benchmark becomes opportunity cost,
not clerical convenience. Automation therefore matters only insofar
as it lets the price signals—expressed inside the model as valuations
and discount rates—propagate to every micro-decision. In the end,
a policy is justified exactly to the extent that it generates profit.
In practice, the combinatorial explosion caused by versatile
capital goods can appear daunting and often easily outstrips human
oversight. Yet modern supply-chain software does not simply
attempt to brute-force every possibility. As will be discussed in
Chapter Decisions, advanced analytics—supported by solvers and
heuristic algorithms—can render this complexity tractable.
4.4 The goal of supply chain
Supply chain was defined in the first chapter as the mastery of
optionality in the presence of variability when managing the flow
of physical goods. Mastery here is not an abstract virtue: it is the
concrete ability to turn optionality into money while shielding the
company from the downside of uncertainty. In a market economy,
the yardstick for such mastery is unambiguous—long-run profit
expressed in hard currency. The practical goal of supply chain
practice is therefore to raise the firm’s risk-adjusted rate of return
on every scarce resource it touches—capital, capacity, time, good-
will—by continually improving both the range and the quality of
the options that can be exercised.
This goal subsumes every secondary intention a practitioner
might entertain. Higher service levels, shorter lead times, leaner
inventories, greener transportation, happier suppliers, safer working
conditions—all matter, but only insofar as they contribute, directly
or through risk mitigation, to the net present value of the business.
Supply chain is not moral philosophy; it is an applied branch of
economics that survives or perishes by the ledger. Pursuing profit
must not be confused with myopic cost cutting or the caricature of
a “naked” profit motive; it is the disciplined quest for the allocation
that maximizes long-run surplus while respecting both variability
and optionality.
The remainder of this section unpacks that proposition. First,
we revisit profit and loss as the indispensable feedback loop that
tells a company whether its supply-chain decisions create or destroy
value. Next, we explain why objective valuations are mandatory
inside the firm even though all valuations are subjective at the indi-
vidual level, and how the notion of rate of return makes disparate
decisions comparable across time. Finally, we address frequent
objections—moralistic, apocalyptic, or institutional—to profit seek-
ing and show why none of them exempts a supply chain from the
laws of economics.
4.4.1 Profit and loss
Profit and loss is not an accounting gadget; it is the scoreboard
of a society built on consent rather than coercion. Whenever
two parties trade, each expects to be better off—otherwise no
deal occurs. Profits, therefore, arise only within the domain of
voluntary exchange. Conversely, whenever allocation is imposed by
force—or by regulation that mimics it—the very possibility of profit
vanishes, replaced by tribute, tax, or plunder.⁵ In economic terms,
human relations lie on a continuum from hegemonic command to
unfettered trade. Profits arise only on the latter side.
5. Government interventions in the economy lead to crony
capitalism, where corporate success depends on collecting handouts
rather than satisfying customers. The term “profit” ought therefore
to be reserved for gains achieved in an unhampered market. Wealth
acquired through coercion—even when the coercer is a government—
differs categorically from the surplus earned through voluntary
exchange, although conventional P&L statements blur the distinction.
Yet a vocal strand of public opinion still portrays profit as
intrinsically exploitative.⁶ This view conflates profit with plunder:
the latter is taken by force, whereas the former can arise only
when both parties consent and deem themselves better off after
the exchange. Seen this way, profit has a moral edge: it is earned
solely by serving others. Losses play the mirror role. They warn
entrepreneurs that customers prefer other uses for their scarce
resources. Far from being damaging, losses act as the market’s
immune system; they halt further waste and redeploy labor, capital,
and ingenuity to more urgent purposes.
6. In France, for decades, the slogan of left-leaning parties has
been “Nos vies valent plus que leurs profits” (our lives are worth
more than their profits).
History makes the contrast vivid: before the commercial rev-
olution of the 17th century, fortunes were typically amassed by
conquering a province, taxing the peasantry, or courting royal
favors. Since the rise of open markets, the road to wealth has
inverted: Rockefeller refined kerosene more cheaply than anyone,
Akio Morita miniaturized electronics for the masses, Ingvar Kam-
prad taught flat-pack furniture to a frugal planet. Where profits
are contestable, efficiency soars; where allocation is decided by
decree, queues, shortages, and rust follow.
Supply chains translate this principle into pallets, containers,
and transport lanes. Standardization shrinks unit cost, pooling
frees working capital, batching arbitrates handling expenses against
responsiveness, and versatility turns fixed assets into spring-loaded
options. Yet none of these levers tells you when to stop. The only
impartial referee is the profit and loss statement: does the extra
SKU, the larger batch, or the reinforced crate pay for itself once
variability and opportunity costs are tallied? If the answer is yes,
the flow expands; if the answer is no, the flow contracts and capital
migrates to better uses.
Practitioners, therefore, miss the point when they chase “service
level”, “inventory turn”, or any other local metric in isolation. Such
indicators matter only insofar as they raise the firm’s long-run rate
of return. The objective never changes: maximize the gap between
what customers willingly pay and what supply chain irrevocably
consumes, subject to the risks imposed by uncertainty.
The sections that follow dismantle the stock objections to the
profit motive and show how a clear P&L perspective inoculates
supply-chain design against both spreadsheet sophistry and pious
slogans.
4.4.2 Objective valuations
Economics begins with a simple observation: only individuals can
feel, think, and act; therefore, only individuals can ascribe value
to anything. These valuations are subjective and ordinal—one
can say he prefers coffee to tea, but there is no arithmetic to add,
subtract, or average the two preferences. Worse for the analyst,
the ranking can flip tomorrow without warning. No spreadsheet
can pin down an “inherent” price for the first sip of water, the
hundredth, or the thousandth; the paradox of value remains a
feature, not a bug, of human action.
Statistical models can, of course, learn to predict how much a
person of average means will pay for an espresso or how likely he is
to abandon a shopping cart, yet the forecast does not turn a private
preference into an objective quantity. It merely characterizes the
subjectivity as a probability distribution, useful in commerce but
silent on worth.
This point torpedoes the labor theory of value, which tried to
anchor prices to the number of work hours “embodied” in a good.
Diamonds are dear not because miners toil, but because buyers
prize the sparkle. If tomorrow everyone decided that quartz is
prettier, diamond mines would shutter despite unchanged labor
inputs.
In 2019, when a massive fire broke out in Notre-Dame, the
public was swiftly evacuated. While no human lives were at stake
anymore, some firefighters put their lives at great risk to save what
they considered timeless works of art. In economic terms, they
exercised a value judgment, placing mere artifacts above their own
lives. Most men, in the same situation, would not have acted
likewise and would have stopped risking their lives once it was
clear that no further lives were endangered. It is pointless to
assess how much those artifacts are “worth” in hours of labor, and
how much it would cost to produce near-perfect replicas. From
the perspective of those men, the historical and artistic value of
those artifacts was categorically superior to any later attempts
at replacing them. Through their actions, free individuals can
exercise entirely arbitrary value judgments. The role of economics
is not to assess whether such value judgments are good or bad; it
merely recognizes their irreducible subjectivity.
Companies, however, cannot indulge such idiosyncrasies. Inside
a firm, the CEO may love truffles and the CFO may hate them,
yet the only admissible question is: “Do customers pay enough,
soon enough, to cover our costs?” Individual preferences are not
additive; they cancel out. What remains is the impersonal verdict
of the market, recorded as money in, money out.
If an employee decides that his company must value certain
options disregarding the profit motive—“we should stock this
eco-friendly widget even if it never sells”—this employee is not
being good or caring; he is just deluding himself by projecting
his own value judgment onto the company. By doing this, he
is doing a disservice to the company, and to the market, because
he is acting against the wants of the consumers, favoring instead
his own pet perspective of the day. Naturally, if the employee
disagrees with the general mission of the company, then it is his
duty, as a free individual, to resign and seek employment elsewhere.
Disagreements with the company’s stated purpose are not to be
settled by sabotaging operations from the inside.
A firm is therefore an instrument of social cooperation: it
aggregates capital, talent, and information to satisfy the most
urgent wants that other people express through their purchases.
Profits and losses turn those dispersed signals into an objective
yardstick that every planner can read: more coins earned than spent
means resources flow toward higher-ranked wants; the opposite
means they are wasted. Supply-chain practice must anchor every
valuation to that yardstick—never to personal taste—because only
prices let millions of subjective scales collapse into one objective
measure.
The next question, then, is how to compare two profitable
options that differ in timing—as with rates of return.
4.4.3 Rate of return
Whenever a decision separates committing a resource from its
payoff, plain profit arithmetic no longer suffices. Economists call
this universal phenomenon time preference: identical goods are
worth more the sooner they are available. A cup of coffee served
now is worth more than the same cup promised next week; a
supplier usually demands a premium for payment deferred by
ninety days.
Time preference is first an individual trait. Each person pri-
vately discounts future benefits according to circumstance, and
the market aggregates those discounts into the interest rate. The
interest rate therefore quantifies, at the scale of society, how much
present goods are preferred to future goods.
Corporations inherit the same logic. Cash received today can
be reinvested, used to settle liabilities, or serve as a buffer against
uncertainty; cash received next year cannot. Firms therefore embed
an explicit cost of capital—their collective time preference—into
every investment test. However appealing in isolation, any proposal
that fails to clear that hurdle erodes value once the opportunity
cost of waiting is recognized.
Once time preference is acknowledged, optimizing profit re-
quires attention to time; specifically, the company should pick the
options with the highest rate of return. Conceptually—setting
aside secondary complications—the rate is computed as
\[
\text{RoR} = \frac{\text{revenues} - \text{spending}}{\text{spending} \times \text{period}}
\]
where period is the time required to realize the associated revenues
and spending. The rate of return (RoR) is expressed as a percentage
per unit of time. If the RoR is greater than zero then the company
operates profitably; if the RoR is less than zero then the company
operates at a loss. When comparing two options, prefer the higher
rate—provided both are normalized to the same period.
For example, let’s consider a retail store that can choose be-
tween two products to be put on display on a given shelf. Both
products are sold at 3 coins with an average demand of 1 unit
per day. The first product costs 1 coin per unit, while the second
product costs 2 coins per unit. Storage costs, apart from the shelf
space itself, are considered negligible. Also, the shelf is large enough to
hold enough units to make stockouts negligible. In this situation,
the first product has a RoR of 200% per day, against 50% per day
for the second product. Thus, the first product should be preferred
because it has the higher RoR. This result is unsurprising: the
products are identical except that the first has twice the gross mar-
gin of the second. The RoR simply provides the ranking criterion
that formalizes this choice.
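As a minimal sketch in Python, the ranking is mechanical; the
function name rate_of_return is illustrative, and the literals merely
restate the example above:

def rate_of_return(revenues, spending, period_days):
    # RoR per unit of time, as a fraction: 1.0 means 100% per day here.
    return (revenues - spending) / (spending * period_days)

# Both products sell 1 unit per day at 3 coins; only the unit cost differs.
first = rate_of_return(revenues=3.0, spending=1.0, period_days=1)   # 2.0, i.e. 200% per day
second = rate_of_return(revenues=3.0, spending=2.0, period_days=1)  # 0.5, i.e. 50% per day
print(max([("first", first), ("second", second)], key=lambda kv: kv[1]))  # ('first', 2.0)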
In typical supply chain settings, maximizing RoR tends to
allocate more capacity, stock, and shelf space to products with
higher volumes or faster rotations. However, RoR—not arbitrary
noneconomic metrics such as service levels—must be optimized.
Many supply chain authors mistake the loose correlation between
higher RoR and higher volumes or rotations for causation. As a
result, inventory classification methods are frequently proposed
to segment products into classes⁷ reflecting their volumes and/or
their erraticity and adjust the resource allocations according to this
segmentation. These methods confuse correlation with causation:
RoR must be optimized. Mimicking its loose consequences is a
categorical error that yields diminished returns.
In practice, the rate of return presents a series of complications:
1. Calculating RoR depends on revenue and spending attribution—a
difficult problem.

2. Revenues and spending do not occur at the same time; a shared
period rarely exists.

3. Revenues should occur as early as possible and spending as late
as possible; cash flows must be discounted.

4. The lag between future spending and future revenues must respect
the firm’s liquidity constraint.

5. Dependencies between options make RoR path-dependent on the
picking order.

6. Both future revenues and spending are uncertain, and this
uncertainty must be modeled.
None of these complications prove insurmountable in practice, even
if they all entail some degree of approximation.
7. For example, from an inventory replenishment perspective, the
method referred to as “ABC analysis” proposes to organize products
into 3 to 5 classes depending on their respective volumes. Each
class is assigned its own target service level, used to dimension
the corresponding safety stocks. ABC/XYZ analysis is a variant that
segments products based on projected demand and projected
erraticity. Such noneconomic methods betray a lack of understanding
of the nature of supply chain.
Revenue and spending attribution
RoR reflects an idealized view of physical flows. However, con-
straints require nontrivial economic modeling. As a result, in
order to compute the RoR of a given option, the option must be
disentangled from the firm, typically through economic attribution.
This point is discussed below; for now, note that there is no
one-to-one mapping between revenues, spending, and options. RoR
is thus a modeling perspective on the supply chain, not a
self-evident truth.
For example, consider a retail store with two alternative prod-
ucts. The first product requires 100 coins of inventory—renewed
daily—to generate 140 coins of daily revenue. The second prod-
uct requires 150 coins of inventory to generate 200 coins of daily
revenue. The two products take exactly the same shelf space in
the store. In this case, the RoR of the first product is 40% per
day while the RoR of the second is 33%. Thus, by the principle
of picking the options yielding the highest RoR, the store owner
should pick the first product. However, this choice is surprising:
it selects the product that will ultimately generate less absolute
gross profit over time. For a store dominated by fixed costs, this
seems incorrect.
Indeed, the naive model above ignored the store’s fixed costs.
Let’s refine the model by attributing a portion of fixed costs to
the products: suppose a daily fixed cost of 40 coins attributed to
each product. Thus, the
daily cost for the first product becomes 100 + 40 = 140 coins, and
for the second 150 + 40 = 190 coins. Under this revised model,
the RoR of the first product becomes 0% per day while the second
becomes 5.3%. Therefore, under this refined model, the second
product should be favored.
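The flip caused by fixed-cost attribution can be replayed with a
short sketch; the 40-coin attribution is the modeling assumption
from the example above:

def ror_daily(revenue, cost):
    # Deterministic daily RoR, as a fraction.
    return (revenue - cost) / cost

ATTRIBUTED_FIXED_COST = 40.0  # coins per day, per product (modeling assumption)

for name, revenue, inventory in [("first", 140.0, 100.0), ("second", 200.0, 150.0)]:
    naive = ror_daily(revenue, inventory)
    refined = ror_daily(revenue, inventory + ATTRIBUTED_FIXED_COST)
    print(f"{name}: naive={naive:.1%}/day, refined={refined:.1%}/day")
# first: naive=40.0%/day, refined=0.0%/day
# second: naive=33.3%/day, refined=5.3%/day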
This situation is not exceptional: what qualifies as best depends
on the adopted economic model for the supply chain. Unfortunately,
no single, straightforward criterion decides which model is superior;
adequacy hinges on how well each captures the economic reality
it is meant to inform. This challenge is a problem of general
intelligence. That does not mean anything goes; we revisit this
class of challenges later on.
Aperiodic rate of return
In finance textbooks the rate of return (RoR) is almost always
tied to a fixed horizon—a month, a quarter, a year—because that
is the cadence of accounting and taxation. At the micro level of
supply-chain execution, however, such calendars are an awkward
fit. Raising a purchase order, launching a markdown, allocating
a pallet, or scheduling a truck are atomic decisions whose cash
outflows and inflows trickle in at irregular intervals. A twelve-
month yardstick blurs the very reason they exist: to recycle capital
as fast as possible.
The aperiodic RoR removes the calendar straitjacket. Instead
of forcing every option into an arbitrary window, it asks a single
question: after how long will the option have repaid the cash it
tied up and started compounding its surplus? That self-selected
delay becomes the implicit period in the RoR formula, yielding
a compound rate comparable across decisions, even though each
chooses its own clock.
Because the computation is performed at the level of individual
options, the spread of results is wide—differences of one and even
two orders of magnitude are routine. A fast-moving, high-margin
refill can clear 1,000% per annum once expressed in aperiodic
terms, while the replenishment of a slow-seller in the next aisle
may struggle to reach 10%. Such dispersion is not a modeling flaw;
it is the signal that tells capital where it is urgently needed and
where it quietly dies.
Operationally, the aperiodic RoR slides a cursor along the
option’s cash-flow timeline, computing at each date the rate that
would prevail if the position were closed out then. It keeps the
maximum value; the date that yields this peak becomes the option’s
implicit horizon. In one number, the metric captures both the
scale of the gain and the fastest pace at which the tied-up capital
can compound. The exact rule is spelled out in the annex.
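The rule can be sketched as follows; the cash-flow encoding and the
annual compounding convention are illustrative assumptions, not the
exact rule from the annex:

def aperiodic_ror(cash_flows):
    # cash_flows: list of (day, amount); negative amounts are spending.
    # At each date, compute the compound annual rate that would prevail
    # if the position were closed out then; keep the maximum.
    best_rate, horizon = float("-inf"), None
    outflows = inflows = 0.0
    for day, amount in sorted(cash_flows):
        if amount < 0:
            outflows -= amount
        else:
            inflows += amount
        if day > 0 and outflows > 0 and inflows > 0:
            rate = (inflows / outflows) ** (365.0 / day) - 1.0
            if rate > best_rate:
                best_rate, horizon = rate, day
    return best_rate, horizon

# A refill: 100 coins spent on day 0, 130 coins collected by day 30.
print(aperiodic_ror([(0, -100.0), (30, 130.0)]))  # about 23.3 (2,330% p.a.), horizon 30 days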
Discounted cash flows
The previous subsection introduced the rate of return as a yardstick
for how fast committed capital compounds. Speed, however, is
only half the temporal story. A coin received today is worth more
than the same coin received next quarter, because the earlier coin
can be reinvested, held as a buffer, or simply earn interest in the
meantime. Discounted cash flow is the numerical translation of
the time preference: each future inflow or outflow is multiplied by
a discount factor that shrinks with time, so every cash movement
is expressed in “today-coins”. The discount factor derives from the
firm’s cost of capital—the interest effectively paid for liquidity. For
the precise formula, see the annex.
Put differently, discounted cash flow measures the absolute
amount earned (or lost) while accounting for the time value of
money. From the supply chain perspective, the interest rate repre-
sents the company’s financing cost. In finance, this rate often also
proxies uncertainty associated with delays, but in supply chain
use cases uncertainty is better handled through other mechanisms
detailed later.
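As a rough sketch (the precise formula is in the annex), each cash
movement is shrunk by a discount factor derived from the firm’s cost
of capital; the 20% annual rate below is an arbitrary assumption:

def present_value(cash_flows, annual_cost_of_capital=0.20):
    # Express (day, amount) cash flows in today-coins and sum them.
    return sum(amount / (1.0 + annual_cost_of_capital) ** (day / 365.0)
               for day, amount in cash_flows)

# 100 coins wired upfront, 115 coins collected six months later.
print(present_value([(0, -100.0), (182, 115.0)]))  # about 5 today-coins of surplus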
A practical illustration of discounted cash flows in shipping
and stock decisions appears in fashion, which often faces long lead
times and cash constraints. Imagine a mid-tier fashion brand that
must place orders for its new collection six months in advance of
the season. The brand negotiates terms requiring a substantial
deposit upfront, with the remainder due upon shipment. Even if
borrowed funds cost only, say, 10% per year, the brand’s effective
rate of return on these early orders can dwarf that figure. This
is because the brand’s capital is tied up long before revenues
materialize, and any misjudgment in sizing, color, or style inflates
the effective cost of the goods. When the season arrives, the
brand may discount slow-moving items, adding to the opportunity
cost of money that could have been invested in a more promising
line. Conversely, a well-placed order that aligns with demand can
rotate quickly at full price—delivering a rate of return that far
surpasses the cost of capital. Thus, the interplay of long lead times,
extended payment terms, and market uncertainty often outweighs
the ambient interest rate in determining the real profitability of
each supply chain decision.
In practice, for supply chain decisions, the ambient interest rate
matters less than the rate of return, largely driven by inventory
rotations. This is not to deny the importance of interest rates.
Yet a unit of inventory yielding 10% per rotation and turning ten
times per year compounds to \(1.1^{10} \approx 2.59\), i.e. 159% per year. A
single company often sells products whose rates of return range
from 1% to 1,000% per year. This wide spectrum stems largely
from inventory rotations that range from daily to annual. As a
result of such extremes, the profit-maximizing quality of service
varies greatly from one product to the next. Stocking strategies
diverge accordingly.
An annual rate of return of 1,000%, while not atypical for the
flow of an isolated product within supply chain, exceeds by more
than an order of magnitude what finance deems “plausible”. Natu-
rally, this calculation ignores fixed costs—above all, infrastructure.
Once those costs are accounted for, effective rates of return in
supply chains align with those in the general economy.
Cash flow and working capital
Interest rates set the floor for the cost of liquidity; they therefore
matter to every supply chain. In practice, however, they usually
hover within a relatively narrow corridor—say roughly 10–20% per
annum—whereas inventory-cycle returns routinely span two orders
of magnitude. Consequently, what most often constrains planners
is not the precise price of money but its availability. A buyer who
can draw cash at 12% instead of 8% will still proceed if the stock
he finances compounds at 300% per year; conversely, a nominal
2% loan is useless once the credit line is exhausted. Cash-flow
stewardship is therefore critical—especially in businesses that must
wire funds months before the goods clear customs or reach the
shelf; fashion collections, holiday toys, and high-tech gadgets are
everyday examples.
Take, for instance, a firm whose bank extends a revolving
credit facility capped at 25% of the company’s average turnover
over the past three years, charged at a flat 10% per annum. Up
to that limit, cash can be drawn overnight; the instant the cap
is reached, however, every extra coin requires a fresh negotiation
and is subject to refusal. Alternative lifelines—equity injections,
mezzanine debt, supplier factoring—certainly exist, but none can
be secured overnight, and all carry significantly higher explicit and
implicit costs. Liquidity is therefore priced by a piecewise curve
rather than a single number, and any model that assumes one
uniform interest rate ignores precisely the nonlinearity that most
often bites in practice.
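Such a piecewise curve can be sketched as follows; the
revolving-credit terms mirror the example above, and all figures are
illustrative:

def marginal_borrowing_rate(drawn, avg_turnover_3yr):
    # Flat 10% per annum up to the cap (25% of average turnover);
    # beyond the cap, funds require a fresh negotiation and may be refused.
    cap = 0.25 * avg_turnover_3yr
    return 0.10 if drawn <= cap else float("inf")

print(marginal_borrowing_rate(2_000_000, 10_000_000))  # 0.1: within the cap
print(marginal_borrowing_rate(3_000_000, 10_000_000))  # inf: the cap is exhausted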
The nonlinearities associated with access to liquidity explain
why payment terms are often of prime importance in supply chain.
Those payment terms critically shape the amount of working capital
required by the company. In the long run, lower working-capital
requirements improve profitability by reducing interest paid to
lenders; in the short run, they often make the difference between a
company that operates smoothly and one forced into inferior choices
because better options are foreclosed by insufficient liquidity.
For example, a fashion brand may have to place purchase
orders for pieces in its new collection more than six months before
the season starts. Overseas suppliers may require substantial up-
front payments. If the fashion brand anticipates stronger sales for
this collection, it may issue larger purchase orders. However, the
company may be blocked by an insufficient credit line. In this
case, smaller purchase orders follow, possibly causing substantial
stockouts downstream. Naturally, from the lender’s perspective,
the cap is not arbitrary: the lender simply does not share the
brand’s optimism about the collection’s outsized success.
The flat-interest-rate assumption rarely holds, though there are
situations where it is an acceptable approximation. The simplest
case is when the company has access to liquidity that dwarfs
profitable supply chain opportunities. A slightly more subtle case
is when the company has negative working-capital requirements.
To illustrate, consider a retailer that keeps store inventory turn-
ing monthly with negligible supplier lead time. Assume suppliers
are paid two months after delivery, and products are sold in the
store with a 20% gross margin. Under those conditions, for every
four coins’ worth of goods held in store, the retailer has 2 coins
of working-capital requirement. Indeed, by the time the goods
must be paid, the inventory will have rotated twice on average,
yielding two coins of gross profit. Here the retailer has negative
working-capital requirements—at least from the perspective of cash
flow alone.
This example must not be confused with negative interest rates;
the retailer remains subject to market rates.
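A stylized replay of this cash cycle, under the assumptions above
(monthly rotations, suppliers paid two months after delivery, cost
of 4 coins and revenue of 5 coins per rotation):

cost, revenue = 4.0, 5.0  # coins per monthly rotation, 20% gross margin
cash = 0.0
for month in range(1, 13):
    cash += revenue            # the month's batch is sold at full price
    if month > 2:
        cash -= cost           # pay the batch delivered two months earlier
    assert cash > 0            # no external working capital is ever needed
print(cash)  # 20.0 coins of cumulated gross profit after a year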
Returning to the more typical case with a short-run cap on
working capital, one needs a numerical method to compare the
options. One can, conceptually, pick a feasible set of options that,
in aggregate, respects the working-capital constraint—an approach
akin to invoking a general solver. However, simpler heuristics can
be used, such as introducing a time-dependent component for the
cost of money.
Dependent options
The rate-of-return metric singles out the best next use of capital,
yet the options themselves rarely stand alone. Most tap the same
finite pools—cash, shelf space, vehicle capacity—and therefore
interact. The interactions can be competitive, when two SKUs vie
for the same slot; complementary, when coffee and sugar lift each
other’s sales; or sequential, when the second pallet only makes sense
after the first has been allocated. Recognizing these couplings is a
prerequisite for translating the metric into sound decisions.
A practical heuristic for dependent options is to let the rate
of return drive a rolling, one-by-one selection. Pick the option
with the highest current rate, update every remaining rate to re-
flect the resources just committed (or released), pick again, and
repeat until all constraints are binding. Computer scientists call
this a greedy algorithm; its outcome is a prioritized list. Because
each refresh propagates current constraints, the list need not stay
in perfect descending order—most rates drift downward as com-
petition stiffens; a few may even rise when a complementarity
is unlocked—yet every step remains the best move available at
that moment. Where couplings are mild, this greedy pass suffices;
where they are tight—vehicle routing or production sequencing—a
heavier solver that explores many combinatorial paths outperforms
the heuristic while resting on the same valuation bedrock.
For example, consider the challenge of replenishing a retail
store. Based on the rate of return, select one unit to add to the
store—the one that offers the highest rate. Once this one unit
has been, conceptually, added to the store, the rates of return
are adjusted. The next unit of the same product has a lower
rate of return, as piling up inventory in the same store exhibits
diminishing returns. A second unit is then picked based on the
revised rates of return. Repeat the process until the store’s shelf
capacity is reached.
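A minimal sketch of this greedy pass follows; the per-unit margins,
which shrink as units pile up on the shelf, are hypothetical figures:

PRODUCTS = {
    # Cost per unit, and the margin earned by the n-th unit on the shelf.
    "A": {"cost": 1.0, "margins": [2.0, 1.2, 0.5, 0.1]},
    "B": {"cost": 2.0, "margins": [1.8, 1.5, 1.0, 0.2]},
}
SHELF_CAPACITY = 5

def ror(product, units_stocked):
    # Rate of return of the next unit; diminishing returns as stock piles up.
    margins = PRODUCTS[product]["margins"]
    if units_stocked >= len(margins):
        return float("-inf")
    return margins[units_stocked] / PRODUCTS[product]["cost"]

stocked = {p: 0 for p in PRODUCTS}
prioritized_list = []
while sum(stocked.values()) < SHELF_CAPACITY:
    best = max(PRODUCTS, key=lambda p: ror(p, stocked[p]))  # pick the top rate
    if ror(best, stocked[best]) == float("-inf"):
        break  # no economically meaningful unit remains
    prioritized_list.append((best, round(ror(best, stocked[best]), 2)))
    stocked[best] += 1  # refresh the remaining rates, then pick again

print(prioritized_list)
# [('A', 2.0), ('A', 1.2), ('B', 0.9), ('B', 0.75), ('A', 0.5)]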
Greedy optimization offers several major benefits. First, it
requires nothing more than establishing the options’ rates of return.
Second, the process is simple and observable: the prioritized list
provides a complete picture of the algorithm’s progression. Third,
when properly implemented, it is typically orders of magnitude
faster than non-greedy alternatives.
A naive implementation of this greedy optimization logic incurs
quadratic computational cost relative to the number of compet-
ing options. For most supply chain situations with thousands
of options, quadratic cost is unacceptably high. However, meth-
ods—beyond the scope of this discussion—mitigate this overhead,
reducing complexity to quasi-linear time.
Most supply chain challenges are well suited to greedy or quasi-
greedy methods because the options are loosely coupled. This is
no accident: companies deliberately avoid systems that resemble
cryptographic puzzles; what is difficult for a computer is also
challenging for the human mind.
For instance, consider a specialized retailer selling three technical
products—A, B, and C—typically purchased together in a fixed ratio
(e.g., two units of A, five of B, seven of C) to form a kit. Instead
of tracking three separate items, the retailer treats the
preassembled kit K as a single inventory item,
while maintaining a small surplus of individual components for
exceptions. This consolidation reduces complexity and streamlines
replenishment.
However, greedy optimization is not suited to every supply-
chain scenario—vehicle routing and production scheduling are
notable exceptions. More generally, problems involving sequences
of highly interdependent decisions do not lend themselves to the
greedy method. That said, many—if not most—supply-chain
situations can be treated quasi-greedily with minor twists. For
example, when placing simultaneous purchase orders to multiple
suppliers—each with its own minimum order quantity (MOQ)—a
greedy strategy may yield infeasible results, such as a series of
incomplete orders that fail to meet thresholds under a limited
budget. Yet a minor heuristic suffices to address this limitation.
One can iteratively drop suppliers—starting with those yielding the
lowest returns—and redistribute the budget among the remaining
suppliers.
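The drop-and-redistribute heuristic can be sketched in a few lines;
the supplier figures (MOQs in coins, estimated return per coin) are
made up for illustration:

suppliers = {
    "S1": {"moq": 500.0, "ror": 0.30},
    "S2": {"moq": 400.0, "ror": 0.22},
    "S3": {"moq": 300.0, "ror": 0.10},
}
budget = 1000.0

# Drop the lowest-return suppliers until the remaining MOQs fit the
# budget, redistributing the freed budget among those that remain.
active = dict(suppliers)
while active and sum(s["moq"] for s in active.values()) > budget:
    worst = min(active, key=lambda name: active[name]["ror"])
    del active[worst]

print(sorted(active))  # ['S1', 'S2']: S3, offering the lowest return, is dropped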
While the complications arising from dependent options do not
undermine the principle of selecting options with the highest rate
of return, they may render naive selection methods ineffective. To
handle these complexities, dedicated optimization software—often
called solvers—is required. Notably, these solvers optimize against
a wide array of loss functions (the quantity to be minimized), a
topic addressed later in Chapter Decisions.
Uncertain rate of return
Interdependencies were the first hurdle; uncertainty, the second.
Even after mapping how options compete for cash, capacity, or
shelf space, we still do not know which numbers will materialize
when those options unfold. Demand, lead times, yields, and spot
prices all waver outside the planner’s control. Yet the guiding
principle endures: choose the option with the greatest economic
return—measured not at a point, but in expectation. In practice,
replace every deterministic payoff with its probability-weighted
counterpart: list plausible scenarios, assign each a likelihood, apply
the deterministic formula within each branch, then average the
results. This produces an expected rate of return that can be
compared across options as before.
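A sketch of the probability-weighted computation, with hypothetical
demand scenarios for a 30-day replenishment costing 100 coins:

def expected_ror(scenarios):
    # scenarios: list of (probability, revenues, spending, period_days).
    # Apply the deterministic formula within each branch, then average.
    return sum(p * (rev - spend) / (spend * period)
               for p, rev, spend, period in scenarios)

scenarios = [
    (0.6, 150.0, 100.0, 30),  # demand materializes as hoped
    (0.3, 110.0, 100.0, 30),  # sluggish demand
    (0.1,  60.0, 100.0, 30),  # markdowns required, sold at a loss
]
print(f"{expected_ror(scenarios):.2%} per day")  # 0.97% per day in expectation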
While conceptually straightforward, the probabilistic perspec-
tive presents its own challenges. First, probabilities must be esti-
mated—this is the essence of probabilistic forecasting. Second, the
future hinges on decisions not yet made: an option’s rate of return
depends on how future options will be selected. Choosing present
and future options under uncertainty is the essence of stochastic
optimization.
Probabilistic forecasting and stochastic optimization are techni-
cal subjects that will be discussed in Chapter The Future. Yet the
probabilistic view of the rate of return is no mere theoretical curios-
ity. With proper software, real-world supply chain options can and
should be evaluated by this method. Naturally, approximations
are unavoidable—the number of potential futures is infinite—but
this does not diminish the approach’s practical value.
4.4.4 Cronyism and regulatory capture
The profit motive is often caricatured as a cold-blooded quest for
“more at any cost”. In reality, within a competitive market, profit
is simply the applause earned for satisfying the most urgent wants
of strangers while spending the fewest scarce resources. Profit is
therefore an outcome, not a creed.
Behind the single word “profit” lie three radically different
behaviors. Genuine profit seeking earns margins by offering a
superior mix of availability, quality, and price. Long-run profit
therefore depends on nurturing goodwill with customers, suppliers,
and employees. Slash-and-burn tactics that delight today’s spread-
sheet but erode tomorrow’s options prove self-defeating once their
full rate of return is computed.
A second behavior is the “naked” profit motive popular culture
loves to condemn: dumping waste in rivers, bullying suppliers,
or bait-and-switching customers. Such conduct is not the logical
outcome of profit seeking; it is the outcome of faulty economic
calculation. When legal fines, lawsuits, talent flight, and brand
damage are brought back to present value, the mirage vanishes.
Modern supply chain analytics exists precisely to surface those
latent costs by turning them into shadow valuations that discipline
every decision.
The third behavior is crony capitalism and regulatory capture.
A firm that lobbies for tariffs, exclusive licenses, or mandatory
standards does not earn profit; it extracts rent. No customer is
served better—resources are merely diverted by political privilege.
Rents distort every supply chain signal: optionality shrinks, in-
novation stalls, and the apparent rate of return is propped up by
statute rather than performance.⁸

8. The sharp fall in U.S. freight rates after the deregulation of
trucking in 1980 (Motor Carrier Act of 1980, Public Law 96-296)
shows how quickly efficiency returns once artificial moats are
removed.
The discipline presented in this book targets the first path
while inoculating the firm against the latter two. All subsequent
techniques assume an open market: they benchmark each option
against what a vigilant competitor could do tomorrow, not against
a privilege secured in parliament yesterday. Restore genuine compe-
tition and the same optimization machinery becomes an engine of
wealth creation; let rents accumulate and no amount of forecasting
finesse can salvage the economics.
4.4.5 Supra-economic goals
A recurrent objection to profit-seeking firms is the appeal to supra-
economic goals—ends that allegedly outrank mere monetary con-
siderations and thus justify overriding the discipline of prices, costs,
and opportunity costs. These calls usually come in two flavors.
The first is moralistic: a conviction that the firm should advance
some ethical or social agenda beyond serving customers. The
second is apocalyptic: a prediction of looming catastrophe that
demands immediate sacrifice of profitability to avert disaster. Both
categories will be examined in the pages that follow.
For companies that operate in open markets, invoking a higher
purpose does not dissolve scarcity; it simply relabels trade-offs.
Every pallet, man-hour, and coin devoted to one objective is un-
avoidably withheld from another. Economic calculation is therefore
not an optional worldview but the only coherent way to compare
alternatives. Institutions that rely on coercion—armies, for ex-
ample—sit outside the scope of this book precisely because their
resources are not allocated through voluntary exchange. For all
other organizations, the alleged supremacy of supra-economic goals
is an illusion; in practice, they must still decide whether diverting
resources from their core mission creates or destroys long-run value.
Moralistic concerns
In supply chain terms, a moralistic concern asserts that people
should consume less—or more—of something; hence the company
should help steer the desired social change. Details vary by time
and place.
For example, in Ruritania, a dominant religion might declare
that milk consumption by adults is a sign of depravity, and that
milk should be reserved for children. In response, a Ruritanian
company might voluntarily reduce both the quantity and variety of
milk bottles marketed to adults, thereby proactively enacting the
seemingly desirable social change. Yet such a move will scarcely
affect the market. Contrary to the popular trope of “economic
power”, a company cannot force customers to buy or forgo its
products. If the company arbitrarily cuts production of adult-
branded milk bottles, a single competitor will raise output to keep
demand equally satisfied.
If, however, public sentiment in Ruritania against bottled milk
for adults is so strong that many customers boycott other unrelated
products, it may be reasonable to divest that line entirely—perhaps
by selling the unit if its production can be separated from other
lines. This approach differs fundamentally from merely lowering
supply in the misguided hope of lowering overall market demand.
Typical moralistic concerns include foods or beverages deemed
inappropriate, either for alleged harm to individuals or for harms
incurred during production. Veganism is, for example, one of the
more recent entries in an otherwise extremely diverse set of creeds.
These concerns also extend to country of origin, often grounded in
incorrect mercantilist⁹ views of trade (e.g., “we are at economic
war with Ruritania”) or in perceptions of work conditions in those
countries (e.g., “Ruritania lets its companies unduly exploit its
citizens”).
Supply chain is concerned with moralistic views only insofar as
serving a type of good, or using a particular factor of production,
might compromise its general capacity to operate. Brand damage
can alienate customers or diminish the company’s appeal to po-
tential employees. These concerns are real and can be quantified
just like any other risk. Economic calculations may indicate that
such risks are excessive, suggesting that the company should exit
a market segment to minimize future losses. However, these calcu-
lations should not be mistaken for the personal value judgments
of supply chain employees.
What makes the challenge thornier, however, is that employ-
ees¹⁰ have perverse incentives when it comes to moralistic concerns.
By publicly pressing a moralistic concern, in a process referred to
as virtue signaling, an employee presents himself as particularly
virtuous—more so than his peers. Virtue signaling was and re-
mains effective because corporate politics tend to reward those
who conform most to prevailing social norms rather than solely
those who drive the company’s profitability.
9. Mercantilism, which was the dominant economic view in Europe from
the 16th to the 18th century, presents countries as economic
adversaries. Exports are seen as favorable, while imports are seen
as unfavorable. Local businesses are frequently eager to promote a
mercantile perspective as it is a proven method to have the
political apparatus block, or at least hinder, foreign competition.
However, while countries can be at war in a geopolitical sense,
there is no such thing as “economic warfare”. Free markets are based
on voluntary transactions that only happen if both parties view the
transaction as beneficial. Mercantilism is another long-disproven
economic perspective that still remains popular to the present day.

10. The fate of the company and the fate of its employees are
distinct, and employees are fully aware of that. Even if a loyalty
program is in place through stock options or similar means, getting
an internal promotion is almost invariably more impactful for the
employee than whatever reward the loyalty program might offer. If an
employee has to choose between a promotion and the reward of the
loyalty program, it is relatively safe to bet that the promotion
will be chosen.
However, within a supply chain, it is not the virtue-signaling
individual who bears the cost of the revised course of action, but
the company itself—and, to some extent, the market, which will
be less adequately served until a competitor steps in to fill the gap.
Thus, corporate executives must remain vigilant against such virtue
signaling. Indeed, large companies are especially susceptible to
this class of antisocial behavior, which undermines the authority of
the nominal hierarchy and its ability to steer the company toward
serving customers.
Apocalyptic concerns
Throughout history, a fascination with annihilation has gripped
a significant portion of humanity. Exploring these morbid fasci-
nations and understanding why they have endured for millennia
is a matter for psychology, sociology, and history. Supply chains
are concerned only insofar as these fascinations interfere with the
conduct of business. History yields two observations: first, the
magnitude and impact of apocalyptic concerns have been vast;
second, their specifics have varied dramatically from decade to
decade. Let us review a short list of modern apocalypses once in
fashion.
Overpopulation (18th–19th century): The rapid popu-
lation growth during the first industrial revolution led nu-
merous intellectuals to believe that population growth would
inevitably outstrip food production, leading to widespread
famine and societal collapse. (See An Essay on the Principle
of Population (1798) by Thomas Malthus.)
Degeneracy of the Race (1920s–1930s): Fears of racial
and genetic degeneration, influenced by eugenics and pseudo-
scientific racial theories, predicted the decline of civilizations
due to the mixing of races. (See The Passing of the Great
Race (1916) by Madison Grant.)
Communist Takeover and Red Scare (1950s): The
rise of the Soviet Union and the spread of communism led
to fears of a global communist takeover, resulting in the
Red Scare and McCarthyism in the United States. (See The
Naked Communist (1958) by W. Cleon Skousen.)
Chemical Agriculture (1960s): The detrimental effects
of pesticides, particularly DDT, on the environment and
public health highlighted broader concerns about industrial
pollution and ecological collapse. (See Silent Spring (1962)
by Rachel Carson.)
Nuclear Winter and Global Cooling (1970s): Amid
the Cold War, fears of a nuclear winter—an aftermath of
nuclear war causing global cooling—merged with concerns
about a potential natural ice age. (See The Cooling: Has the
Next Ice Age Already Begun? (1976) by Lowell Ponte.)
Resource Depletion (1970s–1980s): Numerous predic-
tions were made that humanity was running out of essential
resources like oil and minerals, leading to societal collapse.
(See The Limits to Growth (1972) by Meadows et al.)
Access to Fresh Water (1980s–1990s): The depletion
and pollution of freshwater resources were predicted to lead
to severe shortages and conflicts over water. (See Cadillac
Desert: The American West and Its Disappearing Water
(1986) by Marc Reisner.)
Acid Rain (1990s): Industrial emissions causing acid rain
were feared to be devastating forests, lakes, and buildings,
leading to widespread ecological damage. (See Acid Rain:
Its Causes and its Effects on Inland Waters (1992) by B. J.
Mason.)
Ozone Depletion (1990s): The depletion of the ozone
layer due to refrigerant gases, leading to increased ultraviolet
radiation, skin cancer, and ecological disruptions. (See Ozone
Diplomacy: New Directions in Safeguarding the Planet (1991)
by Richard Elliot Benedick.)
Global Warming (2000s): Rising levels of “greenhouse”
gases were predicted to cause catastrophic climate changes,
including sea level rise, extreme weather, and loss of biodi-
versity. (See An Inconvenient Truth: The Crisis of Global
Warming (2006) by Al Gore.)
Climate Change (2010s–2020s): Building on the global
warming narrative, climate change encompasses broader con-
cerns about unpredictable and severe environmental impacts,
threatening the sustainability of human civilization. (See
The Uninhabitable Earth: Life After Warming (2019) by
David Wallace-Wells.)
This list spans roughly the past two centuries, though it could extend back
millennia. Its purpose is not to validate or refute these threats—
such judgments lie outside supply chain and within specialized
sciences. Rather, it illustrates the phenomenon’s magnitude and
the inconsistency of the threats’ particulars.
Apocalyptic concerns are supra-economic because the view is
that, if unaddressed, the world—and thus the market economy—
will end or be left in an irretrievably degraded state. By definition,
an apocalyptic concern overrides any individual’s particular want.
After all, no individual can be anything but insignificant when
weighed against the possibility of oblivion for a sizable portion of
mankind, and potentially all of it.
For apocalyptic fears that have fallen out of fashion, it is evident
that companies which historically allocated resources to support
those causes were severely misguided, resulting in significant self-
inflicted harm. Few managers would still argue that a hiring policy
explicitly discriminating against a minority religion reflects the
best economic interests of the company. Similarly, few managers
would nowadays argue that generous donations to charities and
political movements advocating eugenics genuinely contribute to
the betterment of mankind. Yet, not so long ago, both policies
were widely practiced by educated and well-intentioned people,
operating under what was considered firm “scientific” consensus.¹¹

11. For an extensive survey of disastrous ideas that enjoyed massive
popularity among intellectuals in their time, see Intellectuals and
Society (2010) by Thomas Sowell.
These apocalyptic concerns disrupt supply chains because their
supra-economic claim invalidates economic calculation. Companies,
however, still must allocate scarce resources. Apocalypse does not
abolish scarcity. There is no alternative to economic calculation
when it comes to satisfying customers. If a concern outranks
satisfying customers, why allocate anything less than all resources
to it?
As King Richard cries in Richard III, “A horse, a horse! My
kingdom for a horse!”
More generally, the notion that any concern over allocating
scarce resources enjoys supra-economic status, categorically super-
seding mundane concerns, reflects a profound misunderstanding of
economics. There is no alternative to profits and losses.
Non-profit organizations
In most Western countries, tax laws grant special exemptions to or-
ganizations styled “non-profit”, unlike those presumed “for-profit”.
Whether such tax exemptions and regulatory relief¹² benefit soci-
ety at large lies beyond supply chain. However, as far as general
economic understanding goes, this label is very unfortunate and
generates immense confusion among the general public. Worse,
many authors of supply chain books share the confusion and as-
cribe quasi-mystical qualities to non-profits, echoing popular but
misguided beliefs. These authors mistakenly believe that associa-
tions (i.e., so-called “non-profit” organizations) are exempt from
the principles of profits and losses. This is patently false. Associa-
tions operate under the same economic laws and, insofar as some
operate a supply chain, obey the same principles. Consequently,
a technique meant to improve supply chains is equally valid—or
equally invalid—regardless of taxation status.
12. Anticapitalism has been dominant among intellectuals for the
last 150 years or so. The pursuit of profits is seen as one of the
most serious defects of capitalism. The non-profit status supposedly
fixes this defect and thus must be rewarded with some sort of tax
exemption and/or regulatory relief with respect to obligations
imposed on regular for-profit businesses. Whether the pursuit of
profits is generally good or bad for society as a whole is a concern
of praxeology, the science of human action, not of supply chain. For
an extended discussion, see Human Action (1949) by Ludwig von Mises.
A few facts suffice. Both kinds of organizations are driven by
entrepreneurs. Some entrepreneurs who run for-profits fail and end
poorer than they started. Conversely, some who run non-profits
succeed and command salaries far exceeding the national average.
Some profitable companies distribute dividends, while others do not.
Both companies and associations see corruption scandals. Both
require customers to fund them.¹³ Customers have expectations,
and in both cases, if a competing organization delivers more for
less, customers favor the more efficient organization. Both expand
when they collect more than they spend; both go bankrupt when
they spend more than they collect.
13. Associations are frequently funded, at least in part, through
taxation in the form of government subsidies. This only makes the
government one of their customers. Associations routinely have
enterprise customers as well. The specifics of the customers are a
technicality that has no bearing on the general economic operating
principles governing an association.
Calling an association “non-profit” is a misnomer: profit and
loss are the basic facts of economics. The organization collects and
spends resources. If collections exceed outlays, there is profit. If
outlays exceed collections, there is loss. Profit and loss exist even
without an accountant. When an accountant keeps a ledger, the
situation is clarified; he creates neither profit nor loss, he merely
reveals them.
In the market economy, all actors strive to address customers’
most pressing wants. This is true for companies and associations
alike. For example, if customers are concerned with deforestation,
they may spend to plant trees, or have them planted. By giving to
an association dedicated to reforestation, customers make a value
judgment: they buy future forests; a world with more forests is their
reward. If presented with several equally dedicated organizations,
and one can grow twice as much forest per coin, customers will
favor it. If customers later discover that the association spent all
it collected on advertising and planted no trees, they will rightly
feel defrauded and likely abstain from further donations.
The absence of a direct correspondence between a customer’s
payment and the service rendered by an association is merely a
technicality. The same pattern appears in business: urban waste
collection is usually paid via an annual lump sum rather than per
service rendered, yet many such providers are for-profits.
Calling an organization “non-profit” is as nonsensical as calling
it “gravity-free”. Natural laws admit no objection; those that govern
economics are no exception. How profits are allocated—whether
distributed to shareholders, reinvested in further capital goods,
or used to raise employee wages—is a technical detail. Taxation
schemes materially affect these technicalities, but not in a way
categorically different from, say, subsidizing a sector such as solar
energy. It is merely one of many approaches by which governments
partition the market in distributing subsidies. Subsidies confer no
special economic power on associations.
Ultimately, entrepreneurs’ inner motivations remain unknow-
able to external observers, whether their organizations are for-profit
or non-profit. They may be genuinely moved by the improvement
of their fellow men, or merely chase vanity and the publicity that
attends a successful association. Men can lie; they can also change.
What starts in selfish intent may, over time, become genuine altru-
ism; the converse is also possible.
Thus, supply chain cannot categorically distinguish alleged non-
profits from for-profits. This distinction belongs to tax codes, not to
supply chain. Authors who claim that associations possess unique
supply chain properties are either engaging in virtue-signaling—
posturing with noble causes—or are misled by ideology, attributing
quasi-mystical qualities to these organizations.
Armed forces
Non-profit organizations illustrate how profit and loss remain
inescapable even under altruistic pretenses. Armed forces offer a
distinct case: an institution squarely within economics yet lacking
the usual for-profit feedback loops. Armies do not earn revenue
by selling services to a clientele; rather, they are funded through
taxation raised by a sovereign power. This reliance on state coercion
does not exempt them from scarcity; it only changes how resources
are allocated. Economic laws do not evaporate in the presence of
arms and uniforms: every bullet and barrel of fuel meets the same
constraints that govern the rest of the economy.
From an economic perspective, a “military” is among the most
extensive central-planning structures in modern societies. The
armed forces must maintain equipment, procure supplies, support
troops, and project power across various geographies. However,
these activities cannot be assessed using the profit-and-loss frame-
work applied to market actors. While a private company constantly
balances expenditures against customer payments, the military is
funded without voluntary exchange, and thus lacks direct market
discipline. The invisible hand that fosters efficiency in private
ventures is replaced by top-down directives meant to integrate
strategy rather than short-run financial returns.
Yet the absence of profit signals does not spare the armed forces
from the problems of economic calculation. As Ludwig von Mises
demonstrated in Socialism (1922) and Human Action (1949), when
a central planner attempts to guide resource allocation outside the
pricing process, there is no reliable gauge to separate profitable
from unprofitable uses. Without the discipline enforced by free
markets, misallocation is almost guaranteed. Such misallocations
remain hidden in peacetime—when the budget appears as a sin-
gle line item—but become evident when the armed forces face
shortages of critical gear or surpluses of obsolete equipment. The
same challenges afflict large companies running complex supply
chains, but armies face them on a grander scale, with life-or-death
consequences.
In modern times, defense budgets running into the billions
reflect complex trade-offs made at the highest political level. The
classical “guns versus butter” trade-off captures how each unit of resources devoted to the military is one not spent on civilian goods—schools,
hospitals, roads, or other productive ventures. The armed forces
do not “profit” when they fulfill their mission; rather, they receive
continued political support to do more of the same. When a
new generation of fighter jets or warships is commissioned, there
is no queue of customers whose willingness to pay would justify
the spending. Instead, the justification emerges from strategic
imperatives, real or perceived, that fall outside the marketplace’s
regular accountability mechanisms.
Naturally, militaries try to mitigate the pitfalls of central plan-
ning by resorting to surrogate metrics: “readiness”, “mission ca-
pability”, “force projection”, and other strategic or bureaucratic
indicators. While useful, these indicators are no substitute for the
immediacy of profit and loss. A victorious campaign produces no
revenue surplus for the armed forces, just as a protracted conflict
is not booked directly as a loss. Extended conflicts can yield
larger defense budgets and expanded infrastructure, absent any
clear demonstration of effective economic use. Decision-making
reverts to politics—what the public or its representatives deem
justifiable—rather than to trade-offs shaped by voluntary demand.
These constraints generate unique supply chain challenges for
the armed forces. Demand is highly uncertain and driven by
geopolitical events rather than consumer preferences. Large stock-
piles are maintained to hedge against crises, and lead times can
be extraordinarily long for specialized equipment such as nuclear
submarines or stealth aircraft. When an army lacks sufficient
capacity to produce critical components itself, it must turn to
private defense contractors. At that juncture, a portion of the
supply chain reenters the realm of market competition, at least
on paper: suppliers are paid according to negotiated prices, and
nations often try to pit multiple contractors against each other to
drive costs down. However, political interference and regulatory
complexities often distort these processes, leading to “cost-plus”
contracts and other arrangements that too often degenerate into
crony capitalism. As a result, while a veneer of market discipline
can exist, it is usually partial and overshadowed by bureaucratic
politics.
In the same vein, Hayek’s knowledge problem in The Use
of Knowledge in Society (1945) applies to the military as well.
Information relevant to resource allocation is widely dispersed.
Local units have up-to-date insights into their logistics and strategic
constraints, but higher-level commands make the grand allocation
decisions. Strict top-down hierarchies suppress bottom-up signals,
depriving upper echelons of the local knowledge needed to make
informed choices about procurement, deployment, or maintenance.
De facto, the arms industry acts as a closed microcosm of central
planning: no matter how competent or well-intentioned the officers
and bureaucrats, critical information is lost in transit, heightening
the risk of misallocation.
Nonetheless, all relevant resources remain scarce. Missing sup-
plies can sink entire campaigns, and, conversely, excessive stockpiles
of obsolete gear can hamper the military’s ability to finance mod-
ernizations. Moreover, as with any large-scale bureaucracy, routine
improvement efforts add layers of process: new committees, new
oversight boards, new planning steps. Over time, these layers
develop inertia of their own, making course corrections harder.
Without a profit test to cull waste, entire procurement lines may
survive long past their useful life. Politically favored programs
persist even when they yield little advantage on the battlefield.
The result, economically, is a structural inability to optimize resource
use. Militaries are not free to shed these constraints: they cannot
become agile forces guided by profit. They fulfill functions outside
voluntary collaboration, and the price of the arrangement is the
absence of direct market feedback.
A final peculiarity of the armed forces is that they are subject
to abrupt reorientation, typically following changes in political
leadership. Rebudgeting can be drastic, upward or downward,
depending on the perceived threat. Shifting alliances, new treaties,
or war-weariness can all reshuffle military priorities. While private
businesses in a free market also adjust to sudden shifts, they can
rely on profit signals to guide them toward better allocations. By
contrast, armies must redesign their supply chains according to top-
down instructions, incurring the overhead of a large bureaucracy
that must simultaneously produce new doctrines, scrap old ones,
and realign entire fleets of vehicles and equipment.
Thus, armed forces exemplify an extreme manifestation of the
same planning conundrums faced by large integrated enterprises.
The state does not measure the success of its armies by how
effectively they satisfy consumers’ wants; no direct channel lets
consumer dissatisfaction act on the military budget. Yet resources
remain scarce, and the need to distribute them persists, so the logic
of economics still applies. The absence of profit-and-loss signals
can be partly offset by surrogate metrics, but such substitutes offer
nothing close to the breadth and refinement provided by market
prices. The upshot is that, even if armies operate in a non-market
context, every conflict, every peacetime mobilization, and every
behind-the-scenes logistical preparation exists under the iron law
of scarcity. While the state of war is often treated as pure politics
or strategy, at its heart it is also an elaborate allocation problem
no decree can escape.
4.4.6 Solutions and trade-offs
There are no solutions. There are only trade-offs.
A Conflict of Visions (1987), Thomas Sowell.
Supply chains are strictly bound by the laws of economics.
Each allocation of a resource forecloses all alternative uses that
were readily available. A decision is correct only if there is no
better use for the resource; profitability is insufficient—it must be
maximal.
Any inventory allocation carries the opportunity cost of the
best alternative. In a retail network, once a warehouse unit is
dispatched to a store, it is no longer available to serve other stores.
If it was the last unit and it goes to a slow store while others sell
briskly, that allocation was likely a mistake. Better to route the
last unit to the store with the most urgent demand. Either way,
the unit will likely sell. Yet sending it where demand peaks aligns
the network with the customers’ most urgent wants. If the unit at
the mediocre store lingers, it may be returned to the warehouse
and redispatched to a more suitable store. However, this incurs
extra transportation costs, and while in transit the unit serves no
one.
Selling now trades off against the chance to sell more profitably
later. The later, more profitable outcome may not require a higher
price. For example, a screw bought at 1 coin and sold at 2 yields
1 coin of gross margin. However, if this screw—the last one in
inventory—belongs to a bundle (say, a machine sold for 100 coins
with component costs of 50), then selling the screw alone yields 1
coin now but forfeits 50 coins should a bundle buyer arrive before
more screws can be secured. If the firm expects a prompt bundle
buyer, selling the screw alone is the wrong supply chain decision
despite being profitable. The correct move is to keep the screw in
stock to preserve the bundle sale.
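A minimal sketch, not from the original text, makes this arithmetic explicit. The probability of a bundle buyer arriving before restocking is an assumed parameter; the other figures are the hypothetical ones above.

    # Hypothetical figures from the screw/bundle example above.
    screw_margin = 1.0    # gross margin (coins) if the last screw is sold alone
    bundle_margin = 50.0  # gross margin (coins) on the bundle: 100 sell price - 50 component costs

    def gain_sell_now() -> float:
        # Selling the screw alone pockets its margin but forfeits the bundle sale.
        return screw_margin

    def gain_hold(p_bundle_buyer: float) -> float:
        # Holding preserves the bundle sale, which materializes with probability p.
        return p_bundle_buyer * bundle_margin

    for p in (0.01, 0.02, 0.10):
        print(f"P(bundle buyer before restock) = {p:.0%}: "
              f"sell now = {gain_sell_now():.2f} coins, hold = {gain_hold(p):.2f} coins")
    # Holding wins as soon as p * 50 > 1, i.e. whenever p exceeds 2%.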
Resources are varied, and the resulting trade-offs are even more
so. Yet many supply chain authors reduce decisions to inventory
costs vs stockouts, cost vs cash vs service, or supply vs demand.
These low-dimensional perspectives are misguided. By elevating
a few engineering trade-offs—rooted more in bookkeeping than
in economics—to first principles, they obscure the only universal
yardstick: economic calculation.
Across any industry, supply chain is a continuous walk along a
tightrope of mutually exclusive improvements. Drop the ticket price
and volumes usually spike, yet every percentage point conceded
bites straight into the gross margin—and, in luxury segments, can
cheapen the brand so severely that unsold units end up destroyed
rather than discounted. Broaden the assortment with extra colors
or sizes and customers applaud, but procurement, storage, and
quality-control costs climb just as fast. Add backup suppliers to
hedge late deliveries, and the resilience is bought with weaker
purchasing leverage. Keep stores open longer and extra footfall
appears—along with a fatter wage bill. Hire temporary pickers to
clear a seasonal spike and throughput improves while errors and
shrinkage rise. Switch to a boutique component maker and the
finished good feels premium—provided the small plant does not
miss its slot. Relax the return policy to nurture loyalty and reverse-
logistics expense balloons. Rotate perishables daily to maximize
freshness and the shelf looks immaculate, yet deliberately thinner
inventory guarantees more stockouts. Every lever carries this dual
character; planners juggle dozens of such dilemmas at any moment.
This list could be extended almost endlessly across every ver-
tical and its specific trade-offs. Moreover, nearly every named
resource can be further decomposed. The label is handy but
ultimately an approximation. For example, let’s say that the ware-
house capacity is 10,000 m². In reality, not all of those m² are equal. Some sit near loading docks or conveyor endpoints. Capacity may be refined to 2,000 “premium” m² and 8,000 “regular” m².
One can always impose an arbitrary taxonomy on resources
and trade-offs with as many classes as one fancies. Within a given
company, such a taxonomy may ease communication. Across the
market, such exercises are largely pointless: they yield no better results than the common tongue.
The only practical approach to these trade-offs is economic
calculation. Economic calculation balances these contradictions to
reach an adequate supply chain decision. It also makes clear that
no decision is an absolute good—a pure source of profit—but a
mix of expected gains and losses.
4.5 Valuation concerns
Supply-chain decisions are, at bottom, wagers: each one stakes a
scarce resource today for a stream of uncertain consequences to-
morrow. Trying to evaluate all those wagers in a single, monolithic
calculation is hopeless—the decision space is far too large and
changes too quickly. The practical remedy is to slice the grand cal-
culation into bite-sized fragments—one fragment per option—and
to equip every fragment with a valuation. A valuation is not an ac-
counting curiosity but an instrument: a money-denominated score
that renders options commensurable, so they can be compared,
ranked, and picked piece-by-piece rather than all at once. Good
valuations expose opportunity costs and power the optimization
engines introduced earlier; flawed ones quietly steer the firm toward
systematic misallocation.
Seen in this light, valuations form the indispensable bridge
between the concrete world of pallets and trucks and the ab-
stract realm of economic calculation. They translate heterogeneous
events—an extra day of shelf life, an irritated customer, a late ven-
dor payment, a congested pick aisle—into one scale that software
can manipulate at millisecond pace. Building this bridge always
follows the same route.
First, the company frames an explicit economic model that
makes clear which flows matter and how they connect to profit and
loss. Second, it attributes every observed coin to the decisions that
generated it, so each micro-choice bears its fair share of shared
costs and benefits. Third, it projects those cash flows forward,
embracing uncertainty and adding shadow adjustments where a
bare ledger would miss material risks or benefits such as goodwill
or brand damage.
The next subsections examine these three steps in turn. They
show how an explicit economic model anchors the calculation,
why attribution is both unavoidable and contentious, and how
projections—augmented by shadow and private valuations—turn
yesterday’s data into tomorrow’s actionable insight.
4.5.1 The economic model
The economic calculation starts with an explicit economic model:
a concise yet transparent mapping that links every operational
choice—buy, make, move, hold, price—to its projected financial
footprint. In practice, the model translates the flow of atoms into
flows of coins by specifying four elements: the unit of analysis
(SKU, pallet, machine-hour, etc.), the time horizon over which con-
sequences unfold, the attribution rules that assign shared costs and
revenues, and the discounting convention that reconciles different
dates. Its purpose is not prophetic accuracy; it is to surface the
otherwise invisible trade-offs so that mutually exclusive options
can be ranked on a single yardstick—risk-adjusted return. Because
supply chains operate on several intertwined time-scales, multiple
sub-models usually coexist: a short-cycle replenishment model
centered on inventory rotations, a procurement model encoding
supplier rebates and MOQs, a capacity model showing how pro-
duction constraints convert into cash. Each is a facet of the same
discipline: turning physical decisions into discounted cash flows.
For example, one of the simplest economic models is that of the
pure trader. The pure trader merely buys and sells goods without
transporting them, often leaving custody with the seller for an
agreed period. The strategy of the trader consists of buying low
and selling high. The relevant model reduces to projected gross
margin: per-unit sell price minus per-unit buy price. Variability
comes from the sell price, unknown at purchase time. Pure traders
serve various functions in the market, such as protecting sellers
against price variability or sparing them interactions with numerous
smaller clients.
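As a toy illustration, assuming a handful of sell-price scenarios in place of a full probabilistic forecast, the pure trader’s economic model fits in a few lines:

    import random

    def pure_trader_valuation(buy_price: float, projected_sell_prices: list[float]) -> float:
        # Expected per-unit gross margin; the sell price, unknown at purchase time,
        # is represented by scenarios (a crude stand-in for a probabilistic forecast).
        return sum(p - buy_price for p in projected_sell_prices) / len(projected_sell_prices)

    # Hypothetical deal: buy at 10 coins, sell price uncertain between 9 and 14 coins.
    scenarios = [random.uniform(9, 14) for _ in range(10_000)]
    print(f"Expected gross margin: {pure_trader_valuation(10.0, scenarios):.2f} coins per unit")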
While the pure trader is an extreme simplification, real-world
supply chains invariably entail numerous subtleties. The economic
model is always approximate. Some approximations are intentional,
as they help to keep the model manageable. For example, all goods
are perishable: given enough time, even so-called stainless steel
rusts. Yet for many goods, shelf-life is so long that perishability is
negligible. In such cases, omitting perishability from the model is
reasonable.
Other approximations are unintentional, reflecting operational
uncertainty. Knowledge can be refined, but it will remain in-
complete. For example, stock may harbor defects noticed only
by customers, some of whom later request refunds or exchanges.
Quality control may push defects to near-negligible levels, yet the
possibility remains and knowledge never becomes absolute.
Designing the model—including its structure and intentional
approximations—is a problem of general intelligence. Those prob-
lems do not lend themselves to “mechanistic” resolutions. The
adequacy of the model is always a matter of high-level judgment.
One can empirically falsify an economic model—or at least quan-
tify its approximations and judge them excessive—but designing
the model itself requires higher forms of reason. Guidance and
principles can be provided, but no concise formal rulebook will
accomplish this exercise satisfactorily.
Yet while the model’s design remains a matter of general in-
telligence, a great deal of automation can still be accomplished.
The model’s evaluation and its use in decision-making can be fully
automated. The model must be revised as the company and the
market evolve; however, this process is usually slow compared
with the fluctuations in the flow of goods and materials. For most
companies, a relevant economic model lasts months if not years; a
flow datum—such as a stock level—lasts hours.
4.5.2 Implicit valuations
Many companies sidestep the problem of economic calculation
for supply chain by resorting to percentage-based policies and
similar noneconomic methods. Those policies govern decision-
making in supply chain. They typically mix manual and automated
operations without explicitly adopting an economic perspective.
The sole redeeming quality of this approach is avoiding the overhead
associated with economic calculation itself. For the rest, no matter
how much success the company enjoys from factors independent
of its supply chain—such as superior branding—this approach can
no longer be considered anything but backward.
For a microbusiness—a single grocery, a family-run workshop,
a corner repair shop—the absence of formal valuations is not
backward; it is pragmatic. The owner’s scarcest resource is personal
time: every hour spent assembling a spreadsheet competes with
wiping down shelves, talking to customers, or simply keeping the
lights on. When two hours of arithmetic can at best trim a handful
of coins from next week’s purchase order, those hours deliver a
higher, safer return when invested in upkeep or direct selling. In
that setting, simple rules of thumb—“re-order when only two units
remain”, “never let the bread rack go empty”—approximate the
optimum well enough. Once volumes, products, and locations grow,
the arithmetic flips: the cost of misallocation quickly outstrips
clerical overhead. Beyond that threshold, clinging to implicit
valuations is no longer frugality; it is self-sabotage.
With this caveat settled, let us return to the main thread of
our argument. Safety stocks, min/max reorder points, and time-
based buffers are archetypal noneconomic policies. Each spits out
a number—percentage, unit count, or “days of demand”—that
triggers an order, yet none carries a valuation expressed in coin.
They are calibrated to hit engineering targets—fill rate, cycle time,
visual comfort—without ever asking what the next unit is worth.
Capital cost, margin variance, and opportunity cost are nowhere
in the formulas. A company can execute such policies flawlessly
and still misallocate resources, because the yardstick they optimize
is disconnected from profit and loss.
However much these policies ignore economic calculation, prof-
its and losses still occur; the calculation can be evaded—its conse-
quences cannot. For example, there is a common misconception
that percentages should be steered toward 100%, as with service
levels steering safety stocks. There are obvious diminishing returns to increasing the quantities held in stock. At some point, the marginal write-off cost of excess inventory equals the marginal stockout cost avoided; beyond that, further increases in stock levels generate losses.
are used, there is no reason to believe their internal parameters
are close to the settings that would maximize returns.
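The point can be illustrated numerically. The sketch below is a toy with assumed costs and Poisson demand, not a formula from the text; it compares the stock level mandated by a 99% service-level rule with the stock level that minimizes expected losses once write-off and stockout costs are both priced in.

    from math import exp, factorial

    # Hypothetical economics of one replenishment cycle for a perishable SKU.
    mean_demand = 20       # units per cycle, modeled as Poisson
    writeoff_cost = 1.0    # coins lost per unsold unit at the end of the cycle
    stockout_cost = 2.0    # coins of margin and goodwill lost per unit of unmet demand

    pmf = [exp(-mean_demand) * mean_demand ** d / factorial(d) for d in range(100)]

    def expected_loss(stock: int) -> float:
        # Expected write-off cost on leftovers plus expected stockout cost on shortfalls.
        return sum(p * (max(stock - d, 0) * writeoff_cost + max(d - stock, 0) * stockout_cost)
                   for d, p in enumerate(pmf))

    best_stock = min(range(60), key=expected_loss)
    stock_99 = next(s for s in range(60) if sum(pmf[: s + 1]) >= 0.99)

    print(f"Loss-minimizing stock: {best_stock} units -> {expected_loss(best_stock):.2f} coins of expected loss")
    print(f"99% service-level stock: {stock_99} units -> {expected_loss(stock_99):.2f} coins of expected loss")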
As a silver lining, those settings are unlikely to be catastrophic:
if they were, the company would have ceased operations. Survivor-
ship bias looms large: only companies that avoided bankruptcy
remain observable; the rest, by definition, are gone. Thus, to some
extent, a noneconomic policy that has kept a company solvent over
the years demonstrates some merit.
Conceptually, every noneconomic policy can be rewritten as
an economic one. Introduce a series of valuations chosen not for
their adequacy to real profits and losses, but for their capacity to
numerically yield the same decisions. This exercise clarifies the
fundamental defect of noneconomic policies: they are economic
policies built on flawed valuations.
Thus, any noneconomic policy can be improved by first rewrit-
ing it explicitly as an economic policy, then bringing the valuations
closer to their real values. Such a roundabout course is mostly
of theoretical interest; in practice, there is rarely any justification
for addressing the situation so convolutedly. It may occasionally
be warranted if the legacy transactional system is inflexible and
can operate only on safety stocks, for example. In that case, if
the company will not or cannot upgrade its system, it is forced
into the tedious process of making implicit valuations explicit and
readjusting them to more sensible values. This process involves
substantial reverse engineering, is prone to implementation errors,
inflates maintenance costs, and must remain an option of last
resort.
With noneconomic flow policies now reframed as crude—often
faulty—valuations, two practical hurdles still stand between book-
keeping and actionable micro-decisions. Attribution asks: “Which
past coins belong to which past choices?”—it untangles the causal
threads that weave costs and revenues together. Projection then
asks: “Given those threads, what pattern of coins will tomorrow
most likely bring?”—it pushes the picture forward under uncer-
tainty. The next two subsections tackle these twin challenges in
that order.
4.5.3 Spending and revenue attribution
Attribution reconnects every coin that flows through the ledger
to the operational choice that precipitated it. In the world of the
pure trader, the mapping is trivial: one purchase, one sale, one
gross margin. The moment a company stores, routes, transforms,
or promotes goods, this one-to-one symmetry vanishes. A single
inbound container replenishes hundreds of SKUs; one markdown
lifts a whole category; a missing gasket halts an entire assembly
line. Unless those braided effects are disentangled, the “value” of
shipping one extra pallet or shaving a day off lead time is anyone’s
guess. Attribution therefore provides the quantitative causality
that turns raw ledger records into decision-ready data and makes
a microanalytical, option-by-option evaluation possible.
Let us consider the problem of establishing the per-unit pur-
chase price for a series of products that can all be acquired from
the same supplier. This per-unit purchase price is critical for
deciding whether units should be acquired from this supplier or
from a competitor. In our example, the supplier puts unit prices
on display; however, those tag prices come with strings attached.
Indeed, the supplier charges a flat delivery fee, reflecting the cost
of dispatching a truck. The client company faces a trade-off: small
orders with low inventory costs and high delivery fees; large orders
with high inventory costs and low delivery fees. Whatever the size
and frequency of purchase orders, the supplier’s invoices include
a mix of item lines and delivery lines. The attribution problem
consists of deciding how those lines should be allocated to the
calculation of per-unit purchase prices.
For clarity, assume the client orders two products: one in
large volume and the other in small volume. In practice, the only
“economical” way to order the second product is to piggyback on
an order that includes many units of the first product. The second
product “tags along” in the order stream with the first one. In this
situation, it is tempting to allocate delivery fees to both products
in proportion to spend. Thus, nearly all the delivery fees would
be allocated to the first product, leaving only a small portion for
the second. Based on this attribution, both products will see their
“adjusted” purchase price inflated by a fixed percentage. However,
this line of reasoning ignores the coupling of the two products. If
the company switches the first product to a cheaper supplier, it
might end up worse off, because savings on the first product are
offset by delivery fees still incurred to order the second product.
Such a decision effectively inflates the per-unit price of the second
product.
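A small numerical sketch, with hypothetical figures, shows how spend-proportional attribution hides this coupling: per the adjusted prices, switching the high-volume product to a cheaper supplier looks profitable, yet the delivery fee still triggered by the low-volume product erases the savings.

    # Hypothetical order: product A is high volume, product B merely tags along.
    qty_a, price_a = 100, 10.0   # units and tag price (coins) at the incumbent supplier
    qty_b, price_b = 2, 10.0
    delivery_fee = 50.0          # flat fee per truck dispatched by a supplier

    # Spend-proportional attribution of the delivery fee.
    spend_a, spend_b = qty_a * price_a, qty_b * price_b
    fee_a = delivery_fee * spend_a / (spend_a + spend_b)
    fee_b = delivery_fee - fee_a
    print(f"Adjusted unit price A: {(spend_a + fee_a) / qty_a:.2f} coins")  # ~10.49
    print(f"Adjusted unit price B: {(spend_b + fee_b) / qty_b:.2f} coins")  # ~10.49

    # A rival quotes product A at 9.8 coins per unit, with the same 50-coin delivery fee.
    # Against the 10.49 adjusted price, switching A looks profitable per unit, yet:
    cost_single_supplier = spend_a + spend_b + delivery_fee           # 1,070 coins
    cost_split = qty_a * 9.8 + delivery_fee + spend_b + delivery_fee  # 1,100 coins: B still needs its own truck
    print(f"One supplier: {cost_single_supplier:.0f} coins, split: {cost_split:.0f} coins")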
The attribution problem is not, however, restricted to spend-
ing; it also impacts revenues. For example, let us consider a store
offering a substantial discount on a given product. Revenue for
this product spikes as more units are sold, albeit with a lower gross
margin; the promotional effect does not stop there. Customers
also purchase complementary products, prompted by their initial
interest in the discounted product. Thus, revenues for several
complementary products rise as well. It may appear that these rev-
enues were generated at full price, but this perspective is misleading.
Without the discount on the “flagship” product, those other prod-
ucts would not have been purchased to the same extent. Thus,
instead of naively assigning the entire revenue contraction—caused
by the discount—to the first product, it is more correct to spread
this contraction across all complementary products in proportion
to their indirect promotion.
More generally, in supply chain, couplings are ubiquitous: items
are jointly purchased, transported, stored, transformed, and sold.
Each of these operations involves fixed or shared costs. Conversely,
nearly all mechanisms generating revenues come with spillover
effects: brand awareness and loyalty, impulse buying beyond the
original intent, etc. Attribution reorganizes spending and revenues
in ways more closely aligned with decision-making. As such, at-
tribution is one of the key mechanisms to perform the economic
calculations needed to identify the best options available to the
company. To a large degree, attribution defines the very structure
of the economic calculation.
Attribution cannot be mathematically proven correct; it can
only be shown incorrect if it conjures spending or revenues out of
thin air, or if the underlying arithmetic is faulty. However, those
issues are relatively trivial, and even flawed attributions may still
meet such a basic standard of “correctness”.
In most countries, there exist a great many rules, usually of
the accounting variety, to perform such attributions. For example,
those rules may aim to prevent “selling at a loss”—according
to some semi-arbitrary definition of “losses”.[14] Those rules are
intended to fulfill taxation or regulatory purposes. Their validity
does not extend any further than their original goal. It would be a
grave mistake to recycle those rules for analytical purposes. Supply
chain must abide by regulatory requirements, but that is all.
Every attribution scheme comes with its own distortions. From
an informational perspective, attribution is a dimension-reduction
technique. Back to the high-volume/low-volume example above,
we have two item prices and one delivery fee. By attributing
the delivery fee to the products themselves, we project a three-
dimensional situation into a two-dimensional space. This projection
simplifies the decision-making process, as each product gets a
shadow price reflecting the delivery fee. However, it may lead to
misguided decisions if those shadow prices misrepresent the original
situation. In general, folding dimensions cannot perfectly preserve
all properties of the original space.
Optimization techniques capable of operating in high dimen-
sions may alleviate, or even eliminate, the attribution problem.
For example, if we have a numerical recipe capable of directly
handling the six dimensions of a pair of suppliers competing for
the same two products (i.e., four per-unit prices and two delivery
fees), then no attribution is needed: the recipe embraces the higher
dimensionality of the problem.
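As a toy illustration of this point, with assumed figures, a brute-force recipe can score every way of splitting the two products across the two suppliers directly on total cost, without ever attributing the delivery fees to individual products:

    from itertools import product as cartesian

    # Hypothetical six-dimensional problem: two suppliers, two products, two delivery fees.
    unit_price = {("S1", "A"): 10.0, ("S1", "B"): 10.0,
                  ("S2", "A"): 9.8,  ("S2", "B"): 11.0}
    delivery_fee = {"S1": 50.0, "S2": 50.0}
    qty = {"A": 100, "B": 2}

    def total_cost(assignment: dict) -> float:
        # assignment maps each product to the supplier it is ordered from.
        goods = sum(qty[p] * unit_price[(s, p)] for p, s in assignment.items())
        fees = sum(delivery_fee[s] for s in set(assignment.values()))
        return goods + fees

    options = [dict(zip(qty, combo)) for combo in cartesian(["S1", "S2"], repeat=len(qty))]
    best = min(options, key=total_cost)
    print("Cheapest split:", best, "->", total_cost(best), "coins")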
Attribution belongs to economics rather than economic history,
in that it takes the accuracy and completeness of records as given. It
simplifies the problem space—through dimension reduction—while
maintaining a relevant economic model, i.e. one that reliably represents spending and revenue baselines for valuing options. Once those baselines are established, one must proceed to the second mechanism: projection.

[14] Those rules invariably neglect spending on research and development, hence underestimating the break-even point, and they invariably neglect revenues from complementary products or enhanced brand awareness, hence overestimating it. In short, those rules are nonsense.
4.5.4 Projected spending and revenues
Economic history must not be confused with economics proper.
While past spending and revenues can be made certain—aside from
occasional clerical errors—future spending and revenues are always
uncertain. Their uncertainty is irreducible. No matter how recent
or numerous the transactional observations, they can never tell
the complete story of what will come to pass, because the future
depends on decisions that have yet to be made. Some of those
decisions will be made by the company itself; others will be made
by third parties—customers and suppliers, for example.
As supply chain options are picked for their assessed economic
returns, they ultimately rely on projected (and therefore inherently
imperfect) valuations. There are situations where valuation dis-
crepancies between the recent past and the near future are deemed
insignificant, but those situations are rare. The opposite dominates
supply chain. Let us briefly illustrate this proposition.
When a retailer decides to add one more unit—tagged at one
coin—to the shelf, two complementary projections must be made.
First come the projected outflows. From the instant the item
enters the store until the day it leaves, it ties up capital and
absorbs carrying costs: rent for the floor space it occupies, wages
for staff who move or count it, utilities, shrinkage, insurance, and
the opportunity cost of cash immobilized in the unit itself. These
expenses accrue day after day, and their expected value is strictly
positive; were they truly negligible, the rational strategy would be
to expand the sales area without bound—a reductio ad absurdum
that confirms the costs are real and material.
Second come the projected inflows. The one-coin label is only a
candidate price: if demand lags, the unit may have to be discounted;
if it degrades, it may be written off; if competitors undercut, the
store may follow; and, in the opposite direction, a sudden demand
surge or general inflation can push the realized price above the tag.
Thus the expected cash recovered at sale is almost never exactly
one coin—most often lower, occasionally higher. Only after both
streams are discounted to present value can the retailer decide
whether stocking that extra unit raises or lowers long-run profit.
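A crude sketch of the two projections, with hypothetical parameters throughout: daily carrying costs accrue until the unit leaves the store, the realized price is uncertain, and both streams are discounted to present value.

    # Hypothetical projection for one extra unit tagged at 1.0 coin.
    daily_carrying_cost = 0.002   # rent, wages, utilities, shrinkage, cost of capital per day
    daily_discount = 0.9995       # discount factor per day (time value of money)
    sale_scenarios = [            # (day of sale, realized price, probability)
        (10, 1.00, 0.50),         # sells at tag price within ten days
        (45, 0.70, 0.30),         # lingers, then sells at a discount
        (90, 0.00, 0.20),         # never sells, written off
    ]

    def npv_of_stocking() -> float:
        npv = 0.0
        for day, price, prob in sale_scenarios:
            inflow = price * daily_discount ** day
            outflow = sum(daily_carrying_cost * daily_discount ** d for d in range(day))
            npv += prob * (inflow - outflow)
        return npv

    print(f"Expected discounted cash recovered, net of carrying costs: {npv_of_stocking():.3f} coins")
    # Comparing this figure with the unit's acquisition cost tells whether the
    # extra unit raises or lowers long-run profit.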
More generally, the projection problem is as thorny as the
attribution one. While the example above is straightforward—the
measurement is merely delayed by one inventory cycle—many
concerns are more elusive empirically. Yet, those concerns can also
prove to be critically important. There is no correlation between
ease of measurement and criticality for the business.
Let us consider a fashion brand discounting at season’s end to
liquidate the current collection. There are two distinct forces at
play. First, the discount increases the sales volume, but at the
expense of the gross margin. Second, the discount trains customers
to expect future comparable discounts, eroding willingness to pay:
some will delay purchases until the sales periods. This erosion may be slow; yet, given enough time, its effect on the brand’s revenues is considerable. It may well dwarf the first effect for many, if not most,
brands. Yet, while quantifying such erosion is not impossible, it is
far from being straightforward.
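A back-of-the-envelope sketch, with all parameters assumed, contrasts the two forces: the immediate volume lift is easy to read off the transaction log, while the erosion of willingness to pay compounds quietly over the following seasons.

    # Hypothetical end-of-season discount for a fashion brand.
    full_margin = 40.0         # coins of gross margin per unit at full price
    units_full_price = 1_000   # seasonal volume sold at full price without any discount
    discount_margin = 15.0     # coins of gross margin per unit once discounted
    extra_units = 400          # additional units sold this season thanks to the discount
    erosion_per_season = 0.03  # share of full-price buyers newly trained to wait for the sales

    immediate_gain = extra_units * discount_margin  # first force: +6,000 coins this season

    eroded_margin, buyers_waiting = 0.0, 0.0
    for season in range(10):   # second force, compounded over ten seasons
        buyers_waiting = min(1.0, buyers_waiting + erosion_per_season)
        eroded_margin += buyers_waiting * units_full_price * (full_margin - discount_margin)

    print(f"Immediate gain from the discount:        {immediate_gain:10,.0f} coins")
    print(f"Margin eroded over the next ten seasons: {eroded_margin:10,.0f} coins")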
Every valuation involves a projection—a numerical statement
concerning the future, a “forecast” not restricted to demand alone.
Historical spending, revenues, and their attributions provide the
baselines on which to compute these projections. However, histor-
ical baselines should not be equated with their projections. The
future is contingent on decisions that have not been made. Any
projection that appears accurate can become completely inaccurate
if subsequent decisions are revised. This problem is fundamental
and cannot be eliminated by statistical methods alone, however
sophisticated; it can be mitigated by methods that acknowledge
inherent uncertainty.
The economic performance of a supply chain decision largely
depends on the quality and reliability of its underlying projections.
While addressing this challenge typically involves a great deal of
statistics, it is an error to treat it as a purely statistical problem.
Indeed, a projection must, first and foremost, be assessed on
its economic relevance—a criterion more demanding than mere
statistical accuracy.
The remainder of this section zooms in on two special categories
of projections that deserve dedicated treatment. Shadow valuations
come first: they are numbers that do not appear in any ledger
line yet must be conjured into existence to capture elusive but
economically decisive effects. We then turn to private valuations,
the internal transfer prices applied to intermediate components
that never face the outside market yet still compete for cash, space,
and attention inside the firm. Both families follow the general
logic just established, yet each brings its own pitfalls and practical
techniques.
Shadow valuations
The economic impact of a decision is usually tallied by projecting
the cash spent today and the cash earned tomorrow. Yet many
second-order effects—lost goodwill, supplier distrust, employee
fatigue—never hit the ledger, even though they later translate into
hard cash. Ignoring them yields a caricature of a short-sighted,
spreadsheet-driven accountant who “optimizes” this quarter while
quietly destroying next year. To escape that trap, practitioners
must introduce shadow valuations—explicit, money-denominated
estimates that render invisible consequences tangible. These figures
are assumptions—there is no historical baseline for them—but
without such adjustments, the calculation collapses into naive
cost-cutting that sacrifices long-term optionality and, ultimately,
profit.
Consider a retail example: if a customer does not find what he
expected in a store, his purchase is at least delayed—and may even
be canceled if he loses interest. He may instead visit a competitor,
find the product, and eventually prefer that store. Such service
failures occur silently, leaving no trace in transaction records, yet
they have real consequences—both short- and long-term revenues
suffer.
If a retailer values its inventory solely on observed sales, it
overlooks how inadequate service erodes its customer base. In
other words, by undervaluing inventory—and setting aside shelf-
capacity constraints—the store ends up holding suboptimal stock.
Recognizing this, the company may introduce a “stockout penalty”,
a shadow valuation designed to capture the long-term cost of
lost sales. When incorporated into the economic calculation, this
penalty raises the perceived value of holding inventory, encouraging
the store to carry more.
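A minimal sketch, with assumed numbers, of how such a penalty enters the per-unit economic score: without it, the marginal unit looks unprofitable and the shelf runs lean; with it, the long-term cost of disappointing a customer tilts the decision toward carrying the unit.

    # Hypothetical valuation of stocking one more unit of a given SKU.
    gross_margin = 0.30       # coins earned if the unit sells during the period
    carrying_cost = 0.25      # coins of rent, capital, and shrinkage over the period
    p_sell = 0.70             # probability that a customer asks for the unit
    stockout_penalty = 0.80   # shadow valuation: long-term cost of one disappointed customer

    score_naive = p_sell * gross_margin - carrying_cost
    # Stocking the unit also avoids turning away the would-be buyer.
    score_with_shadow = p_sell * (gross_margin + stockout_penalty) - carrying_cost

    print(f"Naive score:             {score_naive:+.2f} coins -> do not stock")
    print(f"With the shadow penalty: {score_with_shadow:+.2f} coins -> stock the unit")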
More generally, the goodwill[15] that a company enjoys with
its customers, suppliers, and employees is best captured through
shadow valuations. Although specifics vary across organizations,
most supply chain decisions affect goodwill. Therefore, incorpo-
rating shadow valuations better accounts for these broader conse-
quences when evaluating options.
For example, suppliers may be harmed by erratic purchasing—
fluctuating order sizes and inconsistent frequencies. By smoothing
order flow and making demand more predictable, companies en-
courage suppliers to lower their prices. In this context, a shadow
valuation that penalizes order variability steers purchasing behavior
in the desired direction.
Similarly, employees suffer under excessive workloads. For
instance, if a grocery store’s staff receives more items in a day
than can be shelved during regular hours, they may be forced
into unplanned overtime. Moreover, inventory left scattered rather
than properly arranged frustrates both employees and customers.
A shadow valuation that penalizes excessive delivery volumes helps
smooth warehouse dispatches.
In practice, shadow valuations are often rough estimates—and
that is acceptable; being approximately right is preferable to being
exactly wrong. Overlooking elusive yet consequential factors is misguided. This oversight leads to “idiot savant” solutions, which may exhibit great technical sophistication while still missing the point. Although we will later discuss how experimental optimization offers a more robust solution for valuations, for now informed approximation is preferable to ignoring significant factors.

[15] In this context, “goodwill” refers to the common meaning—friendly, helpful, or cooperative attitudes—rather than the accounting definition of an intangible asset. The shadow valuations discussed here are inherently linked to decision-making.
Private valuations
A fundamental feature of the market economy is the establishment
of prices. For finished or “sellable” goods, companies must operate
within the constraints set by market prices. A company can
steer its prices downward—by innovations that slash costs—or
upward—through careful branding, as with luxury goods. However,
market prices always apply. Even a company with a patent-backed
monopoly must adjust its prices with regard to alternatives in the
market. Market prices do not require perfect substitutes.
In contrast, non-sellable goods do not directly benefit from
market prices. Such goods are common in manufacturing, where
production often requires numerous intermediate components—
some assembled from subcomponents sourced externally. These
intermediates typically exist only within the company’s production
process and lack the complements (technical documentation, estab-
lished sales channels, marketing materials, compliance procedures)
that would render them sellable. However, not all intermediate
goods are non-sellable; for instance, a manufacturer may produce
both pumps and air-conditioning units, where the former is a part
of the latter yet is also sold independently.
Non-sellable goods require investments in both production
capacity and sufficient inventory. When a non-sellable component
is used exclusively in one finished product, its valuation is relatively
straightforward[16]—the cost of the finished product can be allocated
proportionally across its components. However, when a component
serves multiple finished products, valuing each additional unit in
stock becomes more challenging, as it depends on the probability of its use across products.

[16] Even for a component serving a single finished product, pitfalls remain: the unavailability of a low-value part may jeopardize the flow of a high-value product.
For example, suppose a manufacturer uses a component in two
products sold in equal volumes: Product A yields a 10-coin gross
margin and Product B only 1 coin. The value of producing an extra
unit of this component depends on which product it supports. If
ample stock of Product A exists while Product B is out of stock, the
next unit is more valuable for Product B. Conversely, if Product A
is out of stock and Product B is well supplied, the additional unit is
more valuable for Product A, which offers a higher margin. Thus,
the extra component’s value depends critically on downstream
inventory levels and product margins.
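A toy private valuation, with assumed figures, for the next component unit: its value is taken as the margin of the finished product it is most likely to unblock, which in turn depends on the downstream stock positions.

    # Hypothetical: one shared component feeds products A and B, sold in equal volumes.
    margin = {"A": 10.0, "B": 1.0}   # gross margin (coins) per finished unit

    def next_component_value(stock_a: int, stock_b: int) -> float:
        # Value of one extra component unit, taken as the margin of the product whose
        # downstream stock runs out first (a deliberately crude heuristic).
        starving_product = "A" if stock_a <= stock_b else "B"
        return margin[starving_product]

    print(next_component_value(stock_a=50, stock_b=0))   # B is out of stock -> worth 1 coin
    print(next_component_value(stock_a=0, stock_b=50))   # A is out of stock -> worth 10 coins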
In practice, a manufacturer may handle numerous products and
components. Production cycles often differ, and demand fluctuates
in both volume and price. Each product typically maintains its own
stock, often distributed across multiple locations, and production
may occur in batches of varying size. These conditions complicate
the valuation of component stocks.
The rate of return of any investment related to non-sellable
goods—whether acquiring capital equipment or using input re-
sources to produce extra units—must be evaluated by its pro-
jected impact on the company’s revenue and expenses. Like the
attribution introduced previously, private valuations serve as a
dimension-reduction technique, summarizing complex, multidimen-
sional effects into an expected return.
In larger, vertically integrated companies, private valuations
grow in prevalence and importance. In such firms, non-sellable
goods often outnumber finished products and account for a higher
share of inventory value. Consequently, valuing these assets be-
comes critical.
The role of the market
Private valuations are inherently difficult. As companies grow
larger and more vertically integrated, their supply chain issues in-
creasingly resemble the resource-allocation problems governments
face under socialism and its central planning. Indeed, socialism
has been implemented many times, in many places; as leading
economists predicted in the first half of the 20th century, all those
attempts failed and invariably led to general impoverishment. So-
cialism, with its corollary of central planning, fails because it attempts to value scarce resources without market prices—a challenge that only the free market can resolve at scale.
panies can perform accurate private valuations at small scale, large
firms often encounter significant diseconomies of scale—flawed
private valuations being a key factor—that limit growth.
In a free market, prices signal to economic actors how resources
should be allocated. When demand exceeds supply, prices rise,
prompting investments in the scarce product. Conversely, an
oversupply drives prices down, encouraging a shift toward more
profitable opportunities. When a supply chain is relatively simple—
with few products, minimal intermediate goods, and short lead
times—translating market prices into resource-allocation decisions
is straightforward. In these cases, market prices themselves resolve
a critical piece of the supply chain challenge without the need for
deep insights or advanced analytics.[17]
As large companies expand their supply chains in depth and
complexity, the link between planning and market-price feedback
weakens, and challenges multiply. This issue was well known to
the entrepreneurs of the 19
th
century, but in his Testament of a
Furniture Dealer (1976), the founder of IKEA, Ingvar Kamprad,
puts it plainly, stating:
But do not forget that exaggerated planning is the most
common cause of corporate death.
Indeed, much like governments attempting socialism, large
companies have no alternative but to appoint a caste of experts—
such as supply and demand planners, category managers, demand forecasters, and inventory managers—to address the allocation of resources. Although these roles differ only slightly in scope, the overall challenge is immense: companies must account for hundreds of economic factors and establish thousands of private valuations, all while facing inherent future uncertainties. Flawed valuations beget unprofitable decisions.

[17] The effectiveness of the free market has long baffled intellectuals. The idea that an unintelligent mechanism—ad hoc voluntary exchanges—can produce better—smarter, even—solutions than those experts devise is profoundly surprising. Despite the resounding successes of free markets over the last three centuries, and the disasters of planned economies, most intellectuals remain skeptical. See also The Anti-Capitalistic Mentality (1956) by Ludwig von Mises.
As an organization scales, incentives shift: middle managers
increasingly prefer avoiding blame to chasing incremental gains,
and risk appetite shrinks with firm size. Consequently, they rely
on established processes, often adhering to them rigidly regard-
less of economic merit. When these processes repeatedly yield
unprofitable outcomes, the bureaucracy responds by adding more
rules, extending checklists, and incorporating additional workflow
steps. In essence, the bureaucracy expands regardless of economic
justification.
Diseconomies of scale—not a shortage of ambition—usually put
the brakes on very large, vertically integrated firms. As organiza-
tional layers multiply, private valuations drift, and every new tier
magnifies the resulting misallocations. To regain agility, companies
typically shed peripheral activities—through outsourcing, spin-offs,
or joint ventures—and redouble their efforts on segments where
their comparative advantage is obvious and where external market
prices offer a rapid, unambiguous feedback loop.
Chapter 5
Information
Every supply-chain decision is a bet under uncertainty; to be any
good, it must be informed. Somewhat unexpectedly for the lay
reader, “informed” is not merely a metaphor: since the 1940s,
information theory has given the word a precise, quantitative
meaning. This chapter clarifies what “being informed” entails for
a modern supply chain.
Two facts motivate what follows. First, contemporary supply
chains are irreducibly digital objects. Their states and histories
live inside business systems; calling a supply chain “digital” is
redundant, because no sizable chain can operate without soft-
ware. Second, mainstream supply-chain theory routinely collapses
data into information: it treats the quantities it aims to decide
upon—demand, lead times, service levels—as directly observable.
In reality, the firm only holds crude traces (sales orders, ship-
ments, returns, stock corrections) from which those quantities
must be inferred. Ignoring this distinction is precisely why elegant
formulas so often fail when confronted with the raw exhaust of
enterprise systems. This mismatch between theory and practice
drives many failed initiatives: firms try to graft pre-computer ideas
onto post-computer infrastructures.
To navigate this landscape, we must separate three notions
that everyday language blends: data, information, and knowledge.
Data are the recorded symbols—bytes in tables, log lines, barcodes.
Information is the capacity of those symbols to resolve uncertainty,
a quantity Shannon taught us how to measure. Knowledge is the
causal structure that lets a mind—human or machine—turn infor-
mation into decisions that improve the firm’s returns. Confusing
these notions breeds naive recipes that look sophisticated on paper
yet underperform crude heuristics in production.
Digitalization has already displaced humans from the custody
of data; the same drift now erodes their custody of information
and knowledge. As volumes and refresh rates explode, the ar-
chitecture of the systems—not the personal cleverness of their
users—determines what the firm can or cannot decide well. Hence,
understanding where information truly lives inside the company
becomes strategic.
Since the 1970s, three software classes have structured that
“where”: systems of records (run workflows, store transactions),
systems of reports (describe what happened), and systems of intel-
ligence (decide what should happen next). Most corporate disap-
pointments come from conflating these roles—expecting ledgers to
think, or analytics layers to serve as sources of truth.
This chapter proceeds in stages: we first disentangle data,
information, and knowledge, anchoring “information” in Shannon’s
theory. We then examine how these notions appear—and are
routinely distorted—inside the three dominant enterprise-software classes. With that frame, we confront the messy semantics of business data, the folklore of “bad data”, and the incentives that bend records away from the firm’s interest. Finally, we revisit the tacit, so-called mundane knowledge that still sits in people’s heads
and show how it, too, can be progressively codified. This sets the
stage for the subsequent chapter on intelligence and for the full
mechanization of decision-making.
5.1 Data, information, and knowledge
Data, information, and knowledge are three words that presenta-
tion slides and vendor brochures happily interchange; in practice,
confusing them wrecks architectures and policies. Treating raw
records (data) as if they already resolved uncertainty (informa-
tion), or treating reduced uncertainty as if it already carried a
causal model that can steer action (knowledge), is not a minor
semantic slip. It is the root cause of dashboards that never decide,
“planning” modules that never plan, and post-mortems that keep
blaming “bad data” for failures that are, in fact, methodological.
Data are symbols captured by systems—bytes in tables, log
lines, barcodes; by themselves, they say nothing about how un-
certain we remain. Information is what those symbols buy us
against uncertainty; it can be quantified and budgeted. Knowl-
edge is the structure—explicit or tacit—that lets an intelligence
turn information into decisions that systematically improve the
firm’s returns. Collapsing these layers invites two symmetric errors.
First, firms hoard ever more data and compute, convinced that
volume substitutes for information; they end up with petabytes of
low-grade exhaust that barely moves the needle on the uncertainty
that matters. Second, they mistake information for knowledge
and expect reports or KPIs to “drive” the business; people then
compensate manually, injecting ad hoc rules and folklore that never get formalized, versioned, or tested.
For a modern supply chain, the distinctions are operational.
If you cannot say where information sits, you cannot say which
system is authoritative; if you cannot say where knowledge sits,
you cannot say who—or what—is accountable for the decisions.
Once we look through this lens, “systems of records”, “systems of
reports”, and “systems of intelligence” become essential architec-
tural constraints: records hold (most of) the data, reports expose
fragments of information, intelligence operationalizes knowledge
to issue orders. Expecting any one of these layers to do the job of
the others is wishful thinking—and a reliable way to burn both
budgets and credibility.
What follows, therefore, does not rehash definitions for the sake
of taxonomy. We start from the mechanical nature of data—mere
symbols—and the engineering realities that govern their storage
and retrieval. We then anchor “information” in Shannon’s quan-
titative treatment of uncertainty, showing why many seemingly
“rich” datasets carry little informational weight for the decisions
at hand. Finally, we turn to knowledge, the least tractable of
the three, and tie it back to the company’s code, recipes, and
people—where it actually lives, how it should be curated, and how
it can be progressively mechanized. Only by keeping these three
tiers distinct can a firm hope to build decision systems that are
both informed and reliable.
5.1.1 Data storage
Data are recorded symbols. They can be numbers, characters,
pixels, or any other discrete marks that a system can reliably store
and retrieve. On their own, these symbols carry no meaning; only
an agreed-upon interpretation (a schema, a codebook, a protocol)
turns them into something a firm can reason about. A barcode, a
customer identifier, a purchase order line, or a temperature probe
reading are all “data” in this strict, mechanical sense.
On computers, these symbols are ultimately encoded as binary
digits and grouped into bytes (8 binary digits per byte).[1] Modern
storage devices, weighing just a few grams,[2] can hold over 1 TB, or 10¹² bytes. A terabyte already lies beyond intuitive grasp. The
1768–1880 edition of Encyclopaedia Britannica spans 143 volumes,
with its text content totaling 336 MB—about 3,000 times smaller
than 1 TB. Even with high-resolution images, the encyclopedia
totals just 23.5 GB, or roughly 400 times smaller than 1 TB. With
efficient representations, 1 TB can store over a decade of retail
transactions for a large network.

[1] These binary digits are commonly called bits. We will later reuse the word “bit” in Shannon’s information-theoretic sense; context will make it clear whether we mean a binary digit or a unit of information.
[2] As of August 2024, some 1 TB solid-state drives weigh about 8 grams and sell for roughly the cost of four Paris haircuts.
From a supply chain perspective, large data storage capacities
enable the collection of extensive electronic records. These records
capture both the current state of the supply chain and the detailed
sequence of events that led to it, including the flow of physical
goods. They can also include market data, such as competitor
pricing. All of these data are stored in the company’s systems of
records, which will be discussed further.
The storage capacity of modern hardware is so vast that the
numbers are hard to internalize—and this gap will only widen
as technology progresses. Modern digital stacks routinely span
10 to 15 orders of magnitude: from nanosecond CPU cycles to
day-long batch jobs, from bytes on embedded devices to petabytes in data warehouses. By contrast, the span between a one-person
shop and the largest firms on Earth is “only” about seven orders of
magnitude (1 to 10 million employees). The quantities are perfectly
measurable; what is difficult is building an intuition for phenomena
that differ by factors of a trillion or more.
As capacities rise, retrieving and processing what we store also
become harder. Yet market pressure generally pushes the rest
of the stack—bandwidth, RAM, CPU—forward enough to keep
the extra storage economically usable. When storage outpaces its
complements for too long, investment in ever
-
denser media simply
stalls until the rest of the hardware catches up, at which point
further gains again make business sense.
A gram-sized storage chip now holds more bits than a single
mind—or a thousand of them—could ever keep in working memory.
This sheer capacity is why, decades ago, business systems became
the de facto custodians of almost all relevant supply-chain informa-
tion. Consequently, a supply-chain decision-making process is truly
informed only if it taps into the vast troves held by the company’s
systems of records.
The vastness of modern data storage comes with two distinct
pitfalls. Supply chain practitioners who are not well-versed in
information technology often underestimate how much data can
be stored and processed for supply chain purposes. As a result,
vital data are left unrecorded, deleted too soon, or never collected.
Had these data been available, they could have been used to further refine decision-making processes. The two most common mistakes
are failing to preserve all events affecting the business systems and
discarding historical data after three to five years, even though the
entire history could be preserved almost indefinitely³.
For example, while all inventory management systems include
manual stock-level overrides to correct discrepancies between phys-
ical inventory and electronic records, some do not record those
overrides. Yet the full trail of overrides is essential for tracing the
root causes of inventory errors. Ideally, every event that ends up
modifying the state of the business system should be recorded⁴.
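As a minimal sketch of what recording every state-changing event looks like in practice (the event-sourcing pattern alluded to in footnote ⁴ below), the Python snippet that follows appends each stock override to an immutable log and rebuilds the current state by replaying it; the event schema and field names are illustrative, not drawn from any particular product.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class StockEvent:
        """One immutable fact about one SKU (illustrative schema)."""
        sku: str
        kind: str        # "sale", "receipt", or "manual_override"
        quantity: int    # signed delta for sale/receipt; absolute level for an override
        at: datetime

    def replay(events: list[StockEvent]) -> dict[str, int]:
        """Rebuild current stock levels by replaying the full event log."""
        stock: dict[str, int] = {}
        for e in sorted(events, key=lambda e: e.at):
            if e.kind == "manual_override":
                stock[e.sku] = e.quantity                    # the override sets the level outright
            else:
                stock[e.sku] = stock.get(e.sku, 0) + e.quantity
        return stock

    log = [
        StockEvent("SKU-42", "receipt", 10, datetime(2024, 3, 1)),
        StockEvent("SKU-42", "sale", -3, datetime(2024, 3, 2)),
        StockEvent("SKU-42", "manual_override", 5, datetime(2024, 3, 3)),  # kept, never overwritten
    ]
    print(replay(log))   # {'SKU-42': 5}; the override itself remains traceable in the log

Nothing is deleted or updated in place; the full trail of overrides stays available for root-cause analysis.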
Conversely, software engineers, who are well aware of computing
hardware capabilities, often take advantage of this by favoring easy-
to-implement solutions over resource-efficient ones. As a result,
software often becomes inefficient in its use of hardware, especially
in enterprise software compared to consumer applications. In
consumer software, the buyer and user are typically the same
person, and the app’s responsiveness is a key factor in its adoption⁵.
Engineers therefore spend much of their effort squeezing every cycle
out of the app.
By contrast, the buyer of enterprise software is seldom the user, and demos usually run on mock datasets far smaller than those in production, so real-world performance gets little scrutiny. Moreover, despite inefficient use of computing resources, the total cost of ownership of the app usually dwarfs its hardware costs. As a result, when facing a performance issue, it is common to throw more hardware at the problem instead of improving the underlying efficiency of the software. More generally, most enterprise software vendors have only modest incentives to invest in making their technology resource-efficient.

³ Some countries regulate how long personal data can be stored. Unless a supply chain software vendor mishandles personal data within its system, these regulations generally have little impact on supply chain operations. Personal data offers limited value for optimizing the flow of goods and is more of a liability than an asset in this context.

⁴ Event sourcing requires all events affecting the system to be stored in an event store, replacing the more common relational database. Unlike relational databases, where unlogged state changes can occur, event sourcing ensures by design that every change is logged, capturing all information that led to the system’s current state.

⁵ Since mobile devices far outnumber other consumer computing devices, making apps resource-efficient also helps preserve battery life, a highly valued feature for users. While storing more data doesn’t inherently consume more energy, the more data that must be stored, retrieved, and processed, the more energy the device will use.
Resource efficiency matters because the very same piece of
information can be encoded in many ways, some far leaner than
others. While a 1 GB (10⁹ bytes) dataset may seem richer and more informative than a 1 KB (10³ bytes) dataset, more bytes do
not inherently mean more information. For example, the number
123 can be stored as a single byte or as a million bytes with a
million leading zeroes.
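To make the point concrete, here is a minimal Python sketch (the encodings are arbitrary illustrative choices): the same number occupies one byte or a million, and no information is gained by the padding.

    # The number 123 fits in a single byte...
    compact = (123).to_bytes(1, "big")

    # ...or can be padded to a million bytes of ASCII digits with leading zeroes.
    bloated = ("0" * 999_997 + "123").encode("ascii")

    assert int.from_bytes(compact, "big") == 123
    assert bloated.lstrip(b"0") == b"123"     # same number once the padding is stripped
    print(len(compact), len(bloated))         # 1 1000000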
Using a million bytes to represent a small number is an obvi-
ous misuse of computing resources, yet enterprise software often
creates redundant copies of the same data in its databases. In
relational databases, enterprise software typically inflates data
storage by a factor of ten—and sometimes a hundred—due to
redundant data representations. Various technical reasons justify this redundancy. Those reasons lie outside
our scope here, but they raise an important question: how does
the chosen representation relate to the information itself?
5.1.2 Information theory
Company records can be stored as data. Yet while data are conven-
tionally measured in bytes, that unit does not indicate how much
information they contain. The same record admits many represen-
tations, some more storage-efficient than others. The underlying
information is invariant to its notation. For example, 1776 equals
MDCCLXXVI in Roman numerals; the information is unchanged.
Information theory studies what information is—and how it is
measured, stored, and transmitted.
Claude Shannon founded modern information theory in the
1940s. It is a foundational auxiliary science of supply chain. This
theory is grounded in probability and defines information as the
capacity to resolve uncertainty. To illustrate the link between
uncertainty and information, consider two stores, each stocking
100 products. We focus on stock levels—100 numbers in total—and
compare the information involved in managing two very different
stores.
First, a luxury watch store. For each watch in the assortment,
at most one unit is kept. When a watch sells, a replacement is
promptly shipped to keep the assortment complete. Second, a
suburban depot for bulk construction materials. Replenishments
are infrequent and involve large batches, typically a full truckload.
In the watch store, a diligent manager likely knows the exact
inventory at all times. Typically, the stock level is 1 for all watches;
for those just sold, it temporarily drops to 0 until replenishment.
Because most watches are at 1 and only a few briefly at 0, the
manager can easily recall levels. In contrast, stock levels at the
suburban depot range from 0 to 10,000 units. A manager might
recall which products are out of stock or recently replenished, but
even a diligent one cannot memorize the exact levels across all 100
products. Both stores track 100 items, yet the watch shop carries
far less information than the depot.
Information theory makes this contrast precise. The notion of
entropy quantifies information; it is measured in bits (sometimes
“Shannons”). Returning to the example suffices to illustrate entropy.
Assume the watch store has a 95% service level: each watch is
out of stock with probability 5%, otherwise at 1. Assume further
that the suburban store’s stock levels are uniformly distributed
between 0 and 9999. Although a uniform distribution is unrealistic,
it suffices for this illustration.
The entropy will be approximately 29 bits for the luxury watch
store and 1328 bits for the suburban depot (the detailed calculation
appears in the annex). The suburban depot’s entropy is nearly 50× that of the watch store, which is why even a diligent manager
cannot mentally track its inventory.
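The figures quoted above can be reproduced in a few lines of Python; this is only a back-of-the-envelope sketch of the annex calculation, under the same simplifying assumptions (independent SKUs, a 95% service level for the watches, stock levels uniform over 0 to 9,999 for the depot).

    from math import log2

    def bernoulli_entropy(p: float) -> float:
        """Entropy, in bits, of a two-outcome variable with probability p."""
        return -(p * log2(p) + (1 - p) * log2(1 - p))

    n_products = 100

    # Watch store: each SKU is either in stock (95%) or just sold (5%).
    watch_store_bits = n_products * bernoulli_entropy(0.05)

    # Depot: each SKU's level is assumed uniform over {0, ..., 9999}.
    depot_bits = n_products * log2(10_000)

    print(round(watch_store_bits), round(depot_bits))   # 29 and 1329 bits, in line with the figures above
    print(round(depot_bits / watch_store_bits))         # 46, in line with the "nearly 50x" gap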
From a supply chain perspective, information theory clarifies
what complex means in a supply-chain system. A supply chain
resists succinct description: millions of bits are needed to represent
even an average one, and billions for a large one. This vast re-
quirement is not an artifact of representation; it is a fundamental
property of supply chain. The information required to resolve the
state of a supply chain far exceeds what any group of people could
memorize, let alone track its transitions over years.
Thus, generating informed supply-chain decisions daily requires
millions of bits as input to reflect state, and millions more as
output to represent the decisions. Hence software is fundamental
to modern supply chains: without it, operating them would be
impossible. Before business systems spread in the early 1970s,
supply chains were far less complex than today. This surge in
complexity stems directly from advances in computing hardware.
Companies now manage more SKUs, suppliers, sites, price schemes,
and channels than ever, leveraging modern computing hardware
to handle this increased complexity.
Although supply chains have grown more complex, many sources
of accidental complexity have been tamed. For physical goods,
standardization, strict tolerances, and formal quality control be-
came increasingly common in the first half of the 20th century.
The digital turn of the 1970s pushed standardization even further.
The modern concept of SKU (stock-keeping unit) assumes that all
units in stock are physically identical.⁶ The content of an SKU
can be summarized by a single number: the stock on hand.
Modern readers may find it hard to believe that even mass-
produced items in the 19th century varied markedly from unit to
unit. In such supply chains, an inventory management system—had
it existed—would have needed to track details for every unit to
reflect inventory accurately. In practice the problem was handled
differently: salesmen discounted sub-par units on the spot.
⁶ Perishable goods, which have expiration dates, and serialized inventory, where each item has a unique serial number, are two notable exceptions to the usual SKU perspective. In these cases, stock on hand alone doesn’t adequately represent the inventory, so a more complex model is needed.

Information theory provides fundamental insights for supply chain, especially when judging whether a data source can sharpen decisions.
For example, this perspective clarifies why fine-grained manual
overrides—such as adjusting forecasts or stocking policies—are
futile. If demand planners rely on historical data to make those
corrections, they add no bits of information to the system. They
merely palliate defects in the existing numerical recipes—built on
that very same data. Thus, for such a manual process to make the
system more informed, it must introduce “fresh” bits of information.
Suppose a number based on intuition or prior knowledge carries
fewer than 10 bits of information. Even if an employee supplies 100
meaningful numbers daily, this totals only 1000 bits—assuming,
unrealistically, complete independence. Adding 1000 bits to a
system that processes millions is unlikely to make it materially more
informed. While an employee may occasionally supply information
with outsized impact, this raises questions about the source of such
insights and the odds of repeating them consistently.
More generally, the informational lens helps assess how far
a data source can inform supply-chain decisions. As the earlier
example showed, processing data adds no information—it can only
uncover what was already there in the original data. It also clarifies
that data volume may not correlate with information content.
Yet turning information into informed decisions still requires
one more ingredient: knowledge.
5.1.3 Knowledge
Knowledge is the theoretical and practical understanding of an
object. It encompasses the insights, rules, and principles that
shape a mind’s causal model of the object⁷. This causal model lets a mind deduce why an object is as it is and where it is headed. Knowledge turns observations into actions that steer the object toward a better state.

⁷ By mind, we refer to an entity capable of general intelligence, putting aside all the concerns of consciousness and soul which have no bearing on supply chains. Such entities were once exclusively the province of man; however, recent advances in software and hardware are increasingly blurring that line.
For supply-chain purposes, we draw on only a tiny fraction of
mankind’s knowledge. Our concern is solely with supply chains; the
scope of knowledge is limited to this purpose. Knowledge counts
as supply-chain knowledge only when it improves the company’s
decisions. Supply-chain knowledge, in the broadest sense, spans a
wide range of subjects. However, as the earlier Chapter Epistemol-
ogy explained, we must distinguish between the core supply-chain
domain and its auxiliary sciences, as their ways of organizing and
fostering knowledge differ markedly.
Unlike information—fully characterizable mathematically and
an unintelligent mechanical affair whose storage medium is irrel-
evant—knowledge is elusive and inseparable from the mind that
produces it.
Give two students the same physics manual: one may extract
insights far beyond its examples, while the other may never connect
examples to principles. What a reader gains is subjective: a book’s
knowledge remains latent until someone reads it. Until then, it
is merely a data storage device. The book’s content remains the
same regardless of paper quality—or even of being printed at all;
the electronic copy is identical in content.
Because knowledge and general intelligence are intertwined,
fully understanding one requires fully understanding the other.
Defining one without the other is inevitably incomplete. Gen-
eral intelligence remains a subject of active research. Progress
in this area has accelerated over the past seven decades, yielding
remarkable results. Yet we have also realized how much we under-
estimated our ignorance on this subject. We will revisit this topic
in Chapter Intelligence.
Equally important is the locus of knowledge—where it actually
lives inside the company—a crucial question when applying knowl-
edge for supply chain purposes. Two common but simplistic views
miss the mark. The first and most common misconception is that
knowledge resides exclusively with experts and their supporting
materials—papers, books, and courses. Academia often feeds this
view; many professors see themselves as the pinnacle of expertise.
The second is the notion that knowledge is spread evenly across
the company. This belief often stems from a misguided desire to
involve all employees in the company’s supply chain efforts. To
clarify the locus of knowledge, we must further refine what we
mean by knowledge.
In his landmark 1945 essay⁸, The Use of Knowledge in Society,
Friedrich Hayek separates ‘special’ from ‘mundane’ knowledge.
Special knowledge covers textbooks, formulas, source code, policies,
and regulations. The fundamental error, he argues, is to believe
that special knowledge is the only type.
Hayek adds a second, crucial kind—mundane knowledge. Mun-
dane knowledge covers the trivia bound by the circumstances of
a given time and place. A truck driver, for instance, knows the
healthy hum of an engine and books repairs before trouble starts.
This knowledge is sharpened when the driver regularly operates
the same truck, developing an understanding of its unique quirks
over time. This knowledge is informal and undocumented, yet
often essential for getting things done reliably.
Thus, supply chain mixes special knowledge, held by a small
body, and mundane knowledge, held by a much larger one. In
either case, the holders of that knowledge may be employees—or
not. For example, bits of special knowledge can be delegated to
an enterprise software vendor, and bits of mundane knowledge to
a third-party repair company.
For a century, firms have worked to replace mundane know-how
with codified rules. Mundane knowledge still matters—what you
don’t know can hurt you—but its informality creates friction for
large firms. Large companies aim to make their processes as
replicable as possible to reduce accidental variance in quality, cost,
or delays in their physical-goods flow. From this perspective,
mundane knowledge hinders the standardization of processes.
⁸ This 13-page essay is very well written and largely accessible to a general
audience without any particular prerequisite. Free copies can easily be found
online; we strongly recommend this essay to the curious reader.
The rise of enterprise software over the last five decades has am-
plified this trend: software embodies special knowledge and accel-
erates the shift. Software mechanizes the application of knowledge,
yielding significant productivity gains when successfully deployed.
However, software is expensive to set up and even more costly to
maintain, especially in complex enterprise settings. Codifying a
process yields two gains: easier replication and tighter software
control.
In a modern supply chain, the source code that implements
the firm’s numerical recipes—together with its supporting artifacts
(tests, notebooks, configuration, documentation)—is the company’s
most valuable special knowledge asset. Unlike slideware or tribal
lore, this asset is actionable: it turns information into decisions
without waiting for a meeting. Crucially, this codebase is not
a static library of formulas; it is a living corpus that must be
curated, versioned, and continuously challenged against profit-and-
loss feedback.
Because the codebase is information, it can itself be analyzed
by software. Static analysis, type systems, linters, profilers, for-
mal tests, and simulation harnesses make the knowledge both
inspectable and improvable at machine speed. Conversely, because
this knowledge is inseparable from the minds that evolve it, the
engineers who maintain and extend the codebase are part of the
asset: without stewardship, rot sets in, semantics drift, and the
firm quietly slides back toward folklore.
This observation closes the loop. Data are symbols. Infor-
mation is their power to shrink uncertainty. Knowledge is the
causal structure—captured increasingly as code—that systemati-
cally converts that reduced uncertainty into superior allocations of
scarce resources. For supply chain, “owning the knowledge” means
owning and operating the code that embodies it.
5.2 Classes of enterprise software
Most of the information a modern supply chain needs resides in
enterprise software. Yet these applications play distinct epistemic
roles. The previous pages separated data, information, and knowl-
edge; this section mirrors that tripartition in software. What the
firm records (data), reports (information), and decides (operational-
ized knowledge) live in three distinct classes of systems. Confusing
them—asking ledgers to think or dashboards to be sources of
truth—is among the costliest mistakes companies make.
Since the 1970s, three classes have dominated enterprise stacks:
systems of records, systems of reports, and systems of intelligence.
Dozens of specialized tools exist (e.g., CAD for engineering, virtual-
ization for IT), but in budget and supply-chain relevance they are
dwarfed by these three. What follows names each class precisely,
explains why their responsibilities must remain disjoint, and shows
how mixing them corrupts performance and accountability.
Definition (System of records).
Software that embodies operational workflows and cap-
tures every transaction and state change, acting as the firm’s
authoritative ledger.
Its primary virtues are reliability, traceability, and consistency.
“Intelligence” is neither required nor desired. A system of records
digitizes and streamlines tasks otherwise done with pen and pa-
per. Almost all transactional systems—ERP, CRM, WMS, PLM,
MRP—belong to this class. Architecturally, they are CRUD-first
(create, read, update, delete) and backed by a relational database.
Concurrency control, auditability, and strong consistency dominate
the design. Heavy analytics and optimization do not belong here:
they starve shared resources, degrade the user experience, and
muddy the accountability ledgers must preserve.
Definition (System of reports).
Software that re-projects what the system of records al-
ready contains—through aggregation, reconciliation, and vi-
sualization—so humans can inspect, audit, and narrate what
happened.
It adds no information; it merely reshapes what exists. Systems
of reports automate what clerks and analysts once did by hand:
totals, averages, pivots, dashboards. They sit on top of systems
of records and are intrinsically descriptive. For modern supply-
chain practice, their usefulness is limited: they remain helpful
for compliance, forensics, and management oversight, but not as
inputs to optimization engines. Running decision-making on a
reporting layer introduces information loss, semantic drift, and
extra bias for no economic gain.
Definition (System of intelligence).
Software that transforms the firm’s records (and a few
well-chosen external feeds) into unattended, auditable deci-
sions—and writes those decisions back to the system of records.
It does so through a narrow API and exposes logs precise
enough to be challenged against profit-and-loss feedback. It owns
almost no master data and very little user interface. A system
of intelligence replaces the manual, spreadsheet-mediated steps
that typically sit between reports and execution. It consumes
authoritative data, computes options, arbitrates them economically,
and commits orders, allocations, prices, or schedules to the ledger.
Its engineering culture, runtime profile, and accountability model
differ fundamentally from those of systems of records and reports.
This class will be examined in greater depth in Chapter Decisions.
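As a purely illustrative sketch of the shape such an engine takes (the function names, record layout, and endpoint are hypothetical, not the API of any product), the Python loop below reads authoritative records, computes a decision, attaches the inputs for later audit, and writes the result back through a single narrow call.

    import json
    from datetime import datetime, timezone

    def fetch_records():
        """Stand-in for reading the authoritative ledger (e.g. an ERP extract)."""
        return [{"sku": "SKU-42", "on_hand": 3, "daily_demand": 1.5, "lead_time_days": 10}]

    def decide_reorder(rec):
        """Toy decision rule standing in for the real economic arbitration."""
        cover_needed = rec["daily_demand"] * rec["lead_time_days"]
        return max(0, round(cover_needed - rec["on_hand"]))

    def write_back(decision):
        """Stand-in for committing a purchase-order line through a narrow API."""
        print("POST /purchase-orders", json.dumps(decision))

    for rec in fetch_records():
        qty = decide_reorder(rec)
        if qty > 0:
            write_back({
                "sku": rec["sku"],
                "quantity": qty,
                "decided_at": datetime.now(timezone.utc).isoformat(),
                "inputs": rec,            # logged so the decision can be challenged later
            })

Note what is absent: no data-entry screens, no master data of its own, no durable state beyond the decision and its audit trail.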
Inside a firm, systems of records come first, reports second,
intelligence last. Systems of reports and systems of intelligence
are intended to be overlaid on systems of records—possibly aided
by data lakes or warehouses. Companies often err by blending
the classes, expecting a single product to deliver value beyond
its natural boundaries; those attempts invariably fail. Systems of
records make terrible systems of intelligence, and vice-versa.
The second mistake is attempting to run a system of intelligence
on top of a system of reports. A system of reports is a secondary
source of information. The original data are invariably altered
to make them more amenable to reporting. These transformations
not only cause information loss but also introduce complications
and bias. While those transformations are justified—within the
context of the system of reports—the data no longer accurately
reflect what actually happened.
For example, weekly sales reported by business intelligence may
be net of returns. For items often sent back, those figures can
understate the inventory required to maintain a given service level.
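A toy illustration with made-up numbers: if 100 units leave the shelf each week and 30 eventually come back, the reporting layer may show 70, yet the shelf must still support 100 outbound units before any return lands back in stock.

    weekly_shipped = 100      # what physically leaves the shelf (made-up figure)
    weekly_returned = 30      # flows back later, often weeks afterwards
    net_sales_in_report = weekly_shipped - weekly_returned

    # Sizing stock on the reported figure ignores 30 units of weekly outbound flow.
    print(net_sales_in_report, weekly_shipped)   # 70 vs 100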
The idea of running a system of intelligence on top of a system
of reports is tempting because, to human eyes, the data as presented
by the system of reports looks clearer and better organized than
its original counterpart in the system of records. Thus, it feels
as if this would help the system of intelligence. Yet this is false.
The information perspective explains why: the system of reports
not only contains less information but also adds its own layer of
complications.
A clear grasp of these three software classes is essential for
modern supply-chain practice to take root in a company.
5.2.1 The place of software
Supply chain is now software-bounded: what the firm can observe,
remember, and ultimately decide is constrained less by forklifts and
bays than by schemas, APIs, and runtimes. Misplacing software
in the overall architecture is not a pedantic IT concern; it is an
economic error that quietly taxes every decision the company takes.
If the wrong system is asked to think, or the right one is starved
of the data it needs, optionality shrinks, variability bites harder,
and working capital gets trapped for reasons that never show on a
P&L.
The previous pages separated data, information, and knowledge,
and mapped them to three software classes: systems of records,
systems of reports, and systems of intelligence. This section clarifies
why that separation must be enforced in practice, not merely
acknowledged on slides. Before examining how most writers get
this wrong, we must make explicit what is at stake: authority over
the facts (records), over their narration (reports), and over the
levers that actually move cash and atoms (intelligence). Confusing
these roles scrambles accountability, bloats costs through resource
contention, and ensures that “bad data” will be blamed for what
is, in truth, a design failure.
With this frame in mind, we can look at how supply-chain
discourse routinely miscasts software—either as an irrelevant detail
that magically provides whatever numbers a formula demands, or
as a glossy org chart of interchangeable modules that supposedly
add up to “best practice”.
Supply-chain writers typically fall into two camps. Those
with an academic bent treat enterprise software as an abstraction,
assuming that figures such as sales or stock levels are simply “there”.
Where the numbers come from is seldom discussed; obtaining them
is treated as casually as a contractor choosing a shovel brand. To
widen their scope, they offer a menu of models—some that rely
on, others that sidestep, inputs such as stock-out records—leaving
firms to pick whichever fits the data they actually hold.
Writers from consulting firms lean on fashionable buzzwords,
unfurling ornate diagrams and maturity matrices that treat soft-
ware modules as puzzle pieces. They prescribe phased “adoption
journeys”, but the stages seldom get more than slide-deck gloss.
Executive sound-bites abound; hard technical substance is scarce.
Both viewpoints fall short of a rigorous supply-chain treatment.
Treating “sales data” or “stock data” as self-evident betrays igno-
rance of enterprise software realities; describing tools without their
underlying mechanics or markets is no better. Too often the result
is little more than a sales pitch in scholarly dress.
Software is an indispensable auxiliary science of supply chain,
and its mastery cannot be shunted to specialists—no more than a
modern physician can outsource chemistry. A firm must understand
its own software terrain to run its supply chain well.
5.3 Systems of records
Systems of records are a company’s ledgers: they embody oper-
ational workflows, capture every transaction and state change,
and, by construction, become the authoritative memory of goods
in motion. In modern supply chains, they are the only scalable
custodian of information: what the firm can observe and remember
sits there—not in people’s heads.
This section treats systems of records as they are: indispensable,
yet epistemically and technically bounded. They are engineered
for reliability, traceability, and concurrency control. They are not
built to report, let alone decide. Confusing them with systems of
reports or systems of intelligence is among the most common—and
most expensive—corporate mistakes.
We proceed in five steps: first, we recall the historical path that
produced today’s ERPs, CRMs, WMSs, and their cousins. Second,
we dissect their CRUD design and the ensuing commoditization.
Third, we examine the technological trajectory that makes every
system of records swell over time, creating functional overlaps and
synchronization nightmares. Fourth, we state the hard functional
limits that preclude pushing heavy analytics or optimization inside
the ledger. Fifth, we close with the vendor’s conundrum—the
structural incentive to market ledgers as “intelligent” platforms
despite their intrinsic bounds.
With this frame in mind, the rest of the section can be read as
a field guide: how to exploit systems of records ruthlessly for what
they do well, and how to keep them out of the decision seat they
were never designed to occupy.
5.3.1 The origin of enterprise software
Systems of records appeared in the late 1970s and spread rapidly
through the 1980s and 1990s. Their role was to convert paper-
heavy clerical work—especially accounting and stock keeping—into
electronic workflows. By capturing every transaction and stock
movement, they eliminated whole classes of clerical error and, once
barcode scanners arrived, lifted productivity. By the late 1990s,
most large firms had deployed several such systems to automate
their routine back-office tasks.
Falling hardware costs underpinned this rise. During 1970–2000,
Moore’s Law held: transistor counts doubled roughly every two
years while prices barely moved. Memory, storage, bandwidth,
and I/O followed similar curves, so by the late 1990s storing
transactions digitally was orders of magnitude cheaper than paper
alternatives.
Amid the flood of record-keeping products, two labels came to
dominate: ERP (enterprise resource planning) and CRM (customer
relationship management). Strip away the marketing—both are
the same animal: a CRUD ledger wrapped in workflows. The only
real difference is the entry point—ERP is organized around the
product (and its bills of materials, routings, inventories), CRM
around the customer (and the sales pipeline). Once a vendor
secures a foothold, both families grow the same way: bolt on
adjacent entities, bundle ever more modules, and acquire niche
rivals. By the late 1990s, most ERPs and CRMs had swollen into
sprawling monoliths, setting the stage for the technical problems
discussed next.
One root cause was technical: packet-switched networking
was conceived in the 1960s, but true distributed business systems
stayed niche until the late 1990s. Firms therefore ran centralized
mainframes, and cross-vendor integration was either ruinous or
impossible. Expanding the resident system was cheaper than wiring
in a new one.
But building and running ever-larger code bases is expensive. During the 2000s, commodity integration tools—open-source ETLs and web APIs—made it cheaper to stitch systems together, so specialized record-keeping apps resurfaced and clawed work back
from the monoliths.
Yet the old monoliths never shrank, and the new niche tools
soon ballooned too: every system of records keeps growing along
the same lines. Hence the alphabet soup of WMS, OMS, PIM,
PLM, SRM, and countless cousins—different starting points; much
the same architecture and growth path.
In supply-chain work, the ERP monolith is typically the prime
data source because it holds the purchase and sales ledgers. The
label Enterprise Resource Planning is a marketing artifact: an
ERP does not plan in the economic sense of arbitrating options
under uncertainty; it executes and records resource workflows (or-
ders, inventories, invoices) and, at best, sequences clerical steps
(approvals, calendarized MRP explosions, status flips) according
to user-entered parameters.⁹ Calling this “planning” conflates
bookkeeping with decision-making and fuels the recurring myth
that a system of records can double as a system of intelligence.
5.3.2 The CRUD design
Nearly every current record system follows the CRUD (create, read,
update, delete) pattern born with relational databases in the late
1970s¹⁰. Front-ends have evolved from green screens to glossy web apps, yet back-end principles have scarcely budged. Tooling, by contrast, has advanced so far that, since the mid-2000s, systems of records have been little more than commodities.

⁹ Classical MRP explosions, MOQ/EOQ “optimizations”, and time-bucketed reorder cycles embedded in ERPs are deterministic, parameter-driven recipes. They neither price optionality nor balance opportunity costs across the firm; hence they do not qualify as planning under the definition advanced in this book.

¹⁰ A more recent architectural pattern, event sourcing, records every state change as an immutable stream of events instead of persisting only the latest state. Event-sourced systems deliver superior traceability, replayability, and auditability, yet the pattern has been fully commoditized since the mid-2010s: mainstream databases, frameworks, and cloud services all provide off-the-shelf support. Consequently, every conclusion drawn in this subsection for CRUD—about commoditization, resource contention, and cultural mismatch with systems of intelligence—applies verbatim to event-sourced systems as well.
In practice “the database” means a relational (SQL) engine:
a set of tables, each with typed columns (numbers, text, dates,
. . . ) and rows of data. Other designs exist but matter far less
in enterprise software. Relational databases link tables through
primary- and foreign-key pairs. An Orders row, for instance, can carry a ProductID that points to the matching ProductID in Products; the engine forbids an order that references a product that does not exist.
Vendor dialects differ, but the SQL-92 (published in 1992) core
works everywhere, and most CRUD apps rarely need more than
that despite later, fancier extensions. The fine print of SQL is
beyond the present scope; however, a basic level of SQL should be
considered part of the foundational knowledge needed by a modern
supply chain practitioner.
CRUD design begins by defining entities—groups of tables
mirroring the business concepts the app manages. Those concepts
can be products, clients, inventories, suppliers, contracts, employ-
ees, payments, etc. Moreover, most business concepts do not fit
within a single table. For example, a client order may require
one table Orders with one row per order, and another table OrderDetails listing all items included in each order.
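For readers less familiar with relational ledgers, the short Python sketch below (using the sqlite3 module from the standard library; the schema is a toy, not an excerpt from any actual ERP) illustrates both points: the Orders/OrderDetails decomposition and the engine rejecting a line that references a non-existent product.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces foreign keys only when asked

    conn.executescript("""
    CREATE TABLE Products     (ProductID INTEGER PRIMARY KEY, Label TEXT NOT NULL);
    CREATE TABLE Orders       (OrderID   INTEGER PRIMARY KEY, OrderDate TEXT NOT NULL);
    CREATE TABLE OrderDetails (OrderID   INTEGER REFERENCES Orders(OrderID),
                               ProductID INTEGER REFERENCES Products(ProductID),
                               Quantity  INTEGER NOT NULL);
    """)

    conn.execute("INSERT INTO Products VALUES (1, 'Bolt M8')")
    conn.execute("INSERT INTO Orders VALUES (10, '2024-03-01')")
    conn.execute("INSERT INTO OrderDetails VALUES (10, 1, 500)")   # accepted

    try:
        # ProductID 99 does not exist: the engine refuses the order line.
        conn.execute("INSERT INTO OrderDetails VALUES (10, 99, 500)")
    except sqlite3.IntegrityError as e:
        print("rejected:", e)   # rejected: FOREIGN KEY constraint failed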
Each entity gets screens through which users create, read, up-
date, or delete its records, following the database’s underlying
paradigm. Those screens are frequently accompanied by bits of
business logic to automate database changes—mechanical conse-
quences of the user’s entries—and typically follow a workflow of
some sort.
A CRUD app’s complexity almost always creeps upward: new
entities appear; existing ones expand. New screens proliferate;
old screens accrete options. Workflows grow more expressive by
adding options that fine-tune the automation that unfolds after
data entry. In theory, complexity could fall; in practice, ambient
incentives prevent it. We return to this point next.
CRUD apps are, as the name suggests, crude. Putting aside
the foundational layer itself (database included), there is nothing
remotely “smart” in either the design or the execution of a CRUD
app. The software principles supporting the rules and behaviors
implemented in a CRUD are fundamentally accessible to anyone
who has completed middle school. It boils down to arithmetic
operations interleaved with IF-THEN statements, and little more.
Naturally, the business itself may be intricate, with rules and be-
haviors well beyond what a middle-school student can comprehend.
In such instances, the CRUD paradigm offers only a laborious
decomposition into elementary processing steps.
CRUD’s simplicity made it the default business-app pattern for decades, and since the mid-2010s open-source stacks have turned
it into a commodity. Those tools make software engineers far more
productive when developing CRUD apps than when developing
almost any other kind of software. Notably, CRUD apps are the
poster child in most programming-environment demonstrations.
Moreover, while hiring software engineers remains notoriously
challenging, CRUD development is largely accessible to junior
developers (and to mediocre seniors) and represents, by far, the
easiest kind of software to build.
Many firms still overpay for CRUD record systems, underes-
timating how thoroughly open-source tooling has commoditized
them. Vendors exploit that lag to keep prices high. No matter how
useful and mission-critical a system of records might be, this sort of
product can now be developed both quickly and cheaply. Yet, out
of entrenched habits, many companies still pay software vendors
annual fees beyond what it would cost to redevelop such systems
in-house. Naturally, vendors benefit from this confusion and have
become masters at delaying the inevitable: lowering prices on the
ledgers they sell. This situation also follows from the technological
trajectory of systems of records.
5.3.3 Technological trajectory
Systems of records, unlike the spreadsheets they replaced, are rigid
and less expressive. Even an extremely capable system of records
today covers only a fraction of what a loose collection of spread-
sheets supports. With loose spreadsheets, employees can always
deviate from formalized processes—so long as their colleagues agree.
Steps or requirements can be skipped; exceptions can be made to
any rule; or, conversely, an arbitrary judgment call can reject a
request that “blind” policies would deem acceptable. Naturally,
this freedom costs reliability and productivity, as spreadsheets
resist substantial automation.
Vendors typically launch a system of records with a small
set of entities plus the screens and workflows needed to handle
them. Technological resources at an emerging vendor are limited,
and there is no point in further development if the product can
be sold as is despite its limitations. Furthermore, developing a
business app without detailed feedback is futile before customers
have embedded it in their processes. Thus, while the initial core
may rest on an a priori view of the market, extensions are almost
invariably built in close collaboration with a few customers.
Business markets lack the depth of consumer markets. As a re-
sult, once a product is sold to a business client, the vendor naturally
leverages that position to sell anew to the same client—upselling¹¹.
Selling extra modules to an existing customer is far cheaper than
winning a new one, since the latter entails clearing all corporate
hurdles of a new line of spending—such as going through a tender
against rivals.
Systems of records always offer another entity to add—or en-
rich—so the vendor can justify the next sale. No matter the
product’s feature coverage, there are always more business con-
cepts and workflow variations to add. Nothing in the technology
of systems of records imposes a consistency imperative that would
restrain growth. For example, an inventory management system
can be expanded to manage customers; conversely, a customer-
relationship-management system can be expanded to manage in-
ventory. In fact, most large ERP vendors feature CRM capabilities;
conversely, most large CRM vendors feature ERP capabilities. It boils down to adding more tables, more screens, and more business logic, ad infinitum. Setting aside per-client bespoke customization, the system of records becomes the sum of the quirks of its entire customer base.

¹¹ Upselling also works with subscriptions. In this case, when the vendor sells an extra, the subscription fee is increased.
Every commercially maintained system of records keeps grow-
ing until the vendor goes bankrupt or retires the product line
entirely. As engineering cost is a superlinear function of the num-
ber of entities—i.e., implementing twice as many entities more
than doubles the costs—the product’s growth rate (in features
and capabilities) tends to slow over time, even if the vendor keeps
increasing its engineering headcount as the company expands. Yet,
despite those superlinear costs hindering an otherwise unlimited
complexity growth, many systems of records end up as feature
mastodons after two decades of continuous development. Numerous
popular systems of records, including those for smaller businesses,
boast over 10,000 tables, most with dozens of fields.
The path of relentless complexity growth is effectively irre-
sistible for vendors selling systems of records. There is simply no
upside in ditching infrequently used entities: it antagonizes the few
client companies that use them—however small that number may
be—while leaving all other clients indifferent. Unfortunately, this
path leads to a predictable bastardization of the product. Indeed,
while collaborating with customers is an essential ingredient of
business software development, some customers are misguided in
their requests. Others reflect narrow niches that did not warrant
bespoke capabilities. Still others impose terminologies or concepts
inconsistent with those adopted by the vendor. As a result, over
time, every actively developed system of records becomes a Love-
craftian construct endangering the sanity of those who contemplate
it for too long.
In A Deepness in the Sky (1999), American science-fiction writer
Vernor Vinge offers an uncannily accurate picture of long-running
software initiatives:
There were programs here that had been written five thou-
sand years ago, before Humankind ever left Earth. The
wonder of it—the horror of it, Sura said—was that unlike
the useless wrecks of Canberra’s past, these programs still
worked! And via a million circuitous threads of inheritance,
many of the oldest programs still ran in the bowels of the
Qeng Ho system. Take the Traders’ method of timekeeping.
The frame corrections were incredibly complex—and down
at the very bottom of it was a little program that ran a
counter. Second by second, the Qeng Ho counted from
the instant that a human had first set foot on Old Earth’s
moon. But if you looked at it still more closely. . .the
starting instant was actually some hundred million seconds
later, the 0-second of one of Humankind’s first computer
operating systems.
So behind all the top-level interfaces was layer under layer of
support. Some of that software had been designed for wildly
different situations. Every so often, the inconsistencies
caused fatal accidents. Despite the romance of spaceflight,
the most common accidents were simply caused by ancient,
misused programs finally getting their revenge.
“We should rewrite it all,” said Pham.
“It’s been tried,” corrected Bret, just back from the freezers.
“But even the top levels of fleet system code are enormous.
You and a thousand of your friends would have to work
for a century or so to reproduce it.” Trinli grinned evilly.
“And guess what—even if you did, by the time you finished,
you’d have your own set of inconsistencies. And you still
wouldn’t be consistent with all the applications that might
be needed now and then.”
Yet this spiral of accretion is not an iron law for the buyer.
Several large organizations have kept their ledgers lean by pushing
ancillary workflows into detachable sidecars—small services or plug-
ins that interface with the core through an API fence. Because
the plug
-
ins are owned, versioned, and—crucially—disposed of by
the client at will, cruft accumulates at the edge where it can be
pruned without touching the authoritative data store.
A more radical lever is phased retirement of functions. Contrary
to the folk wisdom that “nobody rewrites an ERP”, many firms
do—just not in a single heroic blast. The proven pattern is the
strangler: place a thin façade in front of the monolith, route a single,
self-contained capability (pricing engine, returns authorization, tax
module . . . ) to a greenfield service, freeze the corresponding
tables, then physically drop them after a sunset period. Repeat
every quarter and the impossible big-bang becomes a sequence of
manageable surgeries.
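A minimal Python sketch of the façade step, with hypothetical service names: requests for the one capability already extracted are routed to the greenfield service, everything else still reaches the monolith, and the routing table is the only thing that changes as further capabilities migrate.

    # Hypothetical routing façade used during a strangler migration.
    EXTRACTED = {"pricing"}            # capabilities already moved to greenfield services

    def call_monolith(capability, payload):
        return f"monolith handled {capability}"

    def call_greenfield(capability, payload):
        return f"{capability}-service handled the request"

    def facade(capability: str, payload: dict) -> str:
        """Single entry point: callers never know which system actually answered."""
        if capability in EXTRACTED:
            return call_greenfield(capability, payload)
        return call_monolith(capability, payload)

    print(facade("pricing", {"sku": "SKU-42"}))   # routed to the new pricing service
    print(facade("returns", {"rma": "R-7"}))      # still handled by the monolith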
These tactics do not fix the vendor’s incentive to sell yet more
expansion packs; they merely restore agency on the client side.
Nothing forces a company to install the next glossy add-on when the
carrying cost of complexity exceeds the cost of a lean replacement.
In practice, the lowest long-term cost of ownership comes from
treating the central ledger as sacred ground and everything else
as disposable code. Complexity growth is the default, not the
destiny—provided the organization cultivates the discipline to keep
pruning its own garden.
5.3.4 Functional overlaps
Master data¹² is the first casualty of functional overlaps. As soon as
two systems of records coexist, no 1-to-1 mapping of entities can be
assumed. Attributes that look identical (“product weight”, “order
date”, “supplier code”) routinely diverge in meaning, unit, timing,
and governance. What the ERP calls product_family may only approximate what the PIM stores as category; lead_time in the MRP may be a procurement promise while the WMS records an empirical average. The so-called “golden record” is a managerial
fiction: in reality, each system is master for some attributes, some
of the time, for some use-cases.
This fragmentation is not a mere clerical nuisance; it is the primary driver of semantic drift (see Section Semantics of the data). When overlapping ledgers do not share isomorphic schemas, information about a single business fact gets smeared across tables, systems, and teams, with no canonical reconciliation rule. Master-data-management (MDM) programs attempt to paper over this reality with central hubs and survivorship rules, but—absent a clear, per-attribute authority and an auditable decision system—they merely add another overlapped system and another layer of synchronization to maintain.

¹² Colloquially: the supposedly “single source of truth” for core business entities such as products, locations, suppliers, and customers. In practice, there is almost never a single source, only competing partial authorities. Large “MDM” suites promise a golden record through central hubs, survivorship rules, and stewardship workflows. Too often, they become yet another overlapped ledger whose governance drifts as fast as the others.
Functional overlaps between third-party systems of records
are the direct consequence of their technological trajectory, and
they hit master data the hardest. Indeed, as software vendors
keep pushing their respective functional envelopes, collisions are
inevitable; only their timings remain uncertain. Moreover, larger
vendors are especially prone, having grown by continually widening
their functional coverage. This problem is specific to third-party
systems; even modest internal coordination usually prevents such
overlaps from arising in in-house developments.
In practice, overlapping systems require data-synchronization
processes. Master-data synchronization is particularly pernicious:
schemas are rarely isomorphic, attributes carry different semantics,
and per-attribute authority (which system is master for what, and
when) is seldom spelled out. Such synchronizations are invariably
ad hoc software projects. Indeed, the enterprise-software market is too fragmented to justify developing plug-and-play synchronization
tools.
A two-way synchronization between systems of records—where
edits can be made on both systems and each edit in one must
be propagated to the other—typically requires at least an order
of magnitude more effort than one-way integrations, where one
system is deemed “master” and the other “slave”. A two-way data
synchronization must address conflicting edits and their resolution,
which is far trickier than simply copying and overwriting all edits
from the master into the slave. For master data, where multiple
departments continuously edit different subsets of attributes, this
conflict surface is the rule, not the exception. Despite the existence
of exceedingly clever algorithms dedicated to the case, N-way
data synchronization, with N greater than 2, should generally be
regarded as an abomination.
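A minimal Python sketch, with made-up records, of why the one-way case stays simple: the master's values overwrite the slave's, and there is nothing to arbitrate. The moment both sides accept edits, a conflict-resolution rule (last-write-wins, per-attribute authority, manual review) must be bolted on, and that rule is where the real cost hides.

    def one_way_sync(master: dict, slave: dict) -> dict:
        """One-way synchronization: the master's attributes overwrite the slave's."""
        merged = dict(slave)
        merged.update(master)          # no conflicts to arbitrate, by construction
        return merged

    # Illustrative customer record held in two systems of records.
    erp = {"customer_id": "C-1", "name": "Acme", "payment_terms": "NET30"}
    crm = {"customer_id": "C-1", "name": "ACME Corp.", "segment": "B2B"}

    print(one_way_sync(master=erp, slave=crm))
    # {'customer_id': 'C-1', 'name': 'Acme', 'segment': 'B2B', 'payment_terms': 'NET30'}
    # A two-way sync would instead have to decide which 'name' wins, edit by edit.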
Even when IT carefully identifies which system is master for
each area, most large systems end up both master and slave—depending
on the area (e.g., the ERP may be authoritative for price and the
PIM for descriptive content on the same SKU). For example, the
CRM may be master for “customers” yet slave to the ERP for
“suppliers”.
These overlaps pose a massive, ongoing hurdle that companies
consistently underestimate. In particular, when it comes to picking
the next software vendor, companies often prize impressive feature
lists, as illustrated by the typical RFQ (Request For Quotation)
process that frequently includes hundreds of requirements. As
a result, companies tend to pick large vendors with needlessly
complex products, following the common-sense intuition qui peut
le plus, peut le moins (he who can do more can do less). Yet this
misguided intuition invariably leads companies to pick vendors
with vast functional overlaps. Those overlaps prove extremely
costly.
These overlaps are not just an IT tax; they directly corrupt
meaning. We return to this point in Section Semantics of the data,
where we show how elusive, row-dependent semantics quietly derail
optimization efforts.
First, for the IT department, resolving synchronization issues
is sharply superlinear—the effort grows far faster than the number
of entities. Soon enough, the Tokyo subway map looks simpler
than the synchronization graph that IT has to maintain. It is now
common to see IT divisions go effectively bankrupt: they can no
longer keep up with maintaining the synchronization graph amid
the never-ending stream of updates to their third-party systems of
records.
Second, for employees entering data, these overlaps create an
ongoing source of confusion, as the same entities can be viewed and
edited in more than one place. Software vendors have no incentive
to facilitate the “enslavement” of parts of their products to rivals
in order to mitigate this confusion. From their perspective, even
an unused feature keeps a foot in the door for the next upsell.
Furthermore, the idea that the company ought to pay for having
features removed from a system simply goes against the entire
paradigm governing the enterprise software market.
Third, all attempts at exploiting the data, either as part of a
system of reports or a system of intelligence, confront a hall of
mirrors: the same information appears in multiple places. However,
secondary data, as generated by synchronization processes, should
not be used for those purposes. Secondary data might be outdated,
degraded, or subtly altered, wreaking havoc in downstream pro-
cesses. Thus, for every piece of information, one must make sure to read the canonical master record.
While the history of enterprise software is short, in-house de-
velopment of systems of records—despite their flaws—is the only
viable way to avoid the relentless costs induced by functional over-
laps between third-party systems of records. Fortunately, as noted
above, developing such systems of records has, for all practical
purposes, been commoditized.
5.3.5 Functional limits
While systems of records deliver value to the firms that run them,
what they can deliver is strictly limited. Systems of records are, at
best, improved ledgers. In particular, they are neither effective systems of reports nor systems of intelligence; those aims are technologically
antagonistic to the basic functions they must serve.
Much of a system of records’ value derives from its database—its
transactional core. The system’s design is fundamentally central-
ized—precisely what the business wants. For example, if the system
holds the stock level of a given SKU, every user or agent querying
it must see the same value. Furthermore, if someone claims 1 unit,
that unit must not be granted to a competing claim. Maintain-
ing electronic-stock integrity requires the system to coordinate
competing concurrent claims on the same datum—a unit in stock.
More generally, this class of issues is resource contention: con-
flict over access to a shared resource such as RAM, disk, or other
hardware. Resource contention is notoriously hard, as it often
boils down to reducing latency. Yet latency progress largely stalled
over a decade ago—over two for many specific challenges. To a
large extent, latency is now bounded by the speed of light itself.
For example, data packets over the Internet propagate at roughly
two-thirds of the speed of light in vacuum. This leaves little room
for further improvement¹³. Another example: in about 0.3 nanosec-
onds—roughly a single CPU clock cycle—light can only travel back
and forth about 5 cm.
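A quick Python check of that back-of-the-envelope figure:

    c = 299_792_458            # speed of light in vacuum, m/s
    cycle = 0.3e-9             # roughly one clock cycle on a ~3.3 GHz CPU, in seconds
    print(f"{100 * c * cycle / 2:.1f} cm round trip")   # ~4.5 cm, i.e. "about 5 cm"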
Techniques exist to mitigate resource contention on compu-
tationally heavy processes—horizontal partitioning (sharding),
read/write splitting, asynchronous queues, CQRS¹⁴/event sourcing,
distributed locks, saga/orchestration patterns, and more. Each,
however, trades the very virtues a system of records must maxi-
mize—simplicity, traceability, strong consistency—for operational
and semantic complexity: eventual consistency to explain down-
stream, duplicated read/write paths, elaborate retry/compensation
logic, brittle distributed transactions, heavier observability bur-
dens, slower schema evolution, and harder audits. In short, a ledger
can be made to scale with such patterns, but the price is ambigu-
ity and fragility precisely where the firm needs an authoritative,
crystal-clear truth. Consequently, anything computationally “fat”
should be expelled from the ledger and delegated to a system of
reports or of intelligence.
Resource contention largely explains why business apps are not
fundamentally snappier today than in the early 2000s. Editing
an entity still incurs a user-noticeable delay—tens of milliseconds.
Lags of a second or more for seemingly basic operations remain
common. Such lags seem surprising given that, at equal cost, hardware is now orders of magnitude more capable than two decades ago. Yet the stagnation reflects flat latency progress; resource contention does not improve either.

¹³ Hollow-core optical fibers (HCF) guide light mostly through air rather than silica, bringing propagation speeds closer to c and yielding, on long links, single-digit to low-double-digit percentage latency cuts. Even if HCF were widespread, switching, buffering, routing detours, and protocol handshakes would keep end-to-end decision latencies largely bounded; the macro-argument—that latencies are now a hard constraint for enterprise systems—still stands.

¹⁴ Command Query Responsibility Segregation.
Inside the system of records, because resources are shared,
every operation must be as light as possible. Any heavy operation
risks starving the system, degrading the quality of service for other
concurrent operations. Hence decision-making processes must be
scrupulously avoided within a system of records. Decisions require
weighing numerous options and entail orders-of-magnitude more
compute than basic arithmetic or logical rules.
Consider a procurement optimizer that needs 10 seconds of
CPU to compute the order quantity of a single SKU. In a system
of intelligence, run offline or on a dedicated cluster, that cost is
trivial—and even 100 seconds to squeeze a few extra basis points
may be rational. Inside a system of records, however, the same
computation is catastrophic: with tens of thousands of SKUs
per day, 10 seconds per SKU becomes dozens of CPU-hours that
contend with transactional workloads, stall screens, and degrade
user experience. Even at 1 second per SKU, it would still create
unpredictable latencies where the ledger must guarantee low, stable
response times. Heavy optimization therefore belongs outside the
ledger: the system of records should persist compact outcomes
(e.g., purchase-order lines), not perform the combinatorial search
that produced them.
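The arithmetic deserves to be spelled out; assuming 20,000 SKUs refreshed daily (an arbitrary but plausible order of magnitude), the back-of-the-envelope below shows why 10 seconds per SKU is harmless offline yet ruinous inside the ledger.

    skus_per_day = 20_000          # assumed order of magnitude, not a measured figure
    seconds_per_sku = 10

    cpu_hours = skus_per_day * seconds_per_sku / 3600
    print(f"{cpu_hours:.0f} CPU-hours per day")   # ~56 CPU-hours: trivial offline, crippling in the ledger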
In theory, alternatives to CRUD exist to interleave heavyweight
with lightweight operations. In practice, however, those alternatives
undermine the very reasons CRUD is attractive. The simpler, more
practical approach is to segregate all the “fat” operations away from
the system of records. Descriptive statistics belong in systems of
reports; decision-making belongs in systems of intelligence. Keep
the three classes strictly decoupled, and resource contention is
largely mitigated.
Beyond low-level contention lies a weightier obstacle: the en-
gineering cultures of the three software classes are fundamentally
incompatible. From afar every class appears similar—each involves
writing code—but moving between them is as different as house
painting is from fine art. Both trades use paint; they share little
else. Detailing those nuances is outside our scope; suffice it to
say that for five decades vendors have strayed beyond their home
class—most often with dismal results, especially when systems-of-
records specialists expand into systems of intelligence. Yet vendors
keep trying, oblivious to the predictable failures that follow.
5.3.6 Vendor’s conundrum
The value-add of a system of records is fundamentally limited.
Once the system has ample coverage and proves reliable, no further
benefits accrue. Software vendors selling systems of records have
always been painfully aware of this limitation. Hence, since the
late 1970s, nearly every successful vendor of systems of records has
tried to venture into systems of intelligence.
To understand why, let’s consider the case of an accountant.
There is little tangible benefit in a great accountant over a merely
competent one. Hence companies do not hire “superstar” accoun-
tants at hundred-fold salaries; accounting delivers only bounded
value. Marketing, by contrast, can add value without limit. A
marketing genius can revitalize a brand in seemingly impossible
ways, creating fortunes for shareholders. As a result, companies
routinely seek “superstar” creative minds and pay them lavishly.
A system of records contributes like an accountant. Once the
job is done properly and diligently, nothing more remains to expect.
By contrast, the value-add of a system of intelligence—the decision-
making engine—is akin to that of marketing talent. For practical
purposes, there is no upper bound on the quality of a business
decision.
Consequently, willingness to pay for newer, better systems of
records is limited. This insight is not novel; software vendors already perceived it in the 1970s. Until
the early 1990s, successful vendors had little engineering capacity
beyond adding features and keeping up with steady hardware
progress. This changed in the 1990s and led many to rebrand as
“ERP” vendors, the novel “P” for planning signaling an intent to
be systems of intelligence. Similar moves occurred across other
variants of systems of records. Yet, as noted, those attempts have
largely failed—and still fail—sometimes spectacularly.
As a side effect, vendor marketing often confuses executives. As
the distinction between systems of records, systems of reports, and
systems of intelligence remains relatively unknown to the general
public, vendors invariably tout the promise of better decisions. In
short, every enterprise software product is pitched as a system
of intelligence. For instance, a system that merely logs stock
movements while clerks handle replenishment should not promise
fewer overstocks or stockouts—yet such claims are ubiquitous.
From the buyer’s angle, three simple heuristics cut through the
confusion.
First, a bona fide system of intelligence ships with almost no
data-entry screens. Intelligence consumes authoritative records; it
does not solicit daily re-keying or “confirmation” by users. Each
additional form is a tell-tale sign of a ledger in disguise, and it
muddies accountability: when outcomes are poor, the vendor can
always claim that “the user typed the wrong number”. A genuine
system of intelligence owns its decisions end-to-end: its inputs
are the firm’s records; its outputs—orders, allocations, prices,
schedules—are auditable against profit and loss. No daily clerical
escape hatch remains for either party.
Second, the engine must not pretend to be a standalone solution.
Beyond its own code, configuration, and audit logs, it keeps no
durable business state. Any working tables or caches are disposable
by design. It reads the authoritative system of records, computes
recommendations, and writes them back through a narrow, ver-
sioned API. The moment master-data tables, workflow builders, or user-management modules appear, what is on offer is a system of records masquerading as “AI”.
Third, the commercial model must align fees with the economic
quality of the decisions delivered, not with head-count, storage
volume, or page views. Seat-based, page-view-based, or gigabyte-
based pricing creates a perverse incentive to keep more people in the
loop, proliferate dashboards, and hoard data, since vendor revenue
then scales with organizational inefficiency. A genuine system
of intelligence should, on the contrary, make most clerical seats
disappear. Contracts should therefore index payment to decision
throughput and/or to a measured share of the value created (e.g.,
risk-adjusted working-capital reductions, gross-margin uplift, or
quality-of-service improvements at constant capital employed),
with explicit caps to bound risk for both parties.
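As a purely illustrative sketch of such a contract (the 10% share and the cap are invented parameters, not a recommendation), fees could be indexed along these lines:

    # Hypothetical fee indexation: a share of the measured value created,
    # bounded by an explicit cap; all figures are invented for illustration.
    def vendor_fee(measured_value: float, share: float = 0.10,
                   cap: float = 250_000.0) -> float:
        return min(cap, max(0.0, share * measured_value))

    print(vendor_fee(1_200_000.0))  # 120000.0 -> 10% of the measured uplift
    print(vendor_fee(5_000_000.0))  # 250000.0 -> the cap bounds both parties' exposure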
Taken together, these negative tests discard most “intelligent
platforms” and let the practitioner focus his attention—and bud-
get—on the few offers with a credible shot at improving supply-
chain decisions.
A common misstep is upgrading a system of records simply
because it “looks old”. Executives see IT progressing by leaps
and bounds and infer the newer system must be much better.
Furthermore, the look and feel of the latest versions proposed by
vendors is usually more attractive than that of the old system. The
inference is misleading. Progress in software technologies does not
imply that systems of records themselves are progressing.
On the contrary, the core technologies and concepts underpin-
ning systems of records have barely progressed since the 1980s.
Current progress focuses almost exclusively on cutting development
costs for technologies already heavily commoditized. Thus, for
systems of records, the primary goal when upgrading should be to
reduce fees paid to vendors. Similarly, the benefits of a better look
and feel are usually overestimated: if the old system was snappy,
fancier screens rarely make employees more productive. Highly
productive interfaces are often perceived as hostile to newcomers
precisely because they lack the hand-holding features that hinder
productivity once employees know the system.
Collectively, without collusion or even coordination, vendors
of systems of records have fostered a persistent state of confusion
in the general public. In the early 2020s, although systems of
records had been commoditized for over a decade, most companies
were paying more for them than two decades prior. This state of
affairs is broader than our supply-chain concerns, but supply-chain
software may be the most telling example. This again illustrates
the adversarial nature of supply chain challenges, where incentives
are only partially aligned.
Thus, systems of records are indispensable—and tightly bounded.
They are the firm’s authoritative ledgers, not its calculators or its
strategists. Whenever they are asked to report heavily, optimize, or
“plan”, resource contention, semantic drift, and accountability fog
follow. The right architectural stance is austere: keep the ledger
thin, central, and boring; push descriptive analytics to systems
of reports; push decision-making to systems of intelligence; fence
every interface with a narrow, versioned API; prune functional
overlaps relentlessly, even if vendors resist.
The corollary is as managerial as it is technical. Treat the
ERP (and its cousins) as sacred ground: it remembers facts and
enforces workflows—nothing more. Every extra feature that creeps
in—KPIs, forecasts, optimizers, “what-ifs”—quietly taxes performance, muddies semantics, and diffuses responsibility. If something
is computationally fat, probabilistic, or economically arbitrated, it
belongs outside the ledger.
With the role and limits of systems of records made explicit,
we can now turn to what lives inside them: data. The next
section examines how those raw symbols become information, why
semantics—not missing rows—is the true hard part, why “bad
data” is the wrong scapegoat, and how to extract, document, and
harden what the decision engines will need.
5.4 Data
In digital form, business data are by far the most important sources
of information for supply chain decisions. These data are collected
primarily through the company’s systems of records. Other sources
exist, but they matter far less. From a supply chain perspective,
the relevant information lies within this data; extracting that
information proves harder than generally assumed, and the data’s complexity is likewise underappreciated.
Data are critical because supply chains are not directly ob-
servable; one sees them only through the company’s records. One
could argue that supply chains ceased to be directly observable—
by the senses in any meaningful way—well before the advent of
computers. Indeed, by the end of the 19th century, the rise of the
giant corporation brought armies of bookkeeping clerks. Thus,
even then, the supply chain was seen only through ledgers kept
by those clerks. Many objects in human knowledge, like supply
chains, are not directly accessible to the senses; the consequences
are substantial.
As noted earlier, many supply-chain authors either ignore soft-
ware or treat it superficially. Data are no exception. In particular,
data are usually conflated with the information they contain, as
if transforming the former into the latter were a given. Nothing
could be further from the truth, and this transformation alone
often consumes more than half the technical effort in a supply
chain initiative. Moreover, far from being mere software techni-
calities, this transformation requires a deep grasp of the supply
chain challenge. As a result, it is profoundly misguided to think
this task can be delegated to IT, or to any specialists who are not,
first and foremost, supply chain specialists.
Against this backdrop, a stock explanation is wheeled out
whenever a planning initiative stalls—the data are bad. Past
failures—rarely investigated—are paraded as proof. The slogan
is convenient: it absolves vendors and integrators, blames an
amorphous scapegoat, and discourages any examination of the
ledgers. In digitized supply chains, the transactional record is
usually good enough; what is missing is the hard work of turning
records into information—semantics clarified, provenance and au-
thority established—and then into decisions. The sections that
follow separate those steps and make their failure modes explicit.
5.4.1 Origin of the data
Most business data lives in systems of records, yet those systems
collect it only incidentally. The point of systems of records is
to digitize the firm’s information processes, making them more
reliable and productive. Data are not the intended outcome; they are a by-product of those operations. This insight is simple, yet its
consequences are frequently misunderstood.
Data are organized to mirror the system’s quirks. Such intri-
cacies arise from subtle performance constraints, leftovers from
aborted upgrades, outdated concepts from an earlier development
era, or the routine compromises of software development.
To make this concrete, consider two commonplace examples
showing how record structures reflect engineering trade-offs rather
than analytical needs.
Decades ago, a well-meaning engineer—working on a still-
popular ERP—stored product returns alongside sales orders in the
sales table with negative quantities. With that trick, summing
sales over time yields a figure “almost” net of returns. The de-
sign, however, has several downsides and proves far less clever in
practice. For service quality, relying on net sales biases analysis
toward overconfidence about stockouts. If 10 units are sold in one
week and 3 are returned the next, at least 10 had to be in stock
in the first week—not 7—to serve demand. Conversely, if most
returned units can be reconditioned and resold[15], then replenishing
inventory right before season’s end is likely to generate sizable
overstocks, as stock will naturally benefit from the intake of re-
turns. More generally, returns are not sales; while they may share
fields (customer, product identifiers), conflating them in one table
forces columns that make sense for only one of the two—e.g., a
cause_of_return field.
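A minimal sketch makes the bias explicit, using the 10-sold / 3-returned figures above (the table layout is simplified for illustration):

    # Returns stored as negative "sales" rows, as described above.
    rows = [
        {"week": 1, "qty": +10},  # 10 units sold in week 1
        {"week": 2, "qty": -3},   # 3 units returned in week 2
    ]

    net_over_window = sum(r["qty"] for r in rows)                  # 7
    week1_gross_demand = sum(r["qty"] for r in rows
                             if r["week"] == 1 and r["qty"] > 0)   # 10

    print(net_over_window)     # 7  -> what a naive SUM over the table suggests
    print(week1_gross_demand)  # 10 -> what actually had to be in stock in week 1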
Consider a simple online store front-end. After each transaction, stock-on-hand must be decremented to keep availability current for later patrons. Updating stock-on-hand is a straightforward UPDATE on the table that holds those values; call this table stock. However, as virtually every team behind a moderately popular store has found[16], updating stock after every transaction creates heavy contention in the database—precisely around that table. Indeed, stock is sizable—typically tens or hundreds of thousands of rows—and each update can trigger a cascade in the app’s front end. The usual fix is to add a pending_updates table that queues stock changes. On each transaction, rows are added to pending_updates instead of updating stock. Then, every minute or so, pending_updates is purged while stock is updated in batch. As a result, stock-on-hand information is split across two tables rather than one. The pattern is sound, but a supply-chain specialist must decide whether ignoring pending_updates—purely for simplicity—remains acceptable.

[15] In the 2010s, fashion e-commerce in Germany routinely featured return rates above 50% of the units ordered. Indeed, many leading e-commerce retailers had generous return policies, and patrons were accustomed to ordering multiple sizes of their garments and returning every unit that did not fit.
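A minimal sketch of the pending_updates pattern just described, using SQLite purely for illustration (the table and column names are assumptions, not a prescription):

    # Hot path appends to a small queue; a periodic batch job folds the queue
    # into the large stock table, avoiding contention on every transaction.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE stock (sku TEXT PRIMARY KEY, on_hand INTEGER NOT NULL);
        CREATE TABLE pending_updates (sku TEXT NOT NULL, delta INTEGER NOT NULL);
        INSERT INTO stock VALUES ('SKU-1', 100), ('SKU-2', 40);
    """)

    def record_sale(sku: str, qty: int) -> None:
        # Hot path: append-only insert, no update on the large stock table.
        db.execute("INSERT INTO pending_updates VALUES (?, ?)", (sku, -qty))

    def flush_pending() -> None:
        # Batch job, run every minute or so: fold queued deltas into stock.
        db.execute("""
            UPDATE stock SET on_hand = on_hand +
                (SELECT COALESCE(SUM(delta), 0) FROM pending_updates
                 WHERE pending_updates.sku = stock.sku)
        """)
        db.execute("DELETE FROM pending_updates")
        db.commit()

    record_sale("SKU-1", 2)
    record_sale("SKU-1", 1)
    flush_pending()
    print(db.execute("SELECT * FROM stock ORDER BY sku").fetchall())
    # [('SKU-1', 97), ('SKU-2', 40)]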
Many other cases tell the same story: data in systems of records
was never meant for data-science work. For the engineers who
wrote the software, it was at best a secondary concern, often an
afterthought.
Corporate managers are frequently surprised by the time and ef-
fort required to extract the right information from business systems.
Many attribute the overheads to the IT department’s supposed
inefficiency and to overcomplicated tools. That impression is re-
inforced by IT specialists who hide behind technical opacity to
justify costs and delays. This perspective, however, is mistaken.
Data in systems of records is readily accessible and, assuming a
mainstream database, the query tools are nothing short of excellent.
Costs and delays are high because turning data into information is
surprisingly difficult—especially when attempted by IT specialists
lacking the field expertise the task requires.
5.4.2 Semantics of the data
A typical enterprise system of records stores tens of thousands of
fields; the largest exceed one hundred thousand. The semantics of
those fields—what they mean, the purpose they serve, the behaviors
they trigger—are usually, at best, loosely documented. That documentation may come from the original vendor or, for in-house systems, from the company itself. Custom extensions—ubiquitous in enterprise software—further blur meaning. In practice, there is rarely more than a short paragraph per field, and too often it merely paraphrases the field name.

[16] If counting in-house developments, then the ‘e-commerce front end’ app has been redeveloped independently more than a thousand times worldwide since the early 2000s.
To see why semantics matters, consider a single field: “order
date”. That one timestamp might denote checkout time, last edit,
acknowledgment, or payment clearance. Each reading drives a
different operational reality: it shifts measured lead times, alters
service and fill-rate metrics, and changes how cash flows and
liabilities are recognized. Ambiguity at this level quietly corrupts
every downstream calculation.
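To see how much hinges on one reading, here is a toy illustration with invented timestamps:

    # The same "order date" field, read as checkout time versus payment
    # clearance, yields materially different lead times.
    from datetime import datetime

    checkout = datetime(2024, 3, 1, 10, 0)   # customer confirms the order
    payment  = datetime(2024, 3, 4, 16, 30)  # payment clears three days later
    shipped  = datetime(2024, 3, 6, 9, 0)    # goods leave the warehouse

    print(shipped - checkout)  # 4 days, 23:00:00 -> lead time under one reading
    print(shipped - payment)   # 1 day, 16:30:00  -> lead time under the other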
In an ideal world, two things would hold: the builders would
leave precise field-level documentation, and management would
enforce day-to-day practices that keep meanings stable in opera-
tions. In practice, three forces break that ideal: age (systems and
notes outlive their authors), field-level inconsistency (meanings
drift across modules and versions), and operator-driven semantics
(users repurpose fields to get work done). The next paragraphs
examine each in turn.
By software standards, most systems of records are old—two
or three decades are common. Early implementations rarely came
with field-level documentation and, even when they did, the authors
long ago moved on. The code has been patched, migrated, and
extended so often that the original intent is unrecoverable from
commit messages. Crucially, field semantics are inseparable from
the behaviors wired to them, and those behaviors shift across
versions and modules. The net effect is predictable: large swathes
of meaning are simply lost, and reconstructing them amounts to
reverse-engineering—a slow and expensive task. Outdated—or
outright obsolete—notes are the bane of long-running software
initiatives.
Loss is only half the story. New meanings also accrete informally
as operations adapt. Teams invent workarounds, reinterpret labels,
and attach new workflows to convenient fields. Over time these
ad hoc practices become the system’s de facto contract, regardless
of vendor intent or what the manual still says. This emergent,
undocumented layer drives semantic drift: the “current meaning”
of a field is whatever the data-entry crowd now uses it for—and
it can differ by site, period, or role. Pinning this down requires
observing practice, not just reading schemas.
In practice, users are structurally unaware of this drift. Mean-
ings accrete over years—often longer than anyone’s tenure in a
given role—so today’s interpretation simply feels like “the way the system works”. If the application exposes two near-synonyms such as Discontinuation Date and Phase-out Date, and neither drives a visible automation, teams adopt a convenient house meaning and move on. Whether that meaning matches the vendor’s intent is immaterial; what matters is that the local crowd converges on one reading and uses it consistently. Nobody will mistake Volume for “heel height,” but many will, quite sensibly, agree that Discontinuation Date marks the moment remaining stock
is routed to off-price liquidation. Given that most fields trigger no
behavior at all, divining the vendor’s semantics is wasted effort;
ensuring everyone inside the firm uses the same semantics is the
real work.
Furthermore, field semantics are often row-dependent; not
all values of a given field mean the same thing. We saw such
an example above with “order quantity” referring to a purchase
when positive and to a return when negative. More generally, a
field’s semantics often depend on the value of another field (or
several others). Documentation cannot be a mere dictionary of
fields; meaning changes with context. To capture row-dependent
semantics, documentation must cover the relevant business logic—
again a costly, lengthy exercise, worsened by years of system
evolution.
Ultimately, meaning rests with the data-entry crowd—employees,
clients, even suppliers—whatever the engineers once intended. For
example, if Purchasing repurposes an obscure field—once unused
and tied to arcane Incoterms—to encode a supplier’s transport
route, then the semantics of that field become the transport route,
not the original documentation. While repurposing a field is hardly
good practice, it happens routinely. Moreover, as field semantics
are tricky at the best of times, teams make genuine mistakes,
attributing close-but-not-quite-right meanings to numerous fields.
Pinning down field semantics is hard—even “market-leading”
systems prove surprisingly and invariably arcane. This work is not
pedantry: without stable meanings, raw records cannot be turned
into information a decision engine can use. Beyond the practical
obstacles above, one more hurdle remains: we must be able to
verify that the meanings we ascribe are in fact correct.
Even a modest supply-chain initiative typically requires docu-
menting at least a hundred fields. For each field, half a page to a
full page is usually needed to pin down the semantics that matter
for decisions: definition; units and admissible values; provenance
and authority; row-dependent interpretations; cross-field depen-
dencies; and concrete examples. These notes should be grounded
in the firm’s own data—simple distributions, ranges, and counts
often surface what matters (for example, a typical hypermarket
in our portfolio carries more than 100,000 SKUs, a fact that rules
out algorithms whose cost grows superlinearly with the number of
items). As a consequence, the semantic dossier for a “small” scope
easily runs to a hundred pages or more. Thus, given the volume,
it is unreasonable to expect all of it to prove factually correct.
Errors—ranging from crass mistakes to subtle misunderstandings—
will litter the documentation. Hence a method is needed to assess
the validity of this knowledge and weed out the errors.
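In practice, grounding the dossier means profiling each field against the firm’s own rows. A minimal sketch (column names and values are invented):

    # Simple counts, ranges and top values per field: the kind of evidence
    # worth attaching to each entry of the semantic dossier.
    from collections import Counter

    rows = [  # invented extract from a sales table
        {"sku": "A-1", "order_qty": 12, "cause_of_return": None},
        {"sku": "A-1", "order_qty": -3, "cause_of_return": "damaged"},
        {"sku": "B-7", "order_qty": 5,  "cause_of_return": None},
    ]

    def profile(field):
        values = [r[field] for r in rows]
        filled = [v for v in values if v is not None]
        numeric = [v for v in filled if isinstance(v, (int, float))]
        return {
            "fill_rate": round(len(filled) / len(values), 2),
            "range": (min(numeric), max(numeric)) if numeric else None,
            "top_values": Counter(filled).most_common(3),
        }

    for field in ("order_qty", "cause_of_return"):
        print(field, profile(field))
    # Negative order_qty values and a sparsely filled cause_of_return column
    # are exactly the kind of facts worth recording in the dossier.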
Extra reviews—even peer reviews—rarely catch every error.
Indeed, once a misunderstanding is solidified in the documentation,
a new element is usually needed to reveal it. That element must
be real-world feedback from a decision-making process that draws,
among other things, on this knowledge. If flawed knowledge yields
sound decisions, the flaw is harmless and the issue moot. The fine
print of this methodology—experimental optimization—will be
detailed in a later chapter. For now, suffice it to say that semantic
validity must ultimately be rooted in empirical feedback from the
practical application of this knowledge while generating decisions.
5.4.3 Shenanigans in data
Data, like everything else in supply chain, bends to the incentives
of its makers and collectors. As noted earlier, incentives often
diverge from the company’s interests. This yields numerous quirks
in the data—some minor, some consequential.
Consider a large manufacturer’s real-world mishap. The manu-
facturer was suffering from chronic overstocks. A glance at past
orders revealed routinely exuberant buys—the main cause of the
overstocking. To redress the situation, management chose a new
supply chain vendor to fully automate replenishment, removing
well-meaning but misguided judgment calls from the purchasing
team. Pre-production tests showed the excessive orders vanished.
This still left the possibility of overstock whenever a large, unex-
pected drop in demand occurred, but instant overstocks absent
major market swings were eliminated. The solution looked promis-
ing.
Yet two months into production, excessive replenishment orders
persisted. The result was puzzling: the orders bore no resemblance
to the recipe’s output.
Closer inspection suggested a mundane cause: purchase or-
ders were still keyed into the ERP by hand because bidirectional
integration had been deferred—the IT team was overloaded. His-
torical data extraction from the ERP—a venerable four-decade-old
system—was automated, but writing back the new orders was
not. The decision engine produced orders unattended; clerks then
retyped them into the ERP each day. This remaining manual hop
became our prime suspect.
However, after days of careful assessment, it became clear that
while manual data entry was involved, this step was performed
very carefully, and that, under no circumstances, were employees
intentionally modifying the quantities. Thus the new solution
again became the primary suspect. Yet log inspection quickly
showed the new solution had never generated those excessive orders.
Dates and products matched; quantities did not. The bafflement
deepened: day after day, the orders entered into the ERP the
previous day exactly matched the solution’s output. Yet days—or
even weeks—later, some reorder quantities suddenly ballooned,
and the next inventory check confirmed the surplus.
Replenishment orders seemed to surge spontaneously, without
rhythm or reason. So the vendor’s team studied those occurrences
more closely. A surprising pattern emerged: the surges appeared
chiefly in orders placed near quarter-end. Because lead times varied
by product, the overstocking events were fairly evenly distributed,
which had obscured the pattern. The calendar pattern was clearly
human, but the actors remained a mystery.
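The diagnostic that finally surfaced the pattern can be sketched in a few lines: bucket the over-shipments by the order date’s distance to quarter-end, rather than by the lead-time-shifted receipt date (all figures below are invented):

    from datetime import date

    def days_to_quarter_end(d: date) -> int:
        quarter_ends = {3: date(d.year, 3, 31), 6: date(d.year, 6, 30),
                        9: date(d.year, 9, 30), 12: date(d.year, 12, 31)}
        q_month = ((d.month - 1) // 3 + 1) * 3
        return (quarter_ends[q_month] - d).days

    overshipments = [            # (order date, excess units received)
        (date(2023, 3, 29), 40),
        (date(2023, 5, 12), 2),
        (date(2023, 6, 27), 55),
        (date(2023, 9, 28), 38),
    ]

    for order_date, excess in overshipments:
        print(order_date, days_to_quarter_end(order_date), excess)
    # Large excesses cluster at small "days to quarter-end" values.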
Weeks of investigation led the vendor’s teams to the receiving
docks, where merchandise was unloaded from supplier-chartered
trucks. The receiving teams followed multiple quality-assurance
steps. One step was to count the units and ensure they matched
the ordered quantity. When a discrepancy arose, the ERP offered
little to handle the edge case. In fact, the only official policy
the ERP supported was to declare the shipment noncompliant
and return it wholesale. Yet the receiving teams had learned this
invariably incurred the production teams’ wrath. If goods had
been ordered, they were needed; without them, production halted,
infuriating colleagues.
Decades ago, to cope, the receiving teams established a back-
door into the ERP. To avoid returning a shipment for a single-unit
discrepancy, the team used the backdoor feature to adjust the
order quantity to the quantity received. The supplier was then
paid the adjusted amount, reflecting what was actually received.
For years, the setup worked, until the backdoor was essentially
forgotten by all but the receiving team.
Over time, a few suppliers noticed that small over-shipments
were accepted without so much as a phone call. This was unusual:
clients typically returned the excess units or at least complained
loudly. A few suppliers began exploiting the process. At quarter-
end, if lagging their targets, they used the manufacturer as the
variable of adjustment. At first, suppliers inflated orders only
slightly, but the manufacturer’s silence soon emboldened them to
submit far larger quantities.
Once the pattern was understood, identifying the misbehaving
suppliers was easy. A series of phone calls to the shortlist of
incriminated suppliers ended the explosive surges overnight.
Unexpectedly, though the pattern was baffling and obviously
detrimental to supply chain performance, the data were funda-
mentally correct. The ordered quantities were real and accurate,
as plainly demonstrated by the stock on hand. The error lay
in the real policy governing replenishment—letting suppliers de-
cide whatever they wanted—which the manufacturer had never
envisioned.
While data shenanigans are not always as convoluted as the
one above, unintentional adverse incentives routinely arise in any
sizable supply chain, leading to all sorts of data oddities. We
have already noted a common mistake in supply chain circles:
treating data problems as matters to be left to IT specialists.
The example above makes clear why this approach usually fails.
Grasping supply chain data is first and foremost the province of
supply chain specialists.
5.4.4 Shadow IT
In organizations, shadow IT denotes departmental systems that
operate without central IT oversight—systems the main IT team
may not even know exist.[17] Shadow IT exists in virtually every
large company, though prevalence varies greatly. From a supply
chain perspective, shadow IT systems are predominantly loose
systems of records and systems of reports, often mixing the two.
[17] Historically, shadow systems have almost never been systems of intelli-
gence. True decision engines demand durable data integrations, empirically
validated modeling, version-controlled code, resource isolation, and runtime
monitoring—requirements that exceed what spreadsheets or ad hoc desktop
databases can deliver. Consequently, shadow efforts concentrated on records
and reports. However, since the early 2020s, large language models and agent
frameworks have made it feasible for non-IT teams to assemble narrow, quasi-
autonomous decision bots. This “shadow intelligence” will arise mainly in
small, well-scoped niches (e.g., survey and qualify potential suppliers based on
public web data) where data plumbing requirements are minimal.
Shadow IT matters for supply chain because the “system of records”
portion often holds valuable data. For example, ERPs commonly
omit some ordering constraints that apply to specific suppliers. As
a result, to compose a valid replenishment order, the practitioner
relies on his own shadow system containing a complete survey of
the applicable constraints.
Spreadsheets are, by far, the most common technology for
shadow IT. Any workflow that relies on employees interacting
with a shared spreadsheet is almost certainly shadow IT. IT knows
spreadsheets are used, and the software itself is approved. However,
a spreadsheet-run system of records lacks what IT should expect
of any supervised system: versioning, backups, audit trails, fine-
grained permissions, etc.
Beyond spreadsheets, simple databases[18] are also commonly
used as part of shadow IT. These databases allow more polished
systems of records, enforcing a modicum of type-level correctness
(a value is a number, a date, etc.). Yet, like spreadsheets, they lack
the supporting processes that ensure long-term maintainability
(versioning, backups, etc.).
The motivation for shadow IT is straightforward: central IT is
swamped for months—sometimes years—so business teams build
what they cannot wait for. The paradox is that shadow IT is
rarely born of excessive zeal for software; it is born of too little
software culture. Software now touches almost every workflow, so
most departmental problems are, in part, software problems. If
the organization expects a single central IT group to solve all of
them, overload is guaranteed.
The cultural deficit amplifies the backlog. Outside IT, teams
often lack the vocabulary to state requirements, to rank them
economically, or to estimate engineering effort. Pet projects are
promoted as “quick wins,” essential needs are deferred because
they look dull, and tickets arrive as solutions (“add a field”) rather
than problems (“we need to stop double-billing returns”). Central IT, besieged by poorly framed requests, appears unresponsive; business units, forced to deliver, turn to spreadsheets and ad hoc databases. Friction becomes the norm and productivity erodes on both sides.

[18] From the mid-1990s to the late 2010s, the database management system Microsoft Access enjoyed considerable popularity in large companies, allowing a white-collar employee to roll out his own system of records backed by a relational structure.
Central IT is also in a bind: employees are understandably
reluctant to expose the full extent of their shadow initiatives, which
often run with tacit managerial approval and plainly violate IT
guidelines. As a result, the very people who could help regularize
these tools discover them only when something breaks. Expecting
central IT to “bring the data” is therefore unrealistic; by design,
much of it is invisible.
The way out is straightforward, if not easy: replace shadow
systems of records with small, maintainable CRUD apps, one by
one. Prioritize ledgers first, reports later. Records are at risk
of loss; reports can be recomputed once the underlying tables
are sound. Concretely: take the live spreadsheet (or desktop
DB), freeze it read-only, sketch a minimal schema that reflects the
actual workflow, stand up a narrow CRUD service with versioning,
backups, audit logs, and permissions, backfill the data, redirect
users, then retire the old file. Rinse and repeat. Once the data sits
in a production-grade store, rewriting the surrounding reports is
cheap and fast.
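A minimal sketch of such a replacement app, here for the supplier-constraint example mentioned earlier; SQLite stands in for the production-grade store, and every table, column, and name is an assumption for illustration (the upsert syntax requires SQLite 3.24 or later):

    import sqlite3
    from datetime import datetime, timezone

    db = sqlite3.connect("supplier_constraints.db")
    db.executescript("""
        CREATE TABLE IF NOT EXISTS supplier_constraint (
            supplier_id     TEXT NOT NULL,
            constraint_kind TEXT NOT NULL,   -- e.g. 'min_order_qty', 'full_pallet'
            value           TEXT NOT NULL,
            updated_by      TEXT NOT NULL,
            updated_at      TEXT NOT NULL,
            PRIMARY KEY (supplier_id, constraint_kind)
        );
        CREATE TABLE IF NOT EXISTS audit_log (
            at TEXT, who TEXT, action TEXT, payload TEXT
        );
    """)

    def upsert_constraint(supplier_id: str, kind: str, value: str, who: str) -> None:
        # Every change is recorded twice: current state plus an audit trail.
        now = datetime.now(timezone.utc).isoformat()
        db.execute("""INSERT INTO supplier_constraint VALUES (?, ?, ?, ?, ?)
                      ON CONFLICT(supplier_id, constraint_kind) DO UPDATE SET
                          value = excluded.value,
                          updated_by = excluded.updated_by,
                          updated_at = excluded.updated_at""",
                   (supplier_id, kind, value, who, now))
        db.execute("INSERT INTO audit_log VALUES (?, ?, ?, ?)",
                   (now, who, "upsert", f"{supplier_id}/{kind}={value}"))
        db.commit()

    upsert_constraint("SUP-042", "min_order_qty", "500", "alice")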
Supply-chain specialists must own this remediation. Software
is an auxiliary science of the practice, not an outsourcing target.
Central IT should provide the guardrails (runtime, security, back-
ups), but the scope, sequencing, and semantics must be driven by
those who understand the flow of goods.
5.4.5 Derived data
Business systems store more than facts. Alongside event records
(sales, receipts, stock counts, price agreements, calendar con-
straints) sit large volumes of what I will call derived data: numbers
created from other records—aggregations, forecasts, EOQs, safety
stocks, classification flags, and the like. Derived data add no novel
information; they are a re-expression of what is already in the
ledger. A beginner expects every table and field to carry “new”
data; in practice, the majority of tables in a mature application
are artifacts. For clarity, facts data refers to the original records of
the flow, and derived data refers to anything computed from them.
Not all derived data are equal. One class is properly motivated:
working or temporary tables (materialized views, caches, denor-
malized summaries) introduced for performance or operational
convenience. They exist to make the system responsive and are
disposable because they can always be recomputed from facts data.
The other class is epistemic decoration: planning artifacts saved
into ledgers (forecasts, EOQs, “service levels”, ABC tags, etc.).
These tables also contain no new information; they merely freeze
the output of a recipe. When mixed with records, they blur ac-
countability, multiply inconsistencies, and later mislead systems of
reports and systems of intelligence.
Demand forecasts are a classic example of such artifacts. More
generally, almost all planning data held by companies are nothing
but derived data. Unfortunately, many practitioners mistake these
artifacts for stepping stones toward a system of intelligence; in
practice they are landmines.
Before turning to why planning artifacts are misleading, let
us dispatch a benign subclass of derived data. Engineers often
introduce “working” or temporary tables—materialized views, de-
normalized summaries, staging areas—solely to keep CRUD ledgers
responsive or to decompose heavy logic. These tables are conve-
niences, not knowledge; they carry no new information beyond
the underlying facts and can be recomputed at will. Practitioners
should largely ignore them for decision-making.
Misguided practitioners are naturally drawn to artifacts such
as sales forecasts, economic order quantities, safety stocks, and
service levels because these aggregates feel “digestible”. These
aggregates compress thousands of row-level events into a handful
of numbers—a weekly sales total looks far more actionable than a
page of transactions—so their informational density for the human
reader is high. Yet this convenience should not be confused with
novelty: these figures contain exactly what the underlying facts
already implied—and nothing more. Treating such summaries as
primary inputs to a decision engine mistakes ease of consumption
for added information and betrays a misunderstanding of the task
at hand.
First, derived data contain no novel information. They are
deterministic functions of facts data. As long as the underlying
facts can be exported, every forecast, EOQ, “service level,” or
ABC tag can be recomputed outside the original system. More-
over, CRUD ledgers are culturally and technically hostile to heavy
computation. Sophisticated analytics belong in environments built
for them; recomputing those figures in a non-CRUD setting is
both simpler and more reliable than bending the ledger to do the
math.[19]
Put differently—and to connect the discussion back to informa-
tion—derived data contain zero new bits. They are deterministic
functions of the firm’s fact data; at best they are lossy compres-
sions, convenient for a human reader. Treating such artifacts as
primary inputs conflates representation with knowledge: any un-
certainty they appear to resolve was already resolved—no more,
no less—by the underlying records.
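A trivial sketch of the point: a precomputed weekly-sales table carries no bits beyond the fact rows it was computed from (rows invented):

    # A "derived" figure is a deterministic function of the fact rows and can
    # be recomputed outside the ledger at will.
    from collections import defaultdict

    fact_sales = [  # (ISO week, sku, quantity): fact data
        ("2024-W10", "A-1", 4), ("2024-W10", "A-1", 6), ("2024-W11", "A-1", 3),
    ]

    weekly_totals = defaultdict(int)
    for week, sku, qty in fact_sales:
        weekly_totals[(week, sku)] += qty

    print(dict(weekly_totals))
    # {('2024-W10', 'A-1'): 10, ('2024-W11', 'A-1'): 3}
    # The precomputed "weekly sales" table in the ledger adds zero bits to the
    # fact rows above; it is merely a lossy, convenient view of them.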
Why, then, do ledgers teem with these tables? CRUD systems
are engineered to keep workflows snappy and auditable, not to
perform heavy optimization. Vendors therefore materialize interme-
diate calculations—forecasts, EOQs, safety stocks, ABC tags—so
screens load fast and so a ledger can posture as “planning”. This
precomputation spreads because it is cheap to store and easy to
demo, not because it adds information.
Second, handling these artifacts is materially harder than work-
ing with primary (fact) data—often by an order of magnitude or
more. Flow events are bounded by physics; artifacts are bounded
only by storage—effectively abundant. In software products that conflate ledgers, reporting, and “planning”, derived artifacts proliferate until they dominate the schema. It is common to see derived tables outnumber fact tables ten-to-one. The consequence is predictable: more volume to move, reconcile, and govern, with little or no informational gain.

[19] Perfect numerical identity would require mirroring the source system’s floating-point quirks, rounding rules, and version-specific bugs. For decision-making, economic equivalence—not byte-for-byte equality—is what matters; vanishing numerical differences do not change the ranking of options nor the resulting profitability.
Third, the analytical artifacts embedded in CRUD apps (setting
aside purely operational caches and materialized views) almost
always reflect naive, legacy approaches that do not belong in
modern practice. This is not an accident but a consequence of the
CRUD engineering culture: ledgers are built to keep workflows
snappy and auditable on top of relational databases, so vendors
gravitate toward recipes that are (a) expressible in SQL, (b) cheap
to precompute, and (c) easy to demo. Forecast tables, economic
order quantities, “service levels”, and ABC tags tick all three boxes;
pricing optionality, cross-SKU capital arbitration, or risk-adjusted
portfolio decisions do not. The outcome is a steady accumulation of
brittle numbers that freeze simplistic assumptions into the ledger,
blur accountability (which figure is authoritative—the artifact or
the facts that generated it?), and mislead downstream efforts by
masquerading convenience as knowledge. Had the authors aimed
at economically sound decision-making rather than screen speed
and sales collateral, these artifacts would never have been mixed
into the system of records in the first place.
Not every statement about the future is derived data. Some
future-dated entries are themselves facts: decisions or commitments
that bind the firm or its partners. For example, if the marketing
department has formally approved a budget for an advertising
campaign starting early next year—complete with timing, regional
splits, and channel allocations—this entry is not computed from
other records; it is a new constraint that changes cash flows and
capacities regardless of what demand later reveals. The same
applies to effective-dated price changes already agreed with a
supplier, planned store openings or closures with firm dates, or
a regulatory change whose enforcement date is fixed. Such items
should be recorded as facts, with clear provenance, effective/expiry
windows, scope, and revocation rules.
More broadly, the habit of treating derived artifacts as inputs to
a system of intelligence reflects a planning-centric worldview that
puts planning on a pedestal. Yet planning—and the artifacts it
emits—is a second-class citizen in supply chain. What matters are
the enacted decisions that commit scarce resources. Intermediate
computations such as time-series forecasts, especially when they
support far-off choices like next season’s replenishments, do not
deserve equal billing: they are provisional and will be revised
repeatedly before any cash or atoms move.
Systems of intelligence exist to issue decisions—purchase or-
ders, allocations, prices, schedules—that commit scarce resources.
While exploring the option space, the engine produces a trail of
intermediate calculations (scores, simulations, scenario logs, pro-
visional forecasts). These are derived data: they contain no new
information beyond the inputs and the code. Useful for auditability
and diagnostics, they are transient technical by-products, not mas-
ter data. The only durable outputs are the decisions themselves,
which must be written back to the system of records through a
narrow interface. Those entries are not “artifacts”: they are new
facts—commitments that the firm will execute and that will later
materialize as receipts, movements, and sales.
Consequently, a system of intelligence should consume authori-
tative facts from systems of records (and possibly a few external
feeds with clear provenance), not the frozen artifacts embedded
in ledgers or reporting tools. If a summary is needed, it should
be recomputed inside the engine from the underlying facts; ingest-
ing precomputed artifacts merely imports someone else’s assump-
tions—and their information loss.
5.4.6 External data
By external data we mean signals that do not originate in, and can-
not be reconstructed from, the company’s own systems of records.
They must be obtained from outside—public websites, official re-
leases, third-party sensor feeds, or commercial data brokers. If the
bytes can be derived solely from internal ledgers, they are not exter-
nal; they are internal or derived data. Exploiting external data can
create value, but acquisition, cleaning, and upkeep costs are high
and often erode returns. As a rule of thumb, exhaust the economic
value of internal data first; the recurring exceptions are competitors’
posted prices and a few highly curated, industry-specific datasets.
The chief obstacle with external data is the cost of acquiring,
cleaning, and keeping it current. Unlike the firm’s own ledgers,
external feeds are irregular, partially structured, and seldom aligned
with internal schemas or refresh cycles. Internal records tend to
scale with the firm; external sources do not. When external data
prove worthwhile, it usually falls into three recurring families: (1)
competitive intelligence—rivals’ posted prices, assortments, and
terms—which directly modulate demand and constrain pricing; (2)
weather observations and short-range forecasts, which can inform
near-term allocation and scheduling; and (3) social-media exhaust,
attractive in theory but typically unrewarding in practice.
Each family brings distinct frictions: pairing and normalizing
items across catalogs for competitive data; locality, horizon limits,
and heavy volumes for weather; and noise, manipulation, and weak
causal links for social media. The next subsections take them in
that order, starting with competitive intelligence.
Competitive intelligence
Competitive intelligence, in the supply chain sense, means har-
vesting rivals’ public signals—posted prices, assortments (styles,
colors), and terms (warranties, delivery promises)—and folding
them back into our own decisions. It is first and foremost an
external data problem: these signals do not live in our ledgers;
they must be acquired, matched to our catalog, cleaned, and kept
current.
Economically, the link is direct: price rations scarce goods
and coordinates substitution. Small price moves—ours or a ri-
val’s—shift both total volume and market share, sometimes abruptly
once retaliation begins. Thus, to allocate capital and inventory
rationally, the supply chain must track competitors’ prices, assort-
ments, and terms and let these signals inform its decisions.
Before the late 1990s, gathering competitors’ posted prices
was tedious clerical work. The spread of e-commerce in the 2000s
made prices readily accessible on public sites. Even professional
goods once hidden behind requests for quotes now surface, because
a single reseller can publish the figure. Consequently, a cottage
industry of web-scraping experts—and tools to parse messy pages while impersonating human visitors[20]—emerged during the 2000s.
Today, companies can cheaply buy extensive datasets, refreshed
several times a day, that map competitors’ assortments. Exploiting
such data is still laborious; several hurdles remain.
Competitive-intelligence data are far noisier than the company’s
internal records. Web scrapers lean on many heuristics, and these
can occasionally misread a value[21]. Even with high overall quality,
wrong values must still be detected and neutralized.
In most markets our catalog does not map 1-to-1 to a rival’s.
Even when a competitor sells the “same” article, its SKU, pack
size, color set, or included accessories differ. Pairing is therefore
a matching problem, not a lookup. A workable approach pro-
ceeds in three steps: (1) generate candidate pairs using cheap
signals (shared manufacturer references, GTIN/EAN/UPC, nor-
malized titles, salient attributes such as size/volume/power rating,
and—when available—image embeddings); (2) score each candidate
with multiple independent tests—text similarity, attribute com-
patibility, price-per-unit coherence, and bundle/variant rules—to
produce a confidence score; (3) retain the full matching graph (one-to-many and many-to-one) with timestamps rather than a single “best” link, because assortments drift and the correct pairing can change over time. Modern language and vision models make step (1) easier, but they do not remove the need for steps (2) and (3). Without an auditable pairing layer—complete with versioned rules and confidence thresholds—downstream price comparisons will be numerically precise and economically wrong.

[20] An arms race happened in the 2010s between publishers—possibly operators of large marketplaces—and scrapers, the former trying to prevent the latter. However, for all practical intents and purposes, through the use of headless browsers, proxy networks, and more recently LLMs (large language models), the scrapers emerged victorious. Considering the evolution of the software technologies, there is little reason to ever expect any lasting reversal of this situation.

[21] Competitive intelligence tools also frequently reveal that competitors themselves occasionally face pricing glitches, with a low percentage of their own prices being bogus, either due to some “fat finger” clerical error, or due to bugs in the e-commerce front end itself.
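A toy sketch of steps (1) and (2) described above, with invented catalogs, weights, and thresholds; a real pipeline would use richer attributes and keep the full scored graph with timestamps, as per step (3):

    from difflib import SequenceMatcher

    ours   = [{"sku": "R-100", "gtin": "4006381333931",
               "title": "Steel bottle 750 ml", "price": 19.0}]
    theirs = [{"sku": "X-77", "gtin": "4006381333931",
               "title": "Stainless steel bottle 0.75 L", "price": 17.5},
              {"sku": "X-90", "gtin": None,
               "title": "Steel bottle 1.5 L", "price": 24.0}]

    def title_similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def confidence(mine: dict, rival: dict) -> float:
        gtin = 1.0 if mine["gtin"] and mine["gtin"] == rival["gtin"] else 0.0
        text = title_similarity(mine["title"], rival["title"])
        price = 1.0 - min(1.0, abs(mine["price"] - rival["price"]) / mine["price"])
        return 0.5 * gtin + 0.3 * text + 0.2 * price   # invented weights

    # Retain every scored edge (one-to-many), not just the single best link.
    edges = [(m["sku"], r["sku"], round(confidence(m, r), 2))
             for m in ours for r in theirs]
    print(edges)  # the GTIN-matched pair scores far above the size-mismatched one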
Data volume is not the only concern; persistence and prove-
nance are. Competitive feeds are not “one row per SKU per rival
per day”. To remain auditable and economically useful, the firm
must retain: (i) every raw observation (timestamped URL, page
or API snapshot, and the parser version used), (ii) the full match-
ing graph with per-edge scores and timestamped rule versions,
(iii) normalization rules and unit conversions as they were at the
time, and (iv) every detected change (price, availability, variant
set, bundle composition). Prices can move several times a day;
marketplaces expose multiple seller-specific offers; variants and
accessories appear and vanish. Collapsing this volatility to a single
“daily price” discards exactly the signal we need for allocation and
pricing.
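As an illustration of point (i), a raw observation could be persisted along these lines (the field names are invented, not a standard schema):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass(frozen=True)
    class RawPriceObservation:
        observed_at: datetime   # timestamp of the fetch
        url: str                # page or API endpoint scraped
        snapshot_ref: str       # pointer to the archived HTML/JSON payload
        parser_version: str     # parser that produced the structured fields
        rival_sku: str
        price: float
        currency: str
        availability: str       # e.g. "in_stock", "backorder"

    obs = RawPriceObservation(
        observed_at=datetime(2024, 5, 2, 9, 15),
        url="https://example.com/p/123",
        snapshot_ref="archive/2024/05/02/123.html",
        parser_version="parser-1.8.2",
        rival_sku="X-77", price=17.5, currency="EUR", availability="in_stock")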
Practically, the competitive corpus often exceeds the firm’s
own transaction log by 10–100× in rows, and by far more in bytes if raw HTML/JSON snapshots are archived for traceability.[22]
Modern hardware digests the volume; the real cost lies in the
plumbing: ingestion pipelines resilient to site changes, anomaly
detectors to neutralize scraper glitches, periodic re-matching jobs
as assortments drift, and quality assurance flags that quarantine
dubious observations without halting the feed. Neglect any of these
and the dataset will be numerically dense yet epistemically thin.
Overall, competitive intelligence is consequential and affordable
enough to merit inclusion in most supply chains. However, if a project can launch without competitive data, add that feed later, once the core recipe is live.

[22] Archiving raw pages or API payloads is indispensable for audits and parser regression tests. It also guards against silent upstream changes. Observe the target site’s terms of use and applicable law; rate-limit scrapers and respect robots.txt directives unless a specific legal basis and business necessity justify otherwise.
Weather data
Many categories—apparel, beverages, home and garden, automo-
tive accessories—are weather-sensitive. Because weather moves
demand, it is tempting to let forecasts steer the supply chain. The
right question, however, is not whether to use weather data but
when it truly informs a decision. Two practical axes determine the
matter: the decision horizon (hours, days, weeks) and the spatial
granularity at which the firm acts (store, city, region). A forecast
carries useful information only when its horizon and locality match
the decision’s. In practice, two hard constraints limit the payoff of
weather-powered decisions.
Horizon: beyond roughly ten days, modern forecasts regress
toward seasonal averages. From the empirical rules of the early
20th century to today’s high-resolution satellite imagery, steady
progress has improved forecasts, yet they remain a resolutely
short-term affair. Consequently, choices with a look-ahead beyond
ten days are no better informed by weather feeds than by plain
seasonality measured on the firm’s own history.
Locality: Weather is intensely local. Conditions can diverge
sharply across places separated by only a few kilometers—or even a
few hours—while large firms sell into and operate across hundreds
of thousands—or even millions—of square kilometers. A weather
signal helps only if it is available—and mapped to our assets
and customers—at the same spatial granularity at which we can
act (store, route, city, or region) and within the relevant lead
time. Achieving this match requires high-resolution grids and
dense historical reanalyses to calibrate the statistical link between
weather and demand. Covering the next ten days plus several years
of history quickly yields datasets tens of gigabytes in size—often
an order of magnitude larger than the firm’s own transaction log.
Modern hardware digests the volume, but the plumbing is non-
trivial. When decisions are taken at a coarser level (e.g., regional
allocations), fine-scale signals largely average out, shrinking the
payoff.
Furthermore, weather is not a one-dimensional phenomenon.
Typical parameters include temperature, humidity, wind speed
and direction, precipitation, pressure, visibility, and cloud cover.
Depending on the goods, some parameters prove irrelevant, yet
identifying them still takes work. For electricity demand, temper-
ature often suffices, whereas other products hinge on a different
mix of weather factors.
Finally, asymmetric risks in a given supply chain decision can
undermine the benefits of weather-enriched forecasts. Let’s consider
women’s razors. In Western countries, those products follow a sharp
quasi-seasonality: demand surges in the week just before the first
hot spring weekend. Otherwise, demand stays low for the rest of
the year. For a general-merchandise store, the wiser move is to
avoid last-minute, just-in-time replenishment the moment a hot
weekend looms. If the forecast is wrong, replenishment happens
too late, and the store has missed its once-a-year opportunity to
sell those products. A sound, risk-adjusted choice is to replenish
in early spring. There will almost certainly be a first hot spring
weekend; the uncertainty is its precise date. Keeping a pack of
small, nonperishable products on the shelf a few extra weeks is far
cheaper than missing the surge because of a bad forecast.
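A toy expected-cost comparison makes the asymmetry explicit; every figure below is invented for illustration:

    # Replenish early in spring vs. wait for a weather-triggered, just-in-time order.
    holding_cost_per_pack_per_week = 0.05   # small, nonperishable goods
    weeks_held_if_early            = 4      # shelf time before the first hot weekend
    margin_lost_if_surge_missed    = 6.0    # per pack, once-a-year selling window
    p_forecast_misses_surge        = 0.25   # chance the JIT trigger arrives too late
    packs                          = 100

    cost_early = packs * holding_cost_per_pack_per_week * weeks_held_if_early
    cost_jit   = packs * margin_lost_if_surge_missed * p_forecast_misses_surge

    print(cost_early, cost_jit)   # 20.0 vs 150.0: early replenishment wins easily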
Weather helps only where its signal survives the two hard
constraints already noted: horizon (roughly ten days) and locality
(the unit at which the firm can act). Outside that window, the
physics of forecasting and the plumbing costs dominate, while
seasonal patterns and risk-adjusted policies carry more weight.
Tooling may soften these frictions later, but for now, weather is
an adjunct—not a foundation—for planning.
Social media data
During the 2010s, social-media data fascinated executives more
than any other external source. As millions of consumers began
voicing desires and opinions on social media, a company able to
continuously refine its offering on that information could outcom-
pete its rivals. Instead of relying on sales data observed with
weeks of delay, the hope was to capture market sentiment in real
time. Moreover, social media would hold information beyond the
company’s current offering, providing clues to guide new-product
introduction.
Yet social-media data never delivered the riches vendors promised;
the expected benefits did not materialize. Indeed, social-media
data are very challenging to exploit because they are extremely—and
often maliciously—noisy. Sock-puppet accounts abound; genuine
users troll or manipulate; influencers wax and wane. Inflated
audience figures—views or subscribers—are commonplace. Social-
media platforms have little incentive to purge click-farm traffic[23] that inflates those figures.
Even if quality were higher, matching posts to precise parts of
the company’s offer—at scale and automatically—remains oner-
ous. While it is obvious that public feedback helps design or
engineering teams by surfacing subtle flaws—or, conversely, ar-
eas for improvement—supply-chain feedback is mostly unhelpful.
The company does not need social networks to learn that recent
shipments were delayed; its systems of records—and carrier inte-
grations—already provide that information.
Finally, processing costs remain consequential. Unlike similarly
large datasets such as weather data, which are entirely numerical,
social-media data are largely unstructured, with text and images
representing almost all of it. Recent progress in multimodal LLMs
(large language models) eases processing, yet running them at scale
remains costly and slow.
Among external sources, social-media data exhibit the worst
cost-to-benefit ratio. Extraction is arduous, and the insights are
rarely actionable from a supply-chain perspective. In practice,
firms should skip social-media data for supply-chain work until every other source is exhausted.

[23] A click farm is a setup where a large group of low-paid workers or automated systems are employed to artificially inflate online metrics, such as clicks, likes, views, or app downloads, to manipulate the popularity or ranking of digital content, apps, or social media accounts.
5.5 Mundane knowledge
A common objection to mechanizing decision-making is that people
know many things that are not in the company’s systems of records.
However, while this statement is a truism—the warehouse staff
knows the color of the door handles, knowledge almost certainly
absent from the system—it does not follow that near-complete
automation is impossible. On the contrary, in the public and in
corporate circles, a widespread mysticism endures about keeping
people in the loop, as if the human mind possessed preternatural
qualities no computer could emulate.
Let us start by clarifying the statement people know things
with respect to information versus knowledge. While employees
are certainly knowledgeable about the processes involved in the
flow of goods, they are not necessarily highly informed, especially
about the fine print of the flow’s ever-changing state. The human
mind cannot reliably remember thousands of daily inventory posi-
tions, thousands of customer orders, hundreds of pending supplier
deliveries. Even rudimentary computers vastly exceed the human
mind’s capacity—both in reliability and in speed—to store and
retrieve information.
Today, almost all remaining manual decisions in supply chains
involve a planner who relies solely on information shown by com-
pany systems. When systems of records fall short, employees
reach for spreadsheets—their shadow IT safety net—to assem-
ble the missing information. This widespread practice illustrates
that supply chains are de facto dependent on digital information.
Whether this information is neatly integrated into well-engineered
systems of records or haphazardly spread across a loose collection
of spreadsheets, the case remains: for all intents and purposes, the
information exists only in digital form.
Recall the distinction drawn earlier in the Knowledge sec-
tion between special and mundane knowledge. Special knowl-
edge—formulas, source code, contractual rules—already inhabits
the written realm and thus is a natural feedstock for mechanization.
Mundane knowledge, by contrast, lingers in tacit practices: know-
ing that carrier X never shows up after 4 p.m., that truck “Bessie”
must be coaxed into first gear, or that a given supplier forgets the
customs paperwork every second Tuesday. What skeptics of full
automation really fear is that this fuzzy remainder will disappear
once people are “out of the loop”.
The cure is not to renounce automation but to practice progres-
sive codification. Whenever an operator relies on an unwritten rule
that alters a material decision, the rule must be surfaced, written
down, and placed under version control exactly like any other piece
of productive code. Over time, the organization assembles a living,
peer-reviewed corpus—a supply-chain playbook—from which engi-
neers can harvest deterministic rules, statistical priors, or training
labels for the forthcoming system of intelligence.
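As a tiny illustration of such codification (the carrier name and cutoff are of course invented), an unwritten rule becomes a versioned, testable function:

    # "Carrier X never shows up after 4 p.m." turned into code that a decision
    # engine can consult and that reviewers can audit like any other rule.
    from datetime import time

    CARRIER_CUTOFFS = {          # reviewed and versioned like any other code
        "carrier-x": time(16, 0),
    }

    def pickup_feasible(carrier: str, requested: time) -> bool:
        cutoff = CARRIER_CUTOFFS.get(carrier)
        return cutoff is None or requested <= cutoff

    print(pickup_feasible("carrier-x", time(17, 30)))  # False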
Cultivating such a playbook demands a company-wide written
culture. Meeting minutes, post-mortems, simulation notebooks,
and corrective SQL snippets should be treated as first-class arti-
facts, stored in a shared repository for search, review, and refac-
toring. Written words decay slowly; oral traditions evaporate
overnight. The sooner a firm embraces this discipline, the faster it
converges toward fully informed, machine-executed decisions.
Therefore, the information-based objection to automation does
not stand. However, this does not address knowledge itself—and
the intelligence needed to wield it. This objection can be restated:
given all the necessary information, we still do not know how to for-
malize the transformation of information into decisions. Employees
can do it, but they cannot say how—at least not precisely enough
to permit reliable replication by a computer. This objection has
merit and will be revisited later in the chapter Intelligence.
5.6 Bad data
Over the last four decades, supply-chain planning initiatives have
been an almost unbroken string of failures, as demonstrated by the
absence of production-grade unattended decision-making processes
in most companies. While this statement will be revisited in greater
detail later, let us immediately note that bad data has become
the favorite excuse. Software vendors, software integrators, and
data scientists alike invariably attribute their lack of results to
inadequate data. Yet, this explanation does not stand up to closer
scrutiny.
Since the late 1990s, all sizable supply chains have been dig-
itized: every purchase, every shipment, every sale, and every
production run is logged in a system of records somewhere. This
does not imply that those systems capture all relevant aspects of
the physical flow, nor that they are free of limitations and other
issues. Yet, for more than two decades, the core transactional
history of large companies has been digital.
Empirically, transactional data held by systems of records
almost invariably exhibit error rates as low as one faulty entry in
a thousand. This accuracy is unsurprising. Maintaining a precise
ledger is a matter of survival for companies: unpaid customer debts
must be recovered, and supplier invoices must not be paid twice.
Routinely failing at those basics either guarantees bankruptcy or,
at best, an acquisition by a better-managed competitor, since those
problems are largely fixable. Occasional clerical errors still slip
through despite safeguards, but they are too rare to meaningfully
affect supply-chain optimization.
Non-transactional data held by systems of records are typically
lower in quality than transactional data, precisely because erro-
neous entries do not systematically trigger an economic sanction.
However, in supply-chain settings, these data are seldom outright
garbage. Indeed, most large companies are disciplined enough that
their records reflect daily operations. The main quality issue with
non-transactional data is that entries may be missing if employees
can evade the data-entry process. When the user experience is
poor, skipping data entry may be the only practical way to stay
productive. Yet those fields go unfilled precisely because they are
not critical to keeping the flow running.
In a sense, the very fact that the company keeps operating
proves the data are “good enough”. Decades ago, some divisions
kept paper backups, but such practices have vanished from large
supply chains. If data are not in the company's systems, they do
not exist. As a result, missing non-transactional data does not
prevent supply-chain optimization; it merely limits its effectiveness.
Furthermore, the same data limits constrain employees performing
the task manually. Few situations exist where the information
in people’s heads suffices for optimization. Instead, they rely on
context-free policies that keep the flow running—possibly ineffi-
ciently—on scant data. Yet those policies are not exclusive to the
human mind; software can reproduce them.
Let’s consider, for example, a retail network with a warehouse
serving a few dozen franchised stores. Each day, the independently
run stores send orders to the warehouse, expecting next-day fulfill-
ment. Yet, the total quantity ordered by the stores on a given day
might exceed the stock on hand in the warehouse. In this situa-
tion, the default policy is simply "first-come, first-served". This
naive rule invites gaming: stores overorder whenever they sense
a possible shortage, which only deepens the problem. A better
policy would distribute the remaining stock across the network
rather than granting it all to the earliest claimants. However, such
refinement requires more data, i.e., more context.
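To make the contrast concrete, here is a minimal sketch, in Python, of the two policies; the store names, quantities, and the rounding rule are invented for illustration and are not taken from any particular system.

def first_come_first_served(orders, stock):
    # Serve orders in arrival sequence until the warehouse runs dry.
    allocation = {}
    for store, qty in orders:
        granted = min(qty, stock)
        allocation[store] = granted
        stock -= granted
    return allocation

def proportional_allocation(orders, stock):
    # Spread the remaining stock across stores in proportion to their requests.
    total_requested = sum(qty for _, qty in orders)
    if total_requested <= stock:
        return {store: qty for store, qty in orders}
    return {store: round(qty * stock / total_requested) for store, qty in orders}

orders = [("store_A", 60), ("store_B", 30), ("store_C", 30)]
print(first_come_first_served(orders, stock=80))
print(proportional_allocation(orders, stock=80))

The proportional rule removes the advantage of being the earliest claimant, which is precisely what tempers the overordering described above; rounding may leave a few units unallocated, a detail a production recipe would handle explicitly.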
These observations cast doubt on the popular claim that “bad
data” sinks most supply
-
chain projects. After all, most initiatives
amount to flow planning that needs only transactional data—data
that is both available and reliable. An alternative, more con-
vincing explanation is that most technologists—software vendors,
software integrators, in-house data scientists—fail to exploit the
data already sitting in the company’s systems of records. Indeed,
supply-chain books and software alike tend to dismiss the problem:
data preparation is wholly ignored, implicitly treated as a task so
straightforward that it does not deserve any specific methodologies
or technologies.
Yet, as detailed in the previous section, systems of records were
never designed for data science and are unlikely ever to be. Large
companies often operate many—sometimes dozens—of function-
ally overlapping systems of records. Such a chaotic application
landscape arises from mergers and acquisitions, from successive
roll-outs that never retire old systems, and from ever-new functional
demands. Thus, data preparation—especially pinning down field
semantics—is invariably a huge challenge for any sizable company.
That challenge can only be tackled by supply-chain specialists.
In short, “bad data” often serves as a convenient scapegoat
for plain incompetence. While external sources often suffer from
overwhelming quality problems, this is not the case for the systems
of records operated by the company itself.
Chapter 6
Intelligence
Intelligence is the elusive faculty that lets people articulate their
wants and plan the actions needed to reach them. It is self-
evident that supply chain, understood as mastery of optionality,
demands substantial intelligence to execute profitably. Yet a supply
chain run crudely saddles the company with constant, avoidable
overhead—costs sharper resource allocation would spare.
Definition (Intelligence).
The capacity to make choices that yield superior future
rewards.
In an unhampered market, firms managed intelligently are
rewarded, while those that are not are weeded out. The market
is remarkable for delivering capabilities its participants do not
explicitly understand. Earlier we saw the market deliver prices;
now we see it deliver intelligent management as well. Those
capabilities are obtained only indirectly—an aggregate property
of the market as a whole. An executive must still rely on his own
intelligence to improve his firm—specifically its supply chain.
Understanding intelligence is crucial to supply chain because
much of it can now be engineered and delegated to computing
hardware. In practice, for well over a decade, at-scale, unattended
automation of the mundane operational decisions that keep goods
flowing has been within reach. This is not a claim that general
intelligence is “solved”; rather, it is an observation that specialized,
engineered intelligence already delivers superhuman consistency
on narrow tasks and large productivity gains. Today’s ubiquitous
route-planning apps offer quiet yet powerful evidence.
The architectural form of this engineered capability is the
system of intelligence. Before contrasting general and specialized
intelligence, let us examine these systems on their own: how they
arise in a modern supply chain, how they turn raw records into
profitable unattended decisions, and why their design principles
diverge sharply from the CRUD applications dominating enterprise
software.
6.1 Systems of intelligence
Decision-making is mechanized through systems of intelligence.
Unlike systems of records or systems of reports, systems of in-
telligence are engineered as replacements for human operators,
not for man–machine interaction. The purpose of a system of
intelligence is to generate the most profitable decisions, and to
have those decisions made in a fully unattended manner.[1] For
most companies, relentless automation has been the cornerstone of
profitability. Systems of intelligence extend this principle to tasks
traditionally delegated to the white-collar workforce. Our ability
to engineer a system of intelligence depends squarely on our ability
to mechanize intelligence, whether general or specialized.
[1] Full automation is also a practical requirement for a dual-run—operating
the new decision engine in parallel with the legacy process—without
duplicating the planning workforce. We return to this point later.
Although every corporate function—marketing, finance, HR,
and so on—will ultimately have systems of intelligence of its own,
supply chain is one of the most promising segments. Any sizable
supply chain demands tens of thousands—sometimes millions—of
decisions each day. Absent systems of intelligence, dozens or
even hundreds of clerks are needed just to keep goods moving.
By the early 2020s, many retailers and manufacturers employed
more white-collar staff updating spreadsheets than blue-collar
staff handling the goods themselves. Blue-collar roles have been
aggressively mechanized for two centuries—a process still underway.
By contrast, nearly all productivity gains from enterprise software
in the past five decades have come from systems of records alone.
White-collar headcounts have hardly fallen; rising supply chain
complexity has absorbed the gains.
Yet the first inventory control systems, deployed in the 1970s,
were expected to deliver extensive decision-making automation.[2]
Those systems almost invariably relied on the safety stock paradigm,
or on minor variants of it: for every SKU, the buffer is sized
from a demand forecast, a lead-time input, and a target service
level. The method assumes demand uncertainty follows a normal
distribution. The original expectation was that such a system
could automatically generate inventory decisions, under modest
supervision from employees to update parameters once in a while.
The expectation proved misplaced; the systems remained intensely
labor-intensive.
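For concreteness, the textbook recipe can be written down in a few lines. The sketch below uses invented numbers and the usual normal-distribution assumption; it illustrates the calculation those 1970s systems automated, not a recommendation.

from statistics import NormalDist

daily_demand_mean = 20.0   # forecast, units per day (invented)
daily_demand_std = 6.0     # forecast error, units per day (invented)
lead_time_days = 10
service_level = 0.95       # target service level

z = NormalDist().inv_cdf(service_level)                  # normal quantile
demand_over_lead_time = daily_demand_mean * lead_time_days
sigma_over_lead_time = daily_demand_std * lead_time_days ** 0.5   # assumes independent daily demand
safety_stock = z * sigma_over_lead_time
reorder_point = demand_over_lead_time + safety_stock

print(round(safety_stock), round(reorder_point))

The entire decision hinges on a handful of parameters that someone must keep up to date, which is why, in practice, those systems still required constant supervision.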
Yet in the 21st century, after a few decades of stalemate, the
practical automation of mundane supply chain decisions became
possible through systems expressly engineered as systems of intel-
ligence. For supply chain practitioners, the importance of those
systems cannot be overstated: the modern practice of supply
chain is, for the most part, a specialized software undertaking. It
amounts to engineering systems of intelligence dedicated to supply
chain. This fundamental insight is routinely missed by supply
chain authors, consultants, and academics alike. As a result, the
traditional paradigm fails to turn the supply chain practice into a
productive asset.
[2] In any case, those systems were heavily promoted as such by their
respective software vendors.
6.1.1 A productive asset
What if we train employees and they leave? Indeed, but
what if we don’t and they stay? (old business saying)
Before software, firms had to hire and keep clerks for every
routine decision, no matter how small, mundane, or repetitive. As
decisions multiplied, so did clerks and their managers. A company
could narrow its focus to cut this headcount, but only by shrinking
its offering. In this view, the staff is a pure operating expense
(OPEX): the staff’s workdays are spent and leave no lasting asset
once a decision is made.
Even with training and refined organization, large workforces
have long since reached their competence ceiling. For instance,
APICS[3] (American Production and Inventory Control Society) was
formed in 1957; its teachings have been stable for decades. Despite
what many authors of the consulting variety still advocate, it is now
unreasonable to expect that revamping the supply chain processes
or organization, while remaining anchored in the pre-software
paradigm, will yield anything but thin returns.
[3] It became the ASCM (Association for Supply Chain Management) in 2018.
The fundamental proposition behind a system of intelligence
is to turn those operational expenditures into capital expenditures.
Even the man-days invested in building the system improve the
system itself. Operating and maintaining it still costs money,
yet—as software—those outlays are typically a small fraction of
the payroll required to achieve the same results manually. Thus,
through the system of intelligence itself, the practice of supply
chain is reified as a productive asset. This asset is productive
because it generates ongoing returns for the company, through the
decisions it automates, well beyond its operating costs.
Unlike a clerical workforce that can only be trained to a point,
there is no a priori upper bound on supply chain performance that
can be expected from a system of intelligence. First, it becomes
possible to deliver what amounts to definitive fixes to certain in-
efficiencies or mistakes. Indeed, no matter how well-trained and
how diligent the managers might be, any manual process is bound
to suffer occasional clerical errors. Moreover, introducing further
redundancies in manual processes—such as extra validations by
peers or by the hierarchy—is usually counterproductive, as it slows
the flow and adds hurdles and costs. Second, every decision can
be further refined through more data, more accurate predictors,
more effective optimizers . . . with no built-in ceiling on their so-
phistication. Software can already package what would otherwise
require lifetimes of accumulated insights and skills, and this has
been the case for decades.[4]
[4] In 2023, the Linux kernel has over 30 million lines of code, while
Microsoft Windows has over 50 million lines of code. Even Android, a mobile
operating system, has more than 7 million lines of code when counting
dependencies.
As the maintenance of a system of intelligence is a problem
of general intelligence, no system is yet fully autonomous. Since
the early 2010s, however, a growing number of companies have
run systems of intelligence that keep the mundane execution of
their supply chains largely unattended.[5] Numerical recipes remain
untouched for weeks, if not months.
[5] At a minimum, those companies include the corporate clients of Lokad.
Furthermore, while technological giants like Amazon or JD.com don't fully
publicize the fine print of their technologies, their research publications
over the same period suggest they have also uncovered methods capable of
delivering very high levels of automation, quite comparable to those achieved
by Lokad.
6.1.2 Muddy emergence
Since the 1970s, supply chain software vendors have repeatedly
claimed that their products would automate planning; in practice,
these promises did not materialize—the systems shipped did not
make unattended decisions, and for most vendors this remains true
in the mid-2020s.[6]
Genuine systems of intelligence emerged only in
the late 20th
century, first at scale in quantitative finance as algo-
rithmic trading platforms—often referred to as “trading algorithms”
or “automated trading systems”. Starting in the 1980s, financial
institutions employed algorithms to automate trading decisions,
effectively replacing human traders in various market activities.
These systems were designed to analyze vast amounts of market
data, identify profitable trading opportunities, and execute trans-
actions at speeds unattainable by humans. The success of these
systems highlighted the transformative potential of mechanizing
intelligence, setting a precedent for other industries.
[6] Mainstream ERP/APS offerings still delegate decisions to human clerks
and function primarily as systems of records or reports.
However, in other industries—most notably in supply chain—the
confusion between systems of records and systems of intelligence
has persisted since the rise of enterprise software in the 1970s, with
software vendors—aided by market analysts—playing a critical
role in perpetuating it. In a nutshell, systems of records are easy
to design but offer limited returns; systems of intelligence are
hard to design but have vast potential. As a result, most vendors
frame their systems of records as systems of intelligence to inflate
their prospects’ willingness to pay. Crude as it is, this marketing
technique remains effective, with companies in the 2020s still pay-
ing high fees for technologies (CRUD apps) commoditized over a
decade ago. Many executives remain convinced that systems of
records contribute to the quality of their business decisions, despite
several decades of contrary empirical evidence.
The value a system of records can deliver is strictly capped,
no matter the quality of the solution. Indeed, once the system
achieves near-perfect reliability and complete functional coverage,
there are no further improvements to be made. For many pro-
cesses—invoicing, inventorying, time tracking . . . —this endgame
was reached more than a decade ago. Just as having the best accoun-
tant in the world makes no difference compared with having a great
accountant, the greatest system of records makes no difference
compared with a great one.
Enterprise software vendors have always been painfully aware of
the inherent limitations of systems of records. Since the beginning,
vendors have focused their discourse on the “better decisions”
companies would make thanks to their software products. For
example, in supply chain settings, vendors invariably claim that
their solutions lead to lower stock levels and fewer stockouts, even
when the software in fact entirely delegates inventory decisions to
the client company’s staff—as is invariably the case with ERPs.
Vendors selling systems of reports would later follow a similar
approach, touting decisions that were, at best, only indirectly
derived from their reports.
The crux of the technological challenge is that neither a system
of records nor a system of reports can practically be extended into
a system of intelligence. In defense of the early software vendors
of the 1970s, the reasons weren’t readily apparent at the time, and
thus the vendors themselves most likely believed that their early
systems of records would, in time, encompass decision-making
processes. Yet by the end of the 1990s, dozens of notable failed
attempts had demonstrated that such a transition would not take
place.
6.1.3 Culture and incentives
Automating supply-chain decision-making did not arise from the
enterprise-software pioneers that grew into giants in the 1990s.
This is doubly surprising: those vendors not only surveyed supply
chains extensively—through their systems of records—but also re-
lentlessly promoted their capacity to improve those very processes.
Understanding why those vendors failed to deliver systems of intel-
ligence for supply chain matters, because companies modernizing
their supply chains will face the same problems.
One might argue that late 20th-century software was lacking
and vendors were overly optimistic; that proposition is questionable.
Indeed, the same vendors—primarily ERP and CRM vendors—still
had not pivoted a decade after it became clear that some compa-
nies had mechanized supply-chain decision-making; they lacked
designs that would make such automation possible. Thus, while
technological barriers existed, they were not the primary hurdle.
The first problem is incentives. Vendors selling systems of
records generally align fees with user counts; even fees not tied
to headcount—e.g., support—track it in practice. As a result,
vendors are incentivized to add traits that increase headcount and
disincentivized from steering products toward reducing it. Systems
of intelligence, whose explicit aim is to automate decision-making,
run directly counter to that agenda.
Thus it is no surprise that vendors of systems of records—often
highly profitable—have little reason to fund technologies that
could erode today’s cash cows. If economics teaches one lesson,
it is that incentives matter. If the organization’s profitability
depends on a breakthrough not occurring, visionaries within it are
inconsequential.[7]
[7] Numerous examples of such companies can be found in history. For
example, Kodak was a pioneer in photography and had developed many patents
related to digital cameras. Kodak employee Steven Sasson developed the first
handheld digital camera in 1975. Larry Matteson, another employee, wrote a
report in 1979 predicting a complete shift to digital photography would occur
by 2010. Yet they hesitated to embrace digital photography, fearing
cannibalization of film. That reluctance hastened their decline and culminated
in a 2012 bankruptcy as competitors seized the digital market.
The second problem is engineering culture. As noted, systems of
records are developed almost exclusively along the CRUD paradigm.
Compared to alternatives, CRUD demands far less engineering skill
to maintain and extend an app. This has become a cornerstone
of workforce management for vendors of systems of records. As a
result, they can run larger teams while reducing dependence on
engineering talent. By contrast, systems of intelligence invariably
involve hard engineering challenges, scarcely comparable to CRUD
trivialities. Unsurprisingly, a workforce that has dealt almost
exclusively with “easy” problems—often for a decade or more—is
reluctant to tackle genuinely difficult ones. As anecdotal evidence,
the near-totality of computer-science breakthroughs in the last two
decades came from consumer software companies[8], while notable
contributions from enterprise vendors of systems of records are
hard to name.
[8] For example, cloud computing was popularized by Amazon, deep learning
by Google.
Therefore, when rolling out a system of intelligence, a company
should avoid vendors specializing in systems of records. Because
systems of records are deployed first, managers are invariably
tempted to turn to the incumbent vendor that has already proved
trustworthy. Unfortunately, this trust is misplaced: the traits
that make a vendor succeed at systems of records are wholly
counterproductive for systems of intelligence.
6.1.4 Conflicting design requirements
Beyond incentives and culture, attempts to extend systems of
records to encompass decision-making are largely doomed by con-
flicting design requirements. Performance, reliability, and main-
tainability matter to both camps, but what those notions entail
varies drastically.
As we later explore technologies for supply-chain systems of
intelligence, it will be clear that such a system cannot meaningfully
grow out of a CRUD application and is best developed separately.
Even basic considerations reveal how wide the gap is between
systems of records and systems of intelligence.
Performance
A system of records must respond at low latency: whatever the
input or screen, actions should finish within milliseconds—ideally
under 100 ms—to remain imperceptible; beyond that, productivity
degrades across mundane operations. Although 100 ms sounds
fast, queuing ensures that a system averaging that figure will still
inflict multi-second delays—latencies users notice and resent.
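A back-of-the-envelope queueing sketch shows the mechanism. The numbers below are invented and assume a single transactional core modeled as an M/M/1 queue, which is a simplification for illustration rather than a description of any real system.

import math

service_time = 0.100   # seconds of server work per operation (invented)
utilization = 0.90     # fraction of time the transactional core is busy (invented)

mean_response = service_time / (1.0 - utilization)   # M/M/1 mean response time
p95_response = mean_response * math.log(20)          # 95th percentile of the exponential response time

print(f"mean {mean_response:.2f}s, p95 {p95_response:.2f}s")

Even though each operation only needs 100 ms of work, contention alone pushes typical waits toward a second and the tail toward several seconds.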
Therefore, in a system of records, every operation must finish in
(effectively) constant time within a fixed resource budget. Because
all operations compete for the same transactional core, any “fat”
operation will eventually starve the system and degrade latencies
across many operations. With thousands of features, even one
exception is risky. Today, most companies with “slow” business
systems suffer the predictable consequences of fat operations that
should never have been implemented in the system of records.
It is easy to introduce such an expensive operation inadver-
tently. For example, summing an unbounded number of lines—e.g.,
computing a one-year moving-average revenue for a product cate-
gory—already qualifies as problematic. Yet it is a basic piece of
descriptive statistics. Readers versed in algorithms may object that
most descriptive statistics can be maintained and retrieved within
a constant resource envelope. While true, such sophistication is
impractical.[9] First, CRUD's virtue is that it avoids requiring algo-
rithmic expertise. Second, at scale across thousands of features,
performance woes creep in. Avoiding these operations altogether
is the only scalable safeguard.
[9] Modern relational engines offer extensive mitigations; nevertheless,
engineering complex operations under a constant resource budget remains
challenging.
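The sketch below illustrates the kind of bookkeeping such sophistication requires: a one-year rolling revenue total kept with constant work per update instead of re-summing an unbounded number of lines on every read. Nothing here is hard, but it must be engineered, tested, and maintained for every statistic of this kind, which is exactly what the CRUD paradigm avoids.

from collections import deque

class RollingRevenue:
    # Maintains a rolling revenue total over a fixed window of days.
    # Assumes entries are recorded in chronological order.
    def __init__(self, window_days=365):
        self.window_days = window_days
        self.entries = deque()   # (day, amount), oldest first
        self.total = 0.0

    def record(self, day, amount):
        self.entries.append((day, amount))
        self.total += amount
        # Evict amounts that fell out of the window; amortized constant work per update.
        while self.entries and self.entries[0][0] <= day - self.window_days:
            _, old_amount = self.entries.popleft()
            self.total -= old_amount

    def rolling_total(self):
        return self.total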
A system of intelligence faces a very different performance
landscape. Consider deciding when and how much to order from an
overseas supplier. Although the decision is time-sensitive, spending
minutes—or even hours—on an optimization that yields a better
order is sensible. Imposing a 100 ms cap on a decision that will
take weeks to unfold makes no sense. The point of systems of
intelligence is to generate the best decisions the company can
afford to engineer. So long as computational spend is dwarfed by
marginal gains, expending more is reasonable.
Reliability
Reliability means something very different for systems of records
than for systems of intelligence. If a system of records goes down,
any dependent process immediately stalls. For example, picking
stock may not proceed if the inventory system is down. Thus, when
engineering a system of records, it is critical to keep it operational
at all times. Because CRUD applications isolate concerns, most
bugs are local—they affect only one screen or process. Keeping
the system of records running despite an identified issue is usually
preferable to shutting it down until the bug is fixed. In fact, it
should be possible to patch the system while operations are still
in progress, as the resolution of a local problem should not be
detrimental to the rest of the company. This capability—hot-
patching—is generally considered technically difficult. However,
the CRUD paradigm, via the relational database engine, largely
delivers hot-patching. Thus, in practice, software engineers devel-
oping a system of records rarely require expertise in distributed
computing to achieve near-perfect uptime.
By contrast, when a system of intelligence is suspected of being
defective, prudence dictates shutting it down and diagnosing before
resuming. Corrupted decisions can be hugely costly. For example,
a vastly oversized order to an overseas supplier all but guarantees
stock depreciation, if not an inventory write-off.
Consequently, in a system of intelligence, the economic quality
of decisions matters more than uptime. When confidence drops,
the engine should halt—deliberately and fast—based on explicit
heuristics; this avoids expensive mistakes and is usually the most
profitable choice. These halting heuristics are not the “alerts” and
“exceptions” emitted when a system of records is stretched into
a decision tool. CRUD designs cannot encode the fine print of
nontrivial decisions, so edge cases proliferate: basic replenishment
logic fails on launches, cannibalization, competitor promotions,
and other routine shocks. Yet shutting the application down is
unacceptable—edge cases are frequent enough that the tool would
be down most of the time, and, being also the system of record,
stopping it would paralyze operations. The tool therefore logs
an exception and pushes the case to a human. A true system
of intelligence does the opposite: the decision engine stops, and
operations resume only after a programmatic fix—a change to the
logic—has been deployed; hand-clearing cases is not an option.
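As an illustration only, a halting heuristic can be as blunt as the following sketch; the thresholds are invented, and a real engine would carry many such checks, each tied to an economic rationale.

class HaltEngine(Exception):
    # Raised to stop the decision engine pending a programmatic fix.
    pass

def check_order_plausibility(proposed_qty, trailing_weekly_sales, max_weeks_of_cover=26):
    if proposed_qty > 0 and trailing_weekly_sales <= 0:
        raise HaltEngine("ordering against a SKU with no recent sales history")
    if trailing_weekly_sales > 0 and proposed_qty > max_weeks_of_cover * trailing_weekly_sales:
        raise HaltEngine(f"{proposed_qty} units exceeds {max_weeks_of_cover} weeks of cover")
    return proposed_qty   # decision deemed plausible, released unattended

When such an exception fires, the whole engine stops and an engineer amends the logic; the check is not a to-do item pushed onto a planner's queue.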
Maintainability
The two flavors also differ sharply in their stance toward maintain-
ability. In a system of records, once new entities are added, clients
depend on them: data has been stored and processes now rely
on it. Trouble starts when entities overlap and become unmain-
tainable. Overlaps create unfixable confusion—entities cannot be
removed or even truncated. They also create conflicts when further
developing the software. Similar situations occur when entities are
ill-defined, ambiguous, or when they otherwise compromise the
semantic integrity of the collection of entities. Thus, for a system
of records, maintainability is chiefly a matter of carefully vetting
entities and the behaviors overlaid on them.
Also, in a system of records, sheer codebase size is not, by itself,
a maintainability problem. Many such systems hold thousands
of entities across tens of thousands of tables. Larger codebases
impose more overhead to extend the product, but do not jeop-
ardize continuity. Furthermore, a hypothetical complete rewrite
taking years is acceptable. While not desirable, it is not a prime
concern: no situation calls for an immediate complete rewrite. If
the business extends in a direction unforeseen by existing systems
of records, it is acceptable to set up a new system to supplement
them. Integration may prove challenging, but nowhere near as
challenging as a complete rewrite of the older systems.
In contrast, a system of intelligence is maintainable if, once
confronted with unprecedented conditions, it can be re-engineered
quickly to resume operations. Such conditions can stem from a
drastic strategic shift, a sudden market change, or any factor that
fundamentally alters the flow. A system of intelligence should
handle mundane variations, but a specialized intelligence cannot
be expected to handle radical disruptions. That requires general
intelligence, which is not readily available in computerized form.
For example, during the global panic of 2020, faced with a sharp
decline in passenger traffic, some aviation MROs stopped repairing
certain rotables (repairable components) and stored unserviceable
items to preserve cash. Repairs may cost a third of the item’s price;
with excess serviceable stock due to reduced traffic, delaying those
expenditures makes sense. Before 2020, storing unserviceable items
was not considered viable, and there was typically no process to
opt out of repairs. All rotables eligible for repair were repaired by
default. Unprecedented market conditions invalidated this policy.
It would have been unreasonable to expect a system of intelligence
engineered prior to 2020 to respond adequately to the lockdowns
of that year. Systems in production had to be extensively modified
to accommodate the new reality.
Because new conditions can demand a full rewrite of the decision
logic, that rewrite must remain feasible. Practically, the team
should draft a rough rewrite within hours and a polished one
within days. Between such extremes, when a halting heuristic
triggers, it should be possible to investigate and clear the case
within hours—possibly less than an hour. This implies an engineer
must always be capable of jumping into the core logic to assess
and, if necessary, fix problems on the spot.
Therefore, the codebase of a system of intelligence should stay
small and clear—ideally manageable by one engineer. Assume the
existing logic will eventually encounter conditions that exceed its
capacity to produce meaningful decisions. When this happens,
there is no alternative but to resort to general intelligence from
a capable employee to re-engineer logic for the new reality. In
essence, maintainability for a system of intelligence is the capacity
to rewrite, more or less completely, the system in the event of
disruption.
6.1.5 No problem without solution
Systems of intelligence live or die by how they frame the business
game they are asked to play. Even a flawless optimizer that
pursues the wrong objective destroys value. Hence, before we let
software reason on our behalf, we must first clarify exactly what it
is supposed to reason about. This undertaking proves surprisingly
difficult.
From primary school through university, students are presented
with “problems” to be solved. Many—if not most—grades depend
critically on students’ capacity to find solutions to those problems.
The approach persists less because of pedagogical merit than
because it gives institutions an easy, ostensibly objective way
to grade students. However, this problem-solving mindset is deeply
ingrained in our industrialized culture and has at least two notable
consequences. These consequences matter greatly for supply chain
and, more generally, for truly difficult undertakings.
First, there is a quasi-instinctive reluctance to think about
“problems”. Academic institutions train and reward solution-
finding, not the invention of problems. After two decades inside
those institutions, this pattern becomes deeply ingrained in most
of us. As a result, we often struggle to articulate a problem unless
a solution is already in view. Thinking about solutions comes
naturally—we have spent thousands of hours doing it—whereas
thinking about the problems themselves is, for most, rarely prac-
ticed.
Consequently, major supply-chain problems are often ignored
by practitioners and authors simply because no ready-made solu-
tion is at hand. Examples include price breaks, cannibalization,
adversarial behavior, model uncertainty, and optimization under
uncertainty—all consequential issues that today’s supply-chain
discourse largely sidesteps. Companies address these problems
indirectly through corporate policies—operating recipes that keep
the business moving—but rarely state explicitly the precise prob-
lem each policy intends to solve. For a system of intelligence,
this oversight is fatal: the code will faithfully optimize yesterday’s
question while the market has already switched games.
Second, challenging the problem statement itself is instinctively
perceived as challenging authority. Indeed, after two decades in
academic institutions, every student knows that answering "it doesn't
matter" to a posed question guarantees a bad grade. Yet too often
that answer is nonetheless the appropriate one. Careful selection
of which problems to address is fundamental. Resources—time,
computing power, people—are always limited, so we cannot tackle
every conceivable problem. One of the entrepreneur’s most critical
skills is wisely choosing which problems he decides to solve. This
perspective radically differs from employees’, who are usually tasked
not only with resolving a given problem but also with following a
codified method embedded in corporate policies.
Both academia and business maintain incentives that discourage
questioning the problem statement itself. In universities, publica-
tion, grant money, and student recruitment all depend on staying
within officially sanctioned research fields. Scholars may pursue
any “relevant” topic—provided it fits the canon of their discipline.
Max Planck captured this dynamic in his Scientific Autobiography
(1950): "A new scientific truth does not triumph by convincing its
opponents and making them see the light, but rather because its
opponents eventually die and a new generation grows up that is
familiar with it."
In companies, each problem’s solution is embodied in an org-
chart and a budget line. Questioning the problem therefore threat-
ens the status quo and creates managerial winners and losers.
Politics are unavoidable in large firms and present a major hurdle
to reframing the problem. Corporate history shows that organiza-
tions, like people, die—and refusing to confront new problems is a
prime cause of decline.
Hence, the first deliverable in a system-of-intelligence project is
not code but a clear, testable statement of the economic problem.
Only after this statement survives adversarial scrutiny should opti-
mization, forecasting, and data wrangling begin. With engineering
and cultural groundwork in place, we can broaden the lens and
revisit intelligence per se—first in its universal form, then in the
specialized varieties that supply-chain work handles daily.
6.2 General intelligence
Systems of intelligence contain narrow, software-bound reasoning,
whereas general intelligence is problem-agnostic and need not excel
at any single task. Its hallmark is the ability to refine both the
framing and the solving of problems. Riddles and tales—tools for
developing intelligence in children and adults alike—have accom-
panied mankind since the dawn of time. A riddle is an exercise
intended to sharpen the mind for its own sake. Tales play a similar
role but leverage our emotional response to facilitate memorization.
Definition (General Intelligence).
The capacity to intentionally improve intelligence itself.
Writing, invented over 5,000 years ago, is best seen as an
amplifier of human general intelligence rather than its cause. It
externalizes memory: perspectives, insights, and facts can be
recorded, critiqued, and recombined long after their authors’ deaths.
The printing press—and, much later, the web—lowered copying
costs and expanded coordination, accelerating this feedback loop.
Yet more storage does not by itself produce understanding or
judgment. A gram-sized device can hold many lifetimes of literature
and still be entirely unintelligent; memorization is not intelligence.
Intelligent animals such as orcas and chimps transmit cul-
tural know-how—hunting or foraging tricks—to their young. They
demonstrate notable intelligence, yet general intelligence eludes
them. Contrary to past philosophical theories, no single trait
appears uniquely human. Complex language, named individuals,
humor, tools, formal logic, and theory of mind[10] all exist in lesser
form in the animal kingdom. The shift from intelligence to general
intelligence is therefore a loose threshold—many mental faculties
must cross a quantitative line before a qualitative leap occurs.
[10] A theory of mind reflects the understanding that others' beliefs,
desires, intentions, emotions, and thoughts may be different from one's own.
Until the early 2020s, deliberate self-improvement of general
intelligence was strictly human and decidedly artisanal. The mind
resists “engineering” in the sense of limitless, reproducible upgrades.
Millennia of pedagogy have produced many techniques, yet no one
expects a breakthrough in this area, at least not in the foreseeable
future.[11] A new language, linear algebra, or thermodynamics
cannot be mastered overnight; disciplined study remains the only
path.
[11] Mind-enhancing drugs are a favorite science-fiction trope—Dune's Sapho
juice, for instance—but no such known compound rivals anabolic steroids'
impact on muscle.
By the 2010s, specialized intelligences could self-improve in
narrow arenas—most famously Go.[12] The early 2020s brought a
watershed: large language models whose largest instances display
a discernible spark of general artificial intelligence.[13] Like the
printing press or the web, these models expand humanity’s ability
to refine its own intelligence.
[12] Silver et al., Mastering the Game of Go without Human Knowledge (2017).
AlphaGo Zero trained against itself and beat its predecessor 100-0.
[13] Bubeck et al., Sparks of Artificial General Intelligence: Early
experiments with GPT-4 (2023).
With these foundations in place, we can clarify “general intelli-
gence”. The next three subsections tackle the subject from different
angles. First, we ask when a question becomes a general-intelligence
problem—one whose option space defies formal bounds. Second,
we look at clerical roles in supply-chain teams to show why they
need specialized, not general, intelligence and are therefore ripe
for automation. Third, we assess large language models—today’s
strongest bid for engineered general intelligence—and sort real con-
tributions from hype. Together, these lenses turn a philosophical
idea into operational guidance for modern supply chains.
6.2.1 General intelligence problems
General-intelligence problems are inherently open-ended. Each
solution merely deepens our understanding and forces a new formu-
lation, which then demands a fresh solution. Breaking this cycle
is never final: stopping points are partly arbitrary—a pragmatic
truce set by today’s economics, tooling, and attention—until reality
forces a new formulation. Such problems may feel subjectively rare
because, once named, most “problems” are promptly wrapped in
formal boundaries to make them tractable. However, nearly all
business problems, in their naked form, are general intelligence
problems.
Definition (General Intelligence Problem).
A general intelligence problem arises when options have
no formalizable boundaries.
Entrepreneurial profit-seeking is a clear example of a general-
intelligence problem. A firm’s capital, time, and managerial at-
tention are finite; the ways they can be deployed are effectively
unbounded and continuously shifting. At every moment, the
entrepreneur must decide whether to commit to the best cur-
rently known option or to expend resources searching for a bet-
ter one—knowing the search itself consumes those same scarce
resources. Because the option space shifts as technologies, com-
petitors, and regulations change, no fixed formalization closes the
problem. Entrepreneurship therefore belongs squarely to general
intelligence. History shows entrepreneurs repeatedly redefining—or
inventing—industries their predecessors never imagined.
By contrast, the vehicle-routing problem—choosing optimal
routes for a fleet that must serve given customers—is the archetype
of a bounded problem. Solving it may be hard[14], but it does not
require general intelligence. Any solution must obey the formal
graph-theoretic statement of the task. The option set is finite—even
if astronomically large—and every candidate shares the same shape:
a permutation of stops (and timings) evaluated under a fixed cost
model; no genuinely surprising move ever appears.
[14] The vehicle routing problem is known to be NP-hard. In computer science,
NP-hard problems are those at least as hard as every problem in the class NP;
they are conjectured to admit no polynomial-time algorithm. Exhibiting a
polynomial-time algorithm for any NP-hard problem would prove P = NP, i.e.,
that every problem in NP can be solved in polynomial time.
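The fixed shape of the candidates can be made concrete with a few lines; the distances below are invented, and brute-force enumeration is used only because three stops fit on a page.

from itertools import permutations

distance = {   # symmetric travel costs; "D" is the depot (invented numbers)
    ("D", "A"): 4, ("D", "B"): 6, ("D", "C"): 3,
    ("A", "B"): 5, ("A", "C"): 7, ("B", "C"): 2,
}

def cost(route):
    # Total cost of the tour D -> route ... -> D under the fixed cost model.
    legs = zip(("D",) + route, route + ("D",))
    return sum(distance.get(leg, distance.get(leg[::-1])) for leg in legs)

best = min(permutations(("A", "B", "C")), key=cost)
print(best, cost(best))

Every candidate is a permutation scored by the same function; changing the cost model, the fleet, or the service promise lies outside the formalization, which is the point made above.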
Every problem begins as a general-intelligence mess; turning
it into a tractable “bounded” problem is itself an act of general
intelligence.[15] The key question is whether the chosen formaliza-
tion is adequate. Academia may catalog puzzles indefinitely, but
businesses must pick one framing, because each resource can be
allocated only once. We return to this methodological point later.
[15] One can also do this unintelligently by inventing arbitrary variables
and constraints, then applying standard algorithms—a pattern common in
academic supply-chain papers.
Conversely, any bounded problem can be lifted back into a
general-intelligence problem by changing the frame that made it
“bounded”—a move sometimes dismissed as “cheating” because it
violates the original rules. In vehicle routing, the textbook problem
assumes a fixed set of customers, service promises, fleet, and prices.
The moment we renegotiate any of these—drop or defer remote
deliveries, price them differently, add pickup lockers or a parcel
carrier, switch vehicle types, widen delivery windows, or even make
delivery unnecessary altogether (as streaming did to cassettes)—the
original problem evaporates. We have not found a smarter route;
we have changed the game. That reframing—altering constraints,
commitments, or payoffs—is precisely the work of general intelli-
gence.
Earlier chapters showed that several supply-chain challenges
are genuine general-intelligence problems. Three stand out. (1)
Selecting the economic model, which defines how actions translate
into costs, revenues, risks, and constraints—the score we optimize.
(2) Defining the option space: the set of admissible actions, tim-
ings, and quantities the firm is prepared to consider (a decision
is a selection from this set, not the set itself). (3) Engineering
the numerical recipes that, given data, choose decisions within
that option space to maximize risk-adjusted returns while hon-
oring constraints. Because adversarial behavior—by competitors,
employees, suppliers, customers, or vendors—constantly pushes
against boundaries, only general intelligence can tackle these issues.
The story of suppliers exploiting a weak ERP to ship excess stock
is one such subversion.
Much published supply-chain work remains rigidly attached to
bounded formulations and leaves the general-intelligence design
choices—objective, constraints, and option space—implicit. This
does not make such problems intractable. Practically, surface
those hidden choices explicitly—state the economic objective in
monetary terms, enumerate admissible options, and define halting
conditions—and then decompose the work into bounded subprob-
lems a machine can solve. The next sections show concrete tactics
for doing so and for iterating when reality pushes back. The
program is necessarily incomplete—general-intelligence problems
cannot be “closed”—yet it is sufficient to yield unattended, prof-
itable decisions.
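A minimal sketch of that decomposition for a single replenishment decision follows; the economic parameters, demand scenarios, and option grid are all invented, and a real recipe would be considerably richer.

unit_margin = 7.0     # gross margin per unit sold (invented)
unit_holding = 1.5    # cost of carrying one unsold unit over the horizon (invented)
demand_scenarios = [(30, 0.2), (50, 0.5), (80, 0.3)]   # (demand, probability)
admissible_orders = range(0, 101, 10)                   # the enumerated option space

def expected_profit(order_qty):
    # The economic objective, stated in monetary terms.
    profit = 0.0
    for demand, probability in demand_scenarios:
        sold = min(order_qty, demand)
        unsold = order_qty - sold
        profit += probability * (unit_margin * sold - unit_holding * unsold)
    return profit

best_order = max(admissible_orders, key=expected_profit)
print(best_order, round(expected_profit(best_order), 2))

The general-intelligence work lies in choosing the objective, the scenarios, and the option grid; once those are fixed, the remaining optimization is a bounded problem a machine handles unattended.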
6.2.2 Clerical jobs in supply chain
Supply-chain challenges demand substantial general intelligence. A
persistent misconception holds that routine decisions must be held
to the same standard. This is not the case; they only require spe-
cialized intelligence—structured processes that can be engineered
and handed to software.
The misconception stems from outdated practices. In many
firms, clerks still decide, say, whether to raise a replenishment order.
They do apply policy with intelligence, but general intelligence is
scarcely involved—on closer inspection, almost none.
These policies are usually slight twists on 1970s-era practices.[16]
Clerks seldom improve the fundamentals; they merely execute and
maintain the rules. Even parameter upkeep—tweaking service
levels, for instance—is a specialized, rule-bound task. General
intelligence again plays no part.
[16] ASCM, founded in 1957 as APICS, has issued over 100,000 CPIM
certifications since 1973.
While better reporting tools raised productivity, they did not
improve the decisions themselves. Sparing employees the hunt
for stock levels and sales history is useful, yet simply surfacing
numbers does not, by itself, reveal how to turn them into profit.
Bluntly put, the 21st-century supply-chain clerk[17] is the white-
collar analogue of the 20th-century factory worker. Each day begins
by opening an app—often a spreadsheet—and slogging through
hundreds of rows. The cycle is endless: top performers become
trainers or supervisors, perpetuating it. A desk does not, by itself,
confer “intelligence”. Given today’s technology, these clerical roles
will soon belong in history books—obsolete, to general relief.
[17] Titles vary—demand planner, inventory analyst, replenishment manager,
scheduler, and so on.
Yet the accumulated know-how of those very clerks is anything
but disposable. Their day-to-day heuristics—honed through years
of wrestling with software vendor quirks, catalog edge-cases, and
improvised workarounds—form the tacit domain knowledge that
tomorrow’s systems of intelligence must absorb, formalize, and sur-
pass. Therefore, automation is not a repudiation of their expertise;
it is the vehicle that will preserve it, scale it, and finally liberate it
from the drudgery that keeps it underleveraged today.
Before considering how routine tasks vanish, recall why “soft-
skill” leadership loomed large in supply-chain teams. When plan-
ning was manual or semi-manual, leaders spent most of their energy
coordinating armies of clerks—arbitrating priorities, smoothing
friction between silos, and keeping throughput from stalling. That
requirement falls away as soon as decisions are automated. Once an
engineered—eventually artificial—intelligence carries the planning
load, the coordination problem collapses; a handful of specialists
can steer a very large flow of work.
The mandate changes. The leader’s job shifts from herding
people to owning the machine that makes the decisions. That own-
ership has four parts: (1) state the economic objective in money
terms and define the admissible option space; (2) guard decision
quality—via audits, halting heuristics, and dual runs when needed;
(3) steward semantics and data provenance, so inputs remain trust-
worthy; (4) recruit and protect a small, engineering-minded team
able to modify the logic on short notice. Charisma and empathy
still help, particularly when aligning finance, merchandising, and
operations around the same objective, but they cannot substitute
for technical stewardship. Firms that cling to clerical workflows
will keep rewarding soft-skill theater even as margins erode—a
theme we revisit when discussing manual forecasting.
6.2.3 Large language models
Large language models (LLMs) that emerged in the early 2020s
mark a milestone in seven decades of rapid software progress.
The most capable LLMs arguably show the first spark of general
intelligence and thus merit close attention. They matter to supply-
chain work—though many vendor-promoted “use cases” are mere
buzzword-riding nonsense.
Conceptually, an LLM does one thing: it reads a sequence of
symbols and predicts the next, appending it to the input. The
input sequence is the prompt; the act of generating is sequence
completion. Symbols could be single characters, but performance
improves when the model operates on multi-character groups called
tokens (vocabularies typically number around 100,000). Text is tokenized
before inference and detokenized after, restoring plain language.[18]
[18] Multimodal models first convert images into a latent vector that
occupies token slots, so no explicit image-to-token step is needed.
Internally, an LLM assigns probabilities to all possible next
tokens. One may sample from this distribution instead of always
taking the top choice, injecting randomness controlled by a tem-
perature parameter (with temperature = 0 yielding deterministic
output).
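The sampling step itself is small enough to sketch in full; the toy vocabulary and logits below are invented, and production models score on the order of a hundred thousand tokens at each step.

import math, random

def sample_next_token(logits, temperature=1.0):
    # logits: dict mapping candidate tokens to raw scores.
    if temperature == 0:
        return max(logits, key=logits.get)   # deterministic: always the top choice
    scaled = {token: score / temperature for token, score in logits.items()}
    top = max(scaled.values())               # subtract the max for numerical stability
    weights = {token: math.exp(score - top) for token, score in scaled.items()}
    threshold = random.random() * sum(weights.values())
    for token, weight in weights.items():
        threshold -= weight
        if threshold <= 0:
            return token
    return token

print(sample_next_token({"the": 2.1, "a": 1.3, "supply": 0.2}, temperature=0.7))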
LLMs have a fixed context window—the maximum token count
for prompt plus output. When that limit is exceeded, the oldest
tokens are truncated and their information lost. Models now
commonly hold over 100,000 tokens (roughly 100 pages of text).
Techniques such as retrieval-augmented generation (RAG) extend
practical reach, but lie outside this discussion.
Running a model (inference) is simple; creating one (training)
is far harder. LLMs are deep learning networks with billions of
parameters arranged as tensors[19] and combined through mostly
linear algebra operations.
[19] In computer science, a tensor is the dimensional generalization of the
matrix. A scalar, a vector, and a matrix are respectively tensors of
dimensions zero, one, and two.
The training corpora typically feature a trillion tokens or more. Train-
ing processes the corpus one token at a time and, each time, nudges
the parameters[20] toward a more accurate probabilistic prediction
of the next token. After the public corpus is ingested, training
repeats with a much smaller corpus intended to mold the model’s
generative patterns toward those of a helpful assistant. This latter
process is fine-tuning.
[20] A feat performed through differentiable programming, which combines
automatic differentiation with stochastic gradient descent.
This summary also clarifies why the models are called pre-
trained: unlike most machine-learning models, users need not bring
their own dataset; the trainer supplies it.
LLMs embody an uncanny, non-human kind of intelligence.
Software vendors oversell their capabilities, while many supply-
chain experts—and entire organizations—downplay them, treating
LLMs as a curiosity that leaves both theory and practice unchanged.
Both stances are wrong. LLMs are neither magic nor negligible.
They require basic prompting skills[21] and proper instrumentation;
without these, any model looks “dumb”. Their performance is
highly task-dependent—subpar for arithmetic and strict numerics,
yet superhuman for code de-minification, refactoring boilerplate,
or rephrasing brittle business rules. A competent practitioner
approaches them with prior beliefs about where they will shine
and where they will fail. In that sense, prompting has joined
spreadsheets and word processors as table stakes for white-collar
work.
[21] "Prompt engineering" sounds grand; most bright people learn those skills
in a few days.
Contemporary models are also multimodal. Most frontier LLMs
natively ingest images, and many now emit images as well, with
the same integration spreading to audio—both speech recognition
and synthesis—by shoehorning pixels or waveforms into pseudo-
tokens. Yet outside these native modalities, the impressive “extra”
powers that people see—web search, code execution, retrieval,
optimizers, simulators—still come from an orchestrator layered
on top of the model. The model emits a structured tool call; the
orchestrator invokes the external tool; the result is fed back into
context, and completion resumes.[22] An uninstrumented LLM is
a sharp autocomplete; the same weights, instrumented, behave
like a competent analyst with a browser, calculator, and terminal.
The intelligence in the weights is unchanged; the capabilities—and
therefore the usefulness—are not.
[22] Tool names and wire formats keep changing, but the pattern is stable:
the model proposes, the runtime executes, state returns to the model.
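The orchestration loop is simple in outline. The sketch below is illustrative only: call_model stands in for whatever model API is actually used, and the JSON convention for tool calls is an assumption, not a description of any vendor's protocol.

import json

def run_with_tools(prompt, tools, call_model, max_steps=5):
    # tools: dict mapping tool names to plain Python callables.
    context = prompt
    for _ in range(max_steps):
        reply = call_model(context)          # sequence completion from the LLM
        try:
            request = json.loads(reply)      # e.g. {"tool": "sql", "args": {...}}
        except json.JSONDecodeError:
            return reply                     # no tool call: treat the reply as the final answer
        result = tools[request["tool"]](**request["args"])
        context += f"\n[tool result] {result}\n"   # feed the result back, then resume
    return context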
Modern language models show a spark of general intelligence
for two reasons: they can write code and they can follow chain-
of-thought prompts. Their code falls short of an experienced
engineer’s output, yet it already outperforms low-skill IT work-
ers. The ability to write software is foundational, because self-
improvement—the hallmark of general intelligence—requires it.
Chain-of-thought prompting coaxes a model to reason aloud.
Merely appending "Let's think step by step."[23] to a given prompt
often boosts answer accuracy, but the extra reasoning tokens raise
compute cost. The figure below illustrates the effect.
Zero-shot:
Q: A juggler can juggle 16 balls. Half of the balls are golf balls, and
half of the golf balls are blue. How many blue golf balls are there?
A: The answer (Arabic numerals) is
(Output) 8 (wrong)
Zero-shot chain-of-thought:
Q: A juggler can juggle 16 balls. Half of the balls are golf balls, and
half of the golf balls are blue. How many blue golf balls are there?
A: Let's think step by step.
(Output) There are 16 balls in total. Half of the balls are golf balls.
That means that there are 8 golf balls. Half of the golf balls are blue.
That means that there are 4 blue golf balls. (correct)
(Example reproduced from Kojima et al.; see note [23].)
[23] See Large Language Models are Zero-Shot Reasoners (2023) by Kojima et al.
Chain-of-thought (CoT) helps by forcing the model to unpack
a task into smaller steps within its context window. The model
retains no memory beyond that window; no hidden state carries
from one answer to the next. CoT simply spends more tokens
at inference time to surface intermediate reasoning. This usually
improves reliability up to a point, with diminishing returns and
hard limits imposed by the window. Recent systems increasingly
perform this expansion under the hood and suppress the raw trace
(“thinking” modes) to spare users long monologues. CoT is an
inference-time budget to spend on harder questions—not a new
capability in the model’s weights.
Despite their appeal, large language models typically consume
a million-fold more computation than classic algorithms solving the
same job. Hardware progress will lower per-token costs, yet these
models are unlikely ever to rival purpose-built data-processing
code. Reserve them for tasks plain algorithms cannot reach.
Language models matter to supply chain mainly in two ways:
first, speeding up the writing and upkeep of numerical recipes and
their documentation; second, extracting features from raw text.
We revisit both later. Other applications are secondary.
Misuse of large language models
Since the early 2020s, many enterprise-software vendors have made
bold—often nonsensical—claims about artificial intelligence (AI).
They hide the details not to guard secrets but to mask how little
technology lies beneath. Most of the hype centers on large language
models (LLMs). Let us dismantle the main claims.
Our AI forecasts more accurately: Language models are ill-
suited to time-series forecasting—or numerical work of any kind.
Their performance, in both accuracy and computational cost, is
abysmal compared with even basic statistical models, let alone
top-ranked ones. Pretrained forecasting models—an LLM-style
idea—are under active study. They may help elsewhere, but, as
Chapter The Future shows, they are a dead end for supply-chain
work.
Our AI crunches all your data: Per byte, language models are
both expensive and slow, and will remain so for the foreseeable
future. As a rule, processing megabytes of data through a language
model is a nontrivial operation that takes minutes to complete.
Large companies have terabytes of records in their business systems.
Simply passing an entire corporate dataset through an LLM—even
once—will remain prohibitively slow and costly for years. Cost
aside, an LLM does only one thing—sequence completion. That
tool is versatile, but forcing vast non-text datasets through it yields
no obvious benefit.
Our AI talks to end-users: Because software vendors bill per
seat, inefficiency sells: each extra clerk forced to fight the software
means extra revenue. Bolting an LLM onto a screen to create a
“chat” interface only worsens matters: conversation is slow and
tiresome. Chatbots belong at help desks, not in high-throughput
enterprise workflows; training people on a crisp interface beats
sluggish dialogue every time.
In enterprise software, LLMs are today’s fashion—one in a
long line soon to be replaced by the next. For forty years, CRUD
systems have ruled, and that culture, dominant among vendors
of systems of records, resists meaningful integration of new ideas,
even when the ideas are sound.
6.2.4 Misconceptions about intelligence
Popular culture has carried the idea of artificial intelligence (AI)
for decades, and much of today’s boardroom intuition still echoes
20th-century theories about what “intelligence” is supposed
to look like. This matters because those intuitions leak directly
into supply-chain decisions: they shape which projects are funded,
which risks are feared, and which levers are left untouched. Before
we mechanize decisions, we must clear away the myths that keep
executives oscillating between paralysis and credulity.
In supply chain, the relevant question is not whether machines
will “think like us,” but whether software can systematically make
better bets about what moves, where, when, and how much, and do
so unattended, with auditable economics. Misconceptions about
AI push firms into two symmetric errors: alarmism, which delays
profitable automation on the grounds that “AI is dangerous,” and
anthropomorphism, which mistakes human-like conversation for
decision-making ability and green-lights brittle toys.
The next pages examine three persistent misunderstandings—the
“singularity,” the Turing-test view that human-sounding chat im-
plies general intelligence, and the Moravec paradox, misread as
evidence of machine limits—and recast them in operational terms.
The standard for intelligence in this book is strictly economic:
higher risk-adjusted returns from the same scarce resources. By
that yardstick, most cinematic worries are irrelevant, and most
demo-friendly chatbots are distractions.
The singularity
The singularity imagines an AI that self-improves so fast that it
outstrips humanity, then either transcends or enslaves us. Countless
action films riff on this theme, yet Hollywood’s grasp of AI is no
better than its grasp of real-world ballistics. For supply-chain
practice, this mythology is a non sequitur. The systems that
decide reorders, prices, routes, or allocations are narrow, auditable
programs with bounded option spaces, explicit objectives, and kill-
switches; they do not “wake up”: they optimize under constraints.
Refusing to mechanize replenishment or routing because “AI might
go rogue” is like refusing forklifts for fear of robot uprisings: it
trades certain, present waste for a speculative, distant hazard.
The real risks are mundane and tractable—bad objectives, sloppy
semantics, untested code—and so are the safeguards: dual-run,
halting heuristics, code reviews, and economic audits. Existential-
risk debates have no bearing on whether a firm should apply the
best that computer science offers to the day-to-day allocation of
scarce resources.
While the general public may get the impression that the tech-
nology leaped forward, essentially overnight, back in 2022,²⁴ this
is absolutely not the case. In reality, progress has been slow, incre-
mental, and hard-won over the past seven decades. Furthermore,
it is not one thing being improved, but dozens of largely unre-
lated fronts: better paradigms, better algorithms, better hardware,
better datasets, better methodologies, and better codebases. All
of these improvements have been generated by a diverse crowd
of contributors. Thus, while it is possible to stumble upon something
that exceeds human intelligence, the idea that such an entity
would—as an event—elevate itself into a transcendent being is
far-fetched.
²⁴ ChatGPT by OpenAI was released in November 2022.
Once we remove the “event” aspect of the emergence of artificial
general intelligence, it becomes difficult to explain why this would
ever present an extinction-level threat. Any malicious entity, or any
entity operated by malicious people, will be countered by similarly
capable entities that armies, intelligence agencies, companies, or
even universities and hobbyists will develop. Artificial general
intelligence will be a late addition to the already long list of
technologies that can do immense damage when used for nefarious
purposes.
Such an entity will be the product of an advanced industrial civ-
ilization and will rely on its complex—and fragile—supply chains.
Semiconductor manufacturing already ranks among the most intri-
cate industries. Conceivable “alternatives”—neuromorphic, pho-
tonic, quantum, or biosynthetic substrates—are, at best, lab-grade
and themselves ride on the same semiconductor tooling, materials,
and precision-manufacturing stack. In short, there is no industrial,
non-silicon path to computation today that avoids those chains.
The continued existence of such entities will depend on the active
support of humanity for decades, if not centuries. People will have
ample time and opportunity to address the inevitable problems
that have accompanied the rise of every technology so far.
Doomsaying is ancient, and most experts and politicians join
in for self-aggrandizement.
The Turing test
In 1950, British computer scientist Alan Turing devised his imita-
tion game, later called the Turing test. The gist is that if a human,
chatting through a text-only channel, cannot reliably tell human
from machine, then the machine qualifies as generally intelligent.
The usual objections to the test dwell on consciousness and
self-awareness; these are beside the point for supply chain. Our
concern is not whether a machine “sounds human”, but whether it
consistently makes choices that raise risk-adjusted returns under
uncertainty.
Crucially, supply-chain intelligence is not self-evident to the
naked eye. Humans have no sixth sense for whether “200 units
next week” is slightly too high or too low. Even ex post, judging a
decision requires counterfactuals—what would have happened had
we acted differently—and horizons that match lead times. This
evaluation is hard: outcomes are noisy, delayed, and entangled
with thousands of parallel choices.
An engine that speaks poorly—or not at all—may therefore
look “unintelligent” while quietly compounding value. Conversely,
a fluent chatbot can pass Turing-style conversations yet steer the
firm toward losses. In the 2010s, simple pattern-matching bots
frequently fooled human judges; they were rhetorically convincing
and operationally useless. The lesson endures: imitation of human
style is orthogonal to decision quality.
A practical illustration: a good replenishment engine may
place many tiny, oddly timed orders, reprice slow movers by a
few cents daily, or divert one container to an unfashionable cross-
dock. To a planner, these moves can feel fussy or even wrong;
on the ledger they lower holding costs, reduce write-offs, and
speed cash conversion. By contrast, a tool that “explains” itself
beautifully while preserving legacy MOQs and batch calendars will
earn applause in meetings and quietly bleed working capital.
The only meaningful test is economic. Call it an economic
Turing test: under a dual run or via reasoned counterfactuals,
with the same constraints and audit trail, does the system of
intelligence deliver higher risk-adjusted operating profit than the
incumbent process after all costs (inventory, logistics, markdowns,
labor, capital) are charged? If yes, treat it as intelligent—no matter
how taciturn its interface. If not, no amount of conversational
polish redeems it.
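As a rough sketch of such a dual-run comparison, the Python fragment below charges the cost categories listed above and penalizes profit dispersion as one possible proxy for risk adjustment. The field names, the penalty, and the pass/fail rule are illustrative assumptions, not a prescribed accounting scheme.

from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class PeriodOutcome:
    revenue: float
    inventory: float
    logistics: float
    markdowns: float
    labor: float
    capital: float

    def operating_profit(self) -> float:
        # Profit after all the cost categories are charged.
        return self.revenue - (self.inventory + self.logistics
                               + self.markdowns + self.labor + self.capital)

def risk_adjusted_profit(periods, risk_aversion: float = 0.5) -> float:
    # Mean profit penalized by its dispersion across the periods of the dual run.
    profits = [p.operating_profit() for p in periods]
    return mean(profits) - risk_aversion * pstdev(profits)

def passes_economic_turing_test(candidate, incumbent) -> bool:
    return risk_adjusted_profit(candidate) > risk_adjusted_profit(incumbent)

incumbent = [PeriodOutcome(100, 30, 10, 5, 20, 5), PeriodOutcome(90, 35, 10, 8, 20, 5)]
candidate = [PeriodOutcome(100, 22, 11, 3, 12, 5), PeriodOutcome(95, 24, 11, 4, 12, 5)]
print(passes_economic_turing_test(candidate, incumbent))

The dispersion penalty is only one convention among several; the point is that the verdict rests on audited money flows rather than on the quality of the conversation.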
Thus, fascinating though it remains, the imitation game is an
inadequate criterion for our craft. Supply-chain intelligence systems
should be judged by auditable economics over the appropriate time
horizons, not by their ability to chat.
Moravec’s Paradox
The American computer scientist Hans Moravec observed in the
1980s that tasks that seem simple—grabbing a teapot and pouring
tea—are among the hardest for artificial intelligence to replicate.
In contrast, tasks once deemed marks of genius—such as matrix
diagonalization—have proved straightforward for machines.
However, the paradox dissolves once its hidden premise is stated.
The paradox assumes that animal mobility—and the planning
behind it—is learned (nurture) rather than innate (nature). In
other words, acquiring mobility is held to result from a learning process—one
that has proved incredibly difficult to replicate artificially for the
past four decades.
On closer examination, those capabilities involve little or no
learning. Foals—and their equine cousins—stand and walk within
hours. This is not “learning” but a quick calibration of a nearly
finished motor system. The cognitive structures are nearly com-
plete; the system needs only a slight nudge for the pathways to
lock into place.
In humans, the innateness is less visible: compared with other
mammals, our oversized heads force earlier birth, leaving motor
circuits to finish developing outside the womb. What many perceive
as the infant’s “learning” is cognitive development that would
continue unimpeded if the infant remained in the womb for a few
more months.
These largely innate circuits are evolutionary products, refined
over roughly 800 million years and untold generations. No wonder
machines struggle to match animal mobility. The challenge is
akin to engineering a device that outperforms trees at convert-
ing sunlight into structural material, or ribosomes at assembling
macromolecules. Outperforming evolution at games played for
eons is brutally difficult.
Mobility rests on hundreds of tightly knit heuristics; even a
snail can negotiate the three-dimensional maze of vegetation. Well-
chosen heuristics can seem “intelligent” provided the action space
is tightly bounded—as in the snail’s case.
In nature, mobility was a necessary evolutionary path to general
intelligence: life could not leap from unicellular origins to general
intelligence without first solving locomotion. Machines face no
such constraint—so long as humans supply energy and materials
while we solve the remaining hurdles (e.g., industrial substitutes
for muscle fibers). Evolution solved these problems in ways that
do not lend themselves to industrialization; we must solve them
anew.
The implication for supply chain is easy to miss. The tasks that
feel “simple” to humans—walking across a crowded aisle, grasping
a box, recognizing a co-worker—are precisely those evolution has
tuned for hundreds of millions of years. By contrast, large-scale,
fast-moving resource allocation—ranking millions of micro-options
with delayed, stochastic payoffs—is the opposite of what our brains
evolved to do. Working memory is tiny, attention is fragile, and
our intuitions about probability are notoriously poor. Machines
invert this profile: they struggle with animal motion yet excel at
enumerating vast option sets, scoring counterfactuals, and recom-
puting plans every minute. Thus the quip “AI can’t even fold
laundry” is irrelevant to whether software can outclass planners
at repricing 50,000 SKUs, rebalancing multi-echelon stocks, or
rerouting containers mid-voyage under volatile lead times. In the
Moravec sense, these are machine-native problems.
Two operational corollaries follow. First, do not condition
decision automation on progress in robotics or conversational pol-
ish; the relevant yardstick remains the economic one introduced
earlier: under the same constraints, does the engine deliver higher
risk-adjusted profit than the incumbent process? Second, expect
machines to overtake humans first where planners sweat most:
combinatorial allocation, adversarial pricing, multi-echelon balanc-
ing, and dynamic rescheduling. None of this requires humanlike
generality; specialized intelligence within well-defined boundaries
suffices—exactly the habitat of supply chain engines.
In short, copying animal mobility is not a shortcut to general
intelligence: it is a cluster of tough yet essentially specialized prob-
lems. Solving them may inspire, or occasionally inform, work on
artificial general intelligence, but that remains a separate pursuit.
6.3 Specialized intelligence
A specialized problem is one whose options are bounded by ex-
plicit formal rules; solving it calls for specialized intelligence. The
academic supply chain puzzles are typical examples, and each
solution—whether a simple heuristic or a large hyperparametric
model—counts as “intelligent” only by the quality of its results.
Declaring a method “smarter” merely because it looks arcane is a
mistake.
Definition (Specialized Intelligence).
The capacity to make choices yielding superior future re-
wards within formal boundaries.
Supply chain becomes tractable precisely when its sprawling,
general intelligence questions are carved into many bounded al-
location problems. Each bounded problem fixes three elements:
(1) the option set—what moves where, when, and how much may
be chosen; (2) the constraints—capacity limits, calendars, MOQs,
service promises, and how often the decision is revisited; and (3)
the valuation—a money-denominated score that prices upside and
downside under uncertainty. Once these are explicit, the task
ceases to be “plan the supply chain”; it becomes “choose among
admissible options to maximize risk-adjusted return”.
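A minimal Python sketch of one such bounded problem follows. The option set, the constraints (an MOQ and a shelf capacity), and the money-denominated valuation under a toy demand distribution are all illustrative assumptions.

demand_scenarios = [(0.2, 5), (0.5, 8), (0.3, 12)]   # (probability, units demanded)
unit_margin, unit_holding_cost = 7.0, 2.0
moq, shelf_capacity = 5, 20

def valuation(order_qty: int) -> float:
    # Expected margin on units sold minus holding cost on leftovers.
    return sum(p * (unit_margin * min(order_qty, d)
                    - unit_holding_cost * max(order_qty - d, 0))
               for p, d in demand_scenarios)

options = range(shelf_capacity + 1)                      # (1) the option set
admissible = [q for q in options if q == 0 or q >= moq]  # (2) the constraints
best = max(admissible, key=valuation)                    # (3) the best valuation wins
print(best, round(valuation(best), 2))                   # chosen quantity and its expected reward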
This decomposition is not a rhetorical device; it is the engineer-
ing move that makes unattended software possible. Replenishing
a SKU at one location over a given horizon, selecting today’s price
for a slow mover, assigning cartons to a truck stop, accepting or
deferring a repair on a rotable—each is a specialized, auditable
resource-allocation decision. Loosely coupled problems can be
handled greedily at high frequency; tightly coupled ones can be
grouped and optimized with heavier machinery. Boundaries remain
revisable—when economics shift, we widen the option set or adjust
the valuation—but while they hold, the decision can be automated,
inspected, and improved by a system of intelligence.
The history of artificial intelligence is a game of moving goal-
posts. Researchers pick a hard problem, label its solution “real AI”,
then—once solved—rename it and move on to the next challenge.
Each triumph is soon judged mundane, and the cycle begins again.
This pattern explains why mature solutions often look “un-
intelligent”: they become routine and mechanical. Intelligence
should be judged by the quality of choices, not by how arcane
the machinery appears. Sophisticated models can masquerade
as “smarter”, yet they still address a bounded task; true general
intelligence questions must be confronted separately.
Counterintuitively, the first workable solution is rarely the
simplest; achieving simplicity is a separate, often harder, task.
Progress does not run from simple to sophisticated but from messy
prototypes to streamlined cores. Maxwell’s electromagnetic theory
began with more than twenty equations; Heaviside later condensed
them into the four we use today. In the same way, today’s convo-
luted answers are tomorrow’s elegant principles.
The same cycle fuels AI booms and busts: people confuse
solving a tough niche problem with cracking general intelligence,
and investors ride the resulting hype. General intelligence is hard,
but countless specialized problems must be harder. Indeed, if we
ever build a general intelligence, every remaining unsolved problem
will, by definition, be tougher; otherwise they would have fallen
first.
The highs and lows of the AI label should not eclipse computer
science achievements since World War II—many central to supply
chain. The field is now so broad that any overview is necessarily
partial, but the following pages single out the topics most relevant
to supply chain work.
6.3.1 Algorithms
Algorithms are precise procedural recipes that terminate and return
a correct answer for a well-specified input. Formalized in the 1930s
(Church, Turing, Kolmogorov), algorithms became the backbone
of 20th-century computer science. Many early “AI” successes
of the 1950s—topological sorting, graph routing, linear-equation
solvers—are now simply known as algorithms.
Definition (Algorithm).
A finite sequence of rigorously specified instructions that
solves a well-posed problem.
Formally, any program that halts on all admissible inputs could
be called an “algorithm”. In practice, computer scientists use the
term more narrowly. An algorithm is an atomic, reusable building
block whose problem statement is independent of any dataset,
interface, or server, accompanied by a proof that, for every valid
input, it returns the specified output, and by an analysis of its time
and space requirements as input size grows.
An “atomic” task cannot be decomposed into smaller, inde-
pendent, purposeful subtasks. By contrast, rendering a web page
involves hundreds of steps—from tokenizing HTML to adjusting
character kerning—and thus is not atomic.
An algorithm comes with a proof of correctness; absent such a
proof, it is merely a heuristic. The combined Fermat–Fibonacci
tests, for instance, detect primes reliably yet still lack a proof of
correctness, so they remain heuristics.
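As an aside, the Fermat half of such a test fits in a few lines of Python. The base list is an arbitrary choice, and composite Carmichael numbers such as 29341 (13 × 37 × 61) still pass for every base coprime to them, which is precisely why the method remains a heuristic rather than an algorithm.

def fermat_probably_prime(n: int, bases=(2, 3, 5, 7)) -> bool:
    # Fermat's little theorem check: a prime n satisfies a**(n-1) % n == 1.
    if n < 2:
        return False
    return all(pow(a, n - 1, n) == 1 for a in bases if a % n != 0)

print(fermat_probably_prime(97))     # a genuine prime passes
print(fermat_probably_prime(29341))  # a composite Carmichael number also passes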
An algorithm’s value ultimately lies in its performance. While
brute-force methods exist for most tasks, a good algorithm reaches
the same result with far less computation and memory. On occasion
one can even prove optimality, though the subtleties of complexity
theory lie beyond this chapter.
Algorithms (and the data structures that support them) are the
bedrock of software; every method discussed later relies on them.
Accordingly, they are an auxiliary science for supply chain—though
well-designed tools can shield practitioners from their details, as
we shall see.
Readers seeking depth should consult Cormen et al.’s classic
Introduction to Algorithms (3rd ed., 2009). The annex provides a
worked example—the topological sort algorithm.
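For readers who want a taste before reaching the annex, here is a minimal sketch of a topological sort (Kahn's algorithm) in Python; the three-node supply graph at the end is an illustrative assumption.

from collections import deque

def topological_sort(edges: dict) -> list:
    # edges maps each node to the nodes that depend on it.
    indegree = {node: 0 for node in edges}
    for targets in edges.values():
        for t in targets:
            indegree[t] = indegree.get(t, 0) + 1
    queue = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for t in edges.get(node, []):
            indegree[t] -= 1
            if indegree[t] == 0:
                queue.append(t)
    if len(order) != len(indegree):
        raise ValueError("the graph contains a cycle")
    return order

print(topological_sort({"raw material": ["component"], "component": ["assembly"], "assembly": []}))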
The CRUD pattern—introduced in Chapter Information—owes
its success to the fact that it lets teams ship business software
without deep algorithmic chops. In enterprise IT, this became
a career track: many capable developers spend decades building
forms and reports while never learning to design and analyze
algorithms. That gap is harmless inside systems of records, where
constant-time CRUD dominates and the database shoulders the
heavy lifting; it becomes crippling the moment we leave ledgers and
attempt optimization. It also explains CRUD’s cultural dominance
and, symmetrically, why vendors brilliant with records so often
stumble at systems of intelligence: the latter demand algorithmic
modeling, complexity control, and performance engineering—skills
CRUD habits neither teach nor select for.
6.3.2 Operations Research
Born in World War II, operations research tackled wartime resource-
allocation²⁵ questions—where to place radar, how to route convoys,
how to assign scarce aircraft—and, after 1945, business ones. It
spawned many algorithms—most famously Dantzig’s 1947 simplex
method—but, unlike computer science, it cared less for atomic
problems and more for end-to-end applications. In 1947, Scottish
radio engineer Robert Watson-Watt defined operations research:
To examine quantitatively whether the user organization
is getting from the operation of its equipment the best
attainable contribution to its overall objective.
Watson-Watt’s wording echoes this book’s opening definition
of supply chain; operations research is its forerunner. Between
the 1950s and 1970s, the field produced simulations, econometrics,
stochastic models, and mathematical optimization. Many of these
strands later grew into disciplines of their own, drifting away from
the operations research umbrella.
²⁵ Operations research was originally understood as research on (military)
operations.
The most notable contributions of operations research—at least
among those not reclassified into their own fields—are methods for
transportation optimization, linear and quadratic programming,
integer programming, and similar tasks. All these tasks share
a common structure: decision variables that must be chosen to
maximize an objective function that represents a real-world goal.
This framing still underpins most business decision-making, and
this book’s approach.
By the 1970s, operations research was falling short of its ambi-
tions. As mentioned earlier, American operations research pioneer
Russell Ackoff captured this sentiment in his 1979 paper The Future
of Operational Research is Past. Most resource-allocation decisions
were still manual in 1979—and most remain so today. Ackoff listed
several root causes:
- Poor systems thinking: experts relied on simplistic, first-order causality.
- “Pseudo-optimal” results driven by contrived objective functions.
- Fragile answers that assume a perfectly known, unchanging context.
- Sanitizing real-world “messes” into toy problems, leaving practice untouched.
Those critiques still apply. Much of the supply chain literature
repeats the same mistakes—one reason so little of it survives
contact with reality²⁶.
During the 1980s, operations research morphed into “OR” (the
acronym), specializing in software “solvers” for optimization models
written in domain-specific languages. OR thus settled as a subfield
of computer science.
²⁶ As of September 2024, a Google Scholar query for “inventory optimization”
returns 1.9 million publications. It is hardly an exaggeration to say that none
of this literature has ever survived in real-world conditions.
Commercial solvers form a healthy software niche and handle
many design tasks well. Nocedal and Wright’s Numerical Opti-
mization (2006) is still the standard survey, even for learning-based
variants. Beyond vehicle-routing variants, solvers rarely appear in
supply chain because three lingering limitations curb their value.
First, they focus exclusively on deterministic rather than stochas-
tic optimization, making it impossible to handle uncertainty—a
constant feature of supply chains.
Second, they offer either scalability or expressiveness—but not
both. Solvers scale for supply chain purposes only in linear or
quadratic regimes, which are too simple to reflect real-world supply
chain situations.
Third, solvers prize mathematical purity over practical expres-
siveness²⁷. Linear and quadratic programming can indeed certify
optimality, yet they describe the real world so poorly that the
“optimal” answer is invariably useless.
Setting purity aside, solvers still scale poorly. Faced with a
concrete instance—say, vehicle routing—companies often build ad
hoc code instead of using off-the-shelf tools. The issue is acute in
supply chains, where tens of millions of decision variables (several
per SKU) are common.
Thus, the legacy of operations research lies largely in the fields
that originally emerged from it. Present-day OR has little to
do with the goals operations research set out to achieve. Fur-
thermore, while OR’s favored techniques have merits and notable
achievements, they are unlikely to circle back into supply chain.
6.3.3 Statistical learning
The notion of algorithmically inferring a process from recorded ob-
servations appeared in the 1950s. It emerged alongside the earliest
writings on artificial intelligence. Because individual observations
can encode partial solutions, learning has long overlapped with
²⁷ Techniques such as cutting-plane methods, branch-and-bound, and branch-
and-cut exemplify the mainstream view that a solver must not only provide good
solutions but also prove that the solutions are either optimal or quasi-optimal.
optimization²⁸. Early work on artificial neural networks was once
published under the operations-research banner; today it is firmly
classed as "learning".
Until the early 1990s, interest in learning techniques was limited
for two main reasons. First, computers lacked the power to achieve
anything substantial by learning. Consequently, non-learning ap-
proaches—such as expert systems²⁹—were favored. For example,
Deep Blue—the first chess-playing program to defeat the reigning
world champion, Garry Kasparov, in 1997—was engineered as
an expert system. There was also a scarcity of public training
datasets, since collecting and distributing large datasets remained
prohibitively expensive until the late 1990s. The MNIST dataset, an early exception,
contains 60,000 training images and 10,000 test images, each a
28×28 grayscale rendering of a handwritten digit. It provided
a shared baseline that fostered a generation of researchers and
standardized validation and benchmarking.
Second, high-dimensional learning remained baffling until the
late 1990s. Classical statistics assumes the number of variables is
small relative to the number of observations. With 100 observations,
a two-variable regression can be trusted; with ten variables, spuri-
ous patterns proliferate; with 10,000 variables, standard methods
collapse. Paradoxically, more information produced less insight.
The curse of dimensionality found its first convincing an-
swer in Vapnik–Chervonenkis (VC) theory, developed from the
²⁸ While nowadays learning problems and optimization problems have accu-
mulated sharply distinct collections of techniques, it is still not entirely clear
whether this distinction is entirely appropriate. Indeed, most learning tech-
niques do have an optimization algorithm at their core, and conversely, some
authors—including the present one—believe that the future of optimization lies
in introducing a learning component at its core. In the future, these two fields
might merge if a suitable unifying paradigm is discovered.
²⁹ Expert systems are collections of logical and arithmetic rules written by
domain experts. They were largely abandoned in the 2000s, as they generally
proved more costly and less efficient than learning-based alternatives. However,
rule-based subsystems are still prevalent in enterprise software, and many have
reached an operational complexity well beyond what was considered “advanced”
in the early 1990s.
1960s through the 1990s³⁰. The theory led to support-vector
machines—first implemented in 1995³¹—and provided a mathe-
matical framework to curb overfitting. Support vector machines
soon demonstrated their value on real-world datasets.
Figure 6.1: Underfitting, proper fitting, and overfitting.
VC theory reframed the aim of statistical modeling as maxi-
mizing generalization—performance on unseen data. Though we
cannot measure accuracy on data we do not have, VC theory
proves an upper bound on the true error: empirical error plus a
model-complexity term determined by the model’s design. This for-
malization refines the classical bias–variance tradeoff: low-capacity
models exhibit high bias, high-capacity models high variance. Fig-
ure 6.1 illustrates underfitting, a correct fit, and overfitting. This
theory paved the way for statistical models that can deliver reliable
results even when the number of input dimensions vastly exceeds
the number of observations.
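For concreteness, one classical statement of such a bound (given here in its standard textbook form, not in this book's notation) reads: with probability at least 1 − η over a sample of n observations,

R(f) \le R_{\mathrm{emp}}(f) + \sqrt{\frac{h\left(\ln(2n/h) + 1\right) - \ln(\eta/4)}{n}}

where R(f) denotes the true error, R_emp(f) the empirical error, and h the VC dimension of the family of models under consideration.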
Operationally, support vector machines were rapidly super-
seded by alternative techniques—(gradient) boosting (1999)³² and
³⁰ L.G. Valiant’s 1984 paper A Theory of the Learnable introduced the concept
of probably approximately correct (PAC) learning, laying the first rigorous
foundation—later called computational learning theory—for studying learning
from a statistical standpoint. PAC theory, however, ultimately proved less
influential than the subsequent Vapnik–Chervonenkis framework.
³¹ See Support-Vector Networks (1995) by Corinna Cortes and Vladimir Vapnik.
³² See Greedy Function Approximation: A Gradient Boosting Machine (1999)
by Jerome Friedman.
bagging (2001)³³—whose mathematical analyses likewise showed
they were well-behaved with respect to overfitting. Together, these
ideas gave rise to gradient-boosting machines, which remained
state-of-the-art across many tasks for roughly a decade. Today,
gradient boosting machines remain popular, particularly when
observations are scarce.
These developments had an outsized impact: the built-in capac-
ity control of boosting, bagging, and SVMs let theoreticians prove
tight bounds on overfitting—assurances rival approaches lacked.
Throughout the late 1990s and much of the 2000s, this statistical
seal of approval led researchers to sideline nearly every competing
approach. Gradually, however, it became clear that no amount of
theoretical polish or engineering finesse could push these methods
beyond certain hard limits. The technology had effectively stalled.
French-American computer scientist Yann LeCun captured the
mood by dismissing them as little more than “glorified template
matching”.
Despite these limits, statistical-learning ideas remain a corner-
stone of modern statistics and a vital auxiliary science for supply
chain. For an accessible, figure-rich survey, see the textbook The
Elements of Statistical Learning (2009) by Hastie et al. Boosting-
and bagging-based models still excel when data are scarce but
high-dimensional, and they require far fewer resources than most
alternatives—although they do not provide a route to general
intelligence.
6.3.4 Deep learning
In the 2010s, deep learning became the dominant machine-learning
paradigm. Deep learning descends from artificial neural net-
works, but it crystallized only after the field abandoned biological
metaphors and embraced silicon hardware. Remarkably, its core
ideas are simpler than those of its bio-inspired predecessors.
Conceptually, a deep-learning model is a fixed circuit of floating-
³³ See Random Forests (2001) by Leo Breiman.
point operations parameterized by floating-point numbers. Most
layers perform matrix multiplications punctuated by rectifiers. Rec-
tifiers break linearity: stacking linear maps remains linear, so a non-
linear activation is required. Additional components—convolutions,
pooling, or softmax layers—allow the model to handle diverse input
modalities and output formats.
Consider again handwritten-digit recognition on the MNIST
dataset. If you flatten each 28×28 image into 784 numbers and
pass them through three weight matrices with rectifiers, you obtain
a 234,752-parameter multilayer perceptron—tiny by today’s stan-
dards (see Figure 6.2). Running it (performing inference) requires
only three matrix multiplications and two rectifier activations (see
the annex for details).
Figure 6.2: A compact 4-layer multilayer perceptron (784-256-128-10)
for handwritten-digit recognition.
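The inference pass of Figure 6.2 can be sketched in a few lines of Python with NumPy. The weights below are random placeholders rather than trained values, and bias terms are omitted so that the parameter count matches the 234,752 figure quoted above.

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 1.0 / np.sqrt(784), (784, 256))
W2 = rng.normal(0.0, 1.0 / np.sqrt(256), (256, 128))
W3 = rng.normal(0.0, 1.0 / np.sqrt(128), (128, 10))

def relu(x):
    return np.maximum(x, 0.0)

def infer(image_28x28):
    x = image_28x28.reshape(784)   # flatten the 28x28 image into 784 numbers
    h1 = relu(x @ W1)              # first matrix multiplication + rectifier
    h2 = relu(h1 @ W2)             # second matrix multiplication + rectifier
    scores = h2 @ W3               # third matrix multiplication: one score per digit
    return int(np.argmax(scores))

print(W1.size + W2.size + W3.size)   # 234752 weights
print(infer(rng.random((28, 28))))   # predicted digit for a random image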
So far we have specified the architecture and noted that in-
ference reduces to three matrix multiplications and two rectifiers;
what remains is to obtain the numbers inside those matrices. In
practice we want a well-trained network—one that, for each input
image, assigns its top score to the digit actually present. Train-
ing is therefore a search over the parameter space for values that
minimize a chosen classification loss. Deep learning tackles this
search through three coordinated steps: Initialization—choose
sensible random starting weights so activations stay numerically
well-behaved; Gradient—compute how an infinitesimal change
in each weight would change the loss; and Descent—use those
gradients to nudge the weights step by step toward lower loss.
Together these steps iteratively refine the network’s weights until
performance stabilizes.
Initialization Weights begin as Gaussian noise and are imme-
diately rescaled so that, layer after layer, activations retain roughly
zero mean and unit variance. This simple precaution prevents the
product of many linear operations from blowing up (overflow) or
shrinking to numerical dust (underflow). When no closed-form rule
applies, a quick trial pass through the network rescales empirically
to preserve that invariant.
Gradient The gradient tells each parameter which way—and
how far—to move to lower the chosen loss. Modern frameworks
obtain it by automatic differentiation, which threads derivatives
through the computation graph at roughly the cost of one extra for-
ward pass, but with higher memory since intermediate activations
must be cached. Earlier deep learning toolkits used hand-coded
gradient back-propagation; that the community took decades to
adopt automatic differentiation, a technique dating from the 1960s, is a
reminder that progress is rarely linear.
Descent Gradients become updates through stochastic gra-
dient descent (SGD), a 1950s-era workhorse that still dominates
deep learning. SGD samples one observation—or, for better hard-
ware utilization, a small set of observations referred to as a mini-
batch—nudges the weights in the gradient direction, then repeats.
Mini-batching trades extra arithmetic for sharply lower wall-clock
time by exploiting parallel cores. Variants with momentum, adap-
tive learning rates, or decoupled weight decay refine the basic
update.
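These three steps can be sketched end to end on a deliberately tiny problem. The Python fragment below initializes scaled Gaussian weights, computes the mini-batch gradient by hand for a linear least-squares model (real frameworks obtain it by automatic differentiation), and applies plain SGD updates; every size and rate here is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 20))
true_w = rng.normal(size=20)
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = rng.normal(size=20) / np.sqrt(20)   # initialization: rescaled Gaussian noise
learning_rate, batch_size = 0.05, 32

for step in range(500):
    idx = rng.integers(0, len(X), batch_size)                   # gradient: sample a mini-batch...
    grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / batch_size    # ...and differentiate its loss
    w -= learning_rate * grad                                   # descent: nudge the weights downhill

print(round(float(np.mean((X @ w - y) ** 2)), 4))   # final mean squared error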
Deep learning overturned several statistical-learning dogmas
and still shapes what can—and cannot—be inferred from company
data.
The double descent
Deep-learning models are hyper-parametric: they usually contain
more parameters than training examples. The 234,752-parameter
perceptron above dwarfs MNIST’s 60,000 images. Statistical theory
predicts severe overfitting, yet practice says otherwise.
Discovered in the late 2010s, the double descent curve helped
clarify why many statisticians initially doubted deep-learning
claims. Trained to expect the classic U-shaped bias-variance trade-
off, they were puzzled when larger networks sometimes outper-
formed smaller ones instead of overfitting. Double descent shows
why: as model capacity grows, test error first drops (good gener-
alization), then climbs (over-fit), but beyond a second threshold
drops again as very large, overparameterized models settle into
new low-error regions (see Figure 6.3).
Figure 6.3: Double descent: test error falls, rises, then falls as the ratio
of parameters to data grows. Excerpt from Double Descent Demystified:
Identifying, Interpreting & Ablating the Sources of a Deep Learning
Puzzle (2023) by Schaeffer et al.
On the left of the double descent curve—the classic regime—
the network has too few parameters, so it underfits. As we add
weights, the test loss falls until it reaches a first minimum at the
“sweet spot”. Push past that point and the model swings into
classic overfitting: extra capacity now memorizes noise faster than
it learns signal. Curiously, in this zone the training error continues
to slide smoothly even as the test error rises—a behavior that long
puzzled statisticians who expected the two curves to diverge much
earlier.
Move to the right—the deep regime—and the pattern flips.
Once the parameter count crosses a second, much higher threshold,
test loss starts dropping again: larger really is better, as deep-
learning practitioners had claimed since the early 2010s. Yet each
extra slice of accuracy costs an outsized chunk of compute and
energy, so returns diminish rapidly. State-of-the-art networks now
operate well beyond that threshold, trading massive hardware
budgets for the last fractions of a percentage point in performance.
For supply chains, the message is mixed: giant models win
accuracy but hurt maintainability, while smaller ones stay agile yet
fall back into the classic under/overfit dilemma. Hybrid designs
that mix the two are frequently the superior option.
Mechanical sympathy
Artificial neural networks (ANNs) have a clear weakness: they
are grossly inefficient in their use of computing resources. Silicon
chips resemble biological neurons so little that even a crude digital
neuron squanders cycles. Deep learning broke free of this handicap
by adopting what race engineers call mechanical sympathy. In
early Formula 1, the fastest laps were set not by drivers who
ran at the limit, but by those who sensed how much stress the
engine, gearbox, and brakes could tolerate and modulated their
pace so the car finished intact at the checkered flag. Translated
to code, mechanical sympathy means arranging algorithms and
data to move with the grain of caches, vector units, and memory
buses instead of colliding with them. Once this mindset took hold,
researchers shed nostalgic neuron metaphors and kept only what
maximized raw computational throughput.
For example, early ANNs used the sigmoid activation function,
chosen to echo biology, but its exponential is slow in hardware.
The simpler rectifier (ReLU) proved faster and quickly took over.
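For reference, the two activation functions look as follows; the only point of the sketch is that the rectifier replaces one exponential per element with a single comparison.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # biologically inspired, one exponential per element

def relu(x):
    return np.maximum(x, 0.0)         # rectifier: a single comparison, hardware-friendly

print(sigmoid(np.array([-1.0, 0.0, 1.0])))
print(relu(np.array([-1.0, 0.0, 1.0])))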
In the early 2010s, treating cache lines, vector widths, and mem-
ory buses as first-class design constraints felt almost heretical. The
algorithmic, operations research, and statistical learning camps
had risen on hardware-agnostic theory: prove a better asymptotic
complexity class and let Moore’s law absorb pesky constant factors.
Tuning for those constants—say, shaving cycles with a rearranged
matrix or a SIMD instruction—was seen as busywork, soon nulli-
fied by the next processor generation. Deep-learning researchers
upended that view, showing that painstaking attention to hard-
ware minutiae could yield speedups unattainable by elegance in
algorithmic complexity alone.
The early quest for ever-larger networks was fueled by the
biological metaphor: with roughly 100 billion neurons, the human
brain became an informal size benchmark for ANNs. Experiments
soon made the metaphor unnecessary—performance rose simply
because the models grew. Meanwhile, the data landscape flipped.
In the 1990s, public datasets were scarce; by the 2010s, they were
expanding faster than networks could absorb them. Training on
every available sample became impractical, and vast troves of
information remained untapped.
Hence the arms race: ever-larger models fed by ever-larger
datasets. Toronto’s 2012 ImageNet network had 60 million parame-
ters and ran on GPUs; a decade later, trillion-parameter transform-
ers sit on custom silicon. The scale boom also spawned heavyweight
tooling. Google’s TensorFlow, released in 2015, ballooned past
four million lines of code—an industrial effort unthinkable in the
statistical learning era of 10,000-line libraries.
The race for ever-larger models also pushed designers toward
ever-smaller floating-point formats. Because transistor count, and
therefore energy cost, grows faster than bit width, shaving pre-
cision delivers outsized speedups: on current GPUs, arithmetic
throughput can quadruple when you drop from 32-bit to 16-bit.
Early 2010s deep-learning systems used 32-bit floats exclusively—a
safe default that still suffices for most supply-chain calculations,
provided pathological edge cases are avoided (some tasks still ben-
efit from 64-bit). By the early 2020s, however, training in 16-bit
and serving in 8-bit had become routine, courtesy of a friendly
co-evolution of GPU and CPU hardware. Research is now probing
1-bit—and even sub-1-bit—weights, showing that the lower bound
on usable precision has yet to be reached.
Mechanical sympathy matters in supply-chain practice. Swiss
computer scientist Niklaus Wirth warned in the 1990s that software
often slows faster than hardware accelerates—a pattern now called
Wirth’s law. Because of a general lack of interest in hardware
efficiency, many business systems run no faster today than in 2000,
despite far better chips. That slowdown is Wirth’s law at work.
Deep-learning tricks cannot be reused wholesale, yet they reveal
how mindful coding unlocks huge gains. Supply-chain professionals
therefore need at least a working grasp of mechanical sympathy.
Nonconvex optimization
In the early 2000s, mathematical optimization still followed the
original operations-research playbook. Three tenets defined it.
Statistical-learning practitioners—support-vector-machine devo-
tees included—accepted it wholesale.
First, early optimization theory treated local minima as a prac-
tical brick wall: in nonconvex landscapes one might, in principle,
escape them, yet in real projects one almost never did. That as-
sumption spawned an entire branch—convex optimization—whose
express purpose is to sidestep that obstacle. Because a convex
function has no local dips—only a single global minimum—any
well-behaved descent algorithm converges to it. For a thorough
treatment of the proofs, algorithms, and applications that rest on
this property, see Boyd and Vandenberghe’s Convex Optimization
(2004).
Second, for nonconvex problems, any respectable method was
expected to provide a bound on the distance of its solution from
the global optimum. One does so by furnishing an upper bound on
the gap between the best-known solution and the global minimum.
Figure 6.4: A real-valued function showing both local and global extrema.
Techniques such as branch-and-bound and branch-and-cut offer
such guarantees. However, unlike convex optimization—which
benefits from highly scalable techniques—these general methods
scale poorly. As mentioned earlier, the poor scalability of such
solvers has long been a chief reason for their limited adoption in
supply-chain circles.
The deep-learning community jettisoned that classical view. It
embraced a blunt tool—stochastic gradient descent—and applied
it to nonconvex landscapes, sidestepping fears of local minima.
This approach has proved wildly successful, enabling models with
trillions of parameters—a scale inconceivable in the 2000s.
A new outlook emerged: the obstacle was no longer local
minima but broad, flat plateaus. The prevailing view holds that
local minima become exceedingly rare as dimensionality increases.
Thus, intuition from low-dimensional examples fails in high dimen-
sions³⁴. Instead, vast low-gradient regions (“plateaus”) pose the
main challenge for descent methods.
³⁴ In everyday experience with two or three dimensions, shapes such as circles
and spheres fill a significant fraction of the enclosing squares and cubes. In
higher dimensions, however, the volume of a hypersphere becomes a vanishingly
small fraction of the hypercube’s volume; almost all the space lies in the corners,
well beyond the reach of the centrally placed hypersphere.
Third, for (hyper)parametric models, optimality proofs become
meaningless in the double-descent regime. The training loss can
plunge and hover near zero, yet those tiny residuals say little about
how the model will generalize. Worse, many loss functions favored
in deep learning—such as cross-entropy—do not even have true
minima³⁵.
While the deep-learning community had reasons of its own to
reject the classic optimization stance inherited from operations
research, this rejection matters for supply chain. It shows that
the classical paradigm suffers from counterintuitive limitations
that escaped notice for roughly six decades (1950s–2000s). In
what follows, we will see that supply chain has further grounds to
reject this classic stance and that alternative, far more valuable
paradigms exist.
The bag of tricks
Every programming paradigm offers its own mental model of
software work, with core concepts, practices, and tools; most
are opinionated about data representation and control of program
behavior.
Definition (Programming Paradigm).
A maximally composable way to design and structure
programs.
Deep learning was the first³⁶ broadly adopted programmatic
³⁵ Technically, cross-entropy computed with fixed-precision floating-point arith-
metic does have a definite minimum, since the range of representable values
is finite. However, such a minimum is largely meaningless because numerical
overflows and underflows cause critical failures long before this threshold is
reached.
³⁶ Fuzzy logic—a form of many-valued logic in which variables can take any
real number between 0 and 1—was originally developed in the 1960s. It might be
considered the first programmatic paradigm for what was then called “artificial
intelligence”, although it never became more than an extremely narrow niche
with limited applications.
learning paradigm; in the early 2010s, it puzzled researchers by
defying prevailing expectations in machine learning.
Until then, papers and software focused on the finished product:
the model ready to be trained on a fresh dataset. What set deep
learning apart was its focus on architectures, numerical tricks, and
guiding principles for crafting models. While toolkits for statistical
learning (such as scikit-learn³⁷) featured dozens of models, those
for deep learning shipped with none. The field spent much of the
2010s absorbing that shift. The old approach—identifying a novel
model, running a benchmark, and publishing results—became
obsolete. Novelty no longer lay in the model itself or its statistical
characterization. Even first-year computer-science students could cobble
together a “unique” deep learning network by cherry-picking layers
from the available tensor operations and benchmarking it on one of
many public datasets. For a brief period, deep learning conferences
and journals were flooded with papers that were little more than
such compositions; despite their “uniqueness”, such contributions
were often inconsequential.
Deep-learning architectures serve as the modern analogue of
the “models” once championed by operations research, yet they
leave significant latitude for the data scientist in finalizing the
network to be trained and deployed.
The most valuable contributions came to be known as the “bag
of tricks” of deep learning: small, modular tweaks that improve
parts of a network rather than redefining the whole. Weight ini-
tialization strategies, moment estimators, learning rate scheduling,
dropout, early stopping, residual connections, dense connections,
attention mechanisms, transformer architectures, batch normaliza-
tion, gradient accumulation, hyperparameter optimization, data
augmentation, positional encoding, pruning, quantization, knowl-
edge distillation, low-rank factorization, among others, form only
a small subset of this growing “bag of tricks”. However, these con-
tributions—at least the significant ones—proved highly accretive.
³⁷ The scikit-learn open-source project, initiated by French data scientist David
Cournapeau in 2007, exemplifies this approach.
One technique seldom precludes another unless they belong to the
same class.
Since the 1960s, many paradigms—functional, object-oriented,
distributed, reactive, and more—have tackled general software
concerns. Specialized paradigms have also emerged, such as rela-
tional algebra (e.g., SQL for relational databases). While debates
continue over which paradigm is “best”, no serious software project
is undertaken without embracing one or more paradigms. Merely
choosing a programming language already constitutes a paradig-
matic choice.
The key takeaway for supply chains: choosing the right paradigm
matters even outside pure software work. Deep learning, as a pro-
gramming paradigm, has redefined the state of the art in machine
learning. This raises the question of whether other domains, such
as supply chain, could benefit from similarly dedicated paradigms;
as we shall see, dedicated programming paradigms are of primary
importance for supply chain.
6.3.5 Generative Era
In 2022, a handful of breakthrough releases—soon grouped under
the banner “Generative AI”—vaulted from niche research to global
headlines. ChatGPT, the conversational assistant unveiled by
OpenAI in November 2022, and Stable Diffusion, the text-to-
image model released in August 2022 by researchers at LMU
Munich and Heidelberg University, became instant landmarks
in text completion and image generation, respectively. Their
apparent overnight success masked years of steady, open-ended
progress by the wider deep-learning community; what changed
in 2022 was not the underlying science, but the packaging. For
the first time, non-specialists could summon large, pre-trained
models through a simple web interface and obtain striking results
in seconds—a watershed that turned an academic curiosity into a
mainstream product category. Although these systems are squarely
built on deep-learning foundations, they also challenge some of the
paradigm’s early assumptions—for example, about data quality and
loss-function design. In that sense, the “generative era” represents
a genuine second generation of deep-learning practice rather than
a mere extension of the first.
First, vendors began shipping pre-trained models by default—a
radical, albeit unexpected, advance. With pre-trained models, end-
users need no longer concern themselves with generating models
(the learning phase) and can focus on using them (the inference
phase). This greatly eased adoption. It also paved the way for
hybrid learning processes, in which a deep learning network could
be assembled from pre-trained components and supplemented with
a trainable section. Although staged learning processes predate
deep learning, the generative paradigm added a new dimension
and prompted the machine learning community to adopt further
open-source practices, sharing models in addition to code.
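One way to picture this hybrid arrangement is the following Python sketch, in which a frozen random matrix stands in for the pre-trained component and a small trainable head is fitted on top; the data, the layer sizes, and the least-squares fit are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 50))
y = X[:, 0] - X[:, 1]                                  # toy target to be predicted

W_frozen = rng.normal(size=(50, 16)) / np.sqrt(50)     # "pre-trained" component, never updated
features = np.maximum(X @ W_frozen, 0.0)               # frozen forward pass (ReLU features)

w_head, *_ = np.linalg.lstsq(features, y, rcond=None)  # trainable section fitted on top
prediction = features @ w_head
print(round(float(np.corrcoef(prediction, y)[0, 1]), 2))   # how well the hybrid model fits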
Second, the hallmark of the generative era is an almost insa-
tiable appetite for web-scale data. For decades the adage was
“garbage in, garbage out” (GIGO): low-grade inputs doom a
model’s outputs. Yet today’s text- and image-generators train
on billions of lightly filtered pages and pictures scraped straight
from the open web, and still return prose and visuals that eclipse
the average quality of that raw material. Their results are hardly
flawless, but they are unmistakably better than the median of
their sources—an outcome that overturns the older GIGO credo.
Harnessing such oceans of data, however, is only possible when
the software is written with deep mechanical sympathy, squeezing
every cycle and memory bank the hardware can offer.
Third, ever-larger models display emergent skills for which their
loss functions were never designed. For example, once large lan-
guage models (LLMs) reach billions of parameters and are trained
on billions of tokens, they start exhibiting translation capabilities.
Yet the loss function reflects only next-token prediction; it does not
account for translation. Similarly, text-to-image models exhibit
emergent capabilities such as style transfer that go beyond their
training data.
Those abilities upend an assumption borrowed from operations
research. Until then, it was generally assumed that the best way
to solve an optimization problem was to steer the process by
the very metric used to evaluate the final solution. Proxy loss
functions—approximations of the true loss—were acceptable only
when they offered significant computational benefits—a trade-off
between solution quality and resources. However, the emergent
capabilities of generative models have repeatedly surpassed prior
state-of-the-art methods driven by more direct loss functions.
In the generative era, the relationship between the score for
a given task and the loss function used to drive stochastic gra-
dient descent has become tenuous. Self-supervised learning has
supplanted supervised learning, which had dominated until then.
The intuition behind self-supervised learning is that the quantity
of unlabeled data³⁸ typically dwarfs the amount of labeled data.
Thus, if learning is decoupled from the narrow task of predicting
“labels” and instead focuses on intrinsic properties of the data, it
can better capture underlying structure.
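A toy Python sketch of this decoupling follows: the training pairs are carved out of the raw text itself (next-token prediction), so no manual labeling is needed; tokenization is reduced to whitespace splitting for brevity.

text = "stock levels drift unless replenishment decisions are revisited every day"
tokens = text.split()

# Each pair is (context so far, token to predict): the labels come from the data itself.
training_pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
for context, target in training_pairs[:3]:
    print(context, "->", target)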
While generative models have redefined the state of the art in
machine learning, leveraging them for supply chain purposes is not
as self-evident as many software vendors claim. Relational records
constitute the bulk of data relevant to supply chain, yet neither
text completion, text-to-image, nor image-to-text models are well
suited for processing such data. Two propositions commonly put
forward by vendors in the early 2020s warrant refutation.
First, large language models fit text, not numerical time-series
forecasting. Although there are proposals³⁹ to enhance the arith-
metic capabilities of such models, these developments are unlikely
to render language models competitive with models specifically
engineered for numerical processing. At best, large language mod-
els may become an extremely convoluted way to achieve what
simple parametric models can accomplish with a tiny fraction of
³⁸ In machine learning, “labels” refer to known instances or known solutions
to a given learning task. For example, in automated translation from French to
English, the label for a French text is its corresponding, accurately translated
English text.
³⁹ See xVal: A Continuous Number Encoding for Large Language Models (2023)
by Golkar et al.
the computing resources.
Second, chat-style interfaces usually hurt productivity. Convers-
ing with a chatbot is significantly slower than traditional navigation.
Any task intended to be repeated routinely deserves a “traditional”
user experience rather than a cumbersome conversational interface.
In an enterprise environment, employees are expected to be trained
and proficient with the system—not to “muddle through” it. The
enterprise perspective differs fundamentally from the end-consumer
perspective, where users are assumed to be less knowledgeable re-
gardless of the task.
In supply chain, large language models shine at drafting nu-
merical recipes and smoothing the informal work that surrounds
decisions; we will revisit these contributions later.
Chapter 7
The Future
Every supply chain decision carries an implicit claim: “we expect
the company’s future conditions to make today’s allocation of
resources profitable.” The prediction may remain unsaid, but it
cannot be evaded. A firm may ignore the forecasting challenge; it
cannot ignore the consequences.
Thus, the future—and, more generally, time—matters greatly
to supply chain. Yet time is deceptively difficult—not because it is
too abstract, but because it appears self-evident and resists delib-
erate thought. The future is akin to the air we breathe: ubiquitous
and yet invisible.¹ With such topics, one is tempted to take pre-
conceptions for granted and skip the intellectual work altogether.
Chemistry taught us that millennia-old preconceptions about air
were wrong². Just so, in supply chain, unexamined beliefs about
tomorrow—often a tacit extrapolation of yesterday—quietly shape
prices, capacities, and inventories. When they are wrong, they
¹ While chemistry has advanced to where it is common knowledge that “air”
is 78% nitrogen and 21% oxygen, with the remainder mostly argon and water
vapor plus traces of other gases, it still takes a deliberate effort to bring those
facts to mind.
[2] Ancient Greek philosophers like Aristotle classified air as one of the four fundamental elements—alongside earth, water, and fire—without recognizing its composite nature. Only in the late 18th century, through scientists such as
Antoine Laurent de Lavoisier, was air’s true composition established.
harden into plans that misallocate scarce resources and propagate
failures across the flow.
To anchor the discussion, this chapter connects the simple
observation—every decision is a bet on tomorrow—with the in-
struments that make such bets profitable. It proceeds in three
moves.
First, we surface the unspoken visions that steer how firms think
about the future. Much twentieth-century practice descends from a
teleological vision: settle tomorrow by forecasts and an engineered
plan. By contrast stands a rugged vision: treat tomorrow as a
field of opportunities under uncertainty and preserve options so
the firm can move when the odds turn favorable.
Second, we examine why forecast-driven planning collapses
in business even when it looks mathematically tidy: time-series
formalism flattens what matters; the future is shaped by decisions
not yet taken; small numbers, batching, adversarial behavior, and
fat tails shatter the neat symmetry between past and future. The
result is brittle orchestration and bureaucratic ritual.
Third, we assemble predictive instruments that fit the rugged
vision and the economics of optionality introduced earlier in the
book. We replace point time series with probabilistic forecasts (de-
mand, lead times, prices, returns), add high-dimensional forecasts
(market baskets, substitution, contagion), and introduce functional
forecasts that speak in the native objects of the flow rather than
in equispaced time buckets. These tools do not guess “the” future;
they price alternatives and make opportunity costs explicit.
7.1 Visions
[A vision] is what we sense or feel before we have constructed
any systematic reasoning that could be called a theory [...] A vision is our sense of how the world works [...] Visions
are all, to some extent, simplistic though that is a term
usually reserved for other people’s visions, not our own. A
Conflict of Visions (1987), Thomas Sowell.
Of all the unspoken ideas that drift around us, the most power-
ful shape our intuition of causality for the objects at hand—here,
supply chains. Indeed, this intuition defines how we frame situ-
ations, how we see problems—whether we see them at all. Here,
vision refers to that intuition of causality.
Vision precedes theory, which precedes models. Visions perme-
ate companies—their cultures, practices, and processes. Visions
matter because they serve as a rough compass for thought. Amid
the infinite subjects that could claim our attention, visions decide
what is worth thinking about.
For example, “Day 1”, as advocated by Jeff Bezos, Amazon’s
founder, is one such vision. It emphasizes a perpetual startup
mindset—continuous innovation and proactive action—to avoid the
stagnation of “Day 2”. It means treating every day as if it is the first,
encouraging rapid decisions, embracing experimentation—even at
the risk of failure—and favoring long-term opportunities over short-
term gains. By adhering to the “Day 1” mindset, Amazon aims to
sustain its entrepreneurial spirit and avoid the complacency that
plagues larger, more mature organizations.
Yet the opposite vision also has its proponents. As Captain
Chesley “Sully” Sullenberger—famed for the Hudson River emer-
gency landing that saved 155 lives—remarked, “For 42 years, I’ve
been making small, regular deposits in this bank of experience,
education, and training. And on January 15, the balance was
sufficient for a very large withdrawal.” This view dominates avia-
tion, combating complacency not through disruptive innovation
but through relentless refinement of existing practice. Past failures
were paid in blood, and future ones will be as well.
Visions govern how people instinctively approach complex sys-
tems—systems beyond what a human mind can readily grasp in
full. Consider a retail store struggling to maintain adequate stock,
with half its shelves empty. Instinctive diagnoses will vary greatly,
depending on one’s vision of how supply chain works.
Take a professor of supply chain analytics. He may instinctively
blame inaccuracies in the demand forecasts driving replenishment.
Here, the vision places the burden of service quality on a techno-
logical solution—on software. This vision extends to his academic
community, whose research influences the design and accuracy of
such software, thereby reinforcing a technology-centered view.
By contrast, a regional manager in the same chain may blame
poor store stewardship. In this vision, people—the store manager
and staff—are responsible for running the store efficiently. Respon-
sibility lies first with those closest to the problem. This vision also
implicates upper management: they allow an ineffective store man-
ager to persist in his position, further underlining a people-focused
view.
These two visions arise from the same empty shelves but assign
responsibility—and, consequently, resolution—to entirely different
quarters. One turns to a technological solution, the other to a
management solution. A third, equally plausible diagnosis lies
outside the firm: rampant shoplifting makes displayed inventory
vanish. Naturally, whether the store’s problem stems from faulty
software, poor management, or an adverse public-order context is
another matter entirely. Visions prove nothing; they merely govern
our immediate assessment of complex situations.
Visions are consequential. History shows that many businesses
came to dominate their markets thanks to the peculiar visions of
their founders—“visionaries,” aptly named[3]. Yet the same visions
can, over time, set companies up for failure. Thus, for any company,
it is reasonable to identify the dominant visions held by upper
management and assess their fitness for the business.
Unlike corporate strategies, visions are rarely discussed, let
alone weighed for merits and demerits. Within the same company,
people may hold radically different visions. Because visions are
rarely spelled out, employees feel that whenever they push, others
pull—and vice versa. Yet divergence in vision is seldom named as
the cause.
[3] Standard Oil, General Electric, Ford, Kodak, IKEA, McDonald’s, Apple
are but a few that became, at some point, vision-led giants.
7.1.1 Values
In politics as well as in business, leaders often highlight divergent
values to underscore differences with their rivals. The phrase “We
do not have the same values” can be heard on all sides. Yet this
perspective, while not without merit, tends to obscure the profound
influence of visions.
In A Conflict of Visions (1987), Thomas Sowell astutely notes
that when people meet differing interpretations of the same facts,
they tend to attribute the differences to values. Often the variation
in values is far less pronounced than the catchphrase “We do not
have the same values” suggests. In the political realm, for instance,
one would be hard-pressed to find anyone advocating for poverty,
crime, or war. Yet despite shared values against these ills, people’s
visions guide them toward starkly different solutions.
The same holds in supply chains. Regardless of field or sector,
companies seek profitability, growth, service quality, and waste
reduction. Companies that openly oppose such values are rare, if
they exist at all. Different visions yield markedly different strategies
and practices, all aimed at the same values.
Consider Amazon’s founder, Jeff Bezos, who has often empha-
sized a relentless focus on the customer. He once said, “The most
important single thing is to focus obsessively on the customer. Our
goal is to be Earth’s most customer-centric company.” That is a
statement of values—and a fairly mundane one. Indeed, execu-
tives rarely devalue customers in public. When an executive is
caught doing so, he rarely keeps his position afterward. What
distinguishes Amazon is not its espoused values—which align with
most businesses—but its distinctive vision and culture.
Thus, as we assess how understanding the future shapes supply
chain practice, we must remember that men may propose radically
different solutions while sharing the same values. More often than
not, the divergences are rooted in conflicting visions.
7.1.2 Time
The passage of time shapes our experience and our understanding of
the world. It is so concrete and omnipresent that abstracting away
the dimension of time is difficult. Yet many—if not most—supply-
chain challenges deserve this treatment, if only to surface the
underlying visions at work with regard to time. Thus we pose
a series of questions about time. At this stage, answering those
questions is unimportant and will not be attempted. Instead, we
aim to characterize the visions—the intuitions of causality.
Do we see the future as knowable or unknowable?
Forecasting is only as valuable as the portion of tomorrow
that can be anticipated in practice. When the future is at least
partly knowable, forecasts are powerful instruments; when it is
dominated by noise, they are mere ceremony. Planning inherits
the same premise: it pays only if some features of the future can
be foreseen reliably enough to guide commitments. If outcomes
routinely diverge from expectations, planning effort turns to waste.
Do we see the future as fated? Or open to change?
If the future is fixed, a forecast is an early measurement of a
state that will occur; accuracy is judged by its match with the
realized outcome. If the future is malleable, a forecast acts as
a directive shaping behavior; accuracy is judged by how closely
execution conforms to the plan.
Do we see uncertainty as a flaw or as risks and opportunities?
Few dispute that the future is uncertain. One view treats that
uncertainty as a defect to be minimized—via sharper forecasts,
tighter controls, and prompt corrective actions. The other treats it
as raw material—to be priced, hedged, and occasionally embraced
when odds favor it.
Do we see acting sooner as preferable to deferring until a later,
better-informed decision?
Momentous decisions made on the spot strike some as rash and
others as decisive. Likewise, inaction reads either as weak-willed
hesitation or as strength to wait for better information.
Do we see knowledge as a characteristic of what doesn’t change,
or as the specific circumstances of time and place?
Scientific knowledge aims at permanence: theories are cast as
timeless claims, and reproducibility assumes a phenomenon that
does not change with date or venue. Business knowledge is the
opposite. Once a fact is widely known, its private value collapses.
What creates an edge is the specific circumstances of time and
place—who wants what, at what price, given today’s capacities
and constraints. Valuable knowledge is therefore local, timely, and
perishable.
Do we see the past as a foundational asset to build upon, or a
flawed legacy steering people toward complacency?
Founders may start from a blank page; every successor inherits
a status quo. One stance treats that legacy as a compounding
asset—skills, supplier ties, data, and routines that, when refined,
reduce risk and cost. The other treats it as a liability—sunk costs,
ossified habits, and internal constituencies that bias choices toward
yesterday’s market.
Do we see investment in people as indefinitely accretive, or is
accretion reserved for technologies and techniques?
Some see human capability as compounding. With practice,
feedback, and better mental models, planners keep getting faster
and more accurate; thus spending on training is a capital-like
outlay. Others expect an early plateau: once competence is reached,
durable gains come chiefly from improved techniques and software;
investment should therefore target tools rather than people. The
stance taken strongly governs what counts as an “investment” in
the first place.
These questions are, of course, false dichotomies: the truth may
lie between the poles, vary with context, or lie elsewhere entirely.
Far from mere philosophical musings, these questions show that
one’s intuition about time—about the future—is consequential.
Diverging visions do not merely produce divergent solutions: people
may not even agree on the problems worth solving.
Within supply chain circles, views of time and the future have
largely converged into what we call the teleological vision: the
belief that tomorrow can be fixed in advance by forecasts and
then engineered into existence through compliance with a plan.
Over the 20th century, this view became dominant and remains the default in the 21st century. It is seldom named, rarely questioned,
and silently assumed by most authors and enterprise software
alike. The next section makes this vision explicit and examines its
consequences.
7.2 The teleological vision
The future remains inaccessible only to those who have
not embraced the mathematical instruments that modern
science offers. Whatever weaknesses still plague our present-
day forecasting models will ultimately be remedied by the
unstoppable force of progress. Furthermore, those instru-
ments not only tell us what will come to pass but also why.
Thus, the modern leader, unlike his primitive predecessor,
leaves nothing to chance. His plans are no longer rooted in
the fallible intuition of men; rather, they are engineered to
the precision that characterizes this machine age.
These lines capture the essence of the teleological vision[4]. Over the course of the 20th century, this vision became the dominant
view in intellectual circles. By intellectuals we mean people who
make a living selling their ideas (writers, historians, academics),
as opposed to selling tangible services or products. This definition
passes no judgment on intelligence; it simply labels occupations.
For example, a journalist is an intellectual, while a neurosurgeon
is not. However, the neurosurgeon can be expected to be more
educated and more intelligent than the journalist; such is the
expectation of every patient placing his life in the surgeon’s hands.
Once a view grips intellectual circles, it often spreads and shapes
public opinion[5]. The teleological vision became dominant—if
not exclusive—within supply chain circles, among authors and
[4] The Oxford dictionary defines “teleological” as “relating to or involving the explanation of phenomena in terms of the purpose they serve rather than of the cause by which they arise.”
[5] See Intellectuals and Society (2009) by Thomas Sowell.
practitioners alike. Enterprise software vendors entrenched this
vision through workflows and technologies that left little or no
room for alternative views.
Unfortunately, this vision is as popular as it is flawed. Despite a
mountain of contrary evidence, it remains one of the most seductive
economic ideas of all time. The prospect of weeding this vision out
of supply chain is daunting but necessary. We must first examine
where this view came from and how it spawned its supply-chain
offshoots.
7.2.1 Historical emergence
Modern quantitative forecasting[6]—as opposed to the traditional soothsayer, augur, oracle, or prophet—emerged at the beginning of the 20th century in the United States with the first generation
of economic forecasters. Demand for forecasts was fueled by the
emergence of an investing middle class unique to the United States
at the time. At the start of the 20th century, only half a million
Americans owned stocks; by the end of the 1920s, that figure rose
to 10 million. Those early forecasters were confident that their
techniques would ultimately make economic uncertainty a relic of
the past:
Week after week, his [Roger Babson] newsletter explained
to its readers that economic events were neither random nor
uncertain but governed by forces that could be understood
through careful study. He began issuing regular forecasts
after the Panic of 1907. [. . . ] He offered prognostications
on everything from prices to employment to sales. —Walter
A. Friedman, Fortune Tellers (2013)
[6] The modern flavor of weather forecasting predates economic forecasting by
more than half a century, dating back to the invention of the electric telegraph
in 1835. By the late 1840s, weather forecasts could be made from knowledge of
weather conditions farther upwind, because information traveled faster than the
winds themselves. However, weather forecasting belongs to the natural sciences,
whereas economic forecasting does not.
Babson’s swagger set the template for economic pundits through-
out the century. Babson’s core idea was to adapt Newton’s third
law[7]—also known as the law of action and reaction—to economics.
He held that it would ultimately render the course of the econ-
omy as predictable as that of celestial bodies. While that idea is now widely regarded as naive and misguided, mid-20th-century forecasters remained as confident as Babson, shifting their trust from Newtonian
mechanics to statistics.
The Soviet Union’s Gosplan, founded in 1921, began issuing
annual plans in 1925. The committee drew directly on a teleologi-
cal vision that the future could be conquered. Yet the committee
produced nothing but unrealistic plans that could not be executed.
The results proved disastrous. The Soviet economists Nikolai
Shmelev and Vladimir Popov write: “The administrative system
was distinguished by economic romanticism, profound economic
illiteracy, and incredible exaggeration of the real effect that the
‘administrative factor’ had on economic processes and on the moti-
vations of the public.”
Popular culture echoed these beliefs: Isaac Asimov’s science-
fiction series Foundation (1951) captures the prevailing views of
the era’s intellectuals. The central storyline turns on a fictitious
theory, “psychohistory”, blending particle physics and sociology,
said to achieve near-perfect predictive power when vast numbers
of humans are involved. Asimov concedes, however, that some individuals can counter psychohistory’s predictive power; it takes what amounts to a demigod—a human blessed with immense psychic powers—to do so.
Most twentieth-century intellectuals remained supremely con-
fident in the ultimate predictability of human affairs, even as
economic forecasts kept failing. While Gosplan’s failure can be
attributed, in part, to the general flaws of socialism, market-
driven American pioneers such as Moody’s and Standard & Poor’s
also suffered setbacks. Both began by selling investment publica-
[7] Newton’s third law states that when two bodies interact, they apply forces
to one another that are equal in magnitude and opposite in direction.
tions—manuals, company digests, and market letters that included
explicit forecasts—rather than merely tabulating past accounts.
By mid-century, both firms had pivoted to credit rating—a “safer”
business that spared them the embarrassment of inevitably faulty
forecasts.
Forecasting setbacks did not slow operations research: as the
field rose in the 1950s and 1960s, it embraced the teleological
view. The future was taken as a known input to the model,
either under stationary conditions or via a time-series forecast.
By the end of the 1970s, time series had become the cornerstone
of quantitative supply chain theories. The prominence of time-
series forecasts increased further with the rise of S&OP (Sales and
Operations Planning) at the end of the 20th century. Their status
and importance remained essentially unchallenged until the early
2010s.
7.2.2 Time-series paradigm
A time series is a sequence of values of a quantity observed at succes-
sive times, typically at equal intervals[8] between them. Extending
the series beyond today yields a forecast—specifically, a time-series
forecast. While time series can represent anything—from sunspot
counts to a country’s population—most supply chain authors and
software use them primarily to represent demand, measured in
units (eaches) or in monetary terms.
A time series is an unbounded sequence of numbers laid out
on a ruler, with “0” marking today. Everything to the left has
already happened; everything to the right has yet to be inferred. In
practice, we fill those slots with forecasts, treating time in separate,
evenly spaced steps (days, weeks, and so on).
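
To make the formalism concrete, the following sketch (in Python, with invented figures; it illustrates the paradigm itself rather than any particular software) stores a demand history as a plain array of equispaced weekly buckets and fills the future slots with a naive moving-average extension:

# Minimal sketch of the time-series paradigm; all figures are invented for illustration.
history = [120, 95, 140, 110, 130, 105, 125, 150]  # past weekly demand, oldest first

def naive_extension(series, horizon, window=4):
    """Fill future buckets with a moving average of the most recent values (including earlier forecasts)."""
    values = list(series)
    for _ in range(horizon):
        values.append(sum(values[-window:]) / window)
    return values[len(series):]

print(naive_extension(history, horizon=3))  # three point forecasts, one per future bucket

The forecast is nothing more than additional numbers appended to the right of the ruler.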
This formalism clarifies a few assumptions that underpin the
[8] A time series featuring equal time intervals is said to be equispaced. In
practice, non-equispaced time series are so rare in the literature that any
reference to a time series implicitly assumes equal intervals. However, note that
monthly and yearly intervals are not strictly “equal”: both months and years
have varying numbers of days.
Figure 7.1: Historical time series (solid) up to the present, and its forecast (dashed). [Plot of value against time, with the present marked.]
time-series paradigm. First, time series have no beginning and no
end, like a geometric line. Of course, any visual representation is
truncated to a finite segment for practical reasons. However, this
fragment should not be confused with the whole.
Second, time series are structurally symmetrical with regard to
their time dimension. By “structural” we do not mean past and
future values coincide, only that they are of the same nature. They
are undifferentiated save for the cutoff at time zero, a matter of
convention.
Third, time series adopt a discrete time axis. This choice
drastically simplifies both the display of time series and the models
built atop them, whether analytical or predictive.
These three assumptions match the prevailing view in the
natural sciences. A meteorologist studying temperatures at a given
location adopts the same paradigm. Though records begin at some
date, the meteorologist does not doubt temperatures could have
been recorded earlier. Conversely, measurements can continue
indefinitely. Although temperature evolves continuously, records
are deliberately discretized—for example, one daily measurement
at a fixed hour—to ease analysis. Finally, the physical laws are
nearly perfectly symmetric[9] with respect to time.
It is hard to overstate the time-series paradigm’s dominance in
supply-chain circles at the beginning of the 21st century. Nearly
all forecasting literature revolves around demand time-series fore-
casting. Few authors acknowledge alternative forecasting tar-
gets[10]—beyond demand—and fewer still acknowledge alternative
forecasting structures—beyond time series. Forecasting has effec-
tively become synonymous with “demand time-series forecasting”.
Similarly, most “applied optimization”—the heir to operations
research—takes time series as input whenever possible. Again, few
acknowledge alternative perspectives.
On the software side, most reporting systems rely on OLAP
(online analytical processing) cubes that, by design, embrace the
time-series paradigm. Among the cube’s dimensions, one is invari-
ably time. Crucially, such cubes are incompatible with alterna-
tive—more expressive—structures beyond time series.
On the management front, the widespread practice of S&OP
(Sales and Operations Planning) puts time series squarely at the
center. Cross-functional teams convene to produce a single deliver-
able: the strategic plan. In practice, that plan is a bundle of time
series—usually the firm’s projected sales—arrayed across future
time buckets.
[9] Nearly all fundamental laws of physics, such as Maxwell’s equations, are time-reversal symmetric. Reversing time (t ↦ −t) leaves their equations unchanged. However, the Standard Model exhibits time-reversal violation via the weak force (CP violation). CP violation was discovered and experimentally verified in
the 1960s. Nevertheless, it remains an advanced consideration, largely ignored
outside physics—especially at the “vision” level.
[10] A Google Scholar survey over roughly the past 50 years (query run in 2024) indicates a ratio on the order of 1,000 papers mentioning “demand forecasting” for every paper mentioning “lead time forecasting”. The imbalance is striking
given that nearly every firm operating a supply chain must anticipate both
future demand and future lead times.
7.2.3 Planning through time-series
In business, a time-series forecast is not a neutral projection as in
the natural sciences; it doubles as a commitment. Once a forecast
is sanctioned, it becomes the baseline that sets budgets, reserves
capacity, and schedules replenishments. The organization then
works to make those numbers hold; the forecast becomes less a
guess about tomorrow than an instruction to shape it.
Once forecasts are set—future demand treated as known—all
that remains is to orchestrate resources to meet it promptly at
minimal cost. In the teleological view, planning means fixing future
demand and assigning the matching resources.
Indeed, if the future is treated as settled—one known daily
demand for every SKU at every location—then the planning prob-
lem collapses into arithmetic: the material decisions (what to buy,
make, or move; where; when; and how much) are mechanically
inferred from the forecast itself[11]. Execution then reduces to
gauging compliance with “the plan”. For example, if tomorrow’s
e-commerce forecast for a given SKU at the fulfillment center is
exactly 2 units, stocking fewer is a planned stockout; stocking more
is, by definition, avoidable overstock that a more precisely timed
replenishment “should” have prevented.
Once the forecast is sanctified as “the plan,” the firm’s agency
migrates to the ritual of fixing that number. The forecast sits on
the throne; everything else is compliance. Hence the turf wars
around the baseline—not a dispute about statistical merit, but a
budget negotiation in disguise. To forecasters, the wrangling looks
perverse; to managers, “accuracy” is incidental. The real prize
is a larger share of capital, capacity, and attention for one’s own
fiefdom.
Orchestration then runs into a structural mismatch: forecasts
[11] This perspective is exemplified by the book Flowcasting the Retail Supply
Chain (2006) by American supply chain consultants Mike Doherty and Andre
Martin. The flowcasting approach consists of establishing daily forecasts at the
store level, SKU by SKU, and of deriving all the allocations across the multiple
echelons of the network from those “foundational” forecasts.
are cast in one geometry, resources in another. The baseline
may be weekly by SKU, while trucks depart daily and never on
Sundays; production is batched around changeovers; purchase
orders obey MOQs and price breaks. A small army of planners
is therefore tasked with back-scheduling every detail—shifting
quantities across days, squeezing orders before cutoffs, padding
here, trimming there—so the ledger appears aligned with the
baseline. To an outsider this bustle looks like “making decisions”.
In truth, discretion is minimal: choices are bounded by compliance
with the plan, and the remaining degrees of freedom are clerical
rather than economic.
This view also explains why pricing is almost always ignored by
supply chain authors, practitioners, and software vendors. Indeed,
because prices affect demand, they belong within supply chain—as
we maintain throughout this book—yet the classic stance, following
the teleological view, ignores them altogether. Prices are simply
expected to be settled prior and independently of the planning
phase. To the traditional supply-and-demand planner, prices are a
given—neither to be assessed nor challenged.
This view of planning—i.e., a forecast steering the allocation
of resources—is so dominant in present-day supply chain circles
that few even acknowledge alternatives. Disagreements are nar-
rowed to the choice of forecasting methods, inventory models, and
orchestration workflows. Yet, despite its popularity, teleological
planning has demerits and deserves scrutiny.
7.2.4 Limits of planning
So let us call here the teleological fallacy the illusion that
you know exactly where you are going, and that you knew
exactly where you were going in the past, and that others
have succeeded in the past by knowing where they were
going. Antifragile: Things that Gain from Disorder (2012)
by Nassim Nicholas Taleb
Planning, as it emerges through the teleological vision, is deeply
flawed. Still, despite a century of contrary evidence—most noto-
riously the fiascos of the Soviet Gosplan—it remains the central
doctrine of large organizations, public and private. Few ideas have
seduced the intellectual circles of the 20th century so thoroughly
while delivering so little. Before examining why this fascination en-
dures, we first clarify planning’s technical shortcomings, beginning
with the inadequacies of time series.
As a mathematical structure, time series are ill-suited to reflect
business situations. It is tempting—but profoundly misguided—to
recycle time series as used in the natural sciences. The crux of the
issue is that, in business settings, time series categorically ignore
entire facets of the phenomenon under scrutiny.
Consider the meteorologist taking one temperature measure-
ment every day at the same place—the time series of interest.
The meteorologist knows that the measurement itself could be
made more precise with slightly better equipment. He knows the
measurement could be taken twice, a minute apart, or 100 meters
apart. None of these improvements, however, is expected to change
the picture radically. It is always possible to make measurements
more precise or more granular—but with diminishing returns. The
time series is an approximation, yet the meteorologist controls its
nature and extent. The meteorologist is never “seconds away” or
“meters away” from a radical breakthrough that would upend his
understanding of the weather.
Business doesn’t play by those rules. Seemingly minor measure-
ment changes can radically alter the assessment of the situation.
Consider a company that has sold the same product for years. Sales
are steady—about 1,000 units per week—with little fluctuation.
What are the odds that the demand for this product will drop
to zero next week and remain at zero forever? To answer this
question, let’s consider two situations, both exhibiting identical
historical time series of weekly sales.
Consider two demand structures producing the same weekly
total of about 1,000 units: Case A—1,000 independent customers,
each buying roughly one unit per week; Case B—a single cus-
tomer buying roughly 1,000 units per week. The historical time
series—weekly totals over years—looks identical in both cases, yet
the tail risks diverge. In Case A, taking next week’s sales to zero
would require losing essentially all customers, a process that—short
of catastrophe—unfolds over many weeks or months. In Case B,
one decision by one buyer suffices; the hazard of complete collapse
is the churn probability of that single account.
Consider the inventory implications. In Case A, declines typ-
ically appear gradually, allowing stocks to be trimmed as churn
reveals itself; write-offs tend to remain contained. In Case B, a
large write-off is a matter of when, not if : once the sole buyer
defects, the stock accumulated for them becomes overhang. Thus,
two situations with identical weekly sales time series entail opposite
risk profiles—and call for different decisions.
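
A back-of-the-envelope calculation makes the divergence explicit. The sketch below (in Python) assumes, purely for illustration, that any given customer defects in a given week with a probability of 0.2%:

# Same average weekly demand, opposite tail risks; the churn figure is an invented assumption.
weekly_churn = 0.002  # assumed probability that any single customer defects in a given week

# Case A: 1,000 independent customers; demand vanishes only if every one of them defects at once.
p_collapse_a = weekly_churn ** 1000   # so small it underflows to 0.0 in double precision

# Case B: one customer carries the whole volume; a single defection suffices.
p_collapse_b = weekly_churn

print(f"Case A, total collapse next week: ~{p_collapse_a:.0e}")
print(f"Case B, total collapse next week: ~{p_collapse_b:.0e}")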
Consider a second setting: a store sells an average of 10 units per
week of a given SKU. The store is open seven days and replenished
daily. What on-hand level delivers reliable service? The point
forecast for daily demand is 1.4 units, yet two demand structures
sharing the same time series imply opposite stocking needs.
Variant A: many small baskets. Ten distinct customers each
buy one unit during the week. With daily replenishment, keeping
roughly 5–6 units on hand suffices for most days: one day of cycle
stock plus a modest buffer covers ordinary noise.
Variant B: few large baskets. One customer per week buys 10
units at once. Holding fewer than 10 units guarantees a stockout
for that visit; to achieve decent service, you need at least 10,
and because two such customers may arrive on the same day,
20—possibly 30—is the relevant range.
The time series is identical in both variants, yet the econom-
ically sensible stock position differs by a factor of four or more.
What matters is the distribution of basket sizes, not the average
parceled into daily buckets.
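
The gap can be reproduced with a small simulation. The sketch below (in Python) assumes Poisson basket arrivals and an illustrative 95% daily-coverage target; both variants average 10 units per week:

import math
import random

def daily_demand(basket_size, baskets_per_day, days=100_000):
    """Simulate daily demand when fixed-size baskets arrive as a Poisson process."""
    threshold = math.exp(-baskets_per_day)
    samples = []
    for _ in range(days):
        count, p = 0, 1.0            # Knuth's method for drawing a Poisson count
        while True:
            p *= random.random()
            if p <= threshold:
                break
            count += 1
        samples.append(count * basket_size)
    return samples

def stock_for_coverage(samples, target=0.95):
    """Smallest on-hand level that covers the day's demand with the target probability."""
    return sorted(samples)[int(target * len(samples))]

random.seed(42)
variant_a = daily_demand(basket_size=1, baskets_per_day=10 / 7)   # ten one-unit baskets per week
variant_b = daily_demand(basket_size=10, baskets_per_day=1 / 7)   # one ten-unit basket per week
print("Variant A:", stock_for_coverage(variant_a))  # a few units suffice
print("Variant B:", stock_for_coverage(variant_b))  # at least one full basket of 10

Raising the coverage target widens the gap further, since two ten-unit baskets occasionally land on the same day.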
As the two examples above demonstrate, even for a single
SKU, time series flatten critical aspects of the flow. Time series
are a lossy representation of transactional history: the projection
discards information, and unlike meteorology, what is lost often
proves essential. Consequently, the resulting plans are invariably
dysfunctional because they ignore critical aspects of the situation.
As the weakness lies in the time-series paradigm itself, it is delu-
sional to think the matter can be cured by greater sophistication.
Adding more series, extending their horizon backward or forward,
increasing granularity, or improving point accuracy won’t fix the
problem.
Many more similar examples could be adduced. Even for a
single SKU, time series fail: with perishables (each unit has its own
shelf life); with repairables (each unit has its own life cycle); with
reverse logistics (returns create confusion); with product launches
(no series yet), etc. Indeed, it is hard to find a vertical where a
sizable share of the business is incompatible with the time-series
paradigm—and thus with “planning”.
These single-SKU caveats do not vanish at scale; they explode.
In a sizable firm with thousands of SKUs, the time-series lens—and
the planning rituals built on it—produces the same errors, only
magnified. Consider a sportswear company with its own stores
that, among many items, sells a black backpack. Under the plan-
ning view, the firm prepares a 12-month sales projection for that
backpack and sets production accordingly, then repeats the exer-
cise across the assortment. The practice is so common that few
question it, yet its premise is unsound.
The projection is nonsensical. Prominent in-store placement
would lift demand; confining the backpack to e-commerce would
suppress it. If the company introduces many backpack vari-
ants—colors and sizes—the variants will cannibalize demand for
the original backpack. If the company adopts an aggressive price,
making the backpack the tip of a marketing campaign to draw new
customers into the store network, demand will be far higher than
if it is merely one unremarkable product within a vast assortment.
The plan doesn’t make sense because each projection is con-
ditional on decisions that have yet to be made, yet the exercise
proceeds as if those decisions had already been made. Once the
plan is issued, clerks must reverse-engineer their choices to make
subsequent commitments appear to conform to it. In practice, the
plan is not impossible; it is made “feasible” only through costly
approximations—expedite fees, excess buffers, artificial smooth-
ing, and deferred obligations—that keep the ledger aligned while
eroding economics. Feasibility here is an accounting artifact rather
than an operational truth.
There was nothing special or surprising about Gosplan’s pro-
ducing infeasible plans. This is a defect inherent in teleologi-
cal planning. Corporate executives, feudal lords, and political
commissars—adopting the same view—would likewise end up with
infeasible plans. To understand why, remember that supply chain,
at its core, is an economic process: allocating resources that have
alternative uses.
When conjuring a plan, managers implicitly command thou-
sands of scarce resources and expect them to be ready at the right
time, place, quantity, and cost. The odds of perfect alignment
between the plan’s innumerable expectations and the resources
actually available to the company—at every time and location
throughout the planning horizon—are essentially nil.
Time-series planning—and, more broadly, the teleological vi-
sion—does not merely fall short; it misdirects effort. Yet well into
the 21st century, it remains the default in most companies and in
nearly every supply chain forum. How did a repeatedly refuted
notion become orthodoxy? The explanation lies less in logic than
in incentives. The next section examines those incentives—the
bureaucratic appeal of planning—and shows why the doctrine
endures despite its poor results.
7.2.5 Bureaucratic appeal
Corporations are in love with the idea of the strategic plan.
They need to pay to figure out where they are going. Yet
there is no evidence that strategic planning works—we even
seem to have evidence against it. A management scholar,
William Starbuck, has published a few papers debunking the
effectiveness of planning—it makes the corporation option-
blind, as it gets locked into a non-opportunistic course of
action. Antifragile: Things that Gain from Disorder (2012),
Nassim Nicholas Taleb.
The Gosplan might be gone, but its spirit lives on in large
corporations and government agencies. The teleological vision is
undeniably appealing, but not solely because of ignorance or stu-
pidity. Its staying power owes less to ignorance than to incentives
that reward plan theater; unless those incentives are named and
confronted, the doctrine will endure.
At its core, planning is a bureaucratic function executed by
back-office specialists. The function is unavoidable: the firm needs
a mechanism to allocate a tangled mix of resources across time
and locations. These specialists seldom touch the physical flow directly; they
orchestrate it indirectly. The allocation problem mirrors the flow it
governs and is too intricate for any one person to manage end-to-
end. Because the work is analytical and data-heavy, specialists are
markedly more productive than generalists at it. Thus, spreading
the workload across the entire management layer is rarely viable;
the sound pattern is to centralize the task with a small cadre of
planners.
As in any bureaucracy, failure is punished—often harshly—while
success barely registers. For example, a successful salesman clos-
ing twice his expected yearly sales quota is in a strong position
to negotiate a steep raise. Yet a planner who halves the inven-
tory write-offs of his peers rarely receives comparable recognition.
While a salesperson’s success is viewed as personal, a planner’s
achievements are seen as dependent on many other actors inside
and outside the firm, such as suppliers. Conversely, a salesperson
is seldom dismissed for missing a lucrative deal, whereas a planner
who triggers a major write-off may face severe career consequences
if the loss is traced to the plan he proposed.
Whether justified or not, such preconceptions pervade large
companies, particularly among middle managers. Consequently,
members of the planning bureaucracy soon learn that dodging ac-
countability is safer than claiming credit for the occasional success.
Predictably, a process-oriented culture displaces a results-oriented
one. This outcome is sustained by redefining “good planning” as
adherence to codified processes. This twist makes the bureaucracy
nearly unassailable: success is redefined as continual compliance
with rules that it wrote for itself.
Unfortunately, mere process compliance yields little tangible
return in planning. Corporate planning is unlike aviation safety,
where each now-rare crash is examined to identify the adjustments
that will prevent a repeat. Instead, planning mistakes are so
frequent and so significant that such adjustments often introduce
at least as many flaws as they fix. As a result, mediocrity remains
a long-term characteristic of planning.
In particular, time-series forecasting creates an accountability
trap for the planning team. A point forecast is guaranteed to be
wrong at the point of use; when shortfalls or overages occur, the
miss is visible and, inside the firm, it accrues to the forecasters.
Bureaucracies avoid such single-owner exposure. The routine
response is to broaden “ownership” of the forecast—pulling sales,
marketing, finance, and operations into the ritual—not primarily
to sharpen the numbers, but to ensure that any error becomes a
collective one.
This maneuver typically appears as a large “collaborative” ini-
tiative that claims to improve forecast accuracy. S&OP (Sales and
Operations Planning) is one of the most popular flavors of such
initiatives. S&OP’s endless meetings ensure that many other par-
ties co-sign the forecasts. There is no empirical evidence that such
meetings improve planning quality. In fact, evidence shows that
large committees almost always deliver inferior results, whatever
the task at hand.
In short, teleological forecasting appeals to clerks because it
lets them sidestep both decisions and consequences. Forecasts
serve as an occupational buffer that deflects responsibility for the
resulting resource allocations. Planners then tweak the forecast-
ing process—often by branding it “collaborative”—to dilute their
exposure even further.
Although a straightforward analysis lays bare the flaws of the
teleological view, applying that critique to a specific company
is often hard. The flaws are typically masked by methodological
bolt-ons that give the planning process a “scientific” varnish.
Critique is not a method. Firms still need a compass—one that
does not pretend tomorrow is a fixed extension of yesterday. The
alternative is to treat uncertainty as an input rather than a defect,
to regard optionality as an asset to be cultivated and spent, and
to judge choices by their risk-adjusted contribution to profit rather
than by plan fidelity. This stance, common among entrepreneurs
and rare in bureaucracies, recenters agency on allocation rather
than on the forecast used to excuse it. We call this the rugged
vision, examined next.
7.3 The rugged vision
The future is irreducibly messy. Customers are fickle, em-
ployees capricious, and suppliers unreliable. Yet rivals face
the same turmoil. You need not be perfect—only better.
Every bump is also a passing lane. Decisive action beats
elaborate planning, and luck rewards the prepared. Prepa-
ration, then, is the art of cultivating opportunities rather
than guessing tomorrow’s exact market.
These remarks distill the rugged vision. The teleological camp
treats the future as a journey that demands an ever-more-detailed
map—the forecast. If the map is precise enough, no other instru-
ment is needed: it alone should guide the traveler to a known
destination. Traveling without such a map feels reckless to its
believers.
By contrast, the rugged vision relies on continuous opportunity
assessment—a homing beacon, not a map. If that beacon is sound,
progress is a matter of moving steadily in its direction. Knowing
the destination in advance adds no speed; shifting terrain soon
renders static maps useless.
Even though entrepreneurs practice it daily, many intellectuals
dismiss the rugged vision. Their scorn springs less from doubts
about its efficacy than from moral overtones, as though embracing
it reveals a flaw in character.
The first stigma arises from the implicit association of the
rugged vision with gambling and other forms of idle speculation.
However, while the entrepreneur does take his chances, this isn’t
“idle” speculation, and the process is expected to be profitable
for both the entrepreneur and his clients. For example, the en-
trepreneur who launches a product that becomes a massive hit may
be judged an undeserved success, as his economic contribution—to
the inattentive observer—seems minor. However, to achieve that
one success, he may have launched dozens of products, losing money
on each that didn’t hit. Furthermore, whether a product is a minor
variation or a radical improvement lies solely with consumers. A
major commercial success suggests that consumers found some-
thing distinctly appealing. Unfortunately, the costs of exploring
many angles to better serve the market are largely invisible to
outsiders, so success is often dismissed as pure luck—like a lottery
win.
The second stigma arises from characteristic behaviors that
many observers deem exploitative. Again, observers focus on the
benefits—real or imagined—gained by the entrepreneur, skewing
their perception. First, in a free market a transaction occurs only
when both parties consent. Accordingly, no matter the price asked
by the seller, the buyer cannot be “exploited” unless he agrees to
the price[12]. For example, a firm that rushes portable generators
into a storm-damaged region may charge far more than the “normal”
price. Yet nothing about this is exploitative. Buyers are not forced
to buy the generators, and those who do judge themselves better
off with one than without[13]. More generally, in the rugged vision,
[12] Most countries nowadays have all sorts of compelled transactions in place,
forcing people to pay for goods or services they didn’t choose: schooling,
healthcare, roads, vaccines, foreign aid, etc. Assessing whether those compelled
transactions are a net positive for society or not, and in which circumstances
they can be considered exploitative, is beyond the present scope. Let’s however
point out that those “transactions” have nothing to do with a free market.
[13] The “price gouging” laws are the archetype of this flawed understanding of
economics in general and of supply chains in particular. Those laws restrict both
the capacity of buyers to get what they want, when they want it the most—thus incurring a net loss for those underserved buyers—and the capacity
for the sellers to expand their offering through extra investments that could
only be recouped if their prices are higher than usual.
prices shape the flow: buying low and selling high is its driving
force. Yet it is often miscast as exploitative because inattentive
observers don’t realize that “buying low, selling high” isn’t luck
but requires constant effort and entails unavoidable financial risks
as well.
Since the 19th century, intellectual circles have labored to stigmatize the rugged vision[14] and have spread this perception
to the public. Many quirks of what passes for the “mainstream”
supply chain theory nowadays—such as dodging the obvious impact
of price on demand—owe much to this stigmatization.
Yet truth is not decided by popularity; the rugged vision is
the superior alternative to the teleological one, both conceptually
and empirically. Conceptually, the rugged vision embraces the
profit-seeking perspective and its attendant traits. For example,
adversarial behaviors have no place in the teleological view yet fit
naturally in the rugged view. Empirically, the rugged vision proves
more resilient to ambient chaos because it is far less dependent
on plan correctness, yielding superior risk-adjusted decisions. For
example, the very notion of “opportunities” is self-evident in the
rugged vision yet missing from the teleological one.
7.3.1 Irreducible uncertainty
The rugged vision assumes that the future is resolutely uncertain,
without implying that it is entirely unknowable. The rugged view
neither posits that possible futures are unthinkable nor that all
are equally likely. On the contrary, this view holds that some
futures are much more likely than others, that such futures can
be identified, and thus that resources can be allocated to support
those probable futures. In turn, customers can be served more
diligently and more profitably.
[14] See The Anti-Capitalistic Mentality (1956) by Ludwig von Mises for an
extensive discussion on why intellectual elites, and academia in particular, are
almost invariably fierce opponents of entrepreneurial behaviors, which includes
the rugged vision presented in this chapter.
The rugged vision holds that part of tomorrow can be antic-
ipated, often only in coarse, structural terms. Much of what we
know is comparative—“a gram of gold will almost surely remain
pricier than a gram of silver”—or relational—“SKU A will outsell
SKU B at equal price”. Such knowledge is operationally useful
yet awkward for equispaced point-time series, which offer only one
number per date. Time-series slots cannot express inequalities,
orderings, or conditional statements; they flatten relative facts into
absolute guesses.
Even when the knowledge is not comparative, its structure
resists the time-series mold. Consider aviation: global passenger
counts rise in most years, punctuated by rare, severe collapses. A
point series forces us to pick one number per future date; it cannot
encode “almost always up, except for catastrophic down years with
small probability”. Capturing asymmetry and tail risk requires
probability distributions or regime variables, not a single baseline
path. In short, the time-series formalism is too narrow for the
kinds of claims that give firms a real edge.
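
A toy sketch (in Python; the regime probabilities and magnitudes below are invented) shows how a single baseline number misrepresents such a distribution:

import random

def yearly_growth():
    # Assumed regimes: ~95% of years grow about 4%; ~5% of years collapse by about 60%.
    return 0.04 if random.random() < 0.95 else -0.60

random.seed(7)
samples = [yearly_growth() for _ in range(100_000)]
mean_growth = sum(samples) / len(samples)
near_mean = sum(abs(g - mean_growth) < 0.01 for g in samples) / len(samples)
print(f"mean growth: {mean_growth:+.1%}")               # roughly +0.8%
print(f"years within 1% of the mean: {near_mean:.0%}")  # essentially none

The mean describes a year that essentially never occurs; only a distribution, or an explicit pair of regimes, conveys what the firm actually faces.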
Seen from this angle, the rugged view is no less “predictive” than
the teleological one; it simply uses a different grammar. Rather
than filling time buckets with a single baseline path, it assigns
probabilities to families of outcomes and records comparative
facts—e.g. that SKU A will likely outsell SKU B at equal prices,
that a heat wave lifts water sales, or that lead times carry a fat right
tail. What matters is not a solitary number per date but the priced
set of alternatives these statements unlock. A forecast is deemed
“good” insofar as it creates an economic edge—enabling the firm
to commit, defer, hedge, or reprice better than rivals—rather than
because it minimizes an academic error score.
Unlike the teleological view, which flattens tomorrow into a
single quantity-over-time, the rugged view treats the future as a
portfolio of uncertainties spread across the whole flow. Quantity
demanded is only one variable. Prices and commercial terms
can shift; a rival may extend the warranty from three to five
years, forcing a response; product mix and substitution evolve with
merchandising. Lead times wobble—often with fat-tailed delays;
purchased and operating costs drift; suppliers and production lines
sometimes fail. Each of these deserves its own probability law
because each reshapes opportunity costs as much as demand does.
The practical task is to price these uncertainties jointly rather than
pretend one scalar forecast can stand in for them all.
In the rugged view, there is not one future but a boundless
multitude. Reducing that multitude to a handful of scenarios—let
alone a single forecast—is wishful thinking. Because the future is
only partly knowable and brimming with uncertainties, listing every
plausible outcome is impossible. Any effort to craft a sweeping
“grand-master plan” therefore borders on self-deception.
Instead, the rugged view treats planning as an attempt to
reconcile diverse and contradictory insights about the future in
order to allocate resources most profitably. Going big on one
unlikely outcome may be acceptable if the returns are high and
offset the risks. Conversely, a very likely outcome may merit no
allocation if no profit can be had, however certain.
In summary, the rugged vision treats tomorrow as a portfolio
of heterogeneous clues. It refuses to compress them into a single
time-series baseline and keeps each in its native form—probabilities,
regimes, inequalities, and comparative statements. The practical
problem is synthesis: turning disparate clues into consistent choices.
That task belongs to the tooling, not the vision. The remainder of
this chapter introduces the theory and instruments that perform
this synthesis.
Before the machinery, one principle: in the rugged view, the
decision—the allocation itself—comes first.
7.3.2 Opportunistic allocations
The rugged vision treats the future not as a fixed “baseline” of
units to purvey, but as a rolling stream of opportunities—some
dismally unprofitable, others very lucrative. Every decision is,
above all, an economic valuation of the opportunity to be seized.
In the rugged vision, the decision sits atop—a deliberate allo-
cation of scarce resources to an opportunity. Forecasts are inputs
to that choice; in the teleological vision, by contrast, the forecast
wears the crown. This sharp, distinctive focus complicates any
attempt at dialogue between proponents of the two visions. To
the rugged exponent, forecasts are sophisticated but ultimately
secondary, wholly trumped by the need to act decisively. To the
teleological exponent, the forecasts and their associated plan are in-
separable from a rational course of action. Decisions not grounded
in a plan are seen at best as shortsighted and at worst as whimsical.
Yet while the rugged vision is fundamentally opportunistic, it
is just as forward-looking as the teleological vision. Indeed, every
allocation of resources comes with an unavoidable opportunity cost.
As a result, a seemingly profitable opportunity might be rejected
because it could open the door to a better one or, conversely, spawn
problems that erode the expected gains.
For example, let’s consider an airline leasing an expensive, repairable aircraft part to a peer so that one of the peer’s jets is not grounded for lack of that component[15]. As the part is urgently
needed, the airline can command a very high price for the lease
from its peers. Yet the airline will frequently decline to lease the
part, even if, by basic accounting rules, the operation appears
excessively profitable on paper. Indeed, the airline must consider
its own fleet. When leasing the part, the airline increases the risk
of grounding one of its own aircraft precisely because this very
part ends up missing. The cost of this risk can entirely offset the
profit from the lease, despite a price point that, on paper, appears
highly profitable.
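
A hedged numeric sketch (in Python; every figure below is an invented assumption) illustrates how the expected grounding exposure can erase the paper profit:

lease_revenue = 80_000    # assumed price commanded for the urgent lease
p_own_aog = 0.15          # assumed chance the airline needs the part while it is away
aog_cost = 600_000        # assumed cost of grounding one of its own aircraft

expected_exposure = p_own_aog * aog_cost          # risk created by parting with the unit
risk_adjusted_value = lease_revenue - expected_exposure
print(f"expected exposure: {expected_exposure:,.0f}")
print(f"risk-adjusted value of the lease: {risk_adjusted_value:,.0f}")  # negative: decline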
Let’s note that the teleological vision entirely sidesteps
the matter of opportunity costs. Indeed, from this perspective,
either the plan is deemed feasible—in which case no customer will
be left unserved—or it is deemed infeasible, in which case demand
forecasts are revised and lowered until the plan becomes feasible
[15] In the aviation industry, an “AOG” (aircraft on ground) refers to an incident
where an aircraft is grounded due to a missing part or any other technical issue
that must be addressed prior to letting the aircraft fly again. Such incidents
are costly: a single delay can cascade into missed connections and, if extended,
force the carrier to provide hotel rooms for stranded passengers.
again. In the teleological vision, the very notion of opportunity is
buried—along with nearly every other supply-chain concern—inside
the time-series projection of future demand.
Exact calculation of these opportunity costs is difficult in prac-
tice and acknowledged as such in the rugged vision. Thus, as an
alternative to explicit mental arithmetic on opportunity costs, the
rugged vision favors comparative assessments. For example, should
the company allocate its last unit to a new but unknown customer,
or keep it implicitly reserved for a loyal customer the company
isn’t willing to disappoint? While comparative assessments can be
made through sophisticated methods, the rugged vision emphasizes
a general preference for looking at every allocation through the
lens of its alternatives.
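
As a sketch of such a comparative assessment (in Python; the probabilities and margins below are invented for illustration), the last unit in stock can be valued both ways before being allocated:

margin_new_customer = 30.0   # assumed margin if the unit is sold now to the unknown buyer
p_loyal_order = 0.6          # assumed chance the loyal customer orders before the next restock
margin_loyal = 30.0          # assumed margin on that loyal order
goodwill_penalty = 40.0      # assumed long-term cost of disappointing the loyal customer

serve_now = margin_new_customer - p_loyal_order * goodwill_penalty  # 30 - 24 = 6
hold_unit = p_loyal_order * margin_loyal                            # 18
print("serve the new customer" if serve_now > hold_unit else "keep the unit in reserve")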
Through such comparative assessments, one can also decide
whether to allocate at all: if a preferable deferred opportunity exists
for the same allocation, the current one should be disregarded. For
example, the company can stop buying additional units, reserving
cash for later, once current stock is further depleted and needs are
clearer.
As a result, the rugged vision doesn’t need a “plan” in the sense
of the teleological vision. Resources will be allocated incrementally
until opportunity costs are deemed too high to continue. The
existence of preferable deferred opportunities is all the company
needs to avoid exhausting its resources through premature allo-
cations. Thus, despite its opportunistic stance, the rugged vision
is no more short-sighted than the teleological one. Both rely on
estimates of the future and are therefore forward-looking, even if
their predictions differ in form.
Once allocations are comparable, they can be prioritized. A
single, continuously re-sorted priority list can steer the entire flow as
new information revises each valuation. At enterprise scale, this list
easily runs to millions of entries—each admissible allocation—well
beyond human curation. The rugged vision says what to rank and
why. How to maintain and act on that ranking is an engineering
matter we will address when introducing the instruments that
mechanize the vision at scale. First, we need one more ingredient:
the recognition that the future is not a mirror of the past but the
consequence of decisions yet to be made—the asymmetry of time.
7.3.3 Asymmetry of time
Unlike the perspective common in the natural sciences—where
many laws are effectively time-reversal symmetric and tomorrow is
treated as another stretch of the same process—the rugged vision
treats the future as shaped by choices not yet taken. In business,
tomorrow is not a mirror of yesterday but the consequence of
present commitments. Forecasting while deferring the decision
reverses causality: decisions cause outcomes, not the other way
around.
We have already discussed the case of a sportswear company
selling backpacks. Future sales cannot be thought of as a mere
projection of past sales, because the company can shape demand
along many dimensions: lowering the price, introducing a larger
backpack model, offering it in a new color, paying for advertising,
featuring the item prominently in stores, etc. Thus, from the
rugged vision, it does not make sense to treat the coming year as
an extension of the previous year. The coming year is categorically
different from the previous year.
Even more unsettling to teleological planners, the rugged vision
treats dependence on decisions not yet taken as perfectly normal.
Present commitments may be made expressly contingent on later
choices: we secure and price options today precisely so that we can
decide with fresher information tomorrow, when opportunity costs
are clearer. To the rugged practitioner, this is not circularity but
prudence—acting now to enlarge the future choice set, while defer-
ring selection among those choices until the market has revealed
more.
For example, a fashion brand might negotiate and pay for a
large upfront capacity commitment from a trusted supplier. This
commitment puts the supplier in a position to aggressively source
the raw materials, such as textiles and leather. This mitigates the
risk of the supplier running into delays caused by later difficulties
sourcing the raw materials, while also paving the way for lower
acquisition costs. However, when the commitment is struck, the
brand has not yet decided what it will order. Under the rugged vi-
sion, those details can wait. Provided the capacity is broadly right,
postponing the purchase order is preferable because it preserves
the option to align later orders with the market’s latest desires.
The value of preserving options is another cornerstone of the
rugged vision. Indeed, whenever the company commits to a course
of action, it loses some of its agency. For example, once a man-
ufacturer triggers a production batch, the raw materials are con-
sumed and can no longer be used for an alternative production
run. Similarly, if the same manufacturer commits to triggering
the production batch next month, the matching raw materials are
effectively reserved for that upcoming production.
Beyond direct commitments, options also erode through prece-
dent—the ratchet effect. Once a behavior is observed, counterpar-
ties update their expectations and make reversal costly. A fashion
brand that ends a season with deep markdowns trains customers
to delay purchases and expect the same discounts next season; a
“temporary” service upgrade becomes the new baseline; a one-off
MOQ waiver becomes the new rule. Like a mechanical ratchet,
motion in the tightening direction is easy, while release is resisted.
In the rugged view, every action is priced not only for its immediate
margin but also for how it expands or shrinks tomorrow’s option
set; moves that lock future choices must clear a higher bar than
those that preserve flexibility.
More generally, whenever part of the future is locked in, it
becomes functionally identical to the past, since it can no longer
be acted upon, even if events have not technically unfolded yet.
Not only does the rugged vision treat the future as radically dis-
tinct from the past, but it also cherishes preserving what makes
the future distinct—namely, the company’s agency. To rugged
practitioners, committing oneself to a final course of action is a last
resort, undertaken only when the benefits of commitment outweigh
the opportunity costs.
For those who hold the teleological vision, this stance is of-
ten perceived as fickle or mercurial. There is no commitment to
one’s former self, only to formal agreements involving third par-
ties. For example, when probed about a future course of action,
an exponent of the rugged view will simply offer his best guess,
without any particular attachment to the statement. It’s just one
option—tentatively the most likely—but there is no certainty. This
option coexists with a vast number of alternatives that, in aggre-
gate, are even more likely. Later—as events unfold—the course
will be revised in light of new information.
From the teleological viewpoint, the existence of an underly-
ing plan is taken for granted: it is the central characterization of
the future. The possibility that there is no plan at all is nearly
unthinkable and is treated with contempt as amateurism when en-
countered. Thus, changing the course of action implies changing the
plan. Such an outcome is seen as the symptom of an underlying
problem: either inadequate execution—a failure to stay compliant
with the plan—or an inadequate plan in the first place¹⁶.
Quantifying a future deemed dependent on decisions yet to be
made yields numerous technical difficulties. In particular, time-
series forecasts and most planning instruments of the teleological
vision end up being inadequate. As the supply-chain literature
offers next to nothing to support the rugged vision, most of its
exponents rely on back-of-the-envelope calculations. Those calcu-
lations are deemed crude by teleological planners. Yet, as will be
shown in what follows, instruments supporting the rugged vision
can match—indeed, often exceed—the sophistication of classic
planning methods.
¹⁶ Large bureaucracies tend to evolve mechanisms that diffuse accountabil-
ity—layers of committees, ambiguous mandates, and gameable metrics. A
textbook case is the Space Shuttle Challenger accident (1986): the Rogers
Commission concluded that the launch decision was structurally flawed, with
responsibility fragmented across NASA and contractor management and dissent
filtered out by process rather than owned by any single decision-maker. For the
broader pattern, compare Parkinson’s Law (1955) and Pournelle’s “Iron Law of
Bureaucracy” (2010).
7.3.4 Superiority of the rugged vision
While the rugged vision enjoys virtually no support from intellec-
tual circles¹⁷, it is vastly superior to the teleological vision. Em-
pirically, when successful entrepreneurs disclose their vision, they
invariably lean toward the rugged vision. Ingvar Kamprad (Ikea),
Jeff Bezos (Amazon), and Warren Buffett (Berkshire Hathaway)
are a few who took such public stances. Successful entrepreneurs
devote nearly all their effort to running their businesses and thus
do not produce as many papers or books as intellectuals who spe-
cialize in such work. Hence, writings favoring the rugged vision
are rare.
Furthermore, there is ample empirical evidence of the dismal
results of the teleological vision. For seven decades the Soviet
Union’s Gosplan imposed severe economic inefficiencies on more
than 100 million people. More broadly, in both private and pub-
lic sectors, large multi-year plans frequently derailed, generating
immense costs. Conversely, grand-planning successes that made
a large company’s fortune are unheard of. At best, plans execute
with limited overhead—in cost and delay—and that is the full
measure of “success” here.
Those observations—both successes under the rugged vision
and failures in its absence—stem largely from a single factor: how
uncertainty is managed. The teleological vision tries to eliminate
uncertainty; the rugged vision embraces it. Thus teleological
methods are fragile to shocks, whereas rugged-vision methods can
mitigate—or even profit from—the same shocks.
In a world with little or no uncertainty, the teleological vision
might perform admirably. We do not live in such a world. Free
markets alone suffice to create substantial uncertainty for all partic-
ipants. Indeed, free markets enormously magnify minor variations.
For example, Walmart drove competitors into bankruptcy—Kmart
in 2002 and Sears in 2018. Walmart took Kmart’s revenue from
over $36 billion in 1994 to zero despite being only 2.2% cheaper¹⁸.

¹⁷ These intellectual circles also include management consultants who, like
academics, depend on a steady stream of publications to gain the notoriety their
careers demand.
In a free market, being 1% worse than a competitor typically yields
a 100% loss of market share within a decade or two. Thus, even if
non-market sources of uncertainty—wars, hailstorms, earthquakes,
epidemics—were vanquished, irreducible uncertainty would still
define free markets.
Supply-chain instruments—whether algorithms or methodolo-
gies—that support the rugged vision are scarce—an unsurprising
consequence of its lack of supporters among intellectual circles.
Most practitioners rely on experience and the intuition it yields.
Yet nothing prevents us from introducing such tools—an endeavor
these chapters undertake step by step.
7.4 Predictability of human affairs
A supply chain’s performance hinges on the capacity of the com-
pany, both its managers and its software systems, to adequately
anticipate future events. These events include, above all, human de-
cisions. Perfectly predicting natural events is conceivable—though
rarely attainable beyond simple cycles like tides—but perfect fore-
sight in human affairs is impossible. All it takes is a contrarian
customer¹⁹ changing his mind to undermine a supposedly “perfect”
forecast. Yet habit makes actions somewhat predictable in the
short run.
For instance, when a supermarket manager places a pack of
yogurt on the shelf with a price tag, he expects a customer to
buy it. Moreover, he expects the sale to occur well before the
yogurt reaches its expiration date. Accurate anticipation keeps
write-offs and stockouts in check—a discipline most supermarkets
master. Profitable supermarkets prove that such anticipations can
be produced at scale as routine business.

¹⁸ See Wal-Mart Stores, Inc. (1994), European Case Clearing House, by
Stephen P. Bradley, Pankaj Ghemawat, and Sharon Foley.

¹⁹ When visiting a local grocery store, all it takes to undermine the retailer’s
forecast is emptying the shelf of a given product, immediately causing a stockout.
While a few products, such as bottled water, are stored in quantities large enough
to make this exercise impractical, many more are stored in very limited quantities
(low single digits), making such adversarial behavior quite affordable.
In general, supply chains remain profitable because managers
anticipate future events well. For the most part, this means antici-
pating decisions to be made by others. Until the late 19th century,
the process relied on intuition and experience. In the early 20th cen-
tury, formal analytical methods appeared, and model accuracy
began to be measured rigorously. These methods flourished after
1950, buoyed by steadily increasing computing power. This ush-
ered in the time-series paradigm, which, as discussed previously,
became the dominant predictive approach in supply chain.
Yet the time-series lens is brittle in business settings. It ab-
stracts away the very levers that make tomorrow unlike yesterday
and compresses sparse, discrete events into fractional averages.
Where it helps, it does so for two reasons only: short-run momen-
tum and calendar cyclicities. We examine those narrow strengths,
then replace the lens with instruments that fit the economics of the
flow: (i) probabilistic forecasts that attach distributions to demand,
lead times, prices, and returns; (ii) high-dimensional forecasts
that capture baskets, substitution, and cross-effects across items,
channels, and locations; and (iii) functional (policy-conditioned)
forecasts that speak in the native objects of the flow—orders, al-
locations, and prices—rather than in evenly spaced time buckets.
These tools do not guess “the” future; they price alternatives and
make opportunity costs explicit.
7.4.1 Momentum and cyclicities
Time-series forecasting treats human affairs as periodic measure-
ments projected forward. Once past numbers are recorded, the
task is to propose sensible values for future dates—and to do it as
faithfully as our tools allow.
With millions of papers²⁰ on time-series forecasting, it is difficult
to overstate the perceived importance of this paradigm in academia.
Yet, for a corpus of this magnitude, it has surprisingly little to offer.
It rests on two principles that give time-series models predictive
power: momentum and cyclicities. They matter, but their limits
are obvious.
Momentum²¹, in supply chain settings, is the short-run reluc-
tance of people to change their habits. This reluctance is not mere
stubbornness; change often carries sizable costs. Assessing whether
change improves on the status quo may require considerable effort,
and the rollout may undermine the value of existing productive
assets.
For example, a manufacturer may decide to source its machine
tools from a lower-priced supplier. As machine tools are not simple
commodities, assessing suitability is time-consuming. Teams must
be retrained and processes adjusted to make the most of the new
machines. Finally, existing spare parts may lose most of their value
if they are incompatible with the new supplier’s machines.
Thus momentum explains why the moving-average model fam-
ily—including exponential smoothing—often provides a decent
short-term baseline: the near future resembles the recent past.
Moreover, momentum governs more than demand: lead times,
return rates, defect rates, purchase and selling prices, etc.
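To fix ideas, here is a minimal sketch of such a baseline in Python. The smoothing factor and the sales history are illustrative assumptions, not figures drawn from any dataset discussed in this book.

    # Simple exponential smoothing: the forecast is a weighted average of the
    # recent past, with geometrically decaying weights - a direct expression of
    # momentum (the near future resembles the recent past).
    def exponential_smoothing(history, alpha=0.3):
        """Return a one-step-ahead baseline after smoothing the history."""
        level = history[0]
        for observation in history[1:]:
            level = alpha * observation + (1 - alpha) * level
        return level

    # Illustrative daily sales; any unit and any cadence would do.
    daily_sales = [12, 9, 11, 10, 14, 13, 12]
    print(exponential_smoothing(daily_sales))  # about 12.1 with these numbers

The same few lines apply verbatim to lead times, return rates, or prices; only the history changes.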
As habits shift, momentum’s predictive power wanes; the Lindy
effect tells us how fast.
If a book has been in print for forty years, I can expect it
to be in print for another forty years. But, and that is the
main difference, if it survives another decade, then it will
be expected to be in print another fifty years. This, simply,
as a rule, tells you why things that have been around for
a long time are not "aging" like persons, but "aging" in
reverse. Every year that passes without extinction doubles
the additional life expectancy. This is an indicator of some
robustness. The robustness of an item is proportional to its
life! Antifragile: Things That Gain from Disorder (2012),
Nassim Nicholas Taleb.

²⁰ On Google Scholar (November 2024), the query “time series forecasting”
returns roughly 4,230,000 results. The figure is indicative only: duplicates and
near-duplicates inflate it, while missed items deflate it. Either way, a quick
audit confirms that the correct order of magnitude is indeed “millions”.

²¹ In classical mechanics, momentum equals mass times velocity. It is a vector
quantity.
The Lindy effect governs habits as surely as objects, and in a
shifting world it is critical in managing allocation risk. Substantial
investments demand momentum that is both strong and durable.
Practitioners use the Lindy effect to assess the lifespan of a given
momentum.
For example, a two-week surge in breakfast-cereal sales might
suffice for a supermarket to expand its inventory. However, the
brand observing the same surge might delay capacity expansion
by several months. The supermarket’s bet is modest; if the surge
is transient, the overstock will work off over time. By contrast, the
brand’s capacity investment is far larger and, if the surge fades,
must be written off.
Cyclicities capture the human tendency to anchor decisions
to the calendar. In practice, the usable calendar regularities are
few and stable: hour of day, day of week, day of month, and day of
year. Everything else of lasting value is a variant or composition
of these four, plus a handful of quasi-cyclical events that do not
track the Gregorian calendar exactly.
Hour of day. Stores have opening hours; e-commerce has order
cut-offs and late-evening spikes. Sub-daily ledgers reveal sharp
intraday patterns (e.g., lunch-break lulls, end-of-day surges). With
daily data, intraday cycles mostly vanish from view except through
induced day-of-week effects—for example, Sunday closures zero
out retail demand.
Day of week. Households shop on weekends; B2B demand
concentrates Monday through Friday; returns cluster on Mondays.
For many assortments, the weekly profile explains more variance
than any other single factor.
Day of month. Payroll, benefits, bill runs, and end-of-period
targets create month-end “hockey sticks”—especially in shipments.
Note the distinction: a month-end shipping spike is usually a
bureaucratic gating artifact, not genuine consumption. Treat it as
a calendar effect on observed flows rather than a persistent lift in
underlying demand.
Day of year. Seasons, vacations, and civic holidays dominate
the annual profile. “Seasonality” here is no mystery; it is the
calendar’s imprint on habits (heaters in winter, sandals in summer).
Retail “events” are likewise calendar-bound; Black Friday (the day
after the fourth Thursday of November in the USA)²² reliably
realigns baskets every year.
Beyond the four base frequencies sit a few quasi-cycles: reli-
gious or lunar events (Ramadan, Chinese New Year, Easter) that
drift relative to the Gregorian year and must be handled with ex-
plicit, locale-specific calendars rather than naive monthly indices.²³
School calendars and local festivals are of the same kind: discrete
calendars with predictable date lists, not free-floating “patterns”
to be discovered.
From a modeling standpoint, the most economical treatment
is a small set of multiplicative indices—one per relevant calendar
factor—regularized toward 1 under weak evidence. Geography
and channel merit separate indices (what moves on Saturday in
retail often does not in B2B); closures imply structural zeros rather
than imputation; sparse SKUs borrow strength from their peers
through hierarchical pooling. This keeps the calendar explicit—and
portable—across assortments.
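As a rough illustration, the sketch below (Python, with invented sales and an arbitrarily chosen shrinkage weight) estimates day-of-week indices and pulls them toward 1 where the supporting evidence is thin.

    from collections import defaultdict

    def day_of_week_indices(sales, shrinkage=10.0):
        """Multiplicative day-of-week factors, regularized toward 1.

        `sales` is a list of (weekday, quantity) pairs, weekday in 0..6.
        `shrinkage` acts as a prior sample size: the fewer the observations
        for a weekday, the closer its index stays to 1."""
        per_day = defaultdict(lambda: [0.0, 0])
        total, count = 0.0, 0
        for weekday, qty in sales:
            per_day[weekday][0] += qty
            per_day[weekday][1] += 1
            total += qty
            count += 1
        overall_mean = total / count
        indices = {}
        for weekday, (day_total, day_count) in sorted(per_day.items()):
            raw = (day_total / day_count) / overall_mean   # naive index
            weight = day_count / (day_count + shrinkage)   # evidence weight
            indices[weekday] = weight * raw + (1 - weight) * 1.0
        return indices

    # Invented history where Saturdays (weekday 5) sell about twice the average.
    history = [(d % 7, 20 if d % 7 == 5 else 10) for d in range(56)]
    print(day_of_week_indices(history))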
Put plainly, once momentum (the short-run persistence of
habits) and a compact calendar (the four base cyclicities plus a
few explicit quasi-cycles) are in place, almost all of the practical
signal that time-series models can harvest has been harvested.
Elaborate decompositions—Fourier fireworks, exotic lags, baroque
state machines—rarely survive out-of-sample in business settings.
Everything beyond momentum and calendar belongs to other
instruments (pricing, promotion, substitution, lead-time modeling),
not to the time-series lens itself.

²² Black Friday and its derivatives (e.g., Cyber Monday) are defined relative
to civic holidays, not by astronomical cycles.

²³ Ramadan migrates roughly 10–11 days earlier each Gregorian year; Chinese
New Year falls between late January and mid-February. Use dedicated binary
calendars for such events rather than smearing their effects across months.
The next section shows that, after momentum and calendar
are accounted for, the remaining tail-chasing within the time-series
paradigm yields vanishing returns for supply chains.
7.4.2 The dead end
Despite tens of thousands of new papers each year, time-series
forecasting has proved a technological dead end for supply chains²⁴
for years. The clearest demonstration of this claim appears in the
M5 results, a large public forecasting competition held in 2020—the
largest to date—using a store-level sales dataset from Walmart.
In this competition, a team from Lokad—a supply chain soft-
ware specialist—placed first at the SKU level²⁵—the most disag-
gregated and, from a supply chain perspective, the only level that
ultimately matters for flow decisions. Their winning forecasting
model²⁶ was a simple multiplicative parametric model, using only
momentum and cyclicities as introduced in the previous section.
Furthermore, the vast majority of those parameters were basic
cyclicity indices, i.e., multiplicative coefficients reflecting the “day
of the week”, “day of the month”, and the “month of the year”
factors.

²⁴ Our argument only applies to supply chain settings. The natural sciences offer
all sorts of periodic measurements which can also be approached through the time-
series paradigm. Those time series may exhibit all sorts of sophisticated patterns
that can be used to anticipate future events with great accuracy. Conversely, in
pure finance, when no physical goods are involved, even a minuscule pattern
can be relentlessly exploited by an actor to arbitrage the market and generate
profits in the process. In contrast, in supply chain, statistical patterns need to
be nontrivial to be harnessed.

²⁵ See The M5 Uncertainty competition: Results, findings and conclusions
(2020), by Spyros Makridakis, Evangelos Spiliotis, Vassilis Assimakopoulos, and
Zhi Chen. In the overall score, which involved forecasts performed over a dozen
aggregation levels, the Lokad team was initially ranked No. 6, but this ranking
was later revised to No. 5, as one of the better-ranked teams was found guilty
of cheating the competition.

²⁶ See A white-boxed ISSM approach to estimate uncertainty distributions of
Walmart sales (2021), by Rafael de Rezende, Katharina Egert, Ignacio Marin,
and Guilherme Thompson, December 2021.
At the SKU level, this model outperformed more sophisticated
approaches, including deep learning models (with millions of pa-
rameters) and gradient-boosted machines (with thousands of trees).
At other aggregation levels, this simple model remained within a
narrow margin of the winners’ accuracy.
With 909 teams worldwide in the M5, every notable time-series
method was surely tried. Yet, in the end, a simple model achieved
comparable—if not better—accuracy than its more sophisticated
counterparts. The reason is simple: no model managed to capture
valuable information from the historical time series beyond mo-
mentum and cyclicities. No model managed this feat because such
information is not present in the M5 time series.
For the Lokad team, far from being a surprise, this outcome
confirmed a realization formed about a decade earlier, after running
numerous time-series forecasting benchmarks on datasets from
dozens of corporate clients. These datasets hide no undiscovered
cyclicities, no latent lags where one series predicts another, and no
secret trends waiting to be found²⁷. More generally, within business
records, there are no alternative patterns to exploit through more
sophisticated time-series forecasting models.
The absence of evidence isn’t evidence of absence. While al-
ternative time-series patterns—i.e., neither momentum nor cyclici-
ties—remain elusive, such patterns may yet be discovered. Yet the
odds of a breakthrough after such a staggering volume of research
are low. By the end of WWII, the predictive power of momentum
and cyclicities had already been identified. The M5 competition
then illustrated that, 70 years and millions of papers later, no
further principles have been uncovered. Most of what has passed
for “progress” since then has merely refined the parameterization
of those two effects—smoother trend terms for momentum and
ever-finer calendar indices for cyclicities—yielding marginal gains
with steep diminishing returns once a clean multiplicative baseline
is in place.

²⁷ A steady trend over multiple years can be seen as a particular case of
momentum, where the business is expected to keep growing as it did in the
recent past. Introducing a trend factor in a time-series model is straightforward,
and numerous techniques exist to do so.
There are numerous patterns to exploit beyond momentum
and cyclicities, but they lie outside the narrow confines of the
time-series paradigm. Thus, to take advantage of those patterns,
the time-series paradigm must be abandoned.
The persistence of ever-new time-series “advances” owes less to
misunderstanding than to incentives: novelty markets better than
admitting that, beyond momentum and cyclicities, little signal
remains to harvest. Hence the parade of variations decade after
decade, each adopting whatever technique happens to be fashion-
able in computer science²⁸. For supply-chain forecasting, these
waves add little of substance, as the M5 results above illustrate.
7.4.3 The law of small numbers
Momentum and cyclicities, as predictive principles, are undeniably
powerful. When large numbers are involved, human affairs become
quite predictable using nothing but those two principles. For
example, traffic-jam dates in France can be predicted years ahead
with near-certainty. Those traffic jams reflect tens of millions
of individual decisions entrenched in lifetime habits. Transport
infrastructure is still evolving, but the progress is slow enough not
to interfere much with such predictions a few years out.
Unfortunately, supply chains are fundamentally about small
numbers. Whenever a number characterizing the flow gets large,
say exceeding 20, batching occurs and the number is lowered, owing
to economies of scale along the way.
Let us illustrate this principle with a local shipment of small
mechanical parts. A single urgent part goes by scooter. As quan-
tities rise, the mode changes in discrete steps: a few dozen parts
are shrink-wrapped into one bundle and ride on the next courier
run; several bundles are packed into a carton and sent by delivery
van; a stack of cartons becomes a pallet moved by a medium-duty
truck; a handful of pallets justify scheduling a full truckload. In
practice, the flow advances in quanta—unit → bundle → carton →
pallet → truckload—rather than one unit at a time.

²⁸ New time-series forecasting models have been introduced steadily for seven
decades. Dominant trends include parametric models (1950–1980), hidden
Markov models (1990s), support vector machines (2000s), gradient-boosted
machines (2000s), deep learning (2010s), and generative pretrained large models
(2020s).
As a result, supply-chain time series are almost invariably sparse
(observations are mostly zeros) and erratic (non-zero observations
vary significantly).
Figure 7.2: A smooth, high-volume series with a weekly cycle (solid line)
contrasted with a sparse, erratic series (gray bars). Horizontal axis: day;
vertical axis: quantity.
Time-series methods perform poorly on sparse series: they yield
fractional daily volumes (e.g., 0.3 units/day) that lack operational
meaning at the point of use and, unsurprisingly, prove inaccurate
against real decisions. A common palliative is to “hide” sparsity
through aggregation—lengthen the time bucket (days
months)
or broaden the scope (SKU category).
Yet even if aggregation technically yields more accurate time-
series forecasts, it is little more than an optical illusion. The
decision’s structure—resource allocation—dictates the prediction’s
granularity, not the other way around.
For example, consider a store manager selling a few jars of a
given mustard brand each month. In his case, the replenishment
decision has two fields: a date (when to reorder) and a quantity
(how many units). The manager needs to know if today is a good
day to place the reorder, not whether this month is the right
month to reorder. Similarly, he needs to know how many jars
(same brand, same size) to reorder. An assessment in condiment
units—the product category—is useless to the manager.
The law of small numbers explains why the hope of engineering
precise time-series forecasts is greatly misplaced—even considering
situations that aren’t entangled in decisions that have yet to be
made. Small numbers render the data irreducibly erratic. This is
not a transient defect of present-day statistical instruments, soon
to be fixed by superior ones. On the contrary, whatever numerical
instruments we bring to supply chain must be tailored to small
numbers and otherwise limited datasets.
These constraints do not call for a finer time grid or a more
elaborate smoother; they require a different grammar. When
counts are small and actions are discrete, the meaningful unit of
prediction is not “one number for next Tuesday” but a probability
law over admissible outcomes—how many, if any, and how late.
The first instrument in that grammar is the probabilistic forecast:
a distribution that accommodates sparsity, integer quantities, and
fat-tailed surprises, and applies equally to demand, lead times,
returns, and prices. We start with lead times and demand to fix
ideas.
7.4.4 Probabilistic forecasts
Probabilistic forecasting considers all possible future outcomes for
an event and assigns probabilities to each. A probabilistic forecast
is the complete set of those probabilities. A non-probabilistic
(classic) forecast that pinpoints one outcome is a point forecast;
the term is rarely used, since “forecast” usually implies “point
forecast”. Probabilistic forecasting is fundamental for supply chain
because it embraces the law of small numbers, both theoretically
and empirically. It marks our first major departure from the
equispaced point time-series paradigm.
Probabilistic forecasts quantify uncertainty across the entire
flow, not just demand. This is our second break with mainstream
practice: we attach distributions to every uncertain driver that
moves cash and atoms—lead times, inbound fill rates, supplier
reliability, purchase and selling prices, returns, scrap, and available
capacities. Throughout this book, “probabilistic” is used in this
broad sense. To fix ideas, consider varying lead times. Neither
supplier nor carrier is perfectly reliable; unscheduled delays happen.
What we need is the distribution of the time-to-receive for a just-
placed order. A probabilistic lead-time forecast provides exactly
that distribution. The histogram below summarizes the day-by-day
probabilities.
Figure 7.3: Illustrative lead-time distribution (probability by lead time in
days): impossible at 2 days or less on the left; peak at the nominal 3-day
delivery; fat-tailed risk of longer delays.
The histogram shows that delivery in under three days has zero
probability. This reflects the selected shipping method’s minimum
transit time. The distribution’s mode occurs on the third day,
where the probability is highest. This is the nominal lead time. It
is the duration expected when both supplier and carrier perform as
planned. Small but non-zero probabilities stretch far to the right;
they represent disruptions that inflate lead times well beyond the
nominal. In the worst case, the delivery never occurs.²⁹
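To make the histogram concrete, the sketch below (Python, with fabricated lead times purely for illustration) tallies past order-to-delivery durations into day-by-day probabilities.

    from collections import Counter

    def lead_time_distribution(observed_days):
        """Turn observed lead times (in days) into a probability histogram."""
        counts = Counter(observed_days)
        total = sum(counts.values())
        return {days: count / total for days, count in sorted(counts.items())}

    # Fabricated history: most deliveries take 3 days, a few take much longer.
    history = [3, 3, 4, 3, 5, 3, 4, 3, 9, 3, 4, 12, 3, 3, 5]
    for days, probability in lead_time_distribution(history).items():
        print(f"{days:>2} days: {probability:.2f}")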
Like point forecasts, probabilistic forecasts can be computed
from historical data. Setting modeling technicalities aside, let us
clarify what probabilistic forecasts bring to supply-chain decision-making.

²⁹ Strictly speaking, if there is a nonzero probability that delivery never occurs
(e.g., loss, destruction, seizure), the mean lead time is infinite: some probability
mass sits at “never”, so the expected lead time diverges. In practice, medians
and quantiles are more meaningful than means for such distributions.
Probabilistic forecasts convey richer information than point
forecasts. However, they are not intrinsically more accurate. Prob-
abilities add an information dimension absent from point forecasts.
In particular, one can collapse a probabilistic forecast into a point
forecast by taking the distribution’s mean or median. Thus, a
more accurate probabilistic forecast will mechanically yield a more
accurate point forecast. Given a probabilistic model and a point
model, empirical validation is required to determine which is more
accurate. As classes, “probabilistic” and “point” models confer no
systematic accuracy advantage. Nevertheless, the extra informa-
tion provided by probabilistic models is often critical for supply
chain decisions.
Revisiting the varying-lead-time example above, note that the
distribution’s fine print is critical for managing late-delivery risk.
Assuming goods arrive at the median lead time would be a grave er-
ror: by definition, half the shipments would arrive late. Conversely,
basing operations on a far-right value of the distribution—a high
quantile—is equally unreasonable. Most goods would arrive much
sooner, creating problems such as excessive inventories.
Lead times are only one of many examples where variability
and uncertainty critically impact supply chain decisions.
Let us revisit the earlier single-buyer variant from Limits of plan-
ning. Suppose a factory has shipped 1,000 units per day for years
to a single customer. In time-series terms, the point forecast is
a flat line at 1,000. Now add information the time series ignores:
the account is long-lived but not immortal—the weekly churn
probability is about 0.1%. One year out (52 weeks), the expected
daily demand is (1 − 0.001)⁵² × 1000 ≈ 949 units. Mathematically
correct, perhaps, but operationally nonsensical. A year from now,
daily demand will be either 1,000 or 0.³⁰ Any capacity plan based
on “949 units per day” will misallocate resources.

³⁰ 1,000 units per day, or 0 units per day, or something else entirely. Indeed,
the customer can also simply change its ordering volume; we ignore that third
option here for clarity, though real settings would require considering it.
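The arithmetic behind that figure, and the probabilistic alternative, fit in a few lines (Python; the churn probability, the horizon, and the volume are those of the example above).

    weekly_churn = 0.001      # probability the single customer leaves in any week
    weeks = 52                # horizon: one year
    daily_volume = 1000       # current shipments, units per day

    survival = (1 - weekly_churn) ** weeks     # ~0.949: the customer is still there
    print(round(survival * daily_volume))      # ~949 units/day "on average"

    # The probabilistic view keeps the two real outcomes apart instead of
    # blending them into a number that will never be observed.
    outcomes = {daily_volume: survival, 0: 1 - survival}
    print(outcomes)                            # {1000: ~0.949, 0: ~0.051}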
A probabilistic forecast, by contrast, makes the collapse risk
explicit: most probability mass remains on “still 1,000”, with a
smaller mass on “drops to 0” over the horizon. That view drives
a two-pronged policy: secure current service while the account
persists, and prepare a contingency for its eventual loss. The point
forecast attempts to predict a single “true” future and averages
away risk; the probabilistic view prices alternatives and prevents
the planning failure the flat line invites.
Thus, probabilistic forecasting reveals things that point lenses
cannot show. Attempting to capture a “one true future”, the point
forecast remains blind to otherwise obvious patterns. Moreover,
the probabilistic perspective typically permits leveraging additional
data beyond the narrow historical time series, or extracting more
signal from the same data. We illustrate this point with another
example.
Consider a DIY store with a product that sells on average 0.5
units per day. Although unrealistic, for clarity assume demand is
constant with no cyclicity. The store is replenished daily from a
warehouse that serves similar stores. The demand forecast is again
a flat series of 0.5 units per day. Now consider the probabilistic
forecast for daily demand, illustrated below.
Figure 7.4: Discrete demand histogram (probability by unit count): overall
decline with pronounced peaks at 4 and 8 units.
The product is indeed a corner protector intended for child-
proofing. Although protectors can be purchased individually, most
customers buy them in multiples of 4. The histogram above shows
this: the probabilities of daily demand at 0, 4, 8, and 12 units
are much higher than for other counts. This probabilistic demand
forecast makes clear that the store should hold 8 units in stock,
perhaps 12, to deliver high service quality. This stock level
could not have been anticipated by merely considering an average
demand of 0.5 units per day.
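A back-of-the-envelope check, assuming an illustrative daily demand distribution shaped like Figure 7.4 (the probabilities below are invented and average roughly 0.5 units per day), shows how the service level jumps at multiples of 4.

    # Invented daily demand distribution for the corner protectors: most days
    # sell nothing, and buyers who do show up take multiples of 4.
    demand_probabilities = {0: 0.90, 1: 0.01, 4: 0.06, 8: 0.02, 12: 0.01}

    def fill_probability(stock, distribution):
        """Probability that the whole day's demand is served from `stock`."""
        return sum(p for demand, p in distribution.items() if demand <= stock)

    average = sum(d * p for d, p in demand_probabilities.items())
    print(average)                              # ~0.53 unit/day, close to the text

    for stock in (1, 4, 8, 12):
        print(stock, fill_probability(stock, demand_probabilities))
    # A stock of 1 covers ~91% of days, 4 covers ~97%, 8 covers ~99%, 12 covers
    # 100%: the jumps happen at multiples of 4, which a 0.5-unit average hides.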
Probabilistic forecasts not only embrace the law of small num-
bers but also eliminate the need to handle nonsensical fractional
projections. While 0.5 units per day may serve as a convenient
abstraction to summarize demand for a packaged product, cus-
tomers cannot buy fractional units: only integer quantities are
admissible. Many authors and software solutions shrug off this
issue, but that attitude is misguided. Stock can only be replen-
ished in integer quantities. Accepting a fractional point forecast
introduces large approximations that can occasionally generate
substantial overheads for the company.
For example, if cycle demand is intermittent—say 0.1 unit
per period—there is a real risk it will be rounded to zero. When
this happens, no inventory is replenished; the observed demand
disappears and the flow halts. Far from a theoretical edge case,
this situation—an inventory freeze—is a routine problem for inven-
tory policies governed by point forecasts rather than probabilistic
ones. More generally, probabilistic forecasts naturally match the
dominantly discrete nature of the flow, a consequence of the stan-
dardization principle discussed earlier.
Although any distribution can be reduced to a single num-
ber—mean, median, or mode³¹—doing so discards the information
that gives the probabilistic view its value. The benefit lies in the
entire shape—its asymmetries, tails, and multimodality—not in a
hand-picked “central” value. No single summary can preserve that
information. The next chapter shows how to make decisions that
exploit the full distribution rather than collapsing it back into a
single point.
³¹ The mode of a probability distribution is the point where the probability
mass function attains its maximum. The mode need not be unique.
Generating probabilistic forecasts is arguably more technical
than producing point forecasts; however, in this age of powerful
computers the overhead is largely inconsequential, provided ap-
propriate methods are used. Because of this hurdle, one might
be tempted by a shortcut: first compute a time-series point
forecast, then convert it into a probabilistic forecast. For exam-
ple, one may approximate a sum over a segment with a normal
distribution, as is sometimes done for safety stocks.
Unfortunately, this approach yields none of the intended bene-
fits. Probabilistic forecasts are superior because they carry more
information, but that extra information does not appear out of
thin air. It must be extracted by processes specifically designed
to draw more from historical data than methods aimed at
producing point forecasts.
A further subtlety: a probabilistic forecast is not vindicated
merely because the realized outcome had non-zero probability under
the distribution. What matters is the weight the model assigns
to the realized outcome. A good model concentrates probability
where reality tends to fall and withholds it elsewhere. A bad
model does the reverse—sprinkling tiny probabilities everywhere
or assigning 1% to events that occur weekly. Thus probabilistic
forecasts should be judged not by binary hits but by how much
probability they allocate to what actually occurs, while remaining
as concentrated as the evidence permits. This calls for proper
scoring rules that reward well-calibrated, sharp distributions and
penalize misplaced confidence.
Accordingly, we assess probabilistic models using metrics tai-
lored to distributions. Although probabilistic metrics—e.g. log-
likelihood—are less familiar than point-forecast counterparts—e.g.
mean absolute error—they are not mysterious. The fine print of
these metrics is technical and not critical to the present discus-
sion. For now, we can proceed knowing those metrics exist and
that, given two probabilistic models, they indicate which model is
deemed “best”.
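To give one concrete, if simplified, instance of such a metric, the sketch below (Python; both candidate models and the observed history are invented) computes the average log-likelihood each model assigns to what actually happened; the higher, the better.

    import math

    def average_log_likelihood(model, observations, floor=1e-9):
        """Average log-probability the model assigns to the observed outcomes."""
        total = sum(math.log(model.get(obs, floor)) for obs in observations)
        return total / len(observations)

    # Two invented daily-demand models (probabilities over unit counts).
    sharp_model = {0: 0.70, 1: 0.20, 2: 0.08, 3: 0.02}    # concentrated
    vague_model = {k: 1 / 11 for k in range(11)}           # spread thin

    observed = [0, 0, 1, 0, 2, 0, 1, 0, 0, 1]              # invented history

    print(average_log_likelihood(sharp_model, observed))   # ~ -0.95
    print(average_log_likelihood(vague_model, observed))   # ~ -2.40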
7.4.5 Fat tails
In the natural sciences, many phenomena follow normal (Gaussian)
distributions, a remarkable fact. For example, across the world’s
population, men’s heights are approximately normally distributed.
Similarly, men’s weights are approximately normally distributed.
Even if we filter the sample—for example to blue-eyed men or to
men living north of the 45th parallel—the distribution remains
approximately normal. We obtain similar normal distributions for
body temperature, bone density, or sleep-cycle durations.
Figure 7.5: Thin-tailed normal distribution centered at 3 with standard
deviation 1 (density by value). Compare with the lead-time histogram.
Normal distributions are ubiquitous across the natural world.
Air temperatures, rainfall, and wind speeds over weekly or longer
horizons follow normal distributions. They also appear in devia-
tions of apparent stellar brightness and in gene-expression levels
across human and nonhuman populations.
As mathematical tools, normal distributions are exceptionally
convenient. They enjoy simple closed forms, ubiquitous open-source
libraries, and a rich algebra: sums of normals remain normal, and
appropriately scaled sums of many independent random variables
converge to a normal (the central limit theorem).
Unsurprisingly, most supply-chain authors embrace normal dis-
tributions; yet the view is fundamentally wrong. Once human
decisions enter the picture, thin tails are the exception. Most
business-relevant variables are fat-tailed: extreme outcomes occur
far more often than the bell curve allows; distributions are typ-
ically skewed; tails dominate totals. Modeling them as normal
systematically understates risk and misprices opportunity.
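The understatement is easy to quantify. The sketch below (Python; the lognormal parameters are arbitrary stand-ins for a fat-tailed variable) compares the probability of landing at least five standard deviations above the mean under a fat-tailed model and under a normal one.

    import math

    def normal_tail(z):
        """P(Z > z) for a standard normal variable."""
        return 0.5 * math.erfc(z / math.sqrt(2))

    # Fat-tailed stand-in: a lognormal with arbitrary parameters mu=0, sigma=1.5.
    mu, sigma = 0.0, 1.5
    ln_mean = math.exp(mu + sigma ** 2 / 2)
    ln_std = math.sqrt((math.exp(sigma ** 2) - 1) * math.exp(2 * mu + sigma ** 2))

    threshold = ln_mean + 5 * ln_std                 # "five sigma" above the mean
    fat_tail = normal_tail((math.log(threshold) - mu) / sigma)
    thin_tail = normal_tail(5.0)                     # same event under a normal model

    print(f"fat-tailed : {fat_tail:.1e}")            # roughly 1 chance in 200
    print(f"thin-tailed: {thin_tail:.1e}")           # roughly 3 chances in 10 million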
Consider a stadium with 10,000 spectators. No single person
can account for even 0.1% of the crowd’s total body mass; human
weight is thin-tailed as a matter of biology. Net worth is the
opposite: one attendee may plausibly own more than the wealth
of everyone else combined. Wealth is fat-tailed³².
More generally, anything akin to entrepreneurial success yields
fat-tailed outcomes. The top 10 smartphone brands sell more
units than all other brands combined. In any given year, the
top 100 musical artists worldwide sell more songs than all other
artists combined. In French presidential elections, the top three
candidates have always received more votes than the two dozen
candidates that trail them.
Supply-chain flows are likewise prone to fat-tailed shocks: de-
mand can jump to implausible levels with little warning. In De-
cember 1973, a throwaway joke by late-night host Johnny Carson
on The Tonight Show about an impending toilet-paper shortage
triggered a nationwide run; shelves emptied not because supply
changed but because expectations did. Nearly five decades later,
the same SKU showed the same tail from a different cause: in
March 2020, amid lockdown uncertainty, households stockpiled
staples and U.S. toilet-paper sales spiked roughly eightfold³³—a
dramatic swing for an item that is usually flat. Different triggers—
rumor versus policy shock—produced the same operational reality:
a long, thin tail became a sudden, overwhelming wave.
Sales can also grow markedly as a result of deliberate marketing
campaigns. For example, with its “Just Do It” campaign and strong
product, Nike lifted its domestic sports-shoe share from 18% to
43% and grew worldwide sales from $877 million to $9.2 billion over
1988–1998. By 2021, the figure reached $44.5 billion—a roughly
fiftyfold increase over a few decades.

³² Wealth’s fat-tailed inequality is not peculiar to capitalism. Feudalism, social-
ism, and other authoritarian regimes yield even greater inequality than capital-
ism. Laws and customs do not change the nature of a phenomenon—specifically,
whether it is thin- or fat-tailed. Naturally, this has never stopped anyone from
promising the opposite to his fellow men.

³³ On March 12, 2020, sales of toilet paper were up 845% week-over-week
versus the prior month, according to NCSolutions, which aggregates U.S. retail
purchase data.
Engineering feats can likewise leave one company utterly domi-
nating its market. In 2024, SpaceX placed about 90% of the world’s
tonnage into orbit. Even more remarkable, it achieved this with
only a tiny fraction of the investment made by its competitors—
almost entirely government programs—to build orbital launch
capabilities.
Demand is not the only phenomenon subject to fat tails; most
flow variables are as well. Prices can swing to astounding levels—
in both directions. For example, on April 20, 2020, the price of
West Texas Intermediate crude fell below zero for the first time,
reaching $-37.63 per barrel. The negative price resulted from
expiring futures: holders were obligated to take physical delivery,
storage was full, and they dumped contracts at any price to avoid
logistical and financial consequences.
Lead times also exhibit extreme behavior. As of 2023, Airbus
reported a backlog of over 7,000 aircraft—roughly 7–8 years of
output at current capacity. High demand for fuel-efficient models
such as the A320neo family has significantly contributed to this
backlog. The backlog also reflects the high unit capital cost and
lengthy production times. Moreover, other notable airframers were
suffering comparably large backlogs.
From a supply chain perspective, fat tails matter because
profits and losses are dominated by extreme events. Far from being
unpredictable, ubiquitous fat-tailed distributions guarantee that
extreme events will routinely appear. Thus, probabilistic forecasts
are not sufficient; models must reflect the fat-tailed nature of
supply chain phenomena.
Conversely, any supply-chain model that adopts normal—or
otherwise thin-tailed—distributions is guaranteed to understate
risk and, consequently, to produce fragile decisions. Such fragility
puts the firm’s survival in jeopardy: markets—especially when
more prudent or opportunistic competitors exist—punish it when
extremes arrive.
The presence of normal distributions in supply chain mate-
rials—whether publications or software—should be treated as a
litmus test for incompetence. This includes all thin-tail methods,
such as safety stocks. Indeed, a casual look at empirical distribu-
tions from real-world supply chains shows the normal distribution
is nowhere close to a reasonable fit³⁴. Thus, by advocating such
methods, authors reveal they have not even bothered to look at
any real-world supply chain. A company embracing this nonsense,
in any guise, only does its competitors a favor.
7.4.6 The price of knowledge
Probabilistic forecasts braid together two distinct kinds of uncer-
tainty; separating them clarifies both modeling and action.
Definition (Aleatory uncertainty).
The inherent randomness or variability in the phenomenon
itself—i.e., even with perfect knowledge and infinite data, the
outcome still exhibits randomness.
For overseas lead times, weather systems and port congestion
inject variability no party can eliminate; even with perfect infor-
mation and infinite history, the day a container clears the terminal
still varies. That irreducible variability is aleatory.
Definition (Epistemic uncertainty).
The uncertainty arising from incomplete information or in-
sufficient data about the phenomenon—i.e., uncertainty about
the parameters or even the form of the probabilistic model.
³⁴ A telltale misuse is modeling lead times as normal. Beyond thin tails, a
Gaussian assigns nonzero probability to negative delays—implying shipments
arrive before they are sent. Any model that lets time run backward is unfit for
operations.

By contrast, when the firm has placed only a handful of or-
ders with a new supplier, the shape of the lead-time distribu-
tion is poorly known. Each additional order, clearer timestamps,
and tighter Incoterms shrink that ignorance. This is epistemic
uncertainty—uncertainty about our model of the phenomenon, not
the phenomenon itself.
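One crude way to see the two components side by side is the sketch below (Python), where lead-time observations accumulate order by order; the figures are invented, and the standard error of the mean is used as a rough proxy for the epistemic component.

    import math, statistics

    observed_lead_times = [3, 5, 4, 3, 6, 4, 3, 5, 4, 6, 3, 5]   # invented, in days

    for n in (3, 6, 12):
        sample = observed_lead_times[:n]
        spread = statistics.stdev(sample)           # aleatory: day-to-day variability
        mean_uncertainty = spread / math.sqrt(n)    # epistemic: how well we know the mean
        print(f"after {n:>2} orders: spread ~ {spread:.2f} days, "
              f"uncertainty on the mean ~ {mean_uncertainty:.2f} days")
    # The spread stays near one day, while the uncertainty on the mean keeps
    # shrinking as orders accumulate: more data buys knowledge, not less
    # variability.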
The distinction matters because, in supply chain, past observa-
tions used to generate point or probabilistic forecasts do not fall
from the sky. They result from past decisions—that is, from past
allocations of resources. Thus, knowledge about the future is paid
for. For example, a retail network may place an extra product on
the shelves of selected stores to assess demand potential. This is
an investment—both the cost of goods and the shelf space—for
uncertain returns.
Investing resources to reduce epistemic uncertainty—that is,
to acquire knowledge—is fundamental in supply chains. To those
who hold a rugged vision, continuously investing scarce resources
to probe the market is so self-evident it scarcely merits mention.
It is simply the natural way of conducting business.
Yet for mainstream supply-chain authors, who mostly adhere
to the teleological view, this entanglement between allocations
and knowledge is denied rather than merely left unsaid. While
most authors realize that market knowledge does not appear out
of thin air, they turn to “market studies” and similar bureaucratic
instruments³⁵ to acquire knowledge beyond what historical data
offers.
From the teleological perspective, the chief benefit of the market
study is that it leaves the planning paradigm wholly unchallenged.
The study is expected to be completed before planning begins
and to furnish the information needed to produce accurate time-
series forecasts where historical data is lacking. Intellectually,
market studies neatly separate the acquisition of knowledge from
its exploitation. Yet, like many good-sounding ideas, it fails in
practice.

³⁵ A “market study” is the bureaucrat’s comfort blanket: slow, expensive, and
low-signal. It produces slides and committees, not knowledge and decisions.
Accountability diffuses to the point of vanishing while time-to-learning stretches
by quarters. If the goal is to learn, small, instrumented field trials beat purchased
reports almost every time.
We return to this link in Chapter Decisions, where we show
how decisions can—and often should—be tuned to harvest the
value of reduced epistemic uncertainty.
7.4.7 High-dimensional forecasts
Because “forecast” is almost always taken to mean “time-series
forecast”, other forms are overlooked. The time-series formalism
is restrictive and covers only a fraction of real-world supply-chain
situations. Yet only a lack of imagination keeps us from exploring
broader kinds of forecasts.
In practice, a time-series forecast is one-dimensional. Even
when the horizon spans many periods, each date’s error is evaluated
in isolation; the multi-period forecast collapses into a stack of
independent one-period forecasts. Whether the numbers come
from one shared model or unrelated formulas is immaterial to
evaluation: standard accuracy metrics treat them as separate
guesses and ignore how errors co-move across the horizon. That
blindness matters as soon as decisions couple dates, items, or
locations.
To show why high-dimensional accuracy matters, consider an
idealized two-product case. Consider a store selling two distinct
products that are a perfect pair of substitutes³⁶. The products
expire at the end of the day, and are replenished daily. Demand
for the two products is 10 units per day; customers pick randomly
between them while stock remains.
Three candidate daily forecasts illustrate the point:
(A) 5 units and 5 units
(B) 10 units and 0 units
(C) 6 units and 6 units

³⁶ If the products were indeed perfect substitutes, then the store manager
would have no reason to keep both products as part of the store’s assortment.
Let’s suspend our disbelief for a moment; a more convincing, yet more complicated,
example will be provided afterward.
Given substitution, a suitable accuracy metric must assign the
same loss or value to (A) and (B). Both anticipations are
equally good—in fact perfect under our simplistic assumptions.
However, the metric must score (C) worse, since it overestimates
total demand by 2 units. No one-dimensional metric can satisfy
both requirements. Only a measure that looks at the two items
jointly recognizes that (A) and (B) tie while (C) is worse. All
classic scores (mean absolute error, mean square error, and so on)
wrongly rank (C) above (B). The only way forward is to broaden
the perspective to a 2-dimensional accuracy metric.
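The failure is easy to reproduce. In the sketch below (Python), the three candidate forecasts are (A), (B), and (C) from the list above; the realized split of the 10 units between the two substitutes is drawn uniformly at random, a simplification of the random picking described earlier.

    import random

    forecasts = {"A": (5, 5), "B": (10, 0), "C": (6, 6)}

    def per_item_error(forecast, actual):
        """Classic one-dimensional score: mean absolute error per product."""
        return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(forecast)

    def joint_error(forecast, actual):
        """Two-dimensional score: error on the total across the two substitutes."""
        return abs(sum(forecast) - sum(actual))

    random.seed(7)
    days = [(k, 10 - k) for k in (random.randint(0, 10) for _ in range(1000))]

    for name, forecast in forecasts.items():
        mae = sum(per_item_error(forecast, day) for day in days) / len(days)
        joint = sum(joint_error(forecast, day) for day in days) / len(days)
        print(name, round(mae, 2), round(joint, 2))
    # Per-item MAE ranks (C) close to (A) and far ahead of (B); the joint error
    # scores (A) and (B) as perfect (0) and (C) as off by 2, matching the argument.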
The example exposes the limits of a one-dimensional view; to
see why they matter, we now turn to a more realistic, probabilistic
example.
Consider a grocery store and its assortment problem: the
choice of the exact list of distinct articles to put on the shelves.
Customers rarely buy a single item; most leave with a basket of
several products. A superior assortment yields bigger or more
profitable baskets on average.
Every assortment still faces stockouts: when a shelf is empty,
customers may pick a substitute. Substitution mitigates the stock-
out. Conversely, the customer may forgo not only the missing
article but also the complementary articles he intended to buy.
Contagion worsens the stockout.
Only by examining historical market-basket data can we study
these substitutions and contagions. Indeed, when market-basket
data are available, we have a natural entry point to assess how
products relate to one another from the customer’s viewpoint.
Furthermore, if loyalty cards are used, sequences of market baskets
each attributed to a distinct customer can be reconstructed.
Loyalty data supplements market-basket data for such analysis.
The technical details are not needed here. What matters is
that the relevant information lies in market-basket data. Without
it, no matter how sophisticated our methods, the behavioral
analysis simply cannot be carried out.
In contrast, when sales data are flattened into time-series, we
do not even know how many distinct customers walked into the
store on any given day. We cannot see substitutions and contagions
because all that is left are tentative correlations between time-series
that are both sparse and erratic. Time-series alone cannot support
a behavioral analysis.
Circling back to the assortment challenge, note that a market-
basket forecast conditioned on the chosen assortment offers
a straightforward solution. If we can predict future market baskets
under a revised assortment, we can assess whether it will be superior
to the current one, at least in terms of sales volume.
Interpreted as a point forecast, a “reliable” market-basket
forecast is a category error. A single basket often mixes ten or
more SKUs; combinations explode, and shoppers freely substitute
at the shelf. Expecting to name the exact basket in advance
seems—and is—impossible; hence the idea is routinely dismissed.
The mistake is the framing: what matters is not one guessed basket
but the probabilities of co-occurrence and the lift items exert on
one another—precisely the structure we turn to next.
Yet, viewed probabilistically, the outlook is starkly different.
Clearly, not every conceivable market basket is equally likely. For
example, any basket containing a pair of products never observed
together in history is improbable. Establishing a probability distri-
bution over the market-basket space may look technically daunting,
but the machine-learning community has developed a sizable “bag
of tricks” for such predictions.
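As a glimpse of one of the simpler tricks, the sketch below (Python; the baskets are invented) measures how often two articles appear together relative to what independence would predict, a crude lift measure.

    from collections import Counter
    from itertools import combinations

    baskets = [                                   # invented purchase histories
        {"pasta", "tomato sauce", "parmesan"},
        {"pasta", "tomato sauce"},
        {"beer", "chips"},
        {"pasta", "parmesan"},
        {"beer", "chips", "tomato sauce"},
        {"chips"},
    ]

    n = len(baskets)
    item_counts = Counter(item for basket in baskets for item in basket)
    pair_counts = Counter(pair for basket in baskets
                          for pair in combinations(sorted(basket), 2))

    for (a, b), together in pair_counts.items():
        lift = (together / n) / ((item_counts[a] / n) * (item_counts[b] / n))
        print(f"{a} + {b}: lift {lift:.1f}")
    # Lifts well above 1 (pasta + parmesan, beer + chips) flag complements;
    # lifts below 1 hint at substitution or simply unrelated articles.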
The point-forecast perspective, by flattening everything around
the “one true future”, gives the misguided impression that the
quality of an anticipation can be assessed pointwise, i.e., one
dimension at a time; yet this ignores the dependencies within the
system. Under a probabilistic perspective, those dependencies
become evident through the implausibility of certain outcomes.
Demand can rise or fall, but it is implausible for it to rise in odd-
numbered postal-code areas while falling in the even-numbered
ones. This observation implies a high-dimensional perspective.
Within supply chains, such dependencies are ubiquitous, pre-
cisely because—as we have seen—they are systems. One product’s
success can cannibalize its substitutes; a grounded jet can cascade
into missed connections; cheaper energy can lift an entire sector’s
margins. The opening of a flagship superstore may attract new
patrons who marginally improve the business of all nearby stores,
whatever they sell. Similarly, a successful marketing campaign can
lift the sales of all the products of a given brand.
Designing high-dimensional accuracy metrics is technical and
lies outside this chapter. What matters is the principle: joint
forecasts must be judged in the geometry of the decision—as a
whole, not coordinate by coordinate. Proper scoring rules exist for
multivariate distributions and let us compare models on a simple
criterion: do they concentrate probability where reality tends to
fall, while preserving the dependencies that drive economics? With
that foundation in place, the remaining gap is agency: a forecast
that ignores our own levers still projects the status quo. The
cure is to condition predictions on the policy the firm intends to
apply. We call such policy-conditioned anticipations functional
forecasts—the subject of the next section.
7.4.8 Functional forecasts
The future state of the supply chain depends on decisions yet to be
made. Decisions by third parties—customers, suppliers, competi-
tors—should be forecast like any other uncertainty. Indeed, those
decision-making processes may well forever remain black boxes for
the company. By contrast, the company’s own forthcoming deci-
sions warrant different treatment. After all, the company needn’t
guess its own decisions; it controls its decision-making.
As commonly used, forecasts grant the firm little agency. They
describe what is likely under frozen policies, not what could happen
under alternatives. They condition on decisions already taken
when the baseline is computed and remain blind to those still
on the table. A manufacturer may cut prices today and, quite
properly, see forecast demand lift; yet the same forecast cannot
speak to next month’s price move or a promotion not yet chosen.
In short, the baseline projects the status quo—a “do-nothing-new”
counterfactual—useful for orientation, not a substitute for deciding.
To see why unfinalized decisions matter, consider a fashion
brand. The brand runs two collections per year, with products
ordered ahead of the season. Assume no in-season replenishments.
The brand displays products at the season’s start at full price.
For each product, planned quantity matches expected seasonal
sales, with a stockout at season’s end. Quantities are set so no
end-of-season discount is expected.
Yet by season’s end, some products underperform expectations,
leaving unsold inventory. The company’s pricing policy dictates
steep end-of-season discounts on underperformers, precisely to
liquidate stock.
Under dynamic pricing, the brand can rationally open with
larger stocks than under flat pricing. If demand proves higher than
forecast, the extra inventory yields profit. Conversely, if demand
is overestimated, the policy still liquidates inventory with minimal
loss. Pricing is no technicality; it is integral to the brand’s flow
strategy.
A “regular” demand forecast, probabilistic or not, cannot cap-
ture the sales trajectories induced by this policy. When the baseline
forecast is computed, there is no reason for projected demand to
exhibit—at season’s end—a late spike from discounts. Discounts
arise only when demand was overestimated; with accurate demand,
initial stock would be lower and no end-of-season discount would
be needed. The policy makes demand partly self-correcting: sales
align with stock levels only insofar as discounts are aggressive.
A functional forecast, by contrast, explicitly incorporates one
or more decision rules. Those policies are taken as given; they are
not learned from historical data. Moreover, those policies remain
isolated within the forecasting logic. Thus, for a model to be
“functional” (policy-conditioned), one should be able to swap a
policy for an alternative and directly obtain the revised forecast. In
that sense, a functional model is incomplete: without the required
policies, it cannot produce forecasts.
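
A minimal sketch conveys the property that defines a functional model: the policy is an argument that can be swapped. The names, the demand model, and the assumed price elasticity below are all placeholders, not a recipe.

    import random

    def discount_policy(week, stock, weeks_left):
        # Hypothetical pricing policy: a steep end-of-season discount
        # whenever stock outpaces the remaining selling window.
        return 0.30 if stock > 20 and weeks_left <= 2 else 0.0

    def functional_forecast(policy, initial_stock, weeks=12, runs=1000):
        # Average sales trajectory conditioned on the supplied policy.
        # The demand model is a deliberately crude placeholder.
        totals = [0.0] * weeks
        for _ in range(runs):
            stock = initial_stock
            for w in range(weeks):
                discount = policy(w, stock, weeks - w)
                base = max(0.0, random.gauss(8, 3))        # placeholder demand
                demand = round(base * (1 + 2 * discount))  # assumed elasticity
                sales = min(stock, demand)
                stock -= sales
                totals[w] += sales / runs
        return totals

    # Swapping the policy yields the revised forecast directly:
    baseline = functional_forecast(lambda w, s, r: 0.0, initial_stock=100)
    discounted = functional_forecast(discount_policy, initial_stock=100)
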
As we will see next, functional models need not be much harder
to implement than their unconditioned counterparts. However,
assessing their validity is a thorny challenge. Indeed, as long as
the policies fed to the functional model remain close to those
historically practiced by the company, a well-engineered model
should behave sensibly. However, if a policy ventures into untested
territory, the model’s validity becomes uncertain.
Revisiting the case above: if the brand historically used end-of-
season discounts between 15% and 30%, the data support modeling
any combination within that range. But if the brand runs its
functional demand model with discounts between 30% and 60%,
any figures produced will be, at best, highly speculative. Those
predictions lack direct empirical support; the actual customer
response to untested prices may prove very surprising.
Unfortunately, beyond narrow situations, judging whether a
functional model will return sensible figures under an arbitrarily
novel policy is a problem of general intelligence. Assessment must
be tailored to the specifics. For example, one may trial the new
policy on a limited scale solely to acquire further empirical data.
This process is exploration, discussed in the next chapter.
Without functional forecasts, a company merely projects the
status quo, replicating its past policies. If the company wants
to evolve, it needs the capacity to project how envisioned policy
changes will affect the flow.
At small scale, functional forecasts are obvious, though seldom
named. Consider a neighborhood baker choosing his opening
hours. Longer hours enable more purchases—early commuters and
late shoppers can buy—yet consume a scarce resource: time with
his family and rest. The relevant forecast is not a single number
(“tomorrow’s demand is 180 loaves”) but a function linking decision
to outcome (“expected sales as a function of opening hours”). By
selecting the input—the schedule—the baker partly manufactures
the demand he intends to serve; this is the functional view.
At scale, when thousands of forecasts are generated, many firms
lapse into status quo projections driven by bureaucratic inertia.
Naturally, this attitude serves them poorly. Unfortunately, most
supply chain textbooks—and the software built on them—embrace
the same perspective.
Technically, functional forecasts matter for two reasons.
First, a software-engineering reason: they enforce a clean sepa-
ration between the predictive core (models that turn records into
probabilistic statements about demand, lead times, prices, returns,
and costs) and the reified policy (explicit code and parameters
that implement pricing, allocation, service rules, and other levers).
Once disentangled, the same predictive core can be exercised un-
der multiple admissible policies; policies can be swapped without
retraining; and the economic effect of each rule becomes auditable.
Second, a statistical reason: predictors learn only what past
policies exposed. When a policy ventures outside that data sup-
port—new price ranges, promotion cadence, service promises—the
forecast becomes an extrapolation. Under such a policy shift, preci-
sion degrades and uncertainty widens; results should be treated as
provisional until measurements accumulate under the new policy.
With these constraints in view, functional forecasting builds
the bridge from “what is likely” to “what we would do if . . . . The
natural instrument to exercise that bridge at scale is a simulator—a
stochastic, discrete model that embeds policy and propagates
uncertainties. We now turn to it.
7.4.9 Stochastic simulations
A simulation is a model that imitates a real-world system and
lets us observe how its state evolves over time. As computing
power grew, simulators proliferated, including supply-chain ver-
sions often marketed as “digital twins”.[37] Properly understood, a
simulator is a high-dimensional probabilistic forecast: it produces
many plausible futures—with implicit frequencies—rather than
one ordained trajectory. Used this way, simulation is a legitimate
instrument for anticipating future supply-chain states. Yet because
this probabilistic nature is rarely made explicit, outputs are too
often taken at face value; assumptions go unexamined, and few
bother to check that the simulated distributions resemble what
operations actually deliver.

[37] “Supply Chain Digital Twin” is a marketing term many vendors use
for their simulation offerings.
Supply-chain simulators vary widely, yet nearly all share three
traits: they are microanalytic, discrete, and stochastic.
A microanalytic (bottom-up) simulator decomposes the sys-
tem into minimal units governed by simple rules. Units include
agents—customers or suppliers—and inanimate entities such as
SKUs. The microanalytic view assumes system complexity is
merely apparent and arises from repeated application of simple
rules.
A discrete simulator (discrete-time or discrete-event) proceeds
through a sequence of events. Each event occurs at a particular
instant and marks a change of state in the system. Between
events, nothing changes—an expedient that vastly simplifies the
simulator’s software implementation.
A stochastic simulator (often called a Monte Carlo simulator)
uses non-deterministic rules that produce random outcomes. Thus,
repeated runs from identical initial conditions produce different
results. Use deterministic rules when possible; reserve stochastic
rules for unknowns, reflecting our imperfect knowledge of the
system’s evolution.
Simulators are instruments of functional forecasting: they map
explicit flow policies to predicted outcomes. By encoding decision
rules—what to buy, make, move, and price—and when—along
with operational constraints, the simulator produces forecasts
conditioned on those rules. That conditioning is the point: it puts
alternative policies on equal footing and lets us quantify their
economic consequences.
Bypassing that conditioning almost always forces crude approx-
imation. For example, treating demand as unaffected by quality
of service—assuming repeated stockouts never depress observed
sales—blinds the simulator to the very behavior the policy in-
duces. Such a model systematically overstates attainable volumes,
misprices options, and steers decisions off course. Hereafter, “func-
tional forecast” denotes this policy-conditioned prediction rather
than an unconditional time series.
The software implementation of such simulators is conceptu-
ally straightforward. A main loop traverses the edges of a graph
representing the flow. Nodes typically represent SKUs—or aggre-
gates when finer granularity is too costly. Each node carries a
small set of rules, either deterministic—e.g., decrement stock per
outgoing unit—or stochastic—e.g., draw random demand for the
period. Each iteration of the main loop represents a discrete time
step—typically a day, possibly shorter.
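
A minimal sketch of such a loop, assuming a single distribution center feeding a single store and, for brevity, instantaneous replenishment:

    import random

    # Hypothetical two-node flow: one DC replenishing one store.
    state = {"dc": 500, "store": 40}
    events = []

    def simulate(days=30, reorder_point=20, order_qty=60):
        # One iteration of the main loop per day (discrete time).
        for day in range(days):
            demand = random.randint(0, 15)         # stochastic rule
            sold = min(state["store"], demand)     # deterministic rule
            state["store"] -= sold
            if demand > sold:
                events.append((day, "stockout", demand - sold))
            if state["store"] <= reorder_point:    # deterministic policy;
                shipped = min(state["dc"], order_qty)  # lead time omitted
                state["dc"] -= shipped                 # for brevity
                state["store"] += shipped
                events.append((day, "replenish", shipped))

    simulate()
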
If one doesn’t look too hard at validity, such a flow simulator can
often be built in a few days. Hence its popularity with enterprise
software vendors. The functional design is appealing: it yields
arbitrarily fine-grained policy predictions. Yet without rigorous
validation, the trust those predictions invite is misplaced.
In fact, simulator outputs can be—and often are—outright
nonsense. A rule’s plausibility does not guarantee its correctness,
nor does pairing two valid rules ensure the simulator combines
them correctly. Mundane coding errors also creep into the simula-
tor’s codebase. As noted earlier, novel policies may undermine the
stochastic rules if the simulation enters “uncharted” territory. Ulti-
mately, validity is empirical and must be settled by measurement.
A duality links probabilistic forecasts and stochastic simulations.
From a probabilistic forecast, it is possible to generate samples
according to the underlying probability distribution (such samples
are usually referred to as deviates); a complete probabilistic
forecast thus supplies the stochastic rules that drive a simulator.
Conversely, observing enough runs of a simulator reveals the
frequency of every outcome: the simulator implicitly defines the
probability distribution over all admissible states, from which a
probabilistic forecast can be derived.
In practice, numerous technicalities hinder conversion between
a probabilistic forecast and a simulator. The probabilistic forecast
might not be sufficiently fine-grained to specify all the simulator’s
rules. Conversely, the number of runs needed to estimate all the
probabilities may be prohibitively large. Even so, the duality
implies that stochastic simulations share the accuracy metrics of
probabilistic forecasts.
Therefore, treat a simulator like any forecast: quantify its
accuracy. The work is technical; ignoring quantitative assessment
invites costly mistakes.
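
As one concrete instance, the continuous ranked probability score (CRPS), a proper scoring rule whose multivariate generalizations were alluded to earlier, admits a sample-based estimate; simulator runs can thus be scored directly against realized outcomes. The simulator stub below is a placeholder.

    import random

    def crps_from_samples(samples, realized):
        # Sample estimate of the CRPS: E|X - y| - 0.5 E|X - X'|,
        # treating simulator runs as draws from the implied forecast.
        n = len(samples)
        term1 = sum(abs(x - realized) for x in samples) / n
        term2 = sum(abs(a - b) for a in samples for b in samples) / (n * n)
        return term1 - 0.5 * term2

    # Placeholder simulator: total weekly sales, one figure per run.
    runs = [sum(random.randint(0, 15) for _ in range(7)) for _ in range(200)]
    print(crps_from_samples(runs, realized=48))  # lower is better
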
Chapter 8
Decisions
Supply-chain practice is the continual allocation of scarce resources
under uncertainty. The aim is not a plan but a profitable flow:
convert cash into goods, goods into service, service back into
cash—while preserving the options that future information will
render valuable. Even mid-sized firms make thousands of such
allocations daily—and the largest, millions—so the decision system
cannot be ad hoc. It must make choices explicit, price them, and
then sequence and execute them so that unattended software can
issue them safely at scale. Systematization does more than guard
against a rogue buyer exhausting working capital; its purpose is to
produce decisions that, in expectation, earn superior risk-adjusted
returns.
To close the gap between principle and practice, we now sharpen
the notion of a decision first sketched in Chapter 1.
Definition (Decision).
A flow commitment that allocates scarce resources among
admissible options, foreclosing alternatives in pursuit of the
highest expected risk-adjusted rate of return.
A definition, however crisp, leaves two practical questions. First,
where does a decision begin and end once time and randomness
enter? Second, by what single criterion are alternatives compared
when they draw on the same scarce pools—cash, capacity, atten-
tion? In supply-chain settings, a decision is not a line on a plan. It
is a flow commitment taken at a granularity fine enough to change
economics, made under uncertainty, and recorded in a ledger that
future events will audit.
Rendering such commitments computable and auditable re-
quires bringing four ingredients to the surface. The frame comes
first: the present sphere of agency—admissible moves, feasibility
tests, cadence, and the few ratchets that erase tomorrow’s op-
tions—must be written—ideally as code. Next comes a probabilistic
view of near futures: demand, lead times, returns, reliabilities, and
any other distributions that materially affect outcomes. Third, all
penalties and rewards must be expressed in money and attached
to owners—stockout pain, obsolescence, capital charge per unit
time, congestion, late fees, the option value of waiting—so unlike
frictions become commensurable on one ledger. Finally, marginal
commitments are ranked by their expected, risk-adjusted aperiodic
rate of return; those that clear the hurdle are issued, logged, and
later judged against realized outcomes.
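
Rendered as code, the last ingredient is modest. The sketch below, with invented candidates and an assumed hurdle rate, illustrates the ranking step alone, not a complete engine.

    # Invented candidate commitments, each already priced in coins:
    # expected reward, capital tied up, days until capital is freed.
    candidates = [
        {"move": "buy 1 unit SKU-A", "reward": 4.0, "capital": 20.0, "days": 10},
        {"move": "buy 1 unit SKU-B", "reward": 1.5, "capital": 30.0, "days": 45},
        {"move": "expedite PO-17",   "reward": 9.0, "capital": 50.0, "days": 5},
    ]

    HURDLE = 0.004  # assumed: coins per coin of capital per day

    def daily_rate(c):
        # Expected, risk-adjusted return per unit of capital per day.
        return c["reward"] / (c["capital"] * c["days"])

    for c in sorted(candidates, key=daily_rate, reverse=True):
        if daily_rate(c) >= HURDLE:
            print("issue:", c["move"], round(daily_rate(c), 4))  # logged
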
The chapters so far have prepared this stance: optionality and
variability (Chapter 1) supply the space of moves and explain why a
probabilistic lens is mandatory; economics (Chapter 4) provides the
coin-denominated objective; information and intelligence (Chap-
ters 5–6) delimit the roles of records, reports, and decision engines.
The present chapter turns these pieces into practice. It begins
by thinking the decision: surfacing the frame, banishing artifacts,
opening the economics to inspection, and restoring reasoning at the
margin. Subsequent sections examine unattended execution and
the optimization machinery suited to repeated, uncertainty-laden
choices, so that each commitment becomes a small, priced wager
the firm can both defend ex ante and audit ex post.
8.1 Thinking the decision
A decision, as defined above, is a commitment that erases alter-
natives. Thinking the decision therefore begins one step earlier,
with the act that selects which alternatives are even seen and
how they will be priced. In most firms, this prior act remains
tacit. People talk in the grammar of familiar “policies”—FIFO,
full trucks, minimum order quantities, shelf plans—so the choice
appears obvious and the unchosen, invisible. The danger is not
clerical error but metaphysical slippage: a policy silently substi-
tutes for the economic question it was meant to answer. Reversing
that slippage—bringing the question back to the surface, keeping
artifacts in their place, making the economics explicit, and rea-
soning at the margin—turns decision-making from ritual into a
scientific practice fit for automation.
8.1.1 Surfacing the frame
Managers are tempted to think with solutions—not a folly, but a
correct and useful heuristic. The space of conceivable “problems” is
boundless—designing an anti-gravity engine is a problem too—and
without the scent of a solution most inquiry would be wasted. Yet
policy-first thinking fossilizes quickly. “Serve stores FIFO” begins
as a decent rule of thumb and, over time, hardens into doctrine;
each exception is patched by another rule, and soon the firm steers
by accreted case law.
What must be surfaced, before any policy, is the frame. It
is the concrete, inspectable boundary of agency for the present
decision cycle: what can be altered now and what cannot; which
options are admissible; which random elements will matter; which
commitments would ratchet the future; and at what prices this
slice of reality will be judged. The frame should be written, not
merely described. In a modern organization, it lives as code: a
short program that enumerates admissible moves, encodes the
feasibility tests, and calls the valuation function. If the program is
long, the frame is muddled; if the program is opaque, the frame is
political.
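
The sketch below, with placeholder moves, tests, and costs, suggests how short such a program can be.

    def unit_cost(sku):
        return 12.0  # placeholder; read from authoritative records

    def admissible_moves(sku, max_units=3):
        # What can be altered in this decision cycle, and nothing else.
        return [("order", sku, qty) for qty in range(max_units + 1)]

    def feasible(move, cash_available, supplier_moq):
        _, sku, qty = move
        if 0 < qty < supplier_moq:
            return False, "below supplier MOQ"
        if qty * unit_cost(sku) > cash_available:
            return False, "exceeds cash budget"
        return True, ""

    def decide(sku, cash, moq, valuation):
        # The frame is short: enumerate, test, price.
        moves = [m for m in admissible_moves(sku) if feasible(m, cash, moq)[0]]
        return max(moves, key=valuation)
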
Once framed, the same situation admits different moves. Con-
sider an importer that fills containers “product-by-product until
full”, deferring the rest to the next sailing. The policy sounds
prudent; the frame it hides is narrow indeed. Make the admissi-
ble options explicit—co-loading across suppliers, temporary cross-
docking to other regions, price nudges synchronized with sailing
dates, late binding between look-alike items—and then the opti-
mization no longer searches within one corridor but roams a small
atlas. The gain comes less from a cleverer solver than from a better
question. Problem design is not icing; it is the cake.
8.1.2 Banishing artifacts
Once a frame exists, numerical artifacts will appear in the ma-
chinery: forecasts, buffers, service levels, ABC tags, budgets, and
their innumerable cousins. They are often useful as intermediate
representations, just as a chemist uses reagents that never leave
the bench. Trouble begins when an artifact escapes the lab and
is enthroned as a target. Goodhart’s law then takes over: the
measure becomes the goal and ceases to measure.
Service level is the classic escapee. It began life as a proxy for a
money-denominated trade-off between shortage pain and carrying
cost; it ended as a percentage that commands the organization.
Planners raise it to feel safe, finance lowers it to free cash; neither
side is forced to state prices. Forecast accuracy plays the same role
for teleological planning: its improvement is celebrated even when
profit falls.
Artifacts should be kept on a leash. Inside the decision en-
gine, they are acceptable as transient coordinates that make a
computation tractable; outside, they have no standing of their
own. A replenishment is not “good” because its SKU hit 97%
service; it is good because, given the options, constraints, and
probabilistic futures considered in the frame, it maximized the
firm’s risk-adjusted rate of return. The engine may well compute
a distribution over daily demand, a shadow “service level” implied
by the shortage penalty, and a working-capital charge per unit per
day; it may even print them for inspection. But dashboards and
incentives must anchor on decisions and their realized economics,
not on the laboratory glassware that helped to compute them.
There is a second reason to banish artifacts from the altar:
they are systematically harder to value than the decision itself
is. A safety stock number has no invariant economic meaning
across items, seasons, or geographies; its virtue is parasitic on
the hidden penalties and opportunities it stands in for. Turning
such a chameleon into a KPI invites bureaucracy to colonize the
flow. The cure is to treat every artifact as a private, provisional
object, recomputed as needed from authoritative records, never
hand-maintained, never made a target, and discarded as soon as a
better internal representation presents itself.
Artifacts are not only numerical. Organizations also mint
bureaucratic artifacts—policies and partitions—that began as rough
safeguards for attended processes and later ossified into rules
that substitute themselves for the economic question. When such
artifacts leave the lab and acquire institutional standing, they
distort decisions exactly as service levels did: the means becomes
the end.
As British historian Cyril Northcote Parkinson cautioned in
Parkinson’s Law (1957), “work expands so as to fill the time
available for its completion.” Bureaucracies grow by reflex; large
organizations running sizable supply chains are no exception. As
they swell, extraneous rules and internal partitions accumulate
in their decision-making processes, surfacing mainly as made-up
constraints and made-up silos—both of which sap efficiency and
blur accountability.
Made-up constraints typically arise after a mishap. Roger from
purchasing orders eighteen months’ worth of sales by mistake; in
response, management imposes a policy capping purchase orders at
“six months of sales”, measured by a moving average. In manual
settings, clerks enforce this cap with common sense: they ignore it
for new products with no history, for highly seasonal items whose
comparable is twelve months back, for fast-growers whose future
dwarfs the past, or for planned promotions whose lift is engineered.
An unattended engine cannot “sensibly ignore” rules; it either
obeys the letter or is declared noncompliant.[1]
The remedy mirrors the treatment of numerical artifacts: keep
such rules private to the engine and express them as feasibility
tests and priced penalties, not as public commandments. If buying
above a naive “six-month” cap is occasionally rational, the cap
belongs in the objective as a coin-denominated nuisance, not as
a veto; newness, seasonality, and engineered lifts become explicit
features of the frame, not tribal exemptions. A rule written as
code, priced and owned, is legible and contestable; a rule written
as a decree invites gaming and stifles automation.
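
Concretely, recasting the cap as a priced nuisance takes a few lines; the penalty rate below is an assumption for illustration, not a recommendation.

    def coverage_penalty(order_qty, stock, avg_monthly_sales,
                         cap_months=6, coin_per_excess_unit=0.8):
        # The naive cap, recast as a coin-denominated nuisance:
        # ordering above six months of coverage is discouraged, not vetoed.
        cap_units = cap_months * avg_monthly_sales
        excess = max(0.0, stock + order_qty - cap_units)
        return coin_per_excess_unit * excess

    def score(order_qty, expected_margin, stock, avg_monthly_sales):
        # A genuinely good reason to exceed the cap (newness, seasonality,
        # an engineered lift) shows up as margin that pays the penalty.
        return (expected_margin * order_qty
                - coverage_penalty(order_qty, stock, avg_monthly_sales))
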
Made-up silos fracture a single economic question into a proces-
sion of artifacts so that lightly tooled teams can cope. A fashion
retailer, for instance, runs the warehouse-to-store flow through
three artifacts: an assortment list (what a store may carry), a
launch pack (how much at start), and a replenishment rule (what
to send later). The choreography looks tidy; it quietly manufac-
tures constraints. The assortment list pre-commits shelf real estate
and vetoes substitutions that early sell-through would justify; the
launch pack pre-allocates labor, line-haul, and facing capacity that
replenishment must then buy back—often at a premium; the re-
plenishment rule then pretends to optimize what is left. Money
is left on the table because the underlying question—where does
one more unit earn the highest risk-adjusted return, given current
constraints?—was broken into artifact-bounded procedures before
it could be asked end-to-end.
Here again, artifacts must stay on a leash. Silos may curate
those artifacts; they must not own the decision. The engine should
see the end-to-end frame—admissible moves, shared constraints,
and prices—and produce a single flow of commitments. If gover-
nance requires organizational boundaries, the interface must carry
prices and feasibility, not quotas and vetoes: the distribution center
(DC) publishes shadow prices for scarce resources; stores publish
shortage penalties; assortment is a proposal the engine may expand
when economics justify it. In other words, keep the ceremony
parochial and the economics unified.

[1] The “Simple Sabotage Field Manual” (1944), published by the Office of
Strategic Services (precursor of the CIA), recommends bureaucratic sabotage.
The infiltrated agent must insist that the organization apply all policies, all
rules, and all regulations to the letter (no matter how trivial), thereby burying
the organization in red tape and slowing progress to a crawl. Far from being an
insight of historical interest, modern companies operating large supply chains are
more susceptible to bureaucratic sabotage than ever—so much so that accidental
bureaucratic self-sabotage has become part of the daily routine for many, if not
most, large companies.
Once artifacts—numerical and bureaucratic—are domesticated
as private, priced, and provisional objects, the stage is set for the
only debate that matters: the prices themselves. The next section
turns to that point directly.
8.1.3 System economics
The previous chapter established money as the firm’s common scale.
System economics requires that every decision be priced from the
vantage of the whole firm, not a departmental KPI. In practice,
this means the prices inside the objective function—stockout pain,
obsolescence, shelf congestion, late fees, working-capital cost, op-
tion value of waiting, goodwill loss—are stated once company-wide,
attached to named owners, versioned, and exposed. Concealed
prices do not become neutral; they bias the flow toward whoever
smuggled them in. A hidden penalty for backorders is a private
agenda masquerading as science.
Making the economics system-wide does not require a plebiscite
for every coin; it requires numbers legible and contestable across
functions. Finance should publish the firm-wide cost of capital
and liquidity limits; operating teams should translate them into
per-unit, per-day charges reflecting inventory half-lives and sal-
vage values. Merchandising should not demand “98% availability”
without pricing the next point in coins per day; logistics should
not insist on full-truck thresholds without pricing the congestion
and spillover they avert. Where markets exist, borrow external
prices; where they do not, mint private valuations and defend
them. To arbitrate across silos, scarce shared resources—dock
slots, shelf space, line time, capital—should carry shadow prices:
engine-computed, provisional numbers expressing the current op-
portunity cost of using one more unit. They replace quotas and
vetoes; they are not a new policy. We return to them in the discus-
sion of sequential decisions below. The goal is not unanimity but
comparability on the company P&L. Once penalties and rewards
are expressed in the common coin, options that used to live in
different provinces—“grant a refund beyond policy”, “buy one
more unit of a long-tail SKU”, “hold a truck to top up a route”,
“extend the return window for this class of customers”—compete
on the same ledger.
This system view also clarifies accountability. If a decision sys-
tematically underperforms once outcomes materialize, the dossier
shows whether the failure lies in the frame (options omitted), the
forecasts (poorly calibrated tails), or the prices (penalties mis-
stated). Corrections then target the offending ingredient rather
than diffusing into vague calls for “better collaboration”. Local
KPIs lose their alibi: a move that flatters a departmental metric
but destroys coins elsewhere will be exposed by the aggregate
ledger. Hidden prices breed politics; open, system-level prices
breed performance. With company-wide prices in place, the engine
returns to its natural stance: reasoning at the margin under a
single ledger.
8.1.4 Embracing the margin
Economics is a marginalist discipline; so is supply chain when
practiced properly. The relevant question is almost never “what is
the right stock level for this SKU this season?” but “what is the
risk-adjusted return of one more unit—here, now—given everything
already committed?” The answer is a number in coins per unit per
unit time, and it shifts as the commitment shifts. Thinking at the
margin forces grain, time, and contingency into view.
Grain first. Decisions should be computed at the finest granu-
larity that materially changes their economics. For commodities,
the unit is a piece; for transport, a stop or a cubic meter; for
aviation rotables, a serial number whose cycles and hours render
two parts with the same P/N economically incomparable. An
airline can rightly sell a given unit to a rival’s AOG (Aircraft On
Ground) request while keeping another ostensibly identical one to
hedge its own tail risk; the marginal values diverge because the
insured futures differ.
Time next. Marginal values are aperiodic: they must account
for the half-life of the load, not just the lead time. A crate that
clears in two days with deferred payment can be quite profitable,
though the same crate, paid up front and sold over eight weeks,
is mediocre. When the engine ranks admissible moves by their
aperiodic rate of return—as argued in Chapter Economics—it
naturally prefers quick-cycling bets and penalizes long ratchets
unless their upside is commensurate.
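
The crate comparison reduces to a few lines of arithmetic; the margins and terms are invented for illustration.

    def aperiodic_rate(margin, capital, days_tied):
        # Coins earned per coin invested per day; time in the
        # denominator is what makes the rate aperiodic.
        return margin / (capital * days_tied)

    # Crate A: clears in two days; deferred payment further shortens
    # the span during which capital is actually tied up.
    fast = aperiodic_rate(margin=50.0, capital=200.0, days_tied=2)
    # Crate B: same margin, paid up front, sold over eight weeks.
    slow = aperiodic_rate(margin=50.0, capital=200.0, days_tied=56)
    print(fast / slow)  # 28x: the quick-cycling bet dominates
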
Finally, contingency. Margins live on distributions, not on
points. The extra unit that looks profitable under the median
forecast may be ruinous once a fat-tailed lead time is acknowl-
edged; conversely, a unit that seems wasteful under a thin-tailed
model becomes attractive when a rare but catastrophic stockout
is correctly priced. This is why a probabilistic forecast is not a
luxury: it is the substrate on which the marginal calculus operates.
At scale, embracing the margin turns the firm’s daily work
into a continual resorting of millions of micro-options. Each new
commitment reshapes the frontier: shadow prices emerge on tight
constraints, rival moves change the payoffs, yesterday’s best next
step becomes today’s third-best. A static plan cannot price moving
edges; a decision engine can. It does not conjure certainty; it
computes, at each instant, the best next wager and writes it into
the ledger with reasons. The rest is merely bookkeeping.
Thinking about the decision this way—frame explicit, artifacts
domesticated, economics open, margins sovereign—does not add
ceremony. It subtracts superstition. It also aligns with the rugged
vision of the previous chapter: decisions become small, auditable
wagers placed under uncertainty, guided by prices the firm is willing
to defend, and revised as reality reveals itself.
8.2 Why automate
Computers are unreliable, but humans are even more unre-
liable. Any system which depends on human reliability is
unreliable.
        One of the many Murphy’s Laws
The previous pages defined a decision as an auditable commit-
ment that erases alternatives and is justified only by its expected
risk-adjusted rate of return. Once the frame is made explicit
and the economics opened, a supply chain becomes what it has
always been in substance: a mill of innumerable small wagers
whose edge is earned at the margin. At modern scale, the relevant
edge is inaccessible to unaided clerks. What exhausts them is not
arithmetic—spreadsheets already shift that burden—but the sheer
combinatorics of admissible options, the demands of probabilistic
reasoning under fat tails, and the continual repricing of constraints
as reality moves. If the bottleneck is felt in SQL dashboards rather
than in steel and diesel, the firm has chosen to limit itself in the
one realm where limits are least binding—computation.
There is a second, quieter reason to automate: repetition under
control. A manual or semi-manual process changes every time
people change; staff turnover, local fixes, and shifting attention
make yesterday’s baseline incomparable with today’s. By contrast,
an unattended process can be duplicated, perturbed, and measured.
Two numerical recipes can run concurrently against the same
ledgers—one in authority, one in shadow—while drawing on the
shared staff because neither demands daily supervision. Measured
this way, progress ceases to be a matter of opinion; it becomes an
audited delta in coins.
Spreadsheets made clerical work tolerable; they never made it
effective. Their “logic” is a tangle of ad hoc formulas that neither
encode a frame nor price uncertainty. The result is familiar: last-
minute overrides, safety buffers stacked on safety buffers, and a
ritualized haste in which a clerk must finalize a hundred quantities
before lunch. It is tempting to imagine that granting everyone more
time would yield better answers. In practice, the pause inflates
headcount, deepens silos, and delays commitments—each delay an
opportunity cost that rarely appears on the dashboard but always
appears on the P&L.
Automation, in the present sense, is not “using more soft-
ware”. It is the institutional decision to let systems of intelli-
gence—introduced earlier—compute, issue, and log decisions unat-
tended under normal conditions. What follows examines this
stance from several angles: as an asset, as a discipline for over-
rides, as a cultural turning point, and as the natural outcome of a
century-long trajectory from management to machinery.
8.2.1 A productive asset
Automation turns decision-making from perishable labor into a
capital good. A system of intelligence is not a report that “assists”
a planner; it is an engine that commits coins on the firm’s behalf
according to an objective it can defend. The expense profile flips:
what was OPEX—hours spent nudging cells—becomes CAPEX—a
codebase whose operating cost is negligible relative to the volume
of choices it processes. Each engineering day spent improving this
asset is accretive: either decisions get better, compute gets cheaper,
or upkeep gets lighter. If none of these occurs, no further money
should be spent.
This asset is productive because it encodes the frame and
the prices. Unstated heuristics that once lived in memos and
meetings migrate into version-controlled code. The engine reads
authoritative records, infers the distributions that matter (demand,
lead times, returns, repair times), evaluates admissible options
under those distributions, and writes back auditable commitments.
It does so at the cadence reality demands, not at the pace a
committee can endure. Dual-run becomes routine: the incumbent
recipe holds the pen while a challenger produces shadow decisions,
and a clean A/B delta accumulates in the ledger. When confidence
drops—the tails look miscalibrated, a constraint begins to bind
unexpectedly—halting heuristics suspend emission; the fallback is
not a thousand exceptions but a stop and a fix.
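
A dual-run harness is correspondingly small. In the hypothetical sketch below, only the incumbent’s decisions are issued; both recipes are valued on the same rows, and the delta accrues in coins. Since shadow decisions are never enacted, their valuation is necessarily model-based.

    def issue(decision):
        pass  # hand-off to the execution layer (stub)

    def dual_run(ledger_rows, incumbent, challenger, value_in_coins):
        # Both recipes see the same inputs; only the incumbent's
        # decisions are issued. Shadow decisions are valued against
        # the same model, as their real outcomes are never observed.
        delta = 0.0
        for row in ledger_rows:
            live, shadow = incumbent(row), challenger(row)
            issue(live)
            delta += value_in_coins(shadow, row) - value_in_coins(live, row)
        print(f"challenger delta: {delta:+.2f} coins")  # audited, not argued
        return delta
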
Software is cheap to run and, if kept small and legible, cheap
to maintain. Its performance limit is not human patience but the
economics encoded in it. Any genuine insight—about substitution,
deferring or expediting, rerouting, the option value of waiting—can
be folded into code and thereafter applied at machine scale. The
clerical ceiling disappears; what remains is the strategic ceiling set
by the firm’s ability to price what it values.
8.2.2 Manual overrides
Automation always raises the question of when to intervene. An
emergency stop belongs in any powered machine; routine “manual
corrections” usually signal a design flaw.
Vendors have long softened brittle automation by deputizing
users as human coprocessors. Alerts and “exceptions” flood inboxes;
planners are asked to inspect most outputs and override many.
This pattern defeats the purpose. If a system needs a person to
catch bad decisions, that person will soon distrust it, and the
organization will revert to spreadsheet theatre with extra steps.
The cure is architectural rather than hortatory. First, the
engine must be built so that every decision it emits is sound on
its own terms: framed within admissible options, priced in coins,
and justified under a probabilistic view of the future. Second,
when unsure, the engine must halt—not pester. Halting is not
abdication; it buys time for the only interventions that durably
improve outcomes: edits to the frame, the prices, or the forecast
semantics. Third, trust is earned by design: dual-run before
cutover; visible dossiers for every commitment; explicit, versioned
valuations; and measured deltas after the fact. Chapter Engineering
will call this discipline experimental optimization.
Overrides do not vanish; they change nature. A rare, targeted
override edits a valuation (“the shortage penalty for this class of
customers was understated”), narrows the option set (“do not ship
lithium batteries by air this month”), or repairs a semantic fault
(“this field changed meaning after the ERP patch”). It does not
scribble a different number on a purchase order while leaving the
machine untouched.
8.2.3 Culture and unrest
Rising productivity invariably unsettles incumbents. Vespasian
refused cranes; the Luddites smashed looms. Modern firms are
spared neither anxiety nor politics, and white-collar workers face a
novel twist: they are asked to help automate their own jobs.
The paradox dissolves once roles are seen clearly. The work
to be mechanized is the clerical grind of enacting yesterday’s
heuristics. The work to be elevated is the articulation of frames
and prices, the stewardship of semantics, and the rapid rewiring of
logic when the world shifts. Software cultures solved this tension
decades ago. They prize people who make themselves obsolete at
the keyboard—because value moves to the next problem: the tool
becomes the asset; the engineer moves on. Where this culture is
resisted, markets—not memos—adjudicate. Firms that refuse to
compound the asset are displaced by those that do.
Executives cannot purchase this culture off-the-shelf, but they
can remove the incentives that poison it. Reward decision quality,
not seat counts. Pay vendors for outcomes, not logins. Promote
the contributors of better frames and valuations, not the curators
of larger dashboards. Above all, give the few who can actually
rewire the engine the mandate to do so quickly—their leverage
dwarfs a roomful of human schedulers.
Two recurrent patterns merit daylight. First, headcount is still
treated as a proxy for status. Managers who grow the number
of “seats” under them gain budget and influence; automation,
which collapses seats, feels like a demotion. Reverse the currency.
Tie prestige to uplift per person and per watt. Make shrinking
attended seats an explicit objective and celebrate the teams that
make themselves obsolete at the keyboard; in a software culture,
this is advancement, not abdication.
Second, per-seat licensing and “engagement” metrics make
vendors complicit in keeping humans in the loop. A system that
bills by logins, clicks, or “active users” is economically opposed
to unattended flow. Contracts should be priced on outcomes the
firm wants—volume of unattended commitments issued with audit
trails, measured uplift against a dual-run baseline, time to rewire
logic after a frame change—not on the clerical motion the tool
induces. When money is paid for results rather than for screens,
incentives align with automation rather than against it.
Finally, beware of metric theater: when bonuses hinge on accu-
racy percentages or on-time scores detached from coin-denominated
valuations, energy flows to improving the indicator rather than the
economics.
8.2.4 Human exceptionalism
The claim that “a computer must never make a management
decision because it cannot be held accountable”
2
seems humane;
it is, in fact, confused. Most large organizations already run vast
tracts of policy that leave no room for discretion. A credit hold on
overdue accounts is enforced whether a person clicks the button or
a program does. Accountability attaches to the policy’s authors
and the executives who endorsed the prices and constraints it
encodes, not to the clerk who executes it.
Insisting on a human hand for symbolism degrades both speed
and quality. It also invites a worse failure mode: invisible responsi-
bility. When a rule lives in a spreadsheet and a routine exception
is scribbled into a cell, no one learns. When the same rule lives in
code and an outlier forces a halt, the design is amended in daylight.
The relevant test is economic, not anthropomorphic: given the
same inputs, the same admissible options, and the same audit trail,
does the system yield higher risk-adjusted operating profit than
the incumbent? If yes, it is “intelligent enough” for the task.
8.2.5 From management to machinery
The puzzle is not why automation is desirable, but why it took
so long to become routine outside a few niches. For most of
the 20th century, decision processes were necessarily managerial;
only human minds could digest information and emit sensible
choices. The question then became how to divide the labor across
many minds. Consulting authors proposed elaborate fabrics of
committees and calendars to make the division bearable. The
fundamental flaw was silent: what is “best” for a fragment seldom
aligns with what is best for the whole. A homeware division that
deploys one million coins at an expected return of ten percent can,
without malice, preempt the kitchenware division from deploying
the same million at fifteen percent. However finely the organization
is sliced, all managers draw on the same finite pools of cash,
inventory, capacity, and attention.
Bureaucratic methods tried to heal this with more communica-
tion. Meetings, memos, and escalations transported fragments of
knowledge upward, then downward, hoping that alignment would
follow. Yet allocation is quantitative; without a common ledger of
prices, coordination reverts to hierarchy. Operations research, born
in wartime, promised a different future: let algorithms optimize the
allocation end to end. Hardware constraints, crude data semantics,
and a taste for deterministic toys conspired to blunt that promise.
By the late 1970s, Russell Ackoff was already writing its obituary.
Spreadsheets filled the vacuum. They multiplied clerical capac-
ity but froze decision-making at a pre-scientific stage. Attempts
to replace them with “planning” modules failed because those
modules inherited the same deterministic posture and the same
artifact worship. Users drifted back to files; vendors added more
screens. The decision load returned to white-collar ranks, now
larger than the blue-collar ranks that move cartons.
The economics never changed: repetitive supply-chain deci-
sions can and should be automated. The enablers matured quietly.
Systems of records stabilized; probabilistic methods, once exotic,
became pedestrian; hardware became cheap; and a new architec-
tural species—systems of intelligence—emerged to sit alongside
ledgers and reports rather than pretending to replace them. Fully
unattended decision engines have been in production for more than
a decade in first-movers.[3] The pattern will spread, and spread-
sheets will keep their place as Swiss-Army knives—handy in a
pinch, absent from the production line.
Two further obstacles, often overlooked, explain the long delay.
First, distributing a decision across many heads is intrinsically
treacherous. No natural ordering exists that makes local choices
commute. Advertising budgets, shelf space, and factory capacity
for a new product constrain one another; whatever team “goes first”
traps the others into an inferior corridor. The correct resolution is
not another committee, but a unified optimization instance that
draws on each team’s expertise where it belongs—in valuations,
constraints, and scenarios—while letting the solver synthesize a sin-
gle commitment. Yesterday’s commitments, once enacted, re-enter
as data; the next cycle proceeds on a narrower but clearer stage.
Second, organizations accumulate made-up constraints to protect
themselves from their own past errors. Manual processes survive
by ignoring these rules when common sense demands it; software
cannot. The cost of automation is therefore paid up-front in the
removal or re-pricing of such constraints; the dividend is paid
forever in cleaner frames and higher returns.
The broader arc is familiar. Over the last three centuries,
profit-seeking entrepreneurs relentlessly substituted capital for
labor, and blue-collar shares fell accordingly. In many indus-
trialized regions, the headcount managing the flow now exceeds
the headcount physically moving it.[4] The next increment of pro-
ductivity will come from automating away the clerical part of
white-collar work. Resistance will be spirited; the economics will
be patient—and decisive.
[3] For example, Amazon’s Supply-Chain Optimization Technologies (SCOT)
team has end-to-end responsibility for orchestrating Amazon Store’s supply
chain—demand forecasting, buying, inventory placement, and customer-promise
decisions. Senior leadership decided in 2014 to make several product categories
“100% automated”, with subsequent rollout across the retail business.
[4] Warehouses, once requiring sizable workforces, have been increasingly re-
placed by fully automated warehouses since the 2010s. Transportation will
follow. In 2024, there are over 2 million fully self-driving vehicles in operation
worldwide.
Finally, a word on the technological “why not earlier”. Until
recently, three preconditions were missing. Ledgers were too fragile
to serve as authoritative ground truth; probabilistic reasoning was
too cumbersome to be embedded in everyday code; and vendors
were rewarded for selling CRUD at intelligence prices. All three
have shifted. Systems of records have plateaued in value and
reliability; probabilistic toolchains fit on a laptop; and buyers have
learned to separate bookkeeping, reporting, and decision engines.
Once these are in place, the case for automation is not rhetorical;
it is arithmetical.
In short, we automate not to polish a plan but to restore eco-
nomics to the driver’s seat. Unattended decision engines put the
bottleneck where it belongs—in atoms and hours—while deliver-
ing the one precondition for sustained improvement: comparable
experiments that run daily, at scale, with coins on the line.
8.2.6 Where to start
Automation is not a slogan; it is an allocation problem: where
should the next coin of engineering effort be spent so that unat-
tended decisions repay fastest? In practice, the operative notion
is the difficulty of a decision process, understood economically as
the inverse of the expected return on investments in improving
it. High expected return, low difficulty; low expected return, high
difficulty. Two regimes matter most. The base difficulty is the cost
of migrating a flow from attended spreadsheets to an unattended
decision engine under an explicit frame. The marginal difficulty is
the cost of extracting the next increment of performance from a
flow that is already automated.
The point is simple and vivid: scale lowers base difficulty.
When millions of small, similar commitments recur and the frame
can be written down—admissible options, feasibility tests, and
money-denominated prices—the first automation passes typically
repay quickly. By contrast, small, idiosyncratic niches combine low
payoff with high marginal difficulty; they are the worst places to
start and should be tackled last, however tempting their apparent
tractability.
Consider a grocer whose national distribution center feeds
800 stores nightly. Automating DC-to-store dispatch has a short,
legible frame: admissible moves are “ship 0/1/2 units of SKU i to
store j tonight”; feasibility tests are truck cube, door slots, and
store caps; prices are shortage pain, aging, line-haul and handling,
plus a shadow price on distribution center stock; distributions are
next-day sell-through and late-truck risk. One numerical recipe
then covers tens of thousands of homogeneous micro-decisions per
day, so a first pass—greedy accumulation under these prices, with a
halt when confidence drops—repays quickly. By contrast, starting
with a boutique flow—quarterly pop-up assortments for twelve
flagships entangled with visual-merchandising rules, ad hoc launch
events, and planogram politics—offers little scale and murky prices;
the “frame” is mostly ceremony. The large river is easier than the
puddle: scale makes base difficulty low, while idiosyncrasy makes
it high.
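
A first pass at the grocer’s dispatch can be as short as the greedy accumulation below; every name and number stands in for the priced frame sketched above.

    def dispatch(lines, truck_cube, dc_shadow_price):
        # Greedy accumulation: fund the marginal unit with the highest
        # net value until the truck cube runs out. Each line reads
        # (sku, store, net_value_per_unit, cube_per_unit, max_units),
        # the net value already netting shortage pain avoided against
        # aging, line-haul, and handling.
        candidates = []
        for sku, store, value, cube, max_units in lines:
            for _ in range(max_units):          # marginal units: 0/1/2...
                candidates.append((value - dc_shadow_price, cube, sku, store))
        shipped, used = [], 0.0
        for value, cube, sku, store in sorted(candidates, reverse=True):
            if value <= 0:
                break                           # remaining units destroy value
            if used + cube <= truck_cube:
                shipped.append((sku, store))
                used += cube
        return shipped
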
Engineers often misread solver timing curves. For any fixed tech-
nique, enlarging a model—more variables, more constraints—almost
invariably induces superlinear growth in CPU and memory.[5] That
curve is a property of the method, not of the business problem.
Economics supplies levers the curve ignores: coarsen the frame
wherever granularity does not move money; shorten horizons to
the next moment of control; replace brittle hard limits with priced
penalties; and help the solver with warm starts and decomposi-
tions so yesterday’s answer speeds today’s search. The goal is not
algebraic purity but risk-adjusted return within a sensible compute
budget, day after day. In practice, a rough-but-priced formulation
that runs reliably outperforms “exact” models that time out—and
both beat spreadsheet clericalism once scale is present.
[5] An algorithm is superlinear when its time or memory rises faster than the
problem size; doubling the variables can more than double the cost.

A practical triage follows. Begin where the frame is short and
legible: options are enumerable without heroics, feasibility tests
are mechanical rather than political, and the prices that govern
the objective—stockout pain, obsolescence, capital charge per
unit time, the option value of waiting—can be stated, owned, and
versioned. Favor flows whose cadence gives the engine time to think
(minutes to hours): overseas purchasing and consolidation, DC-to-
store dispatch, network stock balancing, repairables provisioning.
Defer micro-niches whose economics rest on unresolved, implicit
prices—governance problems before engineering ones. Above all,
let measurement decide: run challengers in dual-mode against
incumbents, attribute deltas in coins within a clear window of
responsibility, and promote only when the uplift pays for itself.
Seen this way, “where to start” is no mystery. Attack the
biggest rivers of decisions first, accept approximate answers that
respect the economics and the frame, and let the unattended engine
compound from there. Small problems feel safer; large ones are
easier.
8.3 Deciding is optimizing
Mechanizing choices requires a formalism that tells a machine what
to prefer and when to stop. “Optimization”, a branch of applied
mathematics, supplies that formalism—but only once it is recast to
fit the economic and iterative character of real decisions. The text-
book posture—state a model, hand it to a generic solver, await a
provably optimal plan—misfires in practice. What works is humbler
and more powerful: an optimization frame that exposes admissible
moves, prices their consequences in money, ingests probabilistic
views of tomorrow, and is engineered to run—unattended—again
and again as the world shifts.
8.3.1 The optimization paradigm
Classical mathematical optimization notation speaks of variables,
constraints, and an objective (see the annex). We keep the triad
but shift its meaning to match the decision grammar developed
earlier in this chapter and in Economics and Intelligence.
Variables are not blunt totals (“the quantity for SKU X this
month”) but marginal commitments at the finest grain that changes
economics. For some problems the grain is a unit, a stop, or a
cubic meter; for rotables it may be a specific serial number whose
history makes it economically unique. Thinking in marginals keeps
the engine aligned with the rate-of-return calculus: the question
is never “what is the right stock level?” but “is one more unit, here
and now, worth more than its next best use?”
Constraints are feasibility tests that decide which options may
be exercised. They should be written as code that inspects a
candidate move and either admits it or rejects it with a reason.
In supply-chain reality, very few limits are truly absolute: almost
any bottleneck can be relaxed for a price or a delay. It is therefore
prudent to move many “constraints” into the objective as priced
penalties, keeping only the handful that reflect physical impossi-
bilities or legal prohibitions. This makes unlike frictions—dock
congestion, crew overtime, shelf resets—comparable on the same
coin ledger and lets the engine find the least-bad compromise under
uncertainty.
The objective is a money-denominated valuation consistent
with the firm’s goal: maximize expected, risk-adjusted rate of
return. It is not a percentage target masquerading as economics.
A useful objective must be explicit about the prices it assumes:
stockout pain, obsolescence, capital charge per unit-time, late fees,
goodwill loss, option value of waiting. Hidden prices breed politics;
open prices breed performance.
Forecasts enter as first-class inputs. Because outcomes hinge on
futures that are partly unknowable, the engine does not score plans
against a single guessed trajectory but against distributions—over
demand, lead times, returns, reliabilities, and sometimes prices.
The objective remains a deterministic function of its inputs; the
randomness lives in the forecast objects that feed it. This separa-
tion keeps causality legible and the blame assignable when a bet
underperforms: was the frame narrow, the valuation misguided,
or the forecast miscalibrated? The heavy probabilistic machinery
is developed in the next section; here it suffices to insist that a
decision engine must be built from the start to ingest distributions,
not points.
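
In miniature, the separation looks as follows: the objective is a deterministic function in coins, and the randomness lives entirely in the demand samples that feed it (all figures invented).

    import random

    def objective(qty, demand, price=10.0, unit_cost=6.0,
                  holding=0.5, shortage=3.0):
        # Deterministic valuation in coins: margin minus frictions.
        sold = min(qty, demand)
        return (sold * (price - unit_cost)
                - holding * max(0, qty - demand)
                - shortage * max(0, demand - qty))

    # The forecast object: samples from a demand distribution (placeholder).
    demand_samples = [max(0, round(random.gauss(40, 12))) for _ in range(1000)]

    def expected_value(qty):
        # Score the decision against the distribution, not a point.
        return sum(objective(qty, d) for d in demand_samples) / len(demand_samples)

    best_qty = max(range(80), key=expected_value)
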
Finally, a word on expression. Mid-century doctrine tried to
separate statement from resolution—one “declarative” file for the
problem, a black-box solver for the rest.[6] That separation flatters
elegance; it handicaps practice. A decision engine that runs every
day benefits when its model helps the method: variable orderings
that reflect economics; warm starts from yesterday’s solution;
decompositions aligned with geography or lifecycles; caches that
remember expensive subcomputations; feasibility tests that fail
fast. This is not impurity; it is mechanical sympathy between
problem and machine.
8.3.2 On optimality
Within a fixed frame—finite resources, explicit admissible moves,
stated prices—there is a best allocation. The point matters: the
optimum exists, yet chasing its last digit is rarely worth the compute.
“Good enough” is not an abdication of rigor but a recognition that
economics, not algebra, decides when a search should stop.
Two lessons follow. First, supply-chain problems exhibit di-
minishing returns along most levers. Extra inventory improves
service until it no longer does; extra expedites tame lateness until
they no longer do; extra assortment widens reach until it clogs
attention. This curvature is good news: it turns our discrete search
into something that admits pseudo-gradients. Greedy steps, lo-
cal improvements, neighborhood exchanges, and short look-ahead
policies—when guided by proper prices and fed with probabilistic
inputs—climb quickly toward high ground. These problems are not
cryptographic; flipping one bit does not turn feasible into impossi-
ble and collapse value to zero.[7] Algorithms that rely exclusively
on splitting abstract spaces and tightening linear relaxations tend
to grind on the wrong geometry; the constraints of business are
rarely tight enough, early enough, for half-space pruning to pay.

[6] This separation was canonized by modeling languages such as AMPL: A
Modeling Language for Mathematical Programming (1993), Fourer et al. The
model/solve split is pedagogically clean and excellent for benchmarking; in day-
to-day supply-chain optimization it often impedes the “mechanical sympathy”
that practice demands—warm starts from yesterday’s plan, domain-aware de-
compositions, cached subcomputations, and explicit economic instrumentation.
[7] This contrast is not accidental but the product of selection. Industrial
supply chains are continuously pruned by market pressure: configurations that
require brittle, all-or-nothing “perfect plans” fail disproportionately and are
retired; what remains is a population of routines, contracts, and physical layouts
that tolerate noise, delays, and partial failure. Robustness is seldom the result
of explicit design alone; it accumulates through survivorship. As a consequence,
most operational choices admit neighborhoods of near-equivalent moves that
deliver comparable returns, which is why approximate, greedy search guided
by correct prices often outperforms the pursuit of exact global optima in toy
models.
Second, “the optimum” is conditional. Change the frame and
the summit moves. Add a liquidation channel; allow price nudges
tied to sailing dates; widen what may co-load; legalize late binding between near-substitutes: yesterday's paragon becomes today's local hill. This is not a failure of optimization; it is its purpose. In Kuhn's terms, routine progress happens within a paradigm; step changes happen by changing the paradigm [8]. A living decision engine must therefore be built for experimental optimization: dual-run challengers against incumbents, promote only when mea-
sured deltas justify it, revert cheaply when they do not. The
obsession with certificates of global optimality in toy worlds dies
here; the currency is uplift measured on the firm’s books.
[8] Thomas S. Kuhn, The Structure of Scientific Revolutions (1962). In business settings, paradigm shifts are typically small and frequent: a new admissible move, a repriced penalty, a different cadence. Their effect on the "best" plan is nevertheless incommensurable with what came before.

8.3.3 The solver within
A solver is the software component that, given a frame, forecasts, and prices, searches the admissible space and proposes commitments. It is the heart of the engine but not the engine itself. Around it sit the indispensable organs: semantic adapters to read authoritative records; probabilistic modules to produce distributions; valuation ledgers to expose prices and owners; halting heuristics to suspend emission when confidence drops; and the cut-over machinery that promotes proposals to facts and re-ingests them on the next cycle.
Because the engine runs under uncertainty and repeats, several
engineering stances prove decisive.
First, stochastic control beats deterministic planning. The solver
should rank admissible moves by their expected, risk-adjusted rate
of return under the supplied probability distributions. Sequential
dependence—today’s move shaping tomorrow’s options—can be
handled by optimizing policies rather than single acts, typically
within a receding-horizon loop in which each cycle recomputes with fresher information. Functional forecasts—simulators that map "what we would do if..." to outcome distributions—are the natural partner for such policy search. The mathematics is straightforward;
the novelty is to make it small, legible, and auditable enough to
run every day.
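The following Python sketch illustrates this separation under assumed names and toy numbers: the valuation stays deterministic while the uncertainty lives entirely in sampled demand scenarios, and admissible order quantities are ranked by their average payoff across those samples.

    import random

    random.seed(42)

    def simulate_demand() -> float:
        """Stand-in probabilistic forecast: a demand draw with an occasional spike (assumed)."""
        base = random.gauss(100, 20)
        spike = random.expovariate(1 / 80) if random.random() < 0.05 else 0.0
        return max(base + spike, 0.0)

    def payoff(order_qty: int, demand: float) -> float:
        """Deterministic, money-denominated objective (toy prices, in coins)."""
        margin, holding, stockout = 5.0, 1.0, 9.0
        sold = min(order_qty, demand)
        return margin * sold - holding * max(order_qty - demand, 0) - stockout * max(demand - order_qty, 0)

    scenarios = [simulate_demand() for _ in range(2000)]
    candidates = [60, 80, 100, 120, 140, 160]   # admissible moves (assumed)
    ranked = sorted(candidates,
                    key=lambda q: sum(payoff(q, d) for d in scenarios) / len(scenarios),
                    reverse=True)
    print("best candidate order:", ranked[0])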
Second, penalties tame unenforceable limits. In the stochastic
world, “never exceed X” is either impossible or ruinous. Pricing
violations brings limits under one roof and aligns them with the
money objective. Chance-constraint language may be useful for
discourse; prices do the work. We return to unenforceable lim-
its—and to the relation between chance constraints and priced
penalties—in what follows.
Third, design for repeat resolution. Most supply-chain choices
are not one-off industrial design problems (e.g., siting warehouses or
fixing lanes) but small, stochastic allocations made again tomorrow
under slightly different facts. A solver that prospers here must
ingest distributions, price options in coins, and be cheap to halt
and rerun. The output is not a monolithic plan with a certificate;
it is a steady stream of better wagers under a common ledger. This
stance runs against the dominant OR posture—one-shot global
models with static constraints—which is why only a handful of
firms use solvers day-to-day in operations.
Fourth, instrument for trust. Every emitted decision must come
with a dossier: the admissible options considered, the prices and
distributions used, the marginal deltas that justified the choice,
and the tests passed on the way out. Dual-run should be the
default: the incumbent recipe holds the pen while a challenger
writes shadow decisions; measured differences accumulate in coins.
When something smells wrong—tails miscalibrated, a constraint
starting to bind—halt. Flooding users with “exceptions” deputizes
them as unpaid coprocessors and teaches them to distrust the
system. A clean stop invites the only interventions that help: edits
to the frame, the prices, or the semantics.
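What such a dossier might contain can be sketched as a plain record; the field names and values below are assumptions, not a prescribed schema.

    from dataclasses import dataclass, field

    @dataclass
    class DecisionDossier:
        """Illustrative record attached to every emitted decision (field names are assumptions)."""
        decision_id: str
        options_considered: list
        prices_used: dict                 # coin-denominated penalties and rewards
        forecast_versions: dict           # which distribution fed which input
        marginal_delta_coins: float       # why this option beat the runner-up
        tests_passed: list = field(default_factory=list)

    dossier = DecisionDossier(
        decision_id="PO-2025-00417",
        options_considered=["order 0", "order 120", "order 240"],
        prices_used={"stockout_per_unit": 12.0, "capital_per_unit_day": 0.02},
        forecast_versions={"demand": "v2025-09-01", "lead_time": "v2025-08-15"},
        marginal_delta_coins=1840.0,
        tests_passed=["feasibility", "dual-run shadow comparison"],
    )
    print(dossier.decision_id, dossier.marginal_delta_coins)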
Finally, a sober word about the toolbox sold under the OR
label. After seven decades of research and two more of ever-faster hardware, generic linear- and mixed-integer solvers remain marginal in day-to-day supply-chain operations. They show up for one-off design work—network siting, long-horizon capacity studies—and for a few tightly bounded niches. They almost never govern the recurrent, uncertainty-laden choices that move coins every day.
This is not managerial stubbornness; it is a mismatch between
instrument and phenomenon.
Classical OR technology presumes a world that supply chains
do not inhabit. Uncertainty is compressed into point surrogates
or thin-tailed "safety" terms; objectives are written in artifact units—service levels, accuracy percentages, utilization—rather than in money; and a monolithic plan is produced as if execution were a one-shot act rather than a stochastic control loop that repeats at operational cadence. The modeling creed that separates "pure statement" from "impure method" then forbids the very hints that practice requires: priced penalties for unenforceable limits, explicit shadow prices on shared resources, warm starts from yesterday's commitments, and greedy-accumulation rules
steered by windows of responsibility. Faced with fat tails, ratchets,
and option values, the global MILP either times out, optimizes
the wrong thing, or yields a brittle schedule that collapses at first
contact with reality.
The empirical resolution is simple and unglamorous. Treat
solvers as subroutines inside a larger, economics-aware, stochastic search rather than as sovereigns over the decision. Let probabilistic forecasts supply the futures; let the objective speak only in coins; move most "hard" limits into priced penalties; recompute marginal commitments at the cadence reality allows; halt when confidence drops; run challengers against incumbents in parallel; and embed domain structure wherever it buys return. Classical primitives—assignment, min-cost flow, knapsack, routing—remain
invaluable, but only when they are fed proper prices and glued
into a loop that respects uncertainty, repetition, and ratchets.
Seen this way, the verdict on OR in operations is not that
the mathematics is wrong, but that the posture is misaligned.
Tools built for stationary toys and one-shot plans cannot manage the combinatorics of options under fat-tailed uncertainty where
profit lives in the tails. Compute should therefore be spent where
it purchases coins: on better frames, better prices, and better
probabilistic views, not on certificates of optimality for the wrong
question. The next section puts a price on compute itself.
8.3.4 The economics of compute
Optimization has a cost. Minutes of CPU, hours of wall-clock, days of engineering—each must be justified the same way any other resource is: by its expected, risk-adjusted return. The relevant horizon is operational. Rescheduling pick robots invites millisecond cadences; overseas buying does not. Vendors peddle "real-time" as
if time itself were a virtue; it is not. The right cadence is the one
at which an extra unit of compute no longer repays itself in coins.
Two practical consequences follow. First, accept that some deci-
sion cycles are "slow by design". Large, stochastic, high-dimensional searches cannot be driven to hundred-millisecond latencies without either gutting the frame or trivializing the objective. Amdahl's law [9] bites as soon as the engine must touch broad contexts to avoid local stupidity. Second, engineer graceful degradation. Run a quick pass that captures 80 % of the value with cheap heuristics and good prices; spend the remaining budget where shadow prices scream for attention; stop when expected uplift per minute falls below the minute's cost. Because the engine repeats, tomorrow's run will harvest today's leftovers with fresher information and a warmer start.

[9] Gene Amdahl, 1967: the speedup of a system is bounded by the fraction that cannot be parallelized. In decision engines the non-parallelizable parts are often I/O, semantic checks, or global reconciliations that keep the solution coherent.
The same calculus governs model size. Bigger instances are not
“harder” in any metaphysical sense; they are only more expensive
with a fixed technique. If a method blows past a sensible budget,
change the method, coarsen the grain where economics permits, or
shrink the horizon. Measured uplift, not aesthetic purity, decides.
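A toy illustration of this stopping rule, with every figure assumed for the sake of the example: the loop keeps buying improvement passes only while the expected uplift of the next pass still exceeds the cost of the minute it consumes.

    import random

    random.seed(7)
    COIN_COST_PER_MINUTE = 40.0   # assumed price of one minute of compute and attention
    MINUTES_PER_PASS = 1.0        # assumed duration of one improvement pass

    def expected_uplift_of_next_pass(round_index: int) -> float:
        """Stand-in for one more round of local search; uplift (in coins) shrinks as easy wins run out."""
        return max(random.gauss(300, 80), 0.0) * (0.6 ** round_index)

    rounds, harvested = 0, 0.0
    while True:
        uplift = expected_uplift_of_next_pass(rounds)
        if uplift < COIN_COST_PER_MINUTE * MINUTES_PER_PASS:
            break                  # the next minute no longer repays itself in coins
        harvested += uplift
        rounds += 1

    print(f"stopped after {rounds} passes; gross uplift {harvested:.0f} coins")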
In sum, optimization is the language through which a decision
engine expresses preferences under uncertainty. When reframed
around marginals, prices, and admissible moves; instrumented for
repetition, halting, and audit; and engineered with sympathy for
both hardware and economics, it becomes what the automation
program needs: not an oracle that dictates a plan, but a tireless
clerk of the firm’s intentions, forever proposing the next
-
best wager
and writing it—legibly—into the ledger.
8.4 Deciding under uncertainty
Decisions are priced wagers on futures we do not control. Earlier
in this chapter, we adopted the stance that makes such wagers
computable: the objective is deterministic and money-denominated,
while uncertainty enters through probabilistic forecasts—distributions
for the facts that matter (demand, lead times, reliabilities, returns,
sometimes prices). We do not bury randomness inside the objec-
tive; we feed the engine distributions and ask it to rank admissible
commitments by their expected, risk-adjusted rate of return.
This posture departs from the teleological vision (Chapter The
Future), which treats uncertainty as a defect to be engineered
away by better point forecasts. Even small errors in such point
estimates can have discontinuous consequences: a turbine grounded
for one missing vane is worth far less than “99.98 % of a turbine,”
and a rival that is 1 % cheaper can erase 100 % of your sales.
In supply chains, profit and loss live in the tails. Methods that
optimize against a single guessed trajectory therefore manufacture
confidence in place of robustness.
Nothing here hinges on vocabulary. Calling the problem “stochas-
tic optimization” simply acknowledges that futures are represented
by distributions and that valuation is taken in expectation (and,
when warranted, with explicit risk adjustments). What matters
in practice is where uncertainty is carried and how it is priced.
As argued above, the right place is in the forecasts that enter the
objective, not in the objective itself; and the right prices are the
firm’s own coin
-
denominated penalties and rewards—stockout pain,
obsolescence, capital charge per unit time, congestion, late fees,
the option value of waiting.
One might be tempted to conclude that once expectations are
taken, the “deterministic” and the “stochastic” cases collapse to the
same mathematics. They do not, in practice. Deterministic recipes
encourage brittle plans, ignore fat tails, and hide trade-offs behind
artifact KPIs; an engine designed for distributions must, instead,
price tails, expose valuations, halt when confidence drops, and
repeat at the cadence reality allows. This difference of posture—not
a change of symbols—drives results.
With this foundation in place, the remainder of the section
turns to operational consequences of deciding under uncertainty.
First, we examine how point-based methods systematically produce fragile decisions—choices that look lean on paper and then hemorrhage coins when the world wanders off the median. We then treat unenforceable constraints, which must be handled by prices or explicitly bounded risks rather than by edicts; we put a price on learning via the exploration–exploitation trade-off; and we justify
funding options that are unprofitable today but hedge tomorrow’s
regimes. Each topic applies the same grammar: admissible moves,
probabilistic views, and a single money ledger.
8.4.1 Fragile decisions
In the present book, a decision is fragile when a small deviation
from the assumed “typical case” produces a disproportionate loss
in its risk
-
adjusted rate of return. Operationally, fragility appears
as both a higher variance of realized returns and—once fat tails and
ratchets are correctly priced—a lower expectation than competing
moves that keep options open. Fragility is not about cataclysms;
it is about the humdrum surprises that visit every flow: a supplier
that ships short, a truck that misses a slot, a store that closes for
a day, a post that goes viral and empties a shelf. Each event is
surprising locally; taken in aggregate, they are not.
Fragility is largely manufactured by policy. The teleological
posture—one ordained future, a plan to enact it, and performance
indicators that police compliance—breeds decisions that look won-
derfully lean on paper and hemorrhage coins when the world takes
even a modest detour. Point forecasts compress futures to a single
trajectory; “hard” limits are treated as edicts; utilization is pushed
toward saturation; buffers are justified by formulas whose hidden
prices no one owns. Such procedures do not merely face variability;
they amplify its costs by removing the very options that variability
makes valuable. It is therefore more precise to speak of fragile
policies—rules that systematically produce fragile decisions.
Two structural features make these policies brittle. First, most
supply-chain payoffs are lopsided: the downside lives in the tails while the upside is bounded. One missing component can ground an aircraft worth millions; being 1 % dearer than a rival can wipe out the sale entirely. Thin-tailed surrogates and median-case planning hide these cliffs and thus underprice the option to avoid them. Second, operational ratchets erase alternatives. A premature store allocation that looked harmless under a baseline forecast prevents serving a better request tomorrow; a "full-truck" routine that ships
on the calendar locks tomorrow’s consolidation out. Decisions
that maximize paper “efficiency” by binding early are, in practice,
decisions that sell options cheaply and buy trouble dearly.
The optics are treacherous. Fragile decisions look efficient:
trucks depart visibly full, distribution centers look empty, headcount looks thin, and inventory sits very close to the edge. Non-fragile decisions look, at a glance, a bit "wasteful": a sliver of capacity is kept available, a pool of stock is held back at the distribution center, a second supplier is retained, a few repairables are prepositioned. Yet once tails and ratchets are priced in money, the apparently conservative stance delivers a higher expected, risk-adjusted aperiodic
rate of return. The premium that buys slack is an option premium;
it repays itself whenever the world misses the median—which it
reliably does.
A mundane illustration clarifies the point. Consider line main-
tenance for a jetliner on a tight turn. The work scope is uncertain
until the first panels open; one missing vane can strand the aircraft
and cascade missed connections. A deterministic routine that
provisions a kit sized to the “average job” will look lean ex ante
and fail expensively whenever demand spikes on a few parts. A
rugged routine begins with distributions for the uncertain work
scope and for repair–return delays, attaches a coin-denominated penalty to schedule slip, and prices each extra part as an option against that penalty. The resulting kit is not "minimal"; it is profitable. Unused parts go back to stock; their carrying charge is small next to the loss avoided when tails bite. The same logic governs any high-penalty, low-probability shortage: fat tails dominate and
thin-tailed arithmetic manufactures fragility.
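The kit-sizing logic can be sketched as follows; the distribution, the slip penalty, and the carrying charge are all assumptions chosen for illustration. Each additional part is admitted only while its marginal reduction of the expected slip cost still exceeds its carrying cost.

    import random

    random.seed(1)
    SLIP_PENALTY = 25_000.0      # assumed coin cost of a delayed turnaround
    CARRY_COST = 60.0            # assumed coin cost of carrying one extra part in the kit

    def parts_needed() -> int:
        """Stand-in distribution for the uncertain work scope (heavy right tail assumed)."""
        return 1 + int(random.expovariate(1 / 1.5))

    scenarios = [parts_needed() for _ in range(20_000)]

    def expected_slip_cost(kit_size: int) -> float:
        shortage_prob = sum(1 for n in scenarios if n > kit_size) / len(scenarios)
        return shortage_prob * SLIP_PENALTY

    kit = 0
    while True:
        marginal_benefit = expected_slip_cost(kit) - expected_slip_cost(kit + 1)
        if marginal_benefit < CARRY_COST:   # the next part no longer pays for itself
            break
        kit += 1

    print("kit size:", kit)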
A second illustration lies inbound. Suppose a warehouse faces
random supplier drift and a finite number of receiving doors. A
weekly, plan-compliant ordering calendar will occasionally stack sailings and overload the dock; the remedy becomes firefighting—deferrals, driver detention and demurrage [10], overtime crews, and off-contract priority premiums for scarce dock slots.

[10] "Detention" compensates a carrier when a truck waits beyond free time at loading/unloading; "demurrage" is the terminal or storage fee charged when a container overstays its free time.

A rugged stance treats arrivals and dock times as distributions, prices overload through explicit congestion penalties (the same economics that surface as detention, demurrage, overtime, or priority-slot premiums), and lets the decision engine defer or advance orders until the expected, risk-adjusted return of the next
batch clears the firm’s shadow cost of capital plus the option value
of waiting. The cadence becomes an output of economics, not an
input from the calendar; the dock is not “always busy,” but coins
stop leaking.
Downstream, premature allocation is another quiet source of
fragility. Shipping scarce units early to weak stores freezes them
where they rotate slowly and prevents serving the winners that
emerge a few days later. A rugged routine attaches a visible shadow
price to distribution center stock under pressure—coins per unit
per day that represent the option to serve a better request tomor-
row—and releases a unit to a store only when that store’s marginal
return overtakes the distribution center’s hold price. Throughput
usually increases where it matters most, even though more units
sit at the distribution center for a little longer. The optics of
“busyness” improve under fragile policies; the economics improve
under robust ones.
Fragility also imposes a labor cost that escapes dashboards.
Fragile policies collide with reality, generating long exception
queues and conscripting planners as human coprocessors. The
organization normalizes chronic expediting, manual resequencing,
and after-the-fact overrides. This firefighting absorbs the attention
needed to build the capital good at hand: a small, legible decision
engine that encodes frames, prices, and probabilistic views, runs
unattended, and compounds improvements. Fragile policies not
only lose coins on bad days; they also keep the firm from investing
the time that would make bad days rarer and cheaper.
Engineering non-fragile decisions does not require heroics; it
requires a change in posture. The rugged view treats uncertainty
as the raw material of profit. Practically, this means (i) admitting
distributions—over demand, lead times, reliabilities, returns, and
sometimes prices—as first-class inputs; (ii) expressing all frictions
in coins—stockout pain, congestion, aging, late fees, the option
value of waiting—and owning those prices company-wide; and (iii)
letting a stochastic optimizer rank marginal commitments by their
expected, risk-adjusted aperiodic rate of return. Small, priced slack
follows naturally: a bit of pooled inventory at the distribution
center, a few retained alternatives in transport, measured headroom
at critical resources, modest over-provisioning of kits where tail
penalties are sharp. These are not talismans against bad luck; they
are options purchased because the expected payout exceeds the
premium.
A simple test exposes fragility in practice. If a recommen-
dation collapses under small, realistic perturbations of inputs;
relies on calendar rituals rather than priced cadences; expresses
“excellence” in artifacts—accuracy percentages, service targets,
utilization scores—rather than in coins; and demands constant
human “exceptions” to remain tolerable, it is fragile. Conversely,
if a routine can articulate its frame, show its distributions, name
its prices, defend its cutoff for action versus waiting, and produce
dossiers that survive contact with realized outcomes, it is likely to
be robust—even if, to the untrained eye, it looks a little “wasteful”.
Once uncertainty is admitted, one last ingredient becomes
unavoidable: many limits cannot be guaranteed under all draws
of the dice. Dock slots, driver rosters, pick faces, and battery
charge cycles will occasionally be stressed by unlucky coincidences
even when decisions are sensible. The right treatment is not to
deny randomness but to price it. The next section turns to these
unenforceable constraints and shows how to handle them with
chance bounds—or better, with explicit penalties stated in the
same coin that governs every other trade-off.
8.4.2 Unenforceable constraints
In a stochastic world, most “constraints” are not laws of physics
but preferences backed by penalties. Treating them as edicts
that must hold under every draw of the dice produces brittle
plans and expensive firefighting. The frame introduced earlier
already suggests the right split. A few limits are truly hard—legal
prohibitions, physical impossibilities, safety barriers—and belong
in feasibility tests that reject candidate moves outright. Everything
else is soft: it can be violated for a price or delay and therefore
belongs in the objective as an explicit, coin-denominated nuisance.
Let’s consider, for example, a company passing replenishment
orders for a warehouse. The warehouse has a maximum inbound
capacity. If too many trucks arrive on the same day, the staff must
turn some away to return later. This is costly and disruptive, tying
up trucks planned for later deliveries. Thus, replenishment orders
are intentionally spread out to avoid excessive inbound shipments
on any given day. However, suppliers may face difficulties. Unfore-
seen delays may, infrequently, cause deliveries—initially planned
for distinct dates—to collide on the same day. Because supplier
delays are random—and often fat-tailed—such clashes can never be ruled out; doing so would require impractically long delays between successive orders. Spacing orders mitigates the risk of exceeding
inbound capacity, but it cannot eliminate it.
More broadly, most supply-chain constraints carry a real risk of
violation even when handled carefully. By definition, the company
has no control over its customers or its suppliers. Nor does it have
complete control over its own employees, who may fail to show up
for various reasons. Yet, while perfection is not of this world, the
company can steer its decision-making to respect those constraints.
Consider two approaches.
Chance constraints offer a less stringent criterion. Instead of
insisting a constraint be met for all outcomes, a chance constraint
requires it to hold with a minimum probability. One pragmatic
approach is to allow a small, explicitly quantified risk of breaking
a rule—for example, keeping the chance of dock overload below
5%. This keeps violations rare and controlled.
The main weakness of chance constraints, however, is that they
are fundamentally noneconomic—or rather, that the economic
trade-off is hidden inside an arbitrary percentage deemed accept-
able—a dimensionless number. While one can tune this percentage
to reflect the economics of the situation, doing so creates a new
problem to solve.
Penalized constraints are a simpler alternative: move the
constraints into the objective through penalties. They treat every
breach as a costly nuisance and let the optimizer weigh that penalty
against the decision's expected benefit. The optimization then minimizes expected total cost; a sufficiently high penalty deters violations and steers the optimizer toward feasible or near-feasible solutions.
Penalized constraints suit supply-chain optimization because
they express operational limits in the same monetary terms that
govern the model. Almost any bottleneck can, in principle, be
relaxed for a price, and the penalty term captures that price
explicitly. The optimizer therefore searches for plans that avoid
ruinous outlays without being boxed in by rigid rules. Crucially,
because each penalty is stated in currency, dozens of heterogeneous
limits sum seamlessly into a single objective, allowing the model
to scale gracefully in high-complexity environments.
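A small sketch, with assumed figures, of what such a priced penalty looks like in practice: the dock capacity is no longer an edict but a congestion charge, and two ordering calendars are compared by their expected total cost under sampled supplier delays.

    import random

    random.seed(3)
    DOCK_CAPACITY = 2            # trucks per day the dock handles comfortably (assumed)
    CONGESTION_PENALTY = 800.0   # coins per excess truck per day (assumed)
    DELAY_COST_PER_DAY = 25.0    # coins per day an order is deliberately pushed back (assumed)

    def arrival_day(planned_day):
        """Planned day plus a random, occasionally large, supplier delay."""
        return planned_day + int(random.expovariate(1 / 2.0))

    def expected_cost(plan, samples=5000):
        """Expected total cost of a replenishment plan: deliberate delay plus priced congestion."""
        total = 0.0
        for _ in range(samples):
            per_day = {}
            for day in plan:
                d = arrival_day(day)
                per_day[d] = per_day.get(d, 0) + 1
            congestion = sum(max(n - DOCK_CAPACITY, 0) for n in per_day.values())
            total += CONGESTION_PENALTY * congestion + DELAY_COST_PER_DAY * sum(plan)
        return total / samples

    tight_plan = [0, 0, 0, 1, 1, 1]      # orders bunched early
    spread_plan = [0, 1, 2, 3, 4, 5]     # orders spread out
    print("tight :", round(expected_cost(tight_plan)))
    print("spread:", round(expected_cost(spread_plan)))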
8.4.3 Exploration vs. exploitation
Decisions repeat. Each cycle updates our beliefs, and those beliefs
steer the next cycle. Historical data are therefore not neutral; they
are the by-product of yesterday's choices. If a retailer has never
priced a SKU above 9.99, the ledger contains no evidence about
demand at 10.99. A solver that only “exploits”—always choosing
today what looks best under the current model—quietly freezes the
model’s blind spots. Profits then plateau not because the world
is exhausted but because the firm no longer learns where returns
might be higher.
The rugged stance (Chapter The Future) treats this endogeneity
as a design constraint. Occasionally, we must pick a move that
is slightly inferior on today’s ledger because the observation it
generates will improve tomorrow’s ledger by more than the foregone
coins. In economic terms, the firm buys a small option on knowledge.
The practical question is not philosophical—whether to “try new
things”—but quantitative: when does the expected value of the
information exceed the opportunity cost of the experiment?
Two kinds of uncertainty matter. Aleatory uncertainty is in-
trinsic randomness; no amount of effort will erase it. Epistemic
uncertainty is ignorance; it can be reduced by gathering the right
observations (cf. Chapter The Future). Exploration is the delib-
erate reduction of epistemic uncertainty when its expected divi-
dend—better decisions over the relevant horizon—outweighs the
immediate shortfall of the probe.
A small, concrete illustration makes the endogeneity point plain.
Consider a sports-drink line sold across one hundred stores. The firm has long held an informal doctrine that "ten-coin" is the sweet spot for impulse purchases; as a result, the price database is rich around 9.99 and almost empty above 10.49. The demand model inherits this bias and, when asked, advises once more to stay near 9.99. A rugged routine counteracts this inertia without gambling the brand. It designates a tiny test cell—say, eight stores with typical footfall—for two weeks, raises the price to 10.49 there, and re-estimates the local price response. The cost is the estimated margin lost (or customers deterred) against the status quo price during the test period; the dividend is the uplift the new response unlocks when rolled to the other ninety-two stores. If the expected
dividend, stated in coins, exceeds the experiment’s cost, the price
probe is rational—even if the eight stores underperform during the
test.
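In coin terms, the probe passes or fails a simple comparison. The sketch below, with every figure assumed, weighs the expected cost of the test cell against the probability-weighted dividend of rolling out the revised price.

    # All figures are assumptions for illustration, not data from the text.
    test_stores, all_stores, test_weeks, rollout_weeks = 8, 100, 2, 26

    margin_per_week_at_999 = 120.0            # coins per store per week at the status quo price
    expected_margin_drop_during_test = 0.15   # assumed demand deterrence at 10.49 in the test cell
    prob_higher_price_wins = 0.4              # prior belief that 10.49 is actually better
    uplift_if_it_wins = 25.0                  # extra coins per store per week if rolled out

    cost_of_probe = (test_stores * test_weeks
                     * margin_per_week_at_999 * expected_margin_drop_during_test)
    expected_dividend = (prob_higher_price_wins
                         * (all_stores - test_stores) * rollout_weeks * uplift_if_it_wins)

    print(f"probe cost ~{cost_of_probe:.0f} coins, expected dividend ~{expected_dividend:.0f} coins")
    print("probe is rational" if expected_dividend > cost_of_probe else "keep exploiting")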
Exploration is not confined to pricing. A buyer facing a new
supplier with uncertain reliability can place a handful of deliberately small, early orders to learn the tail of the lead-time distribution before committing volume. A network planner can try a modest cross-dock scheme in one region to measure congestion penalties rather than assuming them. A merchandiser can let a few low-sell-through items stock out on purpose (on a safe scale) to
learn true substitution rates instead of extrapolating from censored
data. In each case, the engine prices the probe like any other
move: admissible options are stated, futures are probabilistic, and
all effects—shortage pain, aging, congestion, capital charge per
unit-time—are expressed on the same coin ledger.
This trade-off is known in the literature as "exploration vs. exploitation" [11]. The classical bandit imagery is useful, provided
we keep the economics in view. The goal is not to chase abstract
“regret bounds” but to decide, at operational cadence, whether a
marginal probe clears the same bar that governs every commitment:
expected, risk-adjusted aperiodic rate of return. Three design
habits keep the process legible and auditable:
First, keep probes small and bounded. The window of respon-
sibility limits attribution; scope (number of stores, lanes, SKUs)
limits downside. Smallness is not timidity; it is how we price and
compare learning options cleanly against their dividends.
Second, make the learning dividend explicit: the expected
uplift—over the window during which the revised rule will be in
force—net of aging, markdowns, congestion, and capital charge.
Hidden “benefits” count as zero; only coins on the ledger count.
Third, embed exploration within the same unattended engine.
A probe is not a special project; it is another admissible move
with a dossier: the prior, the proposed perturbation, the window,
the prices, and the rule for promotion or rollback once outcomes
arrive. If the engine’s confidence drops, halting heuristics suspend
the probe exactly as they suspend ordinary emissions.
This stance also clarifies why the teleological view cannot
accommodate exploration. Under forecast-plan-comply, decisions
are only the last line of a deterministic program; forecasts live
apart from actions; and “accuracy” is policed as a percentage
detached from coins. In such a world, there is no budget for buying
information because the procedure presumes it already knows what
matters. In the rugged world, by contrast, quick, priced probes
are a source of edge: they narrow epistemic uncertainty where it
pays and let the firm move first where a small insight compounds.
We now turn from buying information to buying insurance.
Exploration funds observations that change the model; nurturing
options fund capabilities that are unprofitable today but would
pay—handsomely—under specific regimes. The grammar remains the same: admissible moves, probabilistic futures, one coin ledger.

[11] For an example of an algorithmic strategy leveraging the exploration vs. exploitation trade-off from an economic perspective, see "Multi-armed Bandit Algorithms and Empirical Evaluation" (2005) by Joannes Vermorel and Mehryar Mohri.
8.4.4 Nurturing options
Resilience means funding options that are unprofitable today yet
could preserve future profits if conditions shift. Indeed, whether
a given range of options—offered by a supplier, a carrier, or an
assembly line—is valuable to the business depends on prevailing
market conditions. If those conditions change, certain options may
sharply appreciate or depreciate. The company can hedge such
risks by nurturing the right options.
For example, a company based in Westonia may presently
source a part from a supplier in Ruritania. Local alternatives in
Westonia are, unfortunately, not competitive with the Ruritanian
supplier. Moreover, their parts do not fully meet the company’s
specifications. The company is fully satisfied with its Ruritanian
supplier. Yet Westonia’s government may impose a massive tariff
on imports from Ruritania. Indeed, several rising political figures
have been outspoken in favor of such tariffs. If enacted, such tariffs
would severely reduce profits, and the firm could not switch to
local suppliers because their parts fail on specifications. Thus, to
insure against this risk, the company selects a local supplier and
helps upgrade its production so the parts meet specifications. Most
orders still go to the Ruritanian supplier, yet the firm nurtures
the local alternative, which may prove a profit-saver if the tariff is
imposed.
Nurturing options take many forms. The company might fund
unused capacity—manpower, storage, production, or transport.
The company might pay high service fees in exchange for favorable
termination clauses (to surrender a lease of a vehicle, a building,
or a piece of industrial equipment). The company might lower
assembly-line efficiency to increase reconfigurability, broadening
the range of designs the line can output.
The point of nurturing options is to prepare the company
for conditions that are unlikely yet far from impossible. Unlike
the exploration–exploitation trade-off discussed previously, the
investment’s goal is not discovery. From the outset, the option is
understood to be nonviable so long as present market conditions
persist. The investment is a hedge.
Assessing viability requires quantifying three elements: (1)
the probability of the trigger, (2) the savings the option would
yield under those conditions, and (3) the cost of nurturing it. If
probability-weighted savings exceed nurturing costs, hedging is
profitable.
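The arithmetic is short enough to state as a sketch; the figures below are invented for illustration only.

    # Hypothetical figures: a tariff-risk hedge on a local supplier, stated in coins per year.
    prob_trigger = 0.15                 # assumed yearly probability of the tariff being enacted
    savings_if_triggered = 2_000_000.0  # profit preserved by the nurtured local supplier
    nurturing_cost = 180_000.0          # yearly cost of qualifying and retaining that supplier

    expected_saving = prob_trigger * savings_if_triggered
    print("hedge is profitable" if expected_saving > nurturing_cost
          else "hedge is not worth its premium",
          f"(expected saving {expected_saving:,.0f} vs cost {nurturing_cost:,.0f})")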
Again, nurturing options make sense only under the rugged
view. Under the teleological view, the future is treated as a known
quantity. As a result, options not immediately economically viable
are dismissed. Indeed, without a supporting rationale, management
struggles to justify “unnecessary” expenses—especially those tied
to highly pessimistic conditions that would invalidate the current
course favored by the management team.
8.5 Sequential decisions
Many supply-chain choices are sequential: the payoff of a move
taken today depends on moves the firm will choose tomorrow.
Present commitments erase alternatives and reshape next cycle’s
option set—a ratchet effect noted in “Asymmetry of time” (Chapter
The Future). The apparent paradox—“how can I optimize now
when tomorrow’s decision is undecided?”—is routine: overseas
buying that must fill a container, store dispatches that preempt
later allocations, production batches that tie up capacity until the
next changeover.
In this chapter, we treat such situations not as isolated one-off
optimizations but as rules for acting over time. The clean formal
object is a policy.
Definition (Policy).
An algorithm that maps states to admissible actions—and
their timing—under stated prices and probabilistic forecasts, to
issue flow commitments that maximize expected risk-adjusted
rate of return.
Reifying a policy does two things. First, it brings the frame
into view: admissible moves and feasibility tests; the money-
denominated prices (penalties and rewards) the firm is willing
to defend; and the probabilistic view of futures that informs each
choice. Second, it enables closed-loop evaluation via a functional forecast—a simulator that answers, "if we follow this policy, what distribution of outcomes should we expect?" This pair—policy plus functional forecast—lets us compare candidate rule-sets by the same yardstick used for one-shot decisions: expected, risk-adjusted
rate of return.
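A minimal sketch of this pair, under assumed names, toy prices, and a deliberately crude demand model: a parameterized policy, a functional forecast implemented as a Monte Carlo simulator, and a comparison of candidate rule-sets by their expected coins.

    import random
    from typing import Protocol

    class Policy(Protocol):
        def act(self, on_hand: int) -> int:
            """Map the current state to an admissible action (an order quantity)."""
            ...

    class OrderUpTo:
        """A simple parameterized policy; the level S is the parameter being searched."""
        def __init__(self, level: int) -> None:
            self.level = level
        def act(self, on_hand: int) -> int:
            return max(self.level - on_hand, 0)

    def functional_forecast(policy: Policy, days: int = 60, runs: int = 500) -> float:
        """Closed-loop simulator: expected coins per run under the policy (toy prices assumed)."""
        margin, holding, stockout = 5.0, 0.1, 8.0
        total = 0.0
        for _ in range(runs):
            on_hand, coins = 0, 0.0
            for _ in range(days):
                on_hand += policy.act(on_hand)
                demand = max(int(random.gauss(10, 4)), 0)
                sold = min(on_hand, demand)
                coins += margin * sold - holding * on_hand - stockout * (demand - sold)
                on_hand -= sold
            total += coins
        return total / runs

    random.seed(11)
    best = max((OrderUpTo(s) for s in range(5, 40, 5)), key=functional_forecast)
    print("best order-up-to level:", best.level)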
That said, turning every sequential problem into a full policy
search rarely justifies the opacity and compute it entails. The space
of possible rules is vast; scoring them only at the level of whole
trajectories obscures marginal attributions; and naïve look-ahead becomes too expensive for day-to-day operations. In practice we
keep the marginalist stance—rank small, auditable commitments
by coins per unit time—while using a few disciplined instruments
that capture the sequential coupling without reifying a full policy.
A running example fixes ideas. Suppose a buyer must raise an
overseas purchase order that is economical only when rounded to
a full container. The mix can combine many SKUs, but every unit
loaded today must cover demand until the next container from the
same supplier arrives. Yet that arrival date depends on a decision
the firm has not yet made. A literal, single-shot optimization cannot value today's order without assuming tomorrow's. A policy resolves the dependency ("order when the shadow price of the next container exceeds S; allocate units by marginal return subject to
weight/volume constraints”), and a functional forecast projects the
distributions of lead times, demands, and dock capacities under
that rule.
The remainder of this section shows how to handle most of
this coupling with lighter means: a window of responsibility that
bounds the horizon over which today’s order is held to account
(next subsection); the economics of waiting, which prices the option
to defer and sets a cutoff rate of return for action; permutation
invariance, which justifies greedy accumulation of batched allo-
cations—with caveats where order truly matters (vehicle routing,
job-shop scheduling); and the postponement principle, which favors
late binding to keep future options open. Only when latency or
scale demands it do we perform an explicit, parameterized policy
search.
8.5.1 Window of responsibility
Sequential decisions create an attribution problem: today’s move
reshapes tomorrow’s options, and tomorrow’s move, in turn, will
repair or amplify today’s consequences. An engine that must
run every day needs a rule that assigns consequences to each
commitment without simulating the entire future. The practical
device is a window of responsibility: a bounded horizon over which
the present decision is held to account, while later decisions inherit
the rest.
The window is not a planning bucket; it is an accounting prin-
ciple. It answers two questions: from when do we start attributing
outcomes to today’s commitment, and until when does this respon-
sibility last before it is handed over to the next decision of the
same class?
For an overseas purchase order that must fill a container, the
natural start is not the calendar day the order is entered, but
the day the goods become usable—after production and trans-
port—because only then can the decision begin to earn or lose
coins. The end is the first later moment after which a subsequent
order can reasonably take over responsibility. In practice, a robust
convention is to end the window at the arrival of the next–next
container from the same source. The immediate “next” container
is still partly correcting or amplifying today’s choice; after the
“next–next”, responsibility has genuinely migrated. By contrast,
for daily store replenishment, the window typically runs from
tonight’s delivery to tomorrow night’s; the cadence itself supplies
the boundary.
Within the window, the decision is scored in money terms
consistent with the firm’s objective. Sales (or consumption) realized
during the window are credited; shortages that occur within the
window are charged according to an explicit, coin-denominated
penalty that reflects lost margin, goodwill, and any cascading
effects on adjacent items. Inventory left at the end of the window
must not be swept under the rug. Two treatments are practical
and, depending on the case, equivalent in spirit. The first is a
terminal valuation: treat the residual stock as if it were immediately
liquidated at its expected salvage value, less the expected carrying
and handling costs to reach that liquidation. The second—often
simpler and more conservative for long tails—is to charge the entire
expected stream of future holding and obsolescence costs to the
decision that created the stock. This “ratchet” treatment recognizes
that later choices cannot undo an over-large purchase without
paying for markdowns, storage, and write-offs; the load rightly
stays with the originator.
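A sketch of this accounting, with assumed quantities and prices, contrasting the terminal-valuation and ratchet treatments of the residual stock.

    # Illustrative scoring of one purchase order over its window of responsibility.
    # All prices and quantities are assumptions, stated in coins.
    units_ordered = 500
    sales_in_window = 430            # units sold between "goods usable" and the next-next arrival
    shortage_in_window = 20          # demand that went unserved inside the window

    unit_margin = 6.0
    shortage_penalty = 9.0           # lost margin plus goodwill per unit short
    residual = units_ordered - sales_in_window

    # Treatment 1: terminal valuation of the leftover stock at the window's end.
    salvage_per_unit, liquidation_handling = 2.0, 0.5
    terminal_value = residual * (salvage_per_unit - liquidation_handling)

    # Treatment 2 ("ratchet"): charge the whole expected future holding/obsolescence stream.
    expected_future_cost_per_unit = 3.5
    ratchet_charge = residual * expected_future_cost_per_unit

    credited = unit_margin * sales_in_window - shortage_penalty * shortage_in_window
    print("score (terminal valuation):", credited + terminal_value)
    print("score (ratchet treatment) :", credited - ratchet_charge)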
Defining the window’s bounds is a probabilistic exercise. Two
distributions usually suffice. First, a lead-time distribution places
mass on the dates when the commitment begins to matter. Second,
a distribution for the next opportunity to re-decide marks the hand-over date. This second forecast is "impure" in the sense that it projects our own future behavior; yet at the batching scales that matter—container sailings, factory changeovers, weekly dispatch slots—the cadence is largely governed by externalities (cutoffs, vessel schedules, MOQs) and proves stable enough to forecast with useful accuracy. Where cycles are rigid—nightly store dispatches, end-of-shift production releases—the calendar itself supplies the
end.
Truncating the horizon obliges us to price what lies beyond
it. The stockout penalty already mentioned is one example: it
stands in for repeat purchase erosion and substitution dynamics
that we choose not to simulate across seasons. Similar shadow
valuations are sometimes needed elsewhere. A fashion brand that discounts aggressively at season's end may find that heavy rebates teach customers to wait; the long-run demand damage is real, yet difficult to model across future collections that are not even designed. A small, per-unit discount malus—expressed in coins and applied inside the window—keeps the engine honest without turning the simulator into a multi-year saga. As always, prices
must be explicit, owned, versioned, and exposed; hidden penalties
do not become neutral—they become capricious.
A brief illustration fixes ideas. Suppose an importer orders
a full container today. The evaluation window starts when the
container clears and goods are available; it ends at the arrival of
the next–next container from the same lane. All sales made in
between are credited to today’s order; all stockouts in the same
period are charged according to the shortage penalty. Residual
units at the window’s end receive a terminal valuation—or, in the
ratchet approach, the expected future holding and markdown costs
are fully charged to today’s order. The next container is evaluated
under its own window, with the same rules; responsibility does not
double-count. This convention makes the daily engine tractable
and auditable while preserving economic realism where it matters:
in the tails and in the irreversible costs that later choices cannot
erase.
The window of responsibility harmonizes with the marginalist
stance developed earlier. It lets the solver reason one step ahead
with probabilistic inputs, attribute consequences cleanly, and move
on. Its validity is empirical, not theoretical: choose the boundaries,
run dual-mode against incumbents, measure the deltas in coins, and adjust. In practice, this device succeeds across a wide range of sequential settings—overseas buying, warehouse inbound smoothing, store dispatches—because it keeps the focus where economics lives: on priced options and fat-tailed futures, not on speculative
multi-year narratives.
The economics of waiting
Every sequential problem contains a silent option: do nothing yet.
Waiting preserves options and lets information accumulate; acting
now locks in a rate of return and erases alternatives. The engine
should price delay explicitly and adopt a cut-off rule. Act only
when the expected, risk-adjusted aperiodic rate of return of the
best admissible move exceeds the shadow cost of capital plus the
option value of waiting. Below that bar, defer.
Operationally, this becomes a simple, repeatable discipline.
Recompute, daily or at a reality-driven cadence, the best next
move within the current window of responsibility; if its rate of
return clears the threshold, take it and record the commitment;
otherwise, wait. After a container has just been ordered, the very
next day’s container is usually unprofitable: demand has not yet
accumulated, consolidation opportunities are thin, and inbound
congestion penalties loom. As days pass, volumes aggregate, co-
loading improves, and the expected return rises; once it crosses
the bar, the order is placed. The policy is not “weekly cycles by
tradition”; it is “whenever economics justifies it”.
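The discipline reduces to a small loop. In the sketch below, every figure and the shape of the accumulation are assumptions; the point is only the cut-off test that turns the calendar into an output.

    import random

    random.seed(5)
    SHADOW_COST_OF_CAPITAL = 0.0004   # per coin per day (assumed)
    OPTION_VALUE_OF_WAITING = 120.0   # coins per day of keeping the order open (assumed)

    def best_batch_return(days_since_last_order: int) -> float:
        """Stand-in for the daily recomputation: expected daily return (in coins) of the
        best admissible container, which rises as demand and co-loading accumulate."""
        accumulated_value = 90.0 * days_since_last_order + random.gauss(0, 40)
        tied_up_capital = 50_000.0
        return accumulated_value - SHADOW_COST_OF_CAPITAL * tied_up_capital

    day = 1
    while best_batch_return(day) < OPTION_VALUE_OF_WAITING:
        day += 1            # defer: the bar is not cleared yet
    print("order placed on day", day)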
The same logic governs downstream dispatch. Consider a
fashion DC holding a handful of units of a runaway bestseller.
Allocating one unit to a weak store today may look harmless under
a point forecast; tomorrow it becomes an expensive ratchet when
a strong store asks for it and none is left. Instead of attempting
to script the entire season, attach a shadow price to DC stock
under pressure—a coin
-
per
-
unit
-
per
-
day opportunity charge that
represents the option to serve a better request later. Release
to a store only when that store’s marginal return—computed
under the current probabilistic view of sell
-
through and shortage
penalty—overtakes the DC’s shadow price. Far from hurting
throughput, such patience tends to increase it where it pays most;
premature allocation often reduces realized throughput by freezing
stock where it will rotate slowly.
Waiting is not dithering; it is a priced decision like any other.
The cut-off embeds both the firm’s cost of capital and its tolerance
for tail risk. It also carries the halting heuristics discussed earlier:
when the engine’s confidence drops (miscalibrated tails, unusual
constraint binding), emissions pause until the frame, the prices, or
the semantics are repaired. In this way, the window of responsibility
and the economics of waiting combine to turn many sequential
problems into a one-step greedy accumulation that stays aligned
with profit. The next subsection, on permutation invariance, shows
why such greedy accumulation often works—and where order truly
does matter.
8.5.2 Permutation invariance
Many supply-chain allocations are enacted in batches: a container
is filled with a mix of SKUs; a line produces a run of units; a truck
leaves the DC loaded for several stores. In these settings, what
matters economically is the final multiset of marginal commitments
in the batch and the constraints it satisfies (weight, cube, slots,
cut-offs)—not the notional order in which the atomic commitments
were picked. When feasibility tests depend only on totals and
penalties and rewards are priced symmetrically for the items in
the batch, the objective function is—up to negligible bookkeeping
effects—invariant to any permutation of the micro-allocations that
compose the batch.
This observation is powerful when combined with the instru-
ments introduced earlier. The window of responsibility bounds the
horizon over which a batch is held to account; the economics of
waiting supplies a cut-off: act only when the best admissible move
clears the shadow cost of capital plus the option value of delay.
Given both, the batch can be assembled greedily: at each step,
recompute shadow prices for the scarce resources and pick the next
admissible unit with the highest expected, risk-adjusted aperiodic
rate of return; stop when capacity binds or when the cut-off fails.
Because the batch valuation is insensitive to the permutation of
its constituents, this stepwise construction targets the set that
matters (the content of the batch), not a ceremonial sequence that
has no economic standing.
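The greedy accumulation can be sketched as follows; SKU economics, volumes, and the cut-off are assumptions, and the diminishing marginal return stands in for the repricing of shadow prices after each pick.

    # Illustrative greedy accumulation of a container: all SKU economics are assumptions.
    container_volume = 50.0
    cutoff_return = 3.0          # coins: shadow cost of capital plus option value of delay

    skus = {                     # sku -> (unit volume, base marginal return, decay per unit picked)
        "A": (1.0, 12.0, 0.30),
        "B": (2.5, 15.0, 0.20),
        "C": (0.5,  6.0, 0.10),
    }
    picked = {sku: 0 for sku in skus}
    used_volume = 0.0

    def marginal_return(sku):
        _, base, decay = skus[sku]
        return base - decay * picked[sku]

    while True:
        candidates = [s for s in skus if used_volume + skus[s][0] <= container_volume]
        if not candidates:
            break                                    # capacity binds
        best = max(candidates, key=marginal_return)  # reprice, then pick the best next unit
        if marginal_return(best) < cutoff_return:
            break                                    # the economics of waiting says "not yet"
        picked[best] += 1
        used_volume += skus[best][0]

    print(picked, "volume used:", used_volume)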
Two caveats clarify what is—and is not—claimed. First, while
the final value of a batch is order-insensitive, the marginal value of any given unit is path-dependent: early picks face laxer constraints
than late picks. This is precisely why the greedy accumulation must
recompute shadow prices after each addition; the path dependence
is handled by repricing, while the set that emerges remains the goal.
Second, permutation invariance is not a proof of global optimality;
it is the engineering rationale for a search that climbs quickly
toward high ground in problems dominated by diminishing returns
and priced penalties—exactly the geometry discussed earlier in
On optimality. The engine repeats at an operational cadence;
tomorrow’s run starts warmer and narrower, and the window
mechanism attributes consequences without simulating the entire
future.
Concrete cases make the point. When filling an overseas con-
tainer, feasibility cares about totals (weight/volume) and cut-off dates, while value is earned by the mix that will sell within the evaluation window. It is sensible to add units—possibly across many SKUs—in decreasing order of marginal return, pausing whenever the economics of waiting says "not yet", and resuming as demand and co-loading opportunities accumulate. In DC-to-store dispatch,
the DC stock carries a shadow price that reflects the option to
serve a superior request tomorrow (cf. The economics of waiting).
A store receives a unit only when its marginal return overtakes
that DC shadow price; within a departure wave, units are then
added greedily until the truck’s priced penalties and capacities
advise stopping. The result is often a higher realized throughput
where it matters most. Premature allocations freeze stock in low-
yield locations, a bustle that depresses throughput once the season
reveals genuine winners.
Permutation invariance also helps keep models small and au-
ditable. Batch constraints—MOQs, full-truck incentives, dock
congestion—are better expressed as feasibility tests and priced
penalties over the set that ships than as procedural rules that
script a picking order. The former yield clear prices and clean
halts; the latter breed artifacts and exception handling. Because
the batch’s value does not depend on a ceremonial sequence, the
solver is free to propose the next best marginal commitment, log
its dossier, and repeat until the cut-off bites.
Order-sensitive allocations
Not all sequential problems admit permutation invariance. Some
classes are intrinsically order-sensitive: the route of a vehicle, the
sequence of jobs in a shop with changeover times and scarce skills,
the traversal of a picker through an automatic store. There, the
distance traveled, the queuing and blocking, or the changeover
losses depend directly on the ordering of tasks. Swapping two
stops in a milk-run can add an hour of driving; permuting two steps in an MRO work-scope can lengthen turnaround time by
a day. In such cases, the sequence is economically real, and the
model must say so.
The methodology, however, remains consistent with this chap-
ter. We keep the frame explicit (admissible moves and feasibility
tests), keep the economics open (priced penalties for lateness, over-
time, missed windows, and aging), and ingest probabilistic views
of demand, travel times, repair durations, and reliabilities. The
solver’s job is then to propose a sequence, not merely a set. Win-
dows of responsibility still bound attribution—end of shift, end of
day, return-to-depot—and priced penalties bring unenforceable lim-
its into the objective. The principle remains: make order explicit
only where it changes money; elsewhere, treat it as an artifact and
let permutation invariance justify greedy accumulation.
Seen together, permutation invariance and the economics of
waiting encourage late binding. When the valuation of a batch
depends on its contents but not on the ceremony of their selection,
it is prudent to defer commitments until the marginal returns clear
the threshold, and then assemble the batch quickly by greedy accu-
mulation. The next section, the postponement principle, formalizes
this stance and shows how to design flows so that binding choices
are taken as late as practicable, preserving options and improving
returns under uncertainty.
8.5.3 Postponement principle
Permutation invariance and the economics of waiting point to a
general rule for sequential work: bind late. The firm earns its
edge by deferring irreversible choices until information arrives that
materially changes the ranking of options, while charging a fair
price for the delay. Under the teleological vision (Chapter The
Future), deferral is a nuisance that complicates plan compliance.
Under the rugged vision, deferral is a priced option: it preserves
alternatives and is exercised only when its expected, risk-adjusted
return clears the relevant bar.
Postponement—also called late binding—means arranging the
flow so that binding choices are made no sooner than the economics
justify. Keep goods, capacities, and commitments in generic, re-
assignable states until the expected risk-adjusted aperiodic rate of
return on binding exceeds (i) the shadow cost of capital, (ii) the
option value of waiting, and (iii) the postponement premium—the
incremental costs paid to keep options open.
The premium is tangible: carrying semifinished inventory that
can be specialized later; keeping retainers with alternate suppliers;
reserving dye-house or kitting capacity; paying extra handling to
keep stock pooled at the distribution center; and investing in more
sophisticated orchestration. Postponement is not “free flexibility”;
it is a bet that the value of information and optionality will more
than repay the premium and any aging risk within the window of
responsibility. Properly applied, postponement tends to shorten
the half-life of decisions (see Changing course below) by moving
commitments closer to the moment they start to pay back and by
reducing costly misallocations that would otherwise linger.
Late binding takes several recurring forms, all governed by the
same economic test.
Form postponement. A fashion importer may commit early
to greige fabric and trims (cheap, generic inputs) and defer the
expensive specializations—dye, wash, label, size mix, geographic
split—until early sell-through reveals which articles and colors carry fat tails. The admissible options become explicit: finish
now and ship broadly, or wait a few weeks and finish what the
winners will actually absorb. The decision engine prices the salvage
value of the laggards, the shortage penalty on the winners, the
specialization costs, and the premium paid to keep the dye-house
on call; it then binds only when the expected aperiodic rate of
return clears the cut-off.
Allocation postponement. Shipping scarce units to stores too
early freezes them where they rotate slowly. Holding them at the
DC with a visible shadow price (coins per unit per day) preserves
the option to serve a superior request tomorrow. A store receives
a unit only when its marginal return—computed under the current
probabilistic view of sell-through and shortage penalty—overtakes
the DC’s hold price (see The economics of waiting). Throughput
typically increases where it matters most, because winners are fed
first and laggards do not hoard.
Finish/kitting postponement. Coloring, kitting, packaging, or
regionalizing variants early commits the firm to a demand split
not yet known. Keeping WIP generic and finishing late lets orders
reveal the mix. The premium is the extra handling and the capacity
retainer at the finishing step; the benefit is fewer write-offs on the wrong variants and a shorter decision half-life as stock is specialized
only when a taker exists.
Temporal postponement. On inbound lanes, a container ordered
“because it’s Tuesday” fails the economic test more often than
not. As days pass, volumes accumulate, co-loading improves, and
inbound congestion penalties evolve. Recompute daily; issue the
order on the day the best admissible batch clears the threshold
defined by the shadow cost of capital, the option value of waiting,
and the postponement premium. The cadence becomes an outcome
of economics, not of the calendar.
Price postponement. Prices shape demand; postponing mark-
downs or promotions until uncertainty has narrowed can dominate
a scripted price path. Functional forecasts (Chapter The Future)
make the pricing policy explicit and let the engine compare “mark-
down now” with “wait one more week” under the same money
ledger. The premium here is the risk of being left with aged stock if
the window closes; the benefit is extracting more from the winners
without teaching customers to wait.
Operationally, postponement is the art of unbundling commit-
ments into decisions that can be bound late. Break the monolith
(“all articles, quantities, destinations, dates decided nine months
ahead”) into staged micro-decisions that match the flow’s physics.
Commit early to low-information, high-lead-time inputs; defer the
high-information, low-lead-time specializations; keep stock pooled
until assignment pays; and contract for flexible volumes and late
specifications where the premium is tolerable. Managers often
“simplify” sequential problems by rigidifying them—freezing assort-
ments, banning mid-season reallocations, locking price paths—not
because physics requires it but because attended processes cannot
cope. With unattended systems of intelligence, the bottleneck
should sit in atoms and hours, not in the decision stack; late
binding then becomes not only feasible but routine.
Like any heuristic, postponement has limits of jurisdiction.
Where order is intrinsically consequential—vehicle routing, job-
shop sequencing, picker traversal—binding early is the problem,
and sequence must be optimized rather than deferred. Regulatory
pre-clearance, scarce changeover skills, or perishable goods can also
set hard fronts beyond which waiting no longer pays. The remedy
is not to abandon postponement but to push the late-binding
frontier upstream as far as economics and feasibility allow, and
to price the remainder explicitly in the objective: penalties for
lateness, aging, overtime, or missed windows.
In practice, the rule is simple and repeatable. At the cadence
dictated by reality, recompute the best admissible binding move
under the current window of responsibility. Bind only when its
expected, risk-adjusted aperiodic rate of return clears the sum of
the shadow cost of capital, the option value of waiting, and the
postponement premium. Otherwise, wait. Where late binding
cannot be engineered or where latency must be kept in the tens of
milliseconds, precomputed, parameterized rules are warranted—the
topic of the next subsection, policy search.
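A minimal sketch of that bind-or-wait test; the figures are hypothetical, and the hurdle components are assumed, for simplicity, to be expressed as rates against the committed capital.

    # Minimal sketch of the bind-or-wait rule; every number below is hypothetical.
    # The "aperiodic rate of return" is taken here as expected net payoff divided
    # by the capital the move would lock up, without annualization.

    def bind_or_wait(expected_payoff, capital_committed,
                     shadow_cost_of_capital, option_value_of_waiting,
                     postponement_premium):
        rate_of_return = expected_payoff / capital_committed
        hurdle = shadow_cost_of_capital + option_value_of_waiting + postponement_premium
        return "bind" if rate_of_return > hurdle else "wait"

    # Example: a candidate inbound order locking 10,000 coins of capital.
    print(bind_or_wait(expected_payoff=1500.0, capital_committed=10000.0,
                       shadow_cost_of_capital=0.06, option_value_of_waiting=0.05,
                       postponement_premium=0.03))   # -> "bind" (0.15 > 0.14)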
8.5.4 Policy search
The previous pages handled sequential decisions without reifying
a “policy”. A window of responsibility bounded attribution; the
economics of waiting priced inaction; permutation invariance justi-
fied greedy accumulation; late binding preserved options. In many
flows this is sufficient and superior: it keeps the grain marginal,
the prices explicit, the reasoning legible. Yet there are cases where
we must compress computation into a rule that can be applied at
millisecond cadence, or where the physics of the problem makes
order intrinsic rather than ceremonial. In those cases we do not
abandon the economic stance; we codify it into a policy and search
its space.
A policy, as defined earlier, is an algorithm that maps states
to admissible actions under stated prices and probabilistic fore-
casts. Turning that definition into practice serves a single purpose:
to shift heavy computation upstream—in an offline, stochastic
search—so that inference at decision time is fast and dependable.
Where latency is generous (minutes to hours), recomputing the best
marginal commitment with fresh forecasts is usually better than
carrying a policy. Where latency is tight (tens of milliseconds),
or where thousands of independent agents must act concurrently
under local information—picker fleets in an automated warehouse,
dynamic slotting inside a shuttle store, order-sensitive job sequences
with scarce changeover skills—a well-designed policy becomes the
right vehicle for the economics already developed in this chapter.
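As a sketch of what codifying the economics into a policy can mean in the simplest case, the following toy example tunes a one-parameter reorder rule offline against a small stochastic simulator and then applies it instantly online; the simulator, the costs, and the threshold family are all invented for illustration.

    import random

    def simulate_profit(threshold, days=200, seed=123):
        """Toy simulator: reorder 20 units whenever stock falls below the threshold."""
        rng = random.Random(seed)            # explicit seed: reproducible stochastic runs
        stock, profit = 10, 0.0
        for _ in range(days):
            demand = rng.randint(0, 5)
            sold = min(stock, demand)
            profit += 8.0 * sold - 12.0 * (demand - sold)   # margin minus shortage penalty
            stock -= sold
            profit -= 0.1 * stock                           # holding cost
            if stock < threshold:
                stock += 20                                  # instantaneous resupply (toy)
                profit -= 15.0                               # fixed ordering cost
        return profit

    # Offline, slow: search the (tiny) policy space under the simulator.
    best_threshold = max(range(0, 21), key=simulate_profit)

    # Online, fast: applying the policy is a single comparison.
    def policy(stock_on_hand, threshold=best_threshold):
        return 20 if stock_on_hand < threshold else 0

    print("tuned threshold:", best_threshold, "| order now:", policy(stock_on_hand=3))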
Beware the lure of “stationary proofs”. The literature is rich in
analytical policies that are “optimal” only in toy worlds—stationary
demand, thin tails, additive costs, no ratchets. Our world is rugged:
distributions shift, tails dominate, and commitments erase options.
Optimality in symbols should never outrank measured uplift under
a faithful simulator and, later, in dual-run production.
When, then, is policy search warranted? Whenever order truly
changes money and cannot be recomputed fast enough; whenever
computation must be pushed off the critical path without sacri-
ficing economics; whenever thousands of small agents must act
coherently from local views; and whenever a simple, parameterized
rule can carry the spirit of postponement, marginal valuation, and
priced constraints with orders of magnitude less latency. When
these conditions do not hold, the lighter instruments already intro-
duced—windowed attribution, the economics of waiting, permuta-
tion invariance, and late binding—remain preferable: they move
the bottleneck back to atoms, keep options open until binding
pays, and avoid encasing yesterday’s beliefs in rules that will age.
Properly placed, policy search completes the picture rather
than displacing it outright. It is how a firm converts deep, slow
computation into fast, repeated action while preserving the same
economic ground: decisions remain marginal wagers, priced in
coins, justified under probabilistic futures, and revised as reality
reveals itself. The next section turns from rules to inertia and
quantifies how burdensome a commitment really is—its load and
its half-life—so that, between two equally profitable moves, the
firm prefers the one that lets it change course sooner.
8.6 Changing course
In a free market, the only constant is change. Being able to
change course is a matter of survival for any company. Yet any
supply chain decision comes with a ratchet effect: once resources
are allocated, they are no longer immediately available for other
uses. Some decisions appear final, such as a manufacturing process
consuming raw materials, while others appear reversible, such as
moving stock between warehouses. In any case, from an economic
perspective, those decisions can be undone, though the required
time and money vary.
When considering decisions with equal projected rates of return,
a company should favor those with smaller underlying commit-
ments, in time or in money. Avoiding unnecessarily large commit-
ments is essential to keeping the company agile and, through agility,
to withstanding the market shocks that will inevitably come.
Under the teleological view—forecast, plan, comply—there is no
operative notion of agility at all. The future is treated as a schedule
to be enforced, and the instruments gravitate toward plan-policing
indicators: forecast accuracy, service-level percentages, utilization,
inventory turns. Such gauges say little about the firm’s ability
to change course; they reward adherence to a guessed trajectory
and remain blind to the time required for commitments to unwind.
Agility, however, is the capacity to shed or reassign commitments
quickly when new information arrives, at tolerable cost.
To discuss agility in economic terms, we must replace plan-
compliance surrogates with measures that price commitments
over time. The classical trio—lead time, working capital, and
inventory turnover—remains in view, but we recast them through
two operational quantities that make duration explicit: the load
a decision imposes (money × time) and the half-life of that load
(the time required to release half the resources committed by that
decision).
Lead time, in a broad sense, is the delay between a decision
and the renewed availability of a resource. Working capital is the
difference between current assets and current liabilities.[12] Inventory
turnover measures the number of times inventory is sold or used
during a year. Taken together, these indicators tell us how much
capital is tied up but largely ignore for how long; the forthcoming
notions of load and half-life address that blind spot.

[12] In supply chain settings, raw materials, work in progress, and finished goods held in stock frequently dominate short- and midterm variations in working capital. The higher the working capital, the more financing the company requires to operate.
All else equal, shorter lead time, lower working capital, and
higher inventory turnover increase the capacity to change course.
Indeed, these concepts characterize the company’s general inertia:
its incapacity to change course immediately, as it must first liqui-
date the mass of assets presently flowing through while preserving
their value.
However, these concepts are of limited use in understanding
the company’s inertia. This is not to say these concepts are
wrong, or even misguided; rather, the matter—the characterization
of the company's inertia—admits alternative concepts that are
operationally superior.
8.6.1 The load of a decision
The (financing) load of a flow decision quantifies the underlying
resource commitment as the product of a resource’s value and the
lifespan of the goods it creates, with units of money × time.
Consider a simple illustration. If the company buys a batch
worth 10 coins and resells it 5 days later, the load is L = 10 × 5 = 50.
Alternatively, if the company were to sell half the batch 5 days later
and the other half 20 days later, the load is L = (10/2) × 5 + (10/2) × 20 = 125.
The second situation yields a load more than twice as large as
the first because the company holds inventory for more than twice
as long.
Now suppose payment for the batch is deferred by 12 days.
The load of the first situation is effectively zero, as no financing is
required: the supplier is paid after the goods are sold. The load of
the second situation is L = (10/2) × (20 − 12) = 40.
Deferred payment can substantially reduce the financing pres-
sure that sustains the flow.
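A minimal sketch that reproduces the arithmetic above, treating the load as money × time netted of the payment deferral.

    # Sketch reproducing the illustration above: load as money x time, with the
    # payment deferral netting out the days during which no financing is needed.

    def load(tranches, payment_delay_days=0):
        """tranches: list of (value_in_coins, days_until_resale)."""
        return sum(value * max(days - payment_delay_days, 0) for value, days in tranches)

    print(load([(10, 5)]))                                 # 50
    print(load([(5, 5), (5, 20)]))                         # 125
    print(load([(10, 5)], payment_delay_days=12))          # 0
    print(load([(5, 5), (5, 20)], payment_delay_days=12))  # 40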
Load measures the working capital tied up over the entire life
of a decision, not just at a reporting date. Whereas ordinary
working-capital figures give a snapshot, load attaches a duration
to every coin committed. For a firm under financing pressure,
this time-weighted metric—read alongside expected return—is the
sharper guide for ranking options and steering scarce capital to its
best uses.
Many firms require managerial approval once a purchase order
passes a threshold. The rule checks size and potential liquidity
impact, yet it overlooks the goods’ lifecycle. A small order can
immobilize capital for months, while a large backorder may clear
overnight. Load captures this hidden, time-based burden directly.
Firms often plot inventory value against turnover to spot capital
traps, but load already fuses both into a single metric.
8.6.2 The half-life of a decision
The half-life of a decision is the elapsed time—typically measured in
days—until its load halves through sales, consumption, or service.
Consider an overseas order where delivery takes 10 weeks and
a full container lasts 20 weeks before half the stock is sold. Thus
the half-life is 7 × (10 + 20) = 210 days.
Borrowed from physics, half-life measures the time for a deci-
sion’s load to halve; it supplies a time dimension that lead time
alone obscures. Indeed, lead time understates the consequential
duration of a decision whenever batching is involved. A similar
critique applies to cycle time and inventory turnover. Naturally,
the “half” threshold is arbitrary; any quantile could be used instead.
Revisiting the example, the 75%-life of the purchase would be 280
days, assuming constant demand.
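A minimal sketch reproducing the container example under the stated constant-demand assumption (half the stock gone 20 weeks after arrival implies full depletion over 40 weeks).

    # Sketch reproducing the container example: 10 weeks of lead time, then constant
    # demand that consumes the container over 40 weeks; any quantile can be read off
    # the same schedule.

    def quantile_life_days(lead_time_weeks, weeks_to_deplete, quantile):
        weeks_selling = quantile * weeks_to_deplete   # constant-demand assumption
        return int(7 * (lead_time_weeks + weeks_selling))

    print(quantile_life_days(10, 40, 0.50))   # 210 days: the half-life
    print(quantile_life_days(10, 40, 0.75))   # 280 days: the 75%-life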
Half-life gauges how quickly a decision can be unwound; long
half-lives trap the company on its current course and erode agility.
Moreover, half-life resists manipulation better than lead time:
gaming an indicator—making the number look good without im-
proving reality—becomes easier when the metric ignores how long
capital remains locked. This often stems from misaligned incen-
tives that reward the indicator itself rather than the outcomes it is
meant to represent. Indeed, companies seeking agility often tackle
the problem by compressing lead times. However, a frequent pitfall
is reducing lead times at the expense of half-lives.
Chapter 9
Engineering
Modern supply chain practice operates with at least one degree
of indirection: people engineer decision-making processes, and
those processes emit myriad routine decisions that steer the flow.
The emitting artifacts are numerical recipes: programs that in-
gest authoritative records, carry explicit economics, and issue
commitments that can be enacted and audited.[1] In preceding
chapters—Information, Intelligence, and Decisions—we argued
that such engines must run unattended if the firm is to turn judg-
ment into a productive asset rather than an endless clerical burden.
This chapter turns to the engineering principles that let these
engines exist and compound.
The scope is narrow by design. We will not rehearse generic soft-
ware topics—UI (user interface) frameworks, authentication, CI/CD (continuous integration, continuous delivery) plumbing,
control planes, message buses, or the wiring to and from systems
of records. There is ample literature on such auxiliary sciences.
Our concern is the junction where economic reasoning meets code:
how one frames admissible options, carries uncertainty without
flattening, prices trade-offs in money, and preserves auditability so
a decision's lineage remains legible months later. The tone remains
deliberately non-technical—little algebra, only brief illustrative
sketches—so we can be clear about the foundational choices that
make a system of intelligence emit unattended decisions safely,
repeatedly, and profitably.

[1] A second degree of indirection is now emerging as large language models (LLMs) assist in writing and refactoring those recipes under the supervision of human experts. The responsibility does not move; only the typing accelerates.
Two clarifications anchor what follows. First, “numerical recipe”
denotes the whole apparatus that matters in practice: not only
predictors and optimizers, but also the instrumentation that re-
veals why a recommendation is sane, the halting conditions that
stop emissions when confidence falls, and the dossiers that let prac-
titioners understand, in money terms, what the engine believed at
the time. Second, while the application landscape of every firm
is idiosyncratic, the engineering stance advocated here is general:
prefer explicit frames over tacit conventions, probabilistic views
over point surrogates, priced penalties over hard edicts that cannot
be enforced, and reproducibility over heroics.
LLMs warrant a brief comment. Their role is increasing and
will become critical as they compress the cost of drafting, refac-
toring, testing, and documenting numerical recipes. Yet they are
not a substitute for technical proficiency. LLMs do not supply
economic framing, resolve data semantics, or enforce determinism
and auditability; they merely speed the hands that perform the
work. Treated as able scribes—instrumented, reviewed, and con-
strained—they shorten the iteration loop that experimental work
requires. Treated as oracles, they invite subtle, costly errors.
Finally, a call to action: competitive advantage no longer rests
on a thicker stack of dashboards—assuming it ever did—but on
owning a small, engineering-minded capability that writes the
firm’s economics into software and lets it run unattended. The
next section explains why this capability cannot be purchased as
a packaged “planning” layer and why, for systems of intelligence,
programmability is not a taste but a necessity for safety and profit.
9.1 Why program
Before proceeding, a scope note. In Chapter Information, we dis-
tinguished systems of records, systems of reports, and systems of
intelligence. What follows concerns the last class. Ledgers and
workflow engines—ERPs, WMSs, CRMs—are packaged success-
fully because they record discrete events and enforce rule-based
routes; entities are largely decoupled by construction. Configura-
tion can express the modest couplings that workflows introduce,
and COTS (Commercial off-the-shelf) suffices. What resists pack-
aging is the engine that composes those records into unattended,
economically priced decisions.
The emergence of enterprise software in the late 1970s extended
the postwar hope—nurtured by operations research—that the quan-
titative problems of supply chain would admit a closed, permanent
formulation. Vendors and academics aimed to crystallize those
resolutions into reusable frameworks so that decision-making could
be “configured” once and for all by non-technical users. This ambi-
tion, sensible for ledgers, was transplanted wholesale into systems
of intelligence.
Half a century on, this promise has not materialized for systems
of intelligence. COTS “planning” stacks take months or years to
install, mobilize armies of specialists, and still degenerate into
spreadsheet workarounds when they meet the grain of a living flow.
The failure is structural—a mismatch between generic templates
and business-specific couplings.
Closed analytical resolutions are brittle because they presup-
pose that what must be decided can be fully enumerated in ad-
vance. Reality refuses to comply. The only durable advantage
that spreadsheets enjoy—despite their many defects—is precisely
their programmability. They let practitioners encode idiosyncrasies
the moment those matter economically. Programmability is not a
convenience; it is the essential property of a system of intelligence.
Innocent-looking quantities expose the brittleness. Consider
bulk cable. A ledger that says “10,000 meters on hand” hides what
actually governs feasibility and value: composition and cut. Ten
coils of one kilometer and a thousand coils of ten meters both sum
to 10,000, yet they are opposite economic assets once customer
length profiles, waste on cutting, and residuals are priced. The
unit—meter—obscures the state that decisions must reason about.
Nothing about cable is exotic. Similar, perfectly ordinary quirks
pervade every vertical. In aviation, rotables and repairables exhibit
asymmetric compatibilities—parts that substitute one-way but not
the reverse once serialized revisions and life-limited components
are considered. In fashion, explicit and implicit substitutions shift
with the merchandising calendar and size-color ladders. In home
improvement, items are bound by project-level dependencies: a
faucet without the matching trap kit or connectors is dead stock.
In groceries, short shelf-lives meet aggressive markdown policies,
secondary channels, and supplier rebates that reprice inventory
within days. E-commerce couples SKUs at dispatch through pack-
aging, carrier rules, and delivery cut-offs. Each domain's “cables”
are commonplace inside the trade and fatal to generic templates.
Because supply chains are systems, such quirks do not remain
at the margins. They couple to the rest of the flow through shared
capacities, baskets, and customer expectations. Ring-fencing them
for manual treatment merely shifts cost and risk. If they could
be quarantined, a small, semi-autonomous process would do; they
cannot. A constraint unearthed in one corner—coil lengths, serial
compatibilities, size-color ladders—alters the feasible assortment,
reprices options, shifts inbound cadence, and reshapes order as-
sembly and the service promise.
Software vendors face a stark engineering law when they try to
absorb these couplings into a packaged product: interactions grow
faster than capabilities grow. Each new “core” capability that truly
interacts with the others multiplies test surfaces and edge cases; a
modest 25% expansion in such capabilities can easily double the
engineering burden. The cost curve is super-linear because the
product must cover many firms’ idiosyncrasies at once, whereas
each firm only needs its own.
In contrast, the crude programmatic instruments at hand—above
all spreadsheets—let practitioners focus on the few complications
that actually exist in their context. By narrowing scope to their
own economics, they sidestep the diseconomies that sink generic
resolutions. This, more than habit, explains why teams fall back
to spreadsheets after expensive “planning” deployments.
None of this indicts systems of records. CRUD applications that
store authoritative events and enforce rule-based workflows can be
packaged; their entities are largely decoupled by design and their
couplings are explicit, local, and auditable. The trouble begins
only when the same packaging posture is extended to a system
of intelligence, whose very purpose is to navigate firm-specific
couplings under uncertainty.
Packaged decision engines would be a comforting prospect for
non-technical managers; they would demote programming to a
second-class skill set. Unfortunately, the nature of systems of
intelligence rules this out. Short of general artificial intelligence, no
finite catalog of templates can anticipate the couplings that matter
to a given firm at a given time. The quest for configurability in
place of programmability is wishful thinking.
Thus, we are left with the prospect of leveraging ad hoc pro-
grammatic resolutions for supply chain. The task is less daunting
than it might look: spreadsheets are themselves programmable, and
this has not prevented them from enjoying considerable success
within supply chain circles. However, it raises the
question of the adequacy of the tools we have for this undertaking,
starting with the choice of programming paradigms. Thousands
of programming languages have been devised since the 1950s. It
is unreasonable to assume that they are all equally suitable for
tackling supply chain challenges. In particular, the popularity a
language or tool enjoys beyond supply chain circles has little to do
with its suitability for addressing supply chain challenges.
9.2 Experimental optimization
Supply chain challenges are at the very least opaque, and occa-
sionally wicked as well. An opaque challenge is characterized by
high epistemic uncertainty: we face known unknowns, and often
unknown unknowns. A wicked challenge is worse: the challenge
shifts as one attempts to address it—even when the actor is a third
party, typically a competitor. Wicked challenges do not abide by
any static formal resolution.
Opacity is an inherent property of the systems of records that
underlie the execution of the flow. As discussed in Chapter Infor-
mation, merely establishing correct semantics for all the relevant
data represents a vast and error-prone undertaking. In practice,
even a modest numerical recipe cherry-picks hundreds of fields
from among tens of thousands of fields spread over hundreds of
tables.
Wickedness arises whenever the adequacy of the response de-
pends on what competitors are doing or not doing. For example,
a premium-price, high-availability stance can thrive when most
rivals chase low prices and thin stocks; it may falter as soon as
they converge on the same premium, high-service posture.
Experimental optimization[4] is the cornerstone of numerical
engineering for supply chain. It addresses, head-on, both the opac-
ity—and the occasional wickedness—of supply chain challenges.
Experimental optimization leverages “real-world” feedback to it-
eratively improve the numerical recipe. First, it weeds out the
mundane coding mistakes of economic import. Those mistakes
are both varied and frequent, and cannot be avoided by merely
scrutinizing the code. Second, it paves the way for discovering
the adequate optimization criteria. Without experimental opti-
mization, attempts at deploying superior software technologies for
supply chain invariably fail, and practitioners end up reverting to
their spreadsheets.
[4] This methodology progressively emerged at Lokad in the early 2010s. Joannès Vermorel coined the term “experimental optimization” in 2021 to formalize the approach.
9.2.1 Cartesianism
Supply chain authors—and, by extension, most enterprise software
vendors—treat numerical recipes as a top-down, purely intellectual
undertaking. They begin by positing the situation—SKU counts,
network topology, constraints, metrics to be optimized, and simi-
lar particulars. From this baggage, the expert derives a solution.
The solution typically takes the form of a framework for compos-
ing the final numerical recipes. The framework is introduced to
accommodate variation and to be reusable across many situations.
This approach is known as the Cartesian perspective, named for
the 17th-century French philosopher René Descartes. Cartesianism
places heavy emphasis on pure reason and the power of the mind.
It holds that arriving at truth is, at bottom, a strictly intellectual
exercise. From a scant set of self-evident truths, careful, methodical
reasoning should reach deeper, more complex conclusions.
While reason is certainly preferable[5] to authorities, mysticism,
corrupt incentives, or drug-induced visions, Cartesianism is not
without defects—especially in supply chain. Indeed, nothing is self-
evident about the initial baggage that purportedly characterizes
a given supply chain situation. In practice, this baggage is a
haphazard, inordinate assortment of items of questionable validity.
Authors usually know that a purely intellectual framework
is an unsatisfying proposition. Accordingly, they implement the
framework and run numerous in silico experiments on it. These
take the form of benchmarks, back-tests, and other simulations.
These “experiments” are intended to demonstrate the framework’s
soundness and performance under plausible conditions.
Unfortunately, subjecting a framework to a “plausible” bat-
tery of tests proves little—certainly not its soundness in operating
a real-world supply chain. At best, such tests catch gross is-
sues—scalability woes, for instance—that would otherwise impair
operations. Yet these in silico validations are nowhere near the
proof we seek. We seek strong guarantees that deploying the
resulting numerical recipe will be net positive for the company.

[5] “Strange women lying in ponds distributing swords is no basis for a system of government.” comes to mind as a famous line from the British comedy film “Monty Python and the Holy Grail” (1975).
Such guarantees come not from added formalism but from
falsifiability in the Popperian sense (see Chapter Epistemology).
Each numerical recipe is a conjecture about how the firm should
commit scarce resources; its premises must be exposed to failure
in the world it intends to steer. Benchmarks and back-tests are
useful pre-flight checks—they probe internal consistency and scala-
bility—but they do not confront adversarial incentives, semantic
drift, and fat-tailed events that govern live flow. The appropriate
remedy for the Cartesian posture is, therefore, methodological, not
algebraic: engineer the environment so the recipe meets reality
early and often, and make refutation the main source of progress.
9.2.2 Mundane issues
When a supply chain practitioner implements a numerical recipe,
the first batch of generated decisions is all but guaranteed to be un-
satisfactory. Many figures the recipe produces will be “insane”—a
term we revisit shortly. The root causes of those insane decisions
are usually simple. Five classes commonly undermine numerical
recipes: mundane bugs, data semantics, constraints, economic
drivers, and strategy.
First, mundane bugs. A numerical recipe worthy of a real-
world supply chain spans thousands of lines of code—to say nothing
of the millions in generic dependencies. Seven decades of software
engineering have taught one lesson: the only code without bugs
is the code unwritten. Suitable paradigms may improve matters,
but complete elimination is nowhere in sight—especially when
combining software written by many hands, where issues emerge
at the boundaries.
Second, data semantics woes. This class has already been
discussed in Chapter Information. Fundamentally, the semantics
of any database column—say, “Order Date”—are in the eye of the
operator. If an operator takes “Order Date” to mean the date the
supplier acknowledged his willingness to serve the order, then that
is the semantics, whatever the system-of-records documentation
says. Given that a modern system of records has tens of thou-
sands of columns, many of them interpretable in multiple ways,
misunderstandings are unavoidable. Such misinterpretations can
severely distort the generated decisions.
Third, constraints are often either untold or invented. An
untold constraint is one not properly documented. For example,
Bob, who handles warehouse replenishments, knows that above
5,000 inbound units per day the team struggles and must resort
to expensive overtime to keep up. Bob learned this a year ago
while spending a few months on the warehousing team. Yet this
throughput limit was never documented, so a numerical recipe
ignores it entirely. Conversely, invented constraints arise whenever
the real constraint is deemed too convoluted for semi-manual
resolution. For example, price breaks offered by a supplier are
often replaced with an MOQ (minimum order quantity) reflecting
whatever per-unit price the purchasing team deems acceptable.
Yet the constraint is fabricated: it is possible to order below the
so-called MOQ, provided the company accepts a higher per-unit
price. A recipe that misses this nuance will often overstock by
forcibly reaching for the MOQ.
Fourth, economic drivers can be incorrectly assessed. As
seen in Chapter Economics, valuation is hard. Analyzing the flow
economically requires numerous heuristics for spending and rev-
enue attribution. Moreover, many long-term effects can only be
estimated crudely. For example, discounts are likely to erode the
customer’s willingness to pay, making him increasingly opportunis-
tic about waiting for the next promotion. Quantifying this effect
is challenging, as the relevant time spans are measured in years.
A recipe driven by incorrect economics generates decisions with
the optics of prudence that, over time, fail to deliver the expected
returns.
Fifth, the company’s strategy may look good on paper but
not in execution. For example, leadership may prioritize “ultimate
customer satisfaction”, only to learn that leniencies—liberal returns
or overnight shipments—are abused by bad actors. More generally,
strategy usually reflects, to a large degree, an intuitive grasp of the
company’s underlying economics. Thus, the understanding can be
directionally correct yet miss the fine print, which may demand
selective use of alternative strategies. More broadly, the nature
of markets—and their unrelenting change—guarantees that, with
time, any given strategy will be undermined by competitors.
The odds of obtaining suitable decisions from a newly drafted
numerical recipe are vanishingly small. Chasing “First-Time-
Right”—a perspective common in manufacturing—is wishful think-
ing in supply chain. Ambient complexity and opacity are too high
for this. It is, however, very much possible to improve this original
draft. It boils down to identifying—and then eliminating—the
“insane” decisions.
9.2.3 Insane decisions
A supply chain decision is “insane” when it will, most likely, gener-
ate unintended losses for the company. A freshly drafted numerical
recipe invariably generates numerous insane decisions. Beyond
immediate financial damage, insane decisions rightly undermine
practitioners’ trust in the automation itself. Left unaddressed,
their very presence in the outputs guarantees that practitioners
will ditch the recipe and revert to spreadsheets.
Note first that while identifying insane decisions is a problem of
general intelligence (cf. Chapter Intelligence), it is not a problem
of great intelligence. A careful manual review—one figure at a
time—by a practitioner suffices to catch most offenders. The
primary hurdle is not the practitioner’s capacity but the sheer time
required to perform such a review at scale.
The difficulty is akin to identifying implausible plot twists in
a novel. Guidelines may expedite the task, but there is no hope
of fully addressing this challenge within the confines of a narrow
formalization. Yet most readers can readily tell when a story
ventures beyond what can be considered “plausible”.
Counterintuitively, insane decisions are often the only way to
surface flaws in the numerical recipe. Confronted with an insane
decision, one must reverse-engineer the sequence of calculations
that led to it. In software, a defect almost invariably reflects a
mistaken belief. Something is believed true when, in fact, it isn’t.
By its very existence, the insane decision helps the practitioner
pinpoint which belief is incorrect.
9.2.4 Iterative falsification
Experimental optimization is a methodology that seeks to falsify
the numerical recipe as it presently stands. It recognizes that
supply chain decisions must be subjected to repeated trials against
reality, and that these trials play a fundamental role in exposing
hidden mistakes in even the most carefully conceived solution.
Rather than pretending the problem is fully pinned down in ad-
vance, experimental optimization seeks to uncover and fix problems
through an iterative, hands-on loop.
The process unfolds as follows:
1. Propose a numerical recipe—a software routine or script—that converts available data (forecasts, costs, constraints) into explicit supply chain decisions (purchase orders, production runs, allocations).

2. Run the recipe in an environment stable enough to generate real or near-real decisions, forcing it to face the raw complexity of a live dataset—with no pretense that “difficult” products or sites can be conveniently excluded.

3. Identify “insane” decisions; this task typically demands human scrutiny: managers or analysts inspect outliers, suspicious allocations, or jarringly large orders until they locate the root cause. The cause may lie in incorrect data semantics, an overlooked cost driver, or a newly emerged constraint.

4. Refine the instrumentation and revise the recipe: missing data or features (e.g., lead-time variance, new categories of supplier constraints) drive the next revision. The explicit cost structure may also be adjusted if the iteration reveals that the penalty for stockouts is unrealistically low or induces wasteful overstock.

5. Repeat until each loop ceases to generate new “insane” results; over time, glaring defects diminish and solutions become more robust and more aligned with the economic reality of the supply chain.
A distinctive strength of this iterative approach is that it
invites direct falsification. If the recipe’s proposed decisions look
problematic—even for a few SKUs or sites—some assumption
must be challenged and corrected. This contrasts with the purely
mathematical optimization view, where proofs of optimality offer
little recourse when the real world fails to match the original model.
Experimental optimization welcomes such mismatches: each defect
or “insane” recommendation is an opportunity to expose deeper
errors.
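A schematic sketch of the loop follows; the stub functions stand in for the real data pipeline, the numerical recipe, and the human review, none of which is prescribed here, and the figures are invented.

    # Schematic sketch only: the stubs below are placeholders for the real
    # data pipeline, the numerical recipe, and the human scrutiny of outputs.

    def extract_data():
        return {"demand_per_day": 4, "unit_cost": 10.0}

    def generate_decisions(recipe, data):
        # The "recipe" is reduced to a coverage target in days, to keep the sketch tiny.
        return {"order_qty": recipe["coverage_days"] * data["demand_per_day"]}

    def flag_insane_decisions(decisions, data):
        # A decision is flagged when it ties up an implausible amount of capital.
        issues = []
        if decisions["order_qty"] * data["unit_cost"] > 500.0:
            issues.append("order locks up too much capital")
        return issues

    def revise_recipe(recipe, issues):
        return {"coverage_days": recipe["coverage_days"] - 2}   # crude correction

    recipe = {"coverage_days": 30}
    for iteration in range(20):
        data = extract_data()
        decisions = generate_decisions(recipe, data)
        insane = flag_insane_decisions(decisions, data)
        if not insane:
            break                      # no glaring missteps left in this pass
        recipe = revise_recipe(recipe, insane)

    print(iteration, recipe, decisions)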
9.2.5 Instrumentation
Experimental optimization depends heavily on instrumentation.
The data pipeline must allow swift preparation and execution of
each new iteration. If each revision takes weeks to deploy, the
process cannot keep pace with its environment. Hence the need for
carefully versioned data, reproducible workflows, and the ability
to “time-travel” by inspecting past states and results.
One might expect metrics—inventory turns, fill rates, forecast
accuracy—to be the final arbiters of success or failure. These mea-
sures matter, but they lag events and can be warped by changing
conditions. Moreover, metrics can be gamed. Companies under
pressure to boost service levels, for example, may simply over-
stock across the board. A few months later, the inflated stock
triggers cost overruns. In response, new constraints are layered
atop the original system. This back-and-forth between metrics and
behaviors can devolve into a tangle of shifting rules that obscure
performance.
In experimental optimization, the clearest signal of progress
is the absence of glaring missteps. Each day the software’s pro-
posed decisions appear sane without manual fixes or last-minute
overrides. If the supply chain teams no longer need to “firefight”
for half their working hours, the business is already capturing
better performance: fewer disruptions, fewer frantic reshipments,
fewer odd workarounds on the warehouse floor. Over time, these
operational gains reflect deeper alignment of decisions with correct
cost structures and more reliable data semantics—improvements
that will eventually show up in standard metrics as well. Even if
top-line figures take time to adjust, the disappearance of “insane”
decisions is an unmistakable sign of success.
9.2.6 Continuous improvement
Unlike a closed-form approach that purports to “solve” the supply
chain once and for all, experimental optimization does not claim
convergence to a stable optimum. Supply chains evolve, constraints
change, the product mix shifts, new manufacturing plants open,
promotions or price wars erupt. Thus the loop never closes. A
well-tuned recipe running with minimal insanity today may re-
quire substantial updates tomorrow if a global pandemic, a new
competitor, or a shift in the firm’s strategic posture arises. Even
without external shocks, fresh insights—such as newly discovered
cost drivers—warrant further adjustments.
This perpetual flux is no defect. On the contrary, it highlights
that supply chain mastery is about orchestrating changes before
they cascade out of control. By structuring each change as a small,
verifiable experiment, the organization gains both resilience and
clarity of purpose. Instead of proclaiming a final formula, supply
chain specialists become guardians of a living process—one that
recognizes inevitable disruptions as fresh rounds of input.
Experimental optimization merges two ideas that are often kept
separate. First, it embraces quantitative ambition: a supply chain
is best guided by numbers, systematically handled and recombined
by algorithms. Second, it acknowledges falsifiability: the surest
way to confirm or refute a method is to subject its output to
real conditions. Whether a given plan “makes sense” is known
only once the plan confronts raw data on the shop floor, in the
warehouse, or at the retail shelf.
By insisting that every proposal runs the gauntlet of real-
ity, experimental optimization transforms errors into catalysts for
learning. The cyclical process—instrument, decide, spot insanity,
correct, and rerun—does not guarantee that all hidden constraints
and cost drivers will be discovered at once. Rather, it ensures
that the entire approach, however incomplete or provisional, stays
moored to the mission of supply chain: the best possible decisions
within irreducibly complex flows. Through repeated trials, the do-
main’s richness (and unruliness) becomes an engine for incremental
progress rather than an insurmountable barrier.
9.3 Whiteboxing
An opaque numerical recipe produces outputs—prices, reorder
quantities, replenishment plans—yet remains impenetrable to the
very practitioners who depend on them. In modern supply chains,
any nontrivial data pipeline tends to exhibit this “black box”
behavior, in which the numerical recipe involves layers of automa-
tion or obscure mathematical models. Even solutions based on
straightforward formulas—such as safety stocks derived from “nor-
mal demand” assumptions—become black boxes in practice once
they handle thousands of items, each with its own nuances and
constraints.
Whiteboxing counters this opacity. It recognizes that no ad-
vanced data pipeline is intrinsically transparent. Without deliber-
ate effort, the logic of a numerical recipe remains buried within the
code, forcing practitioners to choose between two unsatisfactory
options: blindly trust the system, or fall back on spreadsheets they
can fully control that lack the power or scalability to address the
actual complexity of the flow. Whiteboxing provides a third path,
preserving strong automation while revealing the “why” behind
each recommendation.
A core principle of whiteboxing is that clarity must serve the
people on the front line of supply chain decisions. Obfuscation can
take many forms—massive data tables, unverified constraints, or
advanced machine learning algorithms that only a handful of data
scientists can interpret. The response is not to simplify all calcula-
tions into toy formulas, which would squander the computational
capabilities needed to handle intricate flows. Rather, whiteboxing
takes the more challenging route of exposing each decision’s main
economic drivers. In addition to raw allocation figures, the recipe
should display a short list of maximally relevant economic indica-
tors. For an order recommendation, these indicators could include
current inventory, lead time, projected stockout risk (in monetary
terms), and the cost of overage if the proposed quantity proves
excessive. This handful of columns offers two essential benefits:
first, they serve as quick sanity checks for the practitioner; second,
they pinpoint where the decision might go awry if any assumption
or piece of data proves incorrect.
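A minimal sketch of such a white-boxed order line, with invented figures, showing the recommendation next to its money-denominated drivers.

    # Hypothetical sketch of a white-boxed order line: the recommendation is shown
    # next to a handful of money-denominated drivers for an at-a-glance sanity check.

    order_line = {
        "sku": "FAUCET-214",
        "recommended_qty": 120,
        "on_hand": 35,
        "lead_time_days": 18,
        "stockout_risk_cost": 640.0,   # expected margin lost if the order is skipped
        "overage_cost": 210.0,         # expected write-down if the quantity proves excessive
        "net_expected_gain": 430.0,    # the difference that justifies the order
    }

    print(" | ".join(order_line.keys()))
    print(" | ".join(str(v) for v in order_line.values()))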
Such transparency relies on bringing the data pipeline close
to the user. Practitioners must be able to add, remove, or ad-
just the explanatory metrics without launching a months-long IT
project. Just as experimental optimization requires rapid iteration
to expose and eliminate “insane” decisions, whiteboxing requires
rapid iteration in the dashboards themselves. If a decision looks
off, the team refines or adds an explanatory indicator, reruns the
pipeline with time-travel reproducibility, and studies whether the
newly revealed driver explains the decision or exposes a bug. This
approach means that every new indicator—be it a lost-sales penalty
or a container-fill cost—can be swiftly tested and adjusted. Over
time, these indicators cohere into a compact suite of cost and
risk measures that shed light on each recommendation without
inundating users with trivial metrics.
Crucially, whiteboxing guarantees neither universal consensus
nor perfect modeling of all constraints. Instead, it seeks to ensure
that decisions do not trigger abrupt disbelief or “panic overrides” by
practitioners who cannot see the logic behind them. Transparency
lowers the threshold of trust needed to adopt automation. When
supply chain teams can tie a proposed production run or reorder
quantity to a visible trade-off—say, the cost of a potential line
stoppage versus the holding cost of extra parts—they more readily
accept the numeric result, even if it contrasts with past rules
of thumb. Conversely, when an outcome defies intuitive checks,
whiteboxing directs attention to which driver may be missing or
mislabeled, guiding the next improvement.
Another vital dimension of whiteboxing is its capacity to reduce
organizational friction. In many companies, forecasts or replenish-
ment plans move from data science teams to supply chain managers,
then to finance or warehouse operations, each group harboring
its own metrics, data definitions, and constraints. This complex
handover often multiplies confusion: a manager seeing “buy 1,200
units” in one system might not know that a quantity-based dis-
count was factored in elsewhere. By revealing the principal costs
and constraints directly in the final decisions, whiteboxing bridges
these handoffs. Stakeholders can see, in monetary terms, how each
driver was weighed. They can correct or refine those values without
dismantling the entire pipeline.
Far from a one-off task, whiteboxing serves as an ongoing design
principle. As soon as a supply chain evolves—through new suppli-
ers, changed lead times, or adjusted strategic goals—so must the
transparency mechanisms. If not, hidden or outdated assumptions
resurface, creating new black-box zones. Sustaining whiteboxing
demands regular collaboration between domain experts and the
teams responsible for engineering the numerical recipes, ensuring
that whenever an aspect of the flow changes, the explanatory
metrics change as well.
Whiteboxing thus stands at the intersection of reliability and
credibility. It addresses the practical realities of modern supply
chains, where advanced numerical methods are indispensable, yet so
is the trust of the people who put those methods to work. Without
whiteboxing, the best-engineered system remains unusable; with
it, even sophisticated processes become comprehensible enough for
practitioners to adopt, refine, and operate day after day.
9.4 Fermi-ization of the elusive
No matter how one approaches the numerical recipe, supply chain
teams inevitably encounter elusive parameters. These are quanti-
ties or forces that do not readily appear in any system of record and
thus elude direct measurement. They can be strategic—for exam-
ple, the expected release date of a promising, still-in-development
technology on which a product roadmap implicitly depends. Or
they can be operational—such as the probability that a large but
financially stressed supplier will file for bankruptcy within the next
few quarters. When confronted with such factors, the traditional
approach is to bypass the problem. Practitioners might leave
these parameters out of the automation altogether and revert to a
semi-manual process guided by a mixture of opinion, guesswork,
and cautionary hedges. That workaround dooms the numerical
recipe to dependence on continuous human intervention, which
undermines any prospect of genuine automation.
A better approach treats these “unmeasurables” as prime tar-
gets for a Fermi-style analysis, akin to what Philip E. Tetlock and
Dan Gardner describe in “Superforecasting: The Art and Science
of Prediction” (2015). The key insight, popularized by the physi-
cist Enrico Fermi, is that a seemingly impossible quantity can be
broken down into a series of more tractable estimates that, when
multiplied or added, produce a credible estimate of the original
unknown. Fermi himself famously asked students to estimate how
many piano tuners worked in Chicago—a figure most would deem
unknowable absent an official listing. Yet by dividing the problem
into small, separate pieces (such as the approximate population of
the city, the fraction of households with pianos, and how often a
piano gets tuned), the final estimate can land surprisingly close to
the truth.
In supply-chain practice, a more telling illustration is the prob-
ability that import tariffs on goods from Ruritania will double
over the next five years. This quantity appears in no ledger, yet
it directly governs buying cadence, sourcing mix, and pricing. A
Fermi decomposition renders the estimate tractable. Start with an
outside view: tally, over the last century of Westonia–Ruritania
relations, how often tariffs of comparable magnitude were raised or
lowered, and over what horizons. This historical base rate anchors
the prior firmly. Then incorporate the inside view: read the present
political discourse in Westonia; judge whether the party in power
argues for higher or lower tariffs and whether the topic is salient or
peripheral to its platform, and adjust the prior accordingly. Next,
weigh plausible election outcomes over the relevant horizon—say,
two years ahead—using polling or market odds. Conditional on
each outcome, assess the likelihood that the winning party would
pursue a doubling. Finally, consider retaliation pathways: a tariff
initiated by Ruritania may trigger reciprocal measures by West-
onia. Mirror the same reasoning from the Ruritanian side—its
domestic politics, its historical propensity to use tariffs, and the
salience of trade tensions—to price the chance that Westonia’s
move is reactive rather than proactive. The end product is not a
point estimate but a calibrated probability that can be revised as
information accrues.
Once this probability exists, it can be translated into money-
valued decisions: expected duty uplift on affected SKUs; the option
value of accelerating or deferring imports ahead of cutoffs; the
premium justified for alternative origins with lower exposure; and
the merit of postponement (late binding) to keep tariff-sensitive
transformations offshore. The estimate can be rough; what matters
is that it is explicit, auditable, and placed on the same footing as
purchase and holding costs, enabling unattended software to weigh
it alongside other drivers.
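A minimal sketch of the decomposition and its translation into money; every probability, weight, and amount below is an invented placeholder for one of the sub-estimates described above.

    # Hypothetical sketch of the Ruritania tariff decomposition: each figure stands
    # in for one sub-estimate (base rate, inside view, election scenarios, retaliation).

    base_rate = 0.10          # outside view: historical frequency of comparable hikes
    inside_view_adjust = 1.5  # current political salience pushes the prior up
    election_scenarios = [
        (0.55, 0.30),   # (probability incumbent coalition wins, P(doubling | win))
        (0.45, 0.10),   # (probability opposition wins, P(doubling | win))
    ]
    retaliation_boost = 0.03  # extra probability mass from tit-for-tat dynamics

    p_political = sum(p_win * p_double for p_win, p_double in election_scenarios)
    # Equal weighting of the outside and inside views is itself an assumption.
    p_doubling = min(base_rate * inside_view_adjust * 0.5 + p_political * 0.5
                     + retaliation_boost, 1.0)

    # Translation into money: expected duty uplift on the affected SKUs.
    annual_import_value = 2_000_000.0   # coins of affected imports per year
    current_duty_rate = 0.08
    expected_extra_duty = p_doubling * current_duty_rate * annual_import_value

    print(f"P(tariff doubles within 5 years) ~ {p_doubling:.2f}")
    print(f"Expected extra duty per year     ~ {expected_extra_duty:,.0f} coins")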
Crucially, this decomposition of the elusive parameter should
not stop once the initial guess has been coded. Under experimental
optimization, these newly introduced parameters remain subject to
the same falsification loops as everything else. The fact that “tariff-
doubling probability” was not in the system of records does not
exempt it from scrutiny; on the contrary, its novelty calls for vigilant
watchfulness. If an insane decision emerges—for instance, one that
systematically rushes oversized inbound orders and starves working
capital to preempt a shock that never materializes—this signals that
the probability (or its economic translation) is inflated. Conversely,
if duties do jump and the numerical recipe shows no sign of prior
hedging—no pulled-forward shipments, no origin diversification, no
postponement—then the probability was understated. Refinement
is iterative: each pass, guided by real outcomes or near misses,
tightens the logic until no obviously insane decisions arise.
In practice, the Fermi approach also becomes a powerful guardrail
against the cognitive biases that attend any guess. These guesses
are not conjured from a vague impression but anchored in a chain
of smaller, more tangible estimates. The process fosters “active
open-mindedness”, a trait documented among Tetlock’s superfore-
casters, who methodically break down a problem and then weigh
outside perspectives to calibrate each piece. The entire supply
chain team can see—directly in the code or in a white-boxed dash-
board—how this tariff-doubling probability was computed, which
fosters alignment and invites collaborative fine-tuning rather than
unilateral overrides. If an import manager judges that the election
weights, or the salience of trade policy in a manifesto, is off, the
team can revise that single sub-estimate rather than tear apart
the method.
Although it is laborious, Fermi-ization is a straightforward
extension of the techniques already needed to keep a numerical
recipe fully automated. Once supply chain practitioners accept
that no parameter is too intangible to be broken down—even if it
must be done with rough heuristics—the path to including those
elusive elements in the system becomes clear. Just as with “normal”
parameters, the guess is best introduced under rigorous instru-
mentation, so that each production run can confirm or challenge
the initial logic. Supply chain mastery thrives on iteration rather
than one-off determinations. When confronted with the intangible,
the biggest mistake is not misestimating initially; it is refusing to
produce any estimate at all, thus dooming the recipe to partial
manual operation.
By fully embracing the Fermi principle, supply chain practition-
ers can tackle exactly those domains previously deemed “beyond
measurement”.[6] Instead of bowing to intangible elements that
incapacitate automation, they transform them into a set of smaller,
more tractable pieces that can be tested and refined. As with all
parts of a living numerical recipe, each approximation is linked to
the real-world results it produces, allowing supply chain to integrate
what it learns over time. The alternative—silently discarding such
elusive parameters as hopeless—ensures that automation remains
little more than an aspiration. Fermi-ization offers a resolutely
practical path to making even the most elusive parameters explicit,
ensuring that all critical drivers of the flow, intangible or not, can
be folded into a holistic automated system.

[6] Recent progress indicates that much of the clerical work behind Fermi-style decompositions can be delegated to large language models operating in “deep research” mode: the model proposes factorizations, retrieves order-of-magnitude anchors with citations, and drafts calibrated ranges for human vetting. Benchmarks centered on decision-oriented forecasting—e.g., ForecastBench (2024), a continuously updated evaluation of models on probability-bearing questions—report competitive performance by modern models; in parallel, methodological studies show that careful prompting materially improves calibration and prediction intervals. Economics remains the binding constraint: producing millions of such Fermi-ized ranges per month is not yet sensible for most firms, yet generating a few thousand well-documented ranges monthly is already operationally straightforward with today's APIs and agentic tooling.

9.5 On programmability

In previous chapters, we argued that modern supply chains must be
steered by unattended software that issues auditable, economically
priced commitments. Once we accept a programmatic resolution to
supply-chain problems, a question that is mundane in appearance
but foundational in fact arises: in which language should these
decisions be expressed? This is no tooling whim. A language fixes
which ideas can be made explicit, which options can be enumerated
without contortion, how uncertainty is priced, and how resulting
commitments are audited years later.

The field has mostly dodged this question since the end of the
4GL episode.[7] The vacuum has been filled by spreadsheets, rule
engines glued to ERPs, and scripting in general-purpose languages.
Each of these instruments has its place, but none is adequate to
serve as the native medium for unattended decision-making at
supply-chain scale.
Systems of records speak SQL because they must preserve
facts, not invent them. SQL excels at stating what is inside a
ledger—rows and their relations—not what should be done next
in the face of variability and trade-offs. Rule engines mounted
on top of transactional systems try to bridge the gap with a drip
of if-then clauses; they age into brittle accretions that no one
fully understands, least of all when incentives change. Scripting
languages—today most often Python, yesterday VBA (Visual Basic for
Applications), earlier still proprietary macros—are powerful
prostheses for analysts, but
their very freedom undermines the properties we need most in
production: determinism, reproducibility, and strict control of
side effects. They produce artisanal tooling; unattended decision
engines must be industrial.
A suitable language for supply-chain decision automation must
align with flow economics and the physics of computers. Its ob-
jects should be the ones practitioners actually manipulate: typed
tables keyed by SKUs, locations, and dates; arrays over those
tables; and—crucially—random variables that carry demand, lead
time, and return distributions. Uncertainty must be first-class,
together with an algebra that converts distributions into currency-
denominated payoffs so that the expected, risk-adjusted return
of any move can be written as a short formula rather than an
after-the-fact simulation. Prices must be explicit: currency con-
versions and rounding as built-in functions. Time must also be
first-class: calendars and calendar algebra. When these primitives
exist, the engine expresses the decision directly; when they are
missing, practitioners smuggle them in piecemeal and the codebase
degenerates into workarounds.

[7] “Fourth-generation languages” promised, from the late 1970s onward, to let analysts describe business logic declaratively—report writers and data-manipulation languages tethered to a database. The label faded by the early 2000s, but the underlying need did not.
Because unattended engines live or die by auditability, execution must be deterministic. The same inputs must yield the same outputs—down to the last cent—even when computation is parallelized. Randomness, when needed for stochastic simulation, must be explicit and reproducible. Side effects must be quarantined at the boundary: read authoritative records, compute decisions in a pure core, then write the resulting commitments back to the ledger. This separation of concerns is not academic tidiness; it is the only way to make ex post audits and dual-run comparisons meaningful.
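A minimal sketch of that quarantine, assuming read_records and write_commitments stand in for whatever connectors the firm actually operates:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Snapshot:
    """Immutable view of the authoritative records for one run."""
    stock: dict        # sku -> units on hand
    open_orders: dict  # sku -> units already on order

def decide(snapshot: Snapshot, target: dict) -> dict:
    # Pure core: no I/O, no clock, no global state. Same snapshot in,
    # same decisions out, which is what makes dual runs comparable.
    return {sku: max(target[sku] - snapshot.stock.get(sku, 0)
                     - snapshot.open_orders.get(sku, 0), 0)
            for sku in target}

def run_once(read_records, write_commitments, target):
    # Side effects live only at the boundary.
    snapshot = read_records()             # impure: read the ledgers
    decisions = decide(snapshot, target)  # pure: auditable, replayable
    write_commitments(decisions)          # impure: write back to the ledgers

target = {"A-1": 10, "B-2": 4}
run_once(lambda: Snapshot(stock={"A-1": 6}, open_orders={"B-2": 1}),
         print, target)  # prints {'A-1': 4, 'B-2': 3}
```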
Decision logic over millions of SKUs, sites, and dates cannot
be written as a procession of row-by-row loops without buckling
under its own weight. The language therefore needs array or table
semantics—vectorized operations over large, typed collections—so
most programs read as algebra over sets rather than choreography
over iterators. Such designs encourage mechanical sympathy: they
map well to caches and parallel units and make large problems
cheap enough to solve often.
The question “declarative or imperative?” is ill-posed in the
abstract. We need an imperative shell to stage the flow—ingest,
compute, emit—and a mostly declarative core to express valuations
and constraints as formulas over tables and distributions. In
practice, this means feasibility tests, penalties, and prices are
written as expressions the engine can analyze, reorder, and partially
recompute. The resulting transparency enables halting heuristics:
when confidence drops, the engine stops on its own terms instead
of pestering humans with exceptions it cannot resolve.
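The sketch below shows the shape of such a shell. The names evaluate_payoff and interval_width stand for declarative valuation expressions, and the halting rule is a deliberately crude stand-in for a real confidence heuristic; all of it is illustrative.

```python
def stage(lines, evaluate_payoff, interval_width, tolerance=0.25):
    """Imperative shell: ingest, compute, emit, and halt on low confidence.
    evaluate_payoff and interval_width represent the declarative valuation
    expressions the engine can analyze, reorder, and partially recompute."""
    decisions, too_uncertain = [], 0
    for line in lines:
        payoff = evaluate_payoff(line)
        if interval_width(line) > abs(payoff):
            too_uncertain += 1  # uncertainty swamps the expected payoff
            continue
        if payoff > 0:
            decisions.append(line)
    if too_uncertain / max(len(lines), 1) > tolerance:
        # Halting heuristic: stop on the engine's own terms instead of
        # emitting commitments it cannot defend.
        return None
    return decisions

decisions = stage(lines=[{"sku": "A-1"}, {"sku": "B-2"}],
                  evaluate_payoff=lambda line: 10.0,
                  interval_width=lambda line: 3.0)
print(decisions)
```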
None of the above is about syntactic taste. Braces versus inden-
tation is trivial. What matters is semantics: first-class uncertainty,
money and time as citizens rather than afterthoughts, side-effect
isolation, determinism, vectorized evaluation, and a runtime that
produces dossiers fit for audit. General-purpose languages can
be bent toward these aims via conventions and discipline; expe-
rience shows that at scale, and over time, conventions decay. A
supply-chain language worthy of the name bakes these properties into its very grammar.
The remainder of this section operationalizes the selection cri-
teria. We examine how language semantics bear on unattended
execution (determinism and reproducibility), on economics (native
money, prices, and penalties), on uncertainty (probabilistic prim-
itives and stochastic simulation), and on maintainability (small
cores, legible programs, diff-friendly artifacts). The goal is simple:
choose a language that lets the firm write its economics down once,
run it at machine speed, and defend every commitment ex post in
coins.
9.5.1 Programming paradigms
A programming paradigm is the framework that shapes how soft-
ware is conceptualized, structured, and executed. It fixes the
principles for organizing data, defining operations, and determin-
ing control flow, guiding both design and execution.
Developing suitable programming paradigms for supply chain
is arguably the single most important technical contribution to
the field. Indeed, a suitable programming paradigm facilitates the
resolution of most—if not all—supply-chain challenges, irrespec-
tive of their specific complications. In this regard, programming
paradigms are wholly dissimilar to “models” or any other kind
of closed analytical solutions, which are, by design, brittle when
confronted with unplanned complications. While many techniques
are of broad interest—stochastic gradient descent would be a
perfect example—programming paradigms are unrivaled in this
regard. The short list below presents some of the most relevant
programming paradigms for supply chain.
Vertical integration names an environment that provides
the entire stack—from data storage, through processing and logic,
to presentation—inside a single, coherent medium. The canonical
exemplar is the spreadsheet: a monolithic application where the
grid is at once the datastore, the formula engine, and the user
interface, with live recalculation binding the three. In such an
environment there is no impedance mismatch between “where
the facts live”, “how they are transformed”, and “how they are
inspected”; one artifact carries them all.
Vertical integration matters because it collapses handoffs. It
lets a small number of practitioners who are not software specialists
author, run, and audit numerical recipes end-to-end. The same
person can ingest records, express the decision logic, and visualize
the resulting commitments without waiting for another team to
expose an API or wire a dashboard. Absent vertical integration,
composing the recipe becomes a technical project that cannot be
left to anyone but software experts. The effort metastasizes into
the usual procession—supply-chain experts, consultants, project
managers, database administrators, software engineers, data an-
alysts—whose coordination creates an interface-design problem
larger than the original economic problem. Spreadsheets demon-
strate both the attraction and the limits of this pattern: they make
vertical integration immediately tangible, yet they also reveal why,
beyond a trivial scale, a different substrate is required (see the
section “Spreadsheets” below).
Time-travel reproducibility means that the environment
preserves the exact code and data as they existed at any point in
history. Users can step back in time and rerun the computations
as they were. This type of end-to-end versioning (code + data)
ensures that results are not only reproducible in principle but also
automatically verifiable at any historical point.
Indeed, in supply-chain decision-making, even small glitches can prove extremely costly if bad decisions are generated. (In computer programming jargon, a heisenbug is a software bug that seems to disappear or alter its behavior when one attempts to study it; the presence of heisenbugs usually reflects the lack of time-travel reproducibility.) Too often the problematic behavior cannot be reproduced because, by the time the issue is identified, the input data has changed. As a result, glitches remain unaddressed far longer than desirable. Time-travel reproducibility is critical for eliminating those defects before supply chain teams lose faith in the viability of the numerical recipe and revert to their spreadsheets.
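A toy rendition of the idea, assuming a flat file store and a recipe passed around as source text; a production system would version far more than this, but the principle is the same: archive code and data together, keyed by content.

```python
import hashlib, json, pathlib

STORE = pathlib.Path("runs")  # append-only archive of (code + data) versions

def snapshot(code: str, data: dict) -> str:
    # Archive the exact code and data of a run; the content hash is the
    # run's identifier. A sketch, not a production store.
    payload = json.dumps({"code": code, "data": data}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    STORE.mkdir(exist_ok=True)
    (STORE / f"{digest}.json").write_text(payload)
    return digest

def replay(digest: str):
    # Step back in time: rerun the computation exactly as it was.
    payload = json.loads((STORE / f"{digest}.json").read_text())
    env = {}
    exec(payload["code"], env)              # the archived recipe
    return env["recipe"](payload["data"])   # applied to the archived data

run_id = snapshot("def recipe(d):\n    return sum(d['demand'])",
                  {"demand": [3, 5, 2]})
assert replay(run_id) == 10  # reproducible at any later date
```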
Relational algebra operates over tabular data—typically stored across multiple tables—through a concise set of composable operations such as selection, projection, join, and aggregation. These operations, formalized in the 1970s (Edgar F. Codd introduced the relational model of data in 1970; his landmark paper “A Relational Model of Data for Large Shared Data Banks” remains the foundational reference for relational databases), gave rise to the SQL (Structured Query Language) family of technologies that remains ubiquitous in modern enterprise systems. The chief virtue of relational algebra is its declarative nature: rather than spelling out how to perform operations step by step, developers and analysts specify what results they want, and let the execution engine handle the mechanics.
Relational algebra is paramount for supply chain because the
overwhelming majority of the operational data—sales transactions,
purchase orders, stock levels, transportation events—resides in
relational databases. The relational model offers a systematic
way to link these pieces, ensuring that joins on common keys
yield a coherent whole. Without relational algebra to elegantly
unify these disparate sources, practitioners fall back on a tangle of
ad hoc workarounds—typically unwieldy macros in spreadsheets—
which are not only error-prone but also computationally slow. In
practice, a relational algebra is indispensable for any numerical
recipe supporting a modern supply chain, owing to supply chain’s
inherent complexity.
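For illustration, the snippet below runs a representative join-and-aggregate through SQLite from Python; the tables and figures are invented.

```python
import sqlite3

# Toy ledger: sales transactions and a product reference table.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE products (sku TEXT PRIMARY KEY, category TEXT);
    CREATE TABLE sales (sku TEXT, qty INTEGER, sold_on TEXT);
    INSERT INTO products VALUES ('A-1', 'cables'), ('B-2', 'switches');
    INSERT INTO sales VALUES ('A-1', 3, '2025-01-10'),
                             ('A-1', 5, '2025-01-11'),
                             ('B-2', 2, '2025-01-10');
""")

# Declarative: selection, join, and aggregation in one statement; the
# engine, not the analyst, decides how to execute it.
rows = con.execute("""
    SELECT p.category, SUM(s.qty) AS units
    FROM sales AS s
    JOIN products AS p ON p.sku = s.sku
    GROUP BY p.category
    ORDER BY p.category
""").fetchall()
print(rows)  # [('cables', 8), ('switches', 2)]
```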
Array programming is a paradigm in which operations apply uniformly to entire collections of data—known as arrays—rather than requiring developers to write loops over individual elements. (Kenneth E. Iverson is credited with developing array-based approaches to programming, beginning with APL in the 1960s.) By treating data in aggregate, array languages and libraries—APL, MATLAB, NumPy, etc.—enable concise expressions that mirror the mathematical formulation of the problem. Instead of writing error-prone loops, the practitioner specifies higher-level transformations—such as arithmetic, logical, or statistical operations—that automatically broadcast over each element of the array. By eliminating the need for explicit indexing, array programming significantly reduces off-by-one errors and other subtle defects. Moreover, because the underlying execution engines can optimize bulk operations, array code frequently outperforms equivalent loop-based routines, particularly on large datasets.
From a supply chain perspective, array programming is critical
because nearly all relevant decisions must be computed for thou-
sands (if not millions) of SKUs, store locations, or time steps at
once. Iterative or loop-based approaches make this process slower
to write, harder to debug, and more prone to silent mistakes,
especially when coping with edge cases or intricate constraints.
In contrast, array programming’s vectorized style yields compact,
expressive code that is not only easier to review but also sim-
pler to extend as new product lines, locations, or supply chain
complexities emerge. This heightened clarity reduces the risk of
subtle logic bugs going undetected, while the performance gains of
vectorized operations enable faster iteration. In practice, these two
attributes—correctness and agility—greatly improve a company’s
ability to prototype, refine, and confidently deploy its decision-
making processes.
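A small NumPy illustration of the style: one vectorized expression computes a reorder quantity for a million SKUs at once. The inventory figures and the 14-day cover rule are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n_skus = 1_000_000  # decisions computed for every SKU at once

on_hand    = rng.integers(0, 50, size=n_skus)
on_order   = rng.integers(0, 20, size=n_skus)
daily_avg  = rng.uniform(0.1, 5.0, size=n_skus)
cover_days = 14

# One vectorized expression replaces a million-iteration loop: the
# reorder quantity for every SKU, with no explicit indexing anywhere.
reorder_qty = np.maximum(
    np.ceil(daily_avg * cover_days) - on_hand - on_order, 0
).astype(int)

print(reorder_qty[:5])
```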
Distributed dataflow is a programming paradigm that automatically distributes computations across multiple machines, effectively partitioning and parallelizing workloads to accommodate vast volumes of data. Rather than forcing developers to orchestrate concurrency on a step-by-step basis, distributed dataflow systems typically represent data transformations as Directed Acyclic Graphs (DAGs), which can then be executed in parallel by a cluster of machines. (Early industrial uses of distributed data processing can be traced back to mainframe clustering in the 1980s, but frameworks like MapReduce (2004) and Apache Spark (2014) popularized more accessible paradigms for large-scale dataflow computations.) This approach promotes resilience to partial failures—faulty nodes can be bypassed or replaced without halting the entire job—and naturally leverages modern cloud infrastructures. By abstracting away most low-level details of network communication
and process synchronization, distributed dataflow enables practi-
tioners to focus on the logical flow of their calculations instead of
wrestling with concurrency primitives.
For supply chain, the significance of distributed dataflow is
twofold. First, the sheer scale of many operations—spanning
SKUs, warehouses, and transportation routes—requires ingesting
and transforming massive datasets. A centralized or sequential
system would be prohibitively slow, leaving decision-makers with
stale outputs or forcing them to limit the scope of their analyses.
Second, it facilitates pooling of computing resources that are only
intermittently needed. For example, if a forecast need only be
refreshed once per day, the corresponding computing resources can
be acquired and released on demand, vastly lowering infrastruc-
ture costs. More generally, distributed dataflows are critical to
maintaining stable performance under highly variable workloads.
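As an illustration, here is a sketch using PySpark, one dataflow engine among several; the data, columns, and pipeline are invented.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-refresh-sketch").getOrCreate()

# Toy stand-ins for billions of rows read from a data lake.
sales = spark.createDataFrame(
    [("A-1", "Paris", "2025-01-10", 3),
     ("A-1", "Paris", "2025-01-10", 5),
     ("B-2", "Lyon",  "2025-01-10", 2)],
    ["sku", "location", "date", "qty"])

# Each transformation extends a DAG; nothing executes until an action
# (here, show) forces the cluster to run the graph in parallel. Failed
# partitions are retried on other nodes without restarting the job.
daily_demand = (sales
    .groupBy("sku", "location", "date")
    .agg(F.sum("qty").alias("units")))

daily_demand.show()
spark.stop()  # release the cluster: compute is pooled, not owned
```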
Differential execution is a programming paradigm in which
the system automatically tracks dependencies between calculations
and re-executes only those portions of the workflow affected by
a given change. Rather than re-running the entire pipeline from
scratch, the engine inspects a graph of computation—representing
data transformations as nodes and edges—and evaluates a “diff”
between the current graph and the one previously executed. Inter-
mediate outputs that remain valid are preserved, while only the
new or modified nodes are recalculated. Additionally, the system
can intelligently cache expensive or time-consuming intermediate
results that yield relatively small outputs. By doing so, it acceler-
ates subsequent runs that involve the same partial computations.
This paradigm stands apart from both standard batch processing
and purely incremental approaches, as it combines the breadth of
a full dataflow engine with the selectivity of fine-grained updates.
From a supply chain perspective, differential execution is critical
because numerical recipes are inherently iterative. Practitioners
typically edit a few lines and then rerun the recipe—sanity-checking
the results computed so far. Given that the final recipe often spans
thousands of lines, its development involves hundreds of small
iterations. Without differential execution, each small tweak to
the script would require reprocessing all datasets in full. This is
prohibitively expensive and discourages rapid iteration, especially
when dealing with gigabyte-sized datasets flowing in from enterprise
systems. By zeroing in on the minimal set of computations that
need refreshing, differential execution preserves resources, reduces
turnaround time, and fosters fast feedback loops.
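A miniature rendition of the mechanism: hash the inputs of each node and cache its output, so that only new or modified nodes are recomputed. The sketch omits persistence, graph pruning, and eviction, which any real engine would need.

```python
import hashlib, pickle

_cache = {}  # (step name, input digest) -> intermediate output

def _digest(obj) -> str:
    return hashlib.sha256(pickle.dumps(obj)).hexdigest()

def step(name, fn, *inputs):
    """Recompute fn only if its inputs changed since the last run;
    otherwise reuse the cached intermediate output."""
    key = (name, _digest(inputs))
    if key not in _cache:
        _cache[key] = fn(*inputs)  # only new or modified nodes run
    return _cache[key]

# A two-node graph: editing `margin` below re-executes only `payoffs`;
# the (potentially expensive) `history` node is served from cache.
sales = [("A-1", 3), ("A-1", 5), ("B-2", 2)]

def aggregate(rows):
    totals = {}
    for sku, qty in rows:
        totals[sku] = totals.get(sku, 0) + qty
    return totals

history = step("history", aggregate, sales)
margin = 12.0
payoffs = step("payoffs", lambda h, m: {s: q * m for s, q in h.items()},
               history, margin)
print(payoffs)
```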
Algebra of random variables is a programming paradigm
where uncertain quantities are modeled and manipulated as random
variables—not merely as numerical scalars or point estimates. In
this environment, addition, multiplication, and other arithmetic or
logical operations can be applied directly to random variables, yield-
ing new random variables. Rather than bundling all uncertainty
into error-prone ad hoc simulations or unstructured data tables,
practitioners can define each uncertain factor—demand, lead time,
buy prices, production yields, or returns—as its own distribution.
When these factors interact, the language seamlessly combines their
distributions through a coherent set of probabilistic operations.
The resulting code closely mirrors the underlying stochastic phe-
nomena, bypassing complex looping or separate Monte Carlo steps.
By treating random variables as first-class citizens, this paradigm
enables precise and transparent transformations of uncertainty
throughout the entire decision-making pipeline.
This algebraic treatment of uncertainty proves vital for supply
chain, given the pervasive role of probabilistic forecasts in risk
analysis. In real-world operations, it is never sufficient to track
a single best guess: demand often spikes without warning, parts
arrive late for reasons no one fully understands, and exchange rates
fluctuate in ways that mock simple predictions. Without an explicit
“algebra” to combine these distributions, simplistic point-based
analytics are almost invariably used, which then silently discard
information as soon as averages or worst-case assumptions are taken.
By contrast, an algebra of random variables preserves the entire
probability distribution from upstream (e.g., supplier lead time) to
downstream (e.g., inventory placement), ensuring that uncertainties
remain visible, consistent, and mathematically grounded. This
approach not only reduces the likelihood of costly miscalculations—
where a small underestimation can lead to stockouts or a slight
overestimation triggers inflated holding costs—but also accelerates
experimentation.
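The toy RV class below sketches what such an algebra can look like: adding two independent random variables is a convolution, and an uncertain lead time mixes distributions instead of collapsing them to an average. The class and figures are illustrative, not a vendor library.

```python
import numpy as np

class RV:
    """Toy discrete random variable on 0..n; a sketch of the algebra
    described above, where arithmetic yields new random variables."""
    def __init__(self, probs):
        self.p = np.asarray(probs, dtype=float)
        self.p /= self.p.sum()

    def __add__(self, other):
        # Distribution of X + Y for independent X, Y: a convolution.
        return RV(np.convolve(self.p, other.p))

    def mean(self):
        return float(np.dot(np.arange(len(self.p)), self.p))

daily = RV([0.2, 0.5, 0.3])  # units sold per day: 0, 1, or 2

# Demand over a lead time of 1 or 2 days (50/50): mix the distributions
# of the 1-day and 2-day totals instead of collapsing to an average.
one_day, two_days = daily, daily + daily
mix = np.zeros(len(two_days.p))
mix[:len(one_day.p)] += 0.5 * one_day.p
mix += 0.5 * two_days.p
lead_demand = RV(mix)

print(round(lead_demand.mean(), 3))  # full distribution preserved end to end
```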
Differentiable programming is a paradigm that fuses the
flexibility of general-purpose programming with the computational
techniques of automatic differentiation and gradient-based opti-
mization. At its core, this approach treats every step in a data
transformation pipeline as a function amenable to differentiation,
allowing parameters to be tuned via stochastic gradient descent (or
related optimizers) directly within the code that implements the
logic. Rather than relegating learning to an isolated environment
(e.g., exporting data to a specialized machine-learning toolkit), dif-
ferentiable programming makes “trainable code” a first-class citizen.
Each transformation—from input parsing to final forecast—can be
designed, annotated with parameters, and automatically updated
to minimize a chosen loss function. This paradigm goes beyond
purely numerical procedures: it weaves domain insights straight
into the parametric structure of the models, ensuring that these
structures remain interpretable and aligned with the underlying
business processes.
In supply chain, differentiable programming is particularly po-
tent because it bridges several critical gaps simultaneously. First,
supply chain data is notoriously sparse and heterogeneous; many
SKUs, locations, or time segments offer only limited observations,
yet share higher-level patterns (e.g., weekly cycles, promotional
lifts). A differentiable program can explicitly model these pat-
terns through shared parameters—thereby improving statistical
efficiency compared to off-the-shelf black-box approaches. Second,
the very shape of the supply chain problem—its mix of relational
queries, array transformations, and random variables—can be en-
coded directly in a parametric, learnable script, rather than forcing
practitioners to patch together a “language-in-language” design.
This alignment not only streamlines iteration but also yields more
transparent models whose named parameters (e.g., “seasonal am-
plitude”, “port congestion shift”, “Black-Friday uplift”) resonate
with domain experts. Above all, by integrating learning loops into
the normal flow of data and decisions, differentiable programming
offers a unified environment in which forecasting and optimization
can co-evolve with the actual business constraints—rather than
stand apart as a one-off or purely academic exercise.
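A sketch with PyTorch serving as the automatic-differentiation engine: two named, interpretable parameters are tuned by gradient descent inside ordinary code. The model form and figures are invented for illustration.

```python
import torch

torch.manual_seed(0)
weeks = torch.arange(52, dtype=torch.float32)
# Synthetic observations: a level of 100 with a seasonal swing of 20.
observed = (100 + 20 * torch.sin(2 * torch.pi * weeks / 52)
            + 5 * torch.randn(52))

# Named, interpretable parameters woven into the recipe itself.
base_level = torch.tensor(50.0, requires_grad=True)
seasonal_amplitude = torch.tensor(1.0, requires_grad=True)

opt = torch.optim.SGD([base_level, seasonal_amplitude], lr=0.05)
for _ in range(2000):
    forecast = base_level + seasonal_amplitude * torch.sin(
        2 * torch.pi * weeks / 52)
    loss = ((forecast - observed) ** 2).mean()
    opt.zero_grad()
    loss.backward()  # gradients flow through the whole recipe
    opt.step()

# Both parameters recover interpretable values (about 100 and 20).
print(round(float(base_level), 1), round(float(seasonal_amplitude), 1))
```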
Deterministic Distributed RNG is a programming paradigm
designed to reconcile two seemingly conflicting goals: leverag-
ing large-scale Monte Carlo simulations across multiple machines
while ensuring that every run—even across a distributed cluster—
produces bit-for-bit identical results given the same inputs. Put
differently, the approach guarantees that random draws, once
seeded, unfold in a predictable pattern throughout the dataflow
graph, irrespective of shifting workloads or dynamic machine as-
signments. Monte Carlo techniques often surpass a direct algebra
of random variables when the phenomenon in question resists an-
alytical decomposition; however, naive parallelization can easily
render the process non-deterministic, making it impossible to repro-
duce outcomes precisely from one run to the next. Deterministic
distributed RNG averts this pitfall through a carefully orchestrated
mechanism of pseudo-random seed management and partitioning,
thereby combining the flexibility of simulation-based approaches
with the reproducibility vital for rigorous engineering.
This paradigm is indispensable in supply chain for two main
reasons. First, the domain is already rife with genuine uncertainty:
demand shocks, supplier delays, currency swings, and more. If the
software layer itself introduces its own randomness—generating dif-
ferent outputs on each re-run despite unchanged inputs—managers
cannot reliably diagnose issues or trust the decision-making pipeline.
Second, supply chain initiatives live or die by continuous iteration,
as practitioners frequently refine their numerical recipes, replay
historical scenarios, and compare new results against old baselines.
Strict determinism spares them from chasing “heisenbugs” that
vanish when retested and ensures that any anomaly discovered a
week or a year later can be traced to the precise combination of
data and code that caused it. By preserving exact reproducibil-
ity in their simulations and forecasts, supply chain teams gain the auditability and confidence needed to safely automate decisions.
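The snippet below sketches the seed-management mechanism with NumPy's SeedSequence: each partition derives its stream from the master seed and its own index, never from whichever machine happens to execute it.

```python
import numpy as np

MASTER_SEED = 20250923  # one seed pinned in the run's audit dossier

def partition_rng(partition_index: int) -> np.random.Generator:
    """Derive an independent, reproducible stream for each partition.
    The stream depends only on (master seed, partition index), never on
    which machine happens to execute the partition."""
    seq = np.random.SeedSequence(MASTER_SEED, spawn_key=(partition_index,))
    return np.random.default_rng(seq)

# Whether partitions run on one machine or a hundred, in any order,
# the draws are bit-for-bit identical from one run to the next.
draws_a = partition_rng(7).normal(size=3)
draws_b = partition_rng(7).normal(size=3)
assert np.array_equal(draws_a, draws_b)
```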
9.5.2 Spreadsheets
The enduring success of spreadsheets in supply chain is no coinci-
dence. Spreadsheets offer, within a single grid, a form of functional
reactive programming tightly coupled with vertical integration:
data storage, formula-driven logic, and a visual user interface coex-
ist in a unified environment. The result is a near-perfect feedback loop—akin to a rapid REPL (Read-Eval-Print-Loop, a programming environment in which code can be typed, immediately executed, and the output returned in real time)—that fosters a gentle learning curve.
Supply chain practitioners can start with a few cells, tweak a for-
mula, and instantly see the outcome of their adjustments. This
immediacy contrasts sharply with classical software engineering
workflows, which require code to be written, packaged, and then
deployed before any results can be observed.
Spreadsheets’ programmatic versatility should not be underes-
timated. Within a matter of minutes, a user can assemble ad hoc
logic for forecasting, inventory consolidation, or what-if analyses.
The barrier to entry is low; supply chain managers do not need to
become database administrators or software engineers to handle
many day-to-day calculations. Even as vendors have repeatedly
attempted to encapsulate forecasting or optimization in “ready-to-
use” analytical modules, spreadsheets have consistently remained
popular. This phenomenon arises because the spreadsheet’s func-
tional reactive grid can accommodate edge cases and “cable-like” complications (see the earlier discussion of how materials such as cables already defeat most off-the-shelf inventory tools; such minor-looking quirks often break rigid analytical models) more flexibly than rigid off-the-shelf solutions.
Despite these strengths, spreadsheets are fundamentally at odds
with the true automation of supply chain decision-making. Practi-
tioners often cite a “lack of scalability” as the reason spreadsheets
fail beyond a certain data volume—especially when a workbook
is shared via local hosting on a single machine. Although hosting
spreadsheets remotely can alleviate some performance burdens,
it does not address the deeper, structural limitations. Two such
limitations, in particular, thwart the prospect of robust supply
chain automation.
First, over time, spreadsheets become riddled with extensively
duplicated logic scattered across dozens—sometimes hundreds—
of tabs and workbooks. Every time a user copies a formula or
re-creates a similar worksheet for a slightly different SKU or loca-
tion, the structure diverges a little more. Eventually, no one can
confidently trace which cells drive the final outcome. Even minor
modifications become hazardous, potentially breaking dependen-
cies hidden in cell references. Once multiple people collaborate,
these problems multiply. Worse, spreadsheets typically lack the
robust versioning, modularity, and auditing features needed to
keep track of a fast-evolving codebase.
Second, supply chain intricacies—multi-sourcing, container-
filling constraints, lead-time variability, partial bills of materials,
multi-echelon hierarchies—rarely flatten into neat two-dimensional
grids. Spreadsheets forcibly “slice” each dimension down to rows
and columns, complicating even moderate joins or cross-references.
For instance, a single SKU that depends on two suppliers with
different cost structures and lead times may require contrived cell
expansions, plus numerous “auxiliary” tabs that try to simulate
a relational model. The more a supply chain requires branching
logic or multi-table queries, the more unwieldy the spreadsheet
becomes.
Beyond these immediate barriers, spreadsheets also fare poorly
against the programming paradigms most conducive to modern
supply chain engineering:
Time-travel reproducibility becomes nearly impossible with
large spreadsheets: while file versions can be saved, each
user action modifies the shared document, often without
precise control over which cells changed and why. Debugging
“heisenbugs” after the fact is nearly hopeless.
Algebra of random variables, differentiable programming,
or deterministic distributed RNG cannot be practically embedded in typical grid formulas. At best, practitioners tack
on external macros or third-party add-ins, but these hacks
seldom achieve robust, production-grade workflows.
LLM instrumentation—allowing Large Language Models to
generate or refactor code automatically—falters in the face of the
inherent two-dimensional logic scatter. Most LLM-based
approaches assume a more structured textual environment
where logical relationships are explicit, not buried in cell
references and ephemeral macros.
In principle, spreadsheets could attempt to address some of
these shortcomings—for instance, by layering more powerful com-
putational engines underneath the grid. However, the spreadsheet
paradigm itself remains unaltered: it is designed for human interac-
tivity and ad hoc exploration, not for systematically handling the
ever-shifting complexities of large-scale flows. The moment a com-
pany commits to continuous, unattended decision-making—over
thousands or millions of SKUs, across hundreds of locations—the
cracks show. The more these cracks are patched by short-term
fixes, the more convoluted the workbook becomes, until no one
can trust the fragile edifice it has become.
As a result, spreadsheets, while undeniably central to supply
chain practice for decades, represent a technological dead-end.
Their downfall is not merely a matter of “more software” or “faster
hardware”. Rather, their entire design—grid-based formulas, man-
ual duplication of logic, minimal auditing support, and an implicit
focus on local user edits—runs counter to the paradigms needed
for large-scale automation. Precisely because spreadsheets excel
at one-off or small-scope tasks, they fail to pave the way toward
robust, fully automated supply chain processes.
9.5.3 Generic languages
For all their limits, spreadsheets point to a broader truth: supply
chain requires ad hoc programming that no static, “packaged”
system can satisfy. In principle, any general-purpose (“generic”)
language could meet the need, especially now that modern lan-
guages are largely multi-paradigm. By stacking libraries, a team
might reproduce the essentials for supply chain: time-travel repro-
ducibility, array programming, differential execution, and more.
Yet in practice, the path consumes more effort than it returns.
First, generic languages breed accidental complexity. Object-
oriented patterns, for example, become needlessly elaborate when
modeling mostly tabular supply-chain data. Left unrestricted,
network or system calls invite security issues and so-called “supply-
chain attacks”—infiltrations not in the company’s code but in a
third-party package it depends on. Even purely mathematical
operations fracture into a thicket of tiny modules. The deeper
a team sinks into this ecosystem, the more time it spends on
intricacies that have little to do with allocating scarce supply-chain
resources.
Second, relying on a generic environment effectively commits
the company to a full-scale software project. The storage layer
must be selected or built and integrated; the compute layer must
scale for large volumes or SKUs; the presentation layer must be
coded or adapted for supply-chain audiences. This sequence of
choices, each anchored to a different technology, makes the solution
harder to maintain. Any small shift—a new database version, an
unmaintained library—can cascade into incompatibilities. Over a
few years, “software rot” sets in and siphons resources from actual
supply-chain improvements.
Third, the overhead in such a project almost always fragments
expertise. A small cadre of software specialists—typically with little
supply-chain background—ends up implementing and maintaining
the code. Meanwhile, the actual practitioners—those who grasp the
knotty quirks and countless other domain-specific complications—
are sidelined. They must propose changes indirectly, submitting
“tickets” and routing requests through layers of mediation. Agility
evaporates as the lag from discovery to deployment widens. In
extremis, the solution grows so specialized that only its original au-
thors can patch or extend it—a brittle arrangement when personnel
changes or reorganizations occur.
Fourth, while generic languages can in theory support advanced
paradigms such as differentiable programming or deterministic
Monte Carlo, they seldom do so in a cohesive fashion. The relevant
libraries effectively become “languages within the language”. One
library manages distributed dataflows; another handles random
variables; yet another attempts time-travel data versioning. Each
of these subsystems demands its own configuration, data struc-
tures, and usage patterns, compounding the cognitive overhead.
What was meant as a single decision-making pipeline splinters into
separate toolchains, any of which can come apart at the seams.
None of these challenges is insurmountable: with enough talent,
money, and time, the right team can craft a workable solution atop
a generic language. Some of the largest retailers and manufactur-
ers do exactly that. Yet this route is so resource-intensive that
only firms with tremendous scale can sustain it, and even then it
remains fragile. Enhancements in large language models may au-
tomate fragments of these workflows—for instance, by generating
scaffolding or refactoring older libraries—but they do not resolve
the mismatch between a free-form language and the specialized
needs of supply chain. Paradoxically, better LLMs strengthen
the case for domain-specific environments: the more adept they
become at writing and reviewing code, the more a streamlined,
supply-chain-oriented language benefits from their assistance.
In short, no absolute technical barrier prevents a generic lan-
guage from powering a large-scale, fully automated supply-chain
process. Yet mounting an initiative this way imposes overhead and
complexity ill-suited to most organizations. Because supply chain
exists to harness optionality rather than drown in it, layering mul-
tiple paradigms and libraries—none of which natively address the
field’s pressing needs—usually proves a detour, not a destination.
Chapter 10
Deployment
Supply-chain deployment is when the previous chapters—economics,
information, intelligence, engineering, and decisions—become a
machine that makes sound choices daily. The machine is a system
of intelligence that converts authoritative records into unattended,
auditable flow commitments. The objective is neither “better
visibility” nor a nicer forecast: it is to issue purchase orders, trans-
fers, schedules, and price moves that are economically sound by
construction, and to improve them in place.
The deliverable is operational: a working deployment ingests
raw extracts from systems of record, carries a money-denominated
objective under probabilistic uncertainty, and writes decisions
back into the ledgers. Its presence is marked by emissions—lines
created in purchase-order tables, stock transfers booked, prices
posted—without a planner shepherding each line through a spread-
sheet. Anything that stops short of unattended emissions consti-
tutes project theater.
Three properties make this system safe and productive in
practice. First, ownership: one small team—ideally a named
individual—holds end-to-end responsibility for the numerical recipe,
the code that frames options, prices trade-offs, and produces daily
commitments. Second, transparency: every emission comes with a
dossier that exposes the main economic drivers, so practitioners
can see why the decision is sane and finance can audit it in coins.
Third, reversibility: the engine is stateless and guarded by halting
heuristics; it can be dual-run, promoted, or rolled back within
hours, and it halts when confidence drops.
The path to that state is not mysterious, but it is unforgiving.
It begins with raw daily extracts—unfiltered, schema-faithful, and
stable; continues with a first recipe that runs daily next to the
manual routine until no insane lines remain; and culminates in a
progressive rollout that retires spreadsheets rather than wrapping
the engine in them. Scope is set to capture flows that compete
for the same scarce resources; roles are trimmed to what the work
requires; governance is economic, not ritual.
Most of what derails deployments is already familiar: pilots
kept in read-only mode that never write even a single flow com-
mitment back into the ledgers, partial datasets that hide scale
effects, cargo-cult parameters copied from legacy tools, committees
that debate forecasts instead of decisions, and procurement rituals
that privilege the vendor best at paperwork over the one best at
decisions. Avoiding these traps requires the same stance advocated
throughout this book: frame the problem in money, surface options
explicitly, carry uncertainty faithfully, and let unattended software
do the clerical work.
This chapter is a manual for that stance in production: it shows
how to pick a partner when one is needed; how to frame scope so
the engine touches real constraints quickly; how to structure the
data pipeline; how to write, instrument, and harden the recipe;
how to dual-run and promote to unattended; and how to keep the
system healthy once live, while roles and incentives adjust. The
aim is not to add another reporting layer, but to replace manual
routines with a capital asset that issues, audits, and improves
decisions at machine pace.
Because many firms will not build everything in-house, we
begin with the practicalities of selecting a vendor. The method
recommended here—adversarial market research—fits the opacity
and incentives of software better than RFP theater, and it aligns
with the economics-first posture defended in earlier chapters. Once
the partner is chosen, we turn to setup and scope, the numerical
recipe, system effects, roles, timeline, and the mechanics of rollout
and maintenance.
For brevity, the following terminology is used consistently. A
system of intelligence denotes the deployable asset that converts
authoritative records (plus a few vetted external feeds) into unat-
tended, auditable flow commitments written back to the ledgers.
Its logical core—the program that frames options, prices trade-offs,
carries uncertainty, and emits lines—is the numerical recipe.
10.1 Selecting a vendor
The complexity of modern supply chains far exceeds what can be managed without resorting to software. Most companies lack the means or the inclination to develop those instruments in-house—even if the open-source landscape greatly diminishes the financial burden of such an undertaking. Thus, supply-chain software vendors abound, and most companies, past a certain size, regularly face the prospect of selecting one.
Yet the software industry exhibits extraordinary variety while
remaining relatively opaque. The opacity is multidimensional:
capabilities, performance, licensing fees, operating costs, reliability,
maintainability, evolvability, security, and more.
The open-source landscape should, in principle, be more trans-
parent, since the source code can be freely inspected, but in prac-
tice there are two major caveats. First, the technical expertise
to reliably assess a given open-source technology is seldom found.
Assuming that an open-source technology is sound only because it
originates from a well-known source—e.g., a big software vendor—is
a time-tested recipe for disaster. Vendors abandon open-source
projects all the time because they have little to lose. Second, the
open-source landscape deals chiefly in components rather than so-
lutions—especially in enterprise software. This leaves the company,
if it wants to leverage those components, rolling out what amounts
to an in-house software project.
Thus, for better or worse, companies end up picking software
vendors under high uncertainty. Yet software, like most intellectual
undertakings, exhibits a wildly uneven range of contributions,
from abysmally dumb to peak human ingenuity. Combine the
inherent opacity of software with a vendor gamut that includes a
thick “dangerously—if charmingly—incompetent” segment, and
it becomes imperative for companies to pay close attention to the
vendors they pick.
To their credit, most large companies now acknowledge the
problem and spend considerable effort on market research for soft-
ware vendors. No other area attracts as much market-research
spend as software. By way of anecdotal evidence, a casual obser-
vation of market research firms reveals that software verticals dwarf all other verticals put together.
Yet most software market-research efforts borrow methods from other verticals, and those methods are dysfunctional for software.
Case studies and Request For Proposals are the chief offenders.
Software requires a different approach: adversarial market research.
The pitfalls of case studies—often presented as evidence in
vendor selection—were examined in Chapter Epistemology. This
section focuses on RFPs and a more reliable alternative: adversarial
market research.
10.1.1 Request For Proposals
Request For Proposals (RFPs) are a bureaucratic power play within large organizations. (Companies frequently distinguish RFIs (Request For Information), RFPs (Request For Proposal), and RFQs (Request For Quotation); for concision here, “RFP” refers collectively to all three, equally dysfunctional for the same reasons.) For software, RFPs are neither rational nor truthful nor informative. RFPs epitomize the bureaucratic mindset: fixation on process at the expense of outcome. In systems of intelligence, the defects are fatal: checklists cannot certify unattended decision quality, halting behavior, or rewritability under pressure. An RFP steers the company toward incompetence. RFPs
are too often treated as an inevitable plague in any organization
past a certain size. Yet nothing about this self-inflicted plague is
inevitable. Eliminating RFPs requires only modest fortitude from
top management.
Consider a company whose management faces a software chal-
lenge—or opportunity—beyond internal resources. After a few
informal exchanges with one or two vendors, management decides
a third party will address the challenge.
The RFP mythos purportedly unfolds in five steps. (1) The
challenge is first put down in writing. (2) Employees then craft a
supposedly high-quality list of questions meant to elicit adequate
answers to that challenge. (3) The relevant vendors are identified
and the compiled RFP document is dispatched to them. (4)
Each vendor dutifully provides truthful, exhaustive answers, which
are gathered and collated. (5) Finally, management performs an
objective, accountable evaluation of the responses in order to pick
the best vendor.
Unfortunately, every step of this mythos is wrong—or delu-
sional.
(1) Writing, with clarity, the precise problem the software
should address is harder than writing the software itself. For the
uninitiated, this may sound puzzling. Yet most successful software
ventures arise not from superior solutions to known problems, but
from identifying a new problem altogether. Framing the problem
is an open-ended task. Formal processes rigidify thinking and
reduce the quality of the findings. Out of the near-infinite ways
to look at a situation, one must pick the angle that lends itself to
implementation. Hundreds—possibly thousands—of factors must
then be technically integrated.
Corporate incentives further undermine the effort. The growing
complacency of an aging executive may well be the crux of the
company’s problem, but no one will write this in a semi-public
memo to be shared with third parties. Such a memo will carefully
steer clear of anything that could be construed as an insult to
anyone’s professional capabilities within the company. Inevitably,
the memo beats around the bush—protecting egos rather than the
company. Far from mere etiquette, this guarantees the selected
vendor will chase a solution that has little to do with the actual
problem.
(2) Having employees enumerate RFP questions invariably
produces material worthy of sketch comedy. Consultants add extra
hilarity points. While the goal should be questions that elicit
discriminative answers, the undertaking devolves into something
else entirely. Questions are produced to impress peers. Those who
lack smart questions compensate with volume. Questions from
past RFPs are recycled ad infinitum.
Inane questions are bad enough; hundreds of them can derail
a nascent initiative. Worse, they warp vendor offerings. Most
such questions are requirements in disguise. Far from open-minded
inquiry, those questions frame and severely restrict what counts
as a “solution”. Those questions usually read, “Can the vendor
guarantee that end users will be able to do X while interacting
with Y?” Answering no guarantees immediate disqualification, even
when “doing X while interacting with Y” is provably a terrible
idea.
More generally, the “there are no stupid questions” mindset be-
longs in primary schools. Every question steers the initiative.
Inane questions yield inane answers. Moreover, every question
consumes a fraction of the mental bandwidth that the management
can dedicate to the initiative. As executives’ attention is spread
thin, it takes only a few bad questions to have them pick a vendor
on irrelevant trivia.
(3) Shortlisting vendors lets someone—often not even an em-
ployee (e.g., a consultant)—make the most momentous decision
of the RFP with zero accountability. In practice, the person who
shortlists the vendors implicitly picks the winner. The rest is
merely decorum, dictated by corporate etiquette.
While vendors may be numerous, shortlisting feels necessary
only because the question list is unreasonably long. Indeed, pro-
cessing that many answers for more than a handful of vendors
is a daunting task. That burden is a runaway consequence of a
dysfunctional approach.
(4) Vendors answering RFPs have no incentive to be truthful.
The vast majority of questions beg for a specific answer: Can your
solution do X? Any vendor worth his salt knows exactly what
answers are expected and responds accordingly. Expecting the
vendor to stick to “the truth” is delusional. Software is complex;
enterprise software even more so. Nearly all questions admit a
near-infinite number of interpretations. The vendor need not lie; a
little imagination suffices to invent an interpretation that makes
any answer “truthful”.
Even making answers legally binding will not improve their
truthfulness. Enterprise software is obscure enough that almost any
statement can be construed as truthful—especially with capable
lawyers involved.
Such agreements—and other vexations imposed on software
vendors—make the situation worse for the company. Good vendors,
to some extent, pick their clients. Good vendors walk away when
a prospect makes unreasonable requests. Bad vendors—desperate
because their technology is found wanting—gladly embrace what-
ever bureaucratic insanity is pushed onto them; it is their only
competitive edge. Thus, companies that take RFPs “seriously”
practice adverse selection. Good vendors bow out; bad vendors
persevere.
(5) Finally, a software vendor must be picked. By then, a
huge pile—slanted answers meeting irrelevant questions—has been
collected. All this assumes the original problem statement was cor-
rect—a premise of dubious merit. The RFP perspective supposes
upper management can, by sheer intellect, pick the best vendor.
Nonsense. In practice, it is usually worse.
Often, positive selection never happens. Vendors are vetoed
out one by one—until only a single (viable) vendor remains. A
veto requires only a sufficiently influential manager to declare
himself “concerned”. Whether the concern is reasoned or merely
a preference matters little. So long as he doesn’t veto multiple
vendors at once (perceived as abusive interference), his objection
grants him de facto veto power. Larger companies are risk-averse.
Backing an initiative flagged as risky by a peer puts one’s career
in jeopardy if it fails.
This final selection offers no guarantee of a good vendor. On
the contrary, mediocrity is favored. A good vendor is likely trans-
formative. After all, substantial returns on investment require
departing from the status quo. Such departure invariably antag-
onizes barons whose fiefdoms are challenged. Those barons will
vehemently oppose such vendors. A mediocre vendor is anything
but transformative: he embraces the status quo and ruffles no feath-
ers. Unfortunately, this guarantees that, at best, such a vendor
delivers little or no return.
Thus, RFPs are deeply dysfunctional. They also generate con-
siderable busywork: hundreds of questions to write and thousands
of answers to survey. Hence consultants are hired to drive RFPs. (As a litmus test, any consultant agreeing to support an RFP for enterprise software should be dismissed outright for dangerous incompetence.) Invariably, consultants make things worse: the company's excess busywork is their source of income. Inflating the busywork is sold as “process excellence”. Obviously irrelevant questions are retained to ensure “end-to-end coverage”. Shortcuts that would expedite the process are dismissed as “unsafe” or “untested”. At the end, consultants almost invariably lean toward the vendor offering the richest consulting prospects.
10.1.2 Adversarial market research
Adversarial market research directly addresses two fundamental
issues in software projects: adverse incentives in vendor–client
dynamics, and massive information asymmetry favoring vendors.
It is less bureaucratic—and thus leaner—and more rational—and
thus likelier to match the right vendor to the right challenge.
The method works as follows. (It is loosely inspired by Warren Buffett's “silver bullet” question: If you could get rid of one competitor, who would it be?) As the manager recognizes a software opportunity beyond the company's resources, he asks one vendor—typically the one that triggered the realization—to produce a short problem statement on his behalf. The problem
statement should focus exclusively on framing the challenge and
the company’s specifics—not the vendor’s solution.
If the vendor cannot resist teasing his offering, the manager
should simply edit out those parts; the document is short anyway.
If specifics—such as figures characterizing its supply chain—are
too confidential, make them vaguer, e.g., by rounding numbers.
The goal is to preserve the essence of the challenge, not its fine
print. The resulting memo neither names the originating vendor
nor contains overly sensitive information.
With this short document in hand, the manager reaches out to a
few of the vendor’s competitors. Guesswork is acceptable in picking
those peers, provided they are plausible alternatives. For each
competitor, the manager sends the memo, noting that it is a rough
draft, that he may have misframed the challenge, and that he seeks
expert guidance to reformulate the problem statement. He also
requests two further items from the vendors—including the original.
First, an informal back-of-the-envelope proposal aligned with the
vendor’s revised problem statement. Second, a list of at least three
relevant rivals, ranked by the vendor’s own assessment. Each listed
rival must be accompanied by a few paragraphs explaining its
perceived strengths and weaknesses.
As rivals’ shortlists reveal new competitors, those are added
to the survey. Any vendor that fails to deliver these modest items
within two weeks is removed from the survey. The survey ends
when the manager circles back to the same vendors and no stone
seems left unturned.
Once the survey has ended, the manager collates the results
and creates a short file for every vendor. Each file includes what
peers have said, together with the vendor’s problem statement and
its associated back-of-the-envelope proposal.
Finally, with at most two colleagues, the manager reviews the
results to pick a vendor. The selection is a matter of judgment and
cannot be rigidly formalized. Naturally, peer opinion should play
a major role in assessing each vendor. Furthermore, the quality
of its problem statement and of its rival descriptions should be
treated as telltale signs of competence—or incompetence if they are
poor. Back-of-the-envelope proposals are used as tiebreakers. For
auditability, the survey results are archived alongside a short memo
explaining the reasoning behind the final selection. Afterward, the
company resumes its regular purchasing process, which will likely
entail obtaining a detailed offer and a full-fledged contract from
this one vendor.
As stated previously, adversarial market research addresses the
shortcomings of traditional methods such as RFPs (requests for
proposals). Let us examine this claim, starting with information
asymmetry.
Letting the company compose its own problem statement, as
with RFPs, is a recipe for getting vendors to lie through their teeth
to make their solutions appear to fit the problem statement. As
enterprise software vendors know far more than their clients about
their real capabilities, it is almost impossible to sort truth from
pretense. It is unreasonable to expect any vendor to fundamentally
change its technology to suit a client. Good software technology
takes years to build. (Joel Spolsky, a famous American software entrepreneur, published an article “Good Software Takes Ten Years. Get Used To It.” in 2001; while this very broad statement is debatable, one must recognize that, a quarter of a century later, it remains essentially true.) With each client, the vendor may do a little
better than in past initiatives, but the process is tediously slow
and incremental.
This is why it is critical to let the vendor compose the problem
statement. When the vendor holds the pen, it has no incentive
to craft a problem statement that misfits its technology. On the
contrary, it will shape the problem statement as close as possible
to the solution it possesses. Furthermore, vendors—except perhaps
the most incompetent—are vastly more knowledgeable than the
client in framing the problem. Indeed, what the client sees as one
problem may be a series of subproblems better addressed separately
with distinct technologies. Conversely, this “one problem” may be
merely the tip of the iceberg, requiring a broader perspective and
an entirely different technological approach. A competent vendor
will demonstrate mastery by presenting a convincing rationale for
viewing the challenge through the right lenses.
Letting vendors expose their peers addresses the selection bias
that plagues RFPs. The initial vendor selection is one of the most
important decisions in the whole initiative. Early elimination of a
few vendors all but guarantees a privileged vendor will be selected.
So long as the survey includes half a dozen vendors or more, few
will question the adequacy of the selection. This fragility is an
open secret and is abundantly exploited in real-world RFPs.
By contrast, it is much harder to game the aggregate list
produced by the vendors. Each vendor has its own biases and
may intentionally omit a peer deemed too threatening. Yet the
lack of coordination makes it difficult to single out any one vendor.
Moreover, every vendor knows that its projected expertise is at
stake. It will be assessed by the quality of the peers it provides.
Returning to the client with a shortlist of pseudo-rivals—either
incompetent or irrelevant—marks the vendor itself as incompetent.
Some vendors will produce all sorts of excuses for why they
cannot provide a list of peers, let alone qualify them. Yet in a
free market, a company cannot plausibly be “state of the art”
while wholly ignoring what its rivals are doing. This is doubly
true in software, where progress is extremely incremental. Vendors
oblivious to their peers are common; it is a telltale sign of utter
incompetence and should be treated accordingly.
Bypassing that entire question-and-answer game is a feature,
not a bug. Those questionnaires invariably devolve into a bureau-
cratic monster. Irrelevant questions and requirements in disguise
end up dominating the process. They frustrate the good vendors
and empower the bad ones. There is simply no value in playing
this game. Incentives are wrong, and neither the inputs (client’s
questions) nor the outputs (vendor’s answers) can be trusted.
Similarly, the insistence on back-of-the-envelope estimates is
intentional. Enterprise software is not an exact science. A compe-
tent vendor does not need a lengthy audit to draft a reasonable
estimate of the total cost of ownership for a solution of its own
making. Inability to provide such an estimate should be seen as a
telltale sign of incompetence. At a minimum, the estimate should
cover the vendor’s fees, fees to supporting parties, and the cost
of internal resources dedicated to the solution. One of the most
frequent shenanigans in mis-estimating costs is turning a blind
eye to ancillary costs beyond the vendor’s fee. While this process
cannot prevent a vendor from omitting a few lines, the lack of
cooperation between vendors ensures fairly extensive coverage of
expected expenditures.
By contrast, traditional RFPs coerce vendors into mind-boggling
lists of inane requirements. Good vendors know that weeding this
nonsense will take considerable effort and that, more often than
not, they will end up implementing numerous capabilities of dubi-
ous merit. As a result, vendors invariably inflate their fees when
confronted with a traditional RFP. They will also gladly turn a
blind eye to every cost not directly attributable to them unless it
is explicitly mentioned in the RFP.
Finally, peer assessment is the one piece of information that
truly separates wheat from chaff. Every piece of software has
strengths and weaknesses, but no vendor—except perhaps the
most incompetent—will willingly expose the worst flaws of its own
solution. When pressed, minor concerns are put forward as a show
of fake humility. Misdirection is straightforward when dealing with
something as complex and opaque as enterprise software. Rivals,
by contrast, will not hesitate to point out the bad and the ugly.
Given the nature of enterprise software, accusations or disparaging
statements are unnecessary. Instead, a vendor can simply nudge
the client to examine a few well-chosen aspects of a rival’s offering.
Moreover, bringing relevant items to the client’s attention is, in
itself, a demonstration of competence.
Adversarial market research empowers good vendors. They can
train the client in the art of assessing vendors. They can frame
the challenge to maximize returns when deploying the solution.
They can offer lower prices, as they do not have to wade through
overbearing bureaucratic insanity. The only downside is that it
often demands more fortitude than complacent managers possess,
as they are unwilling to ruffle the feathers of their own bureaucratic
apparatus.
Once a vendor passes this adversarial survey and is selected,
work returns to deployment proper: setting up raw extracts, au-
thoring the first numerical recipe, and dual-running until no insane
lines remain.
10.2 Scoping the deployment
In practice, “the same scarce resources” means a shared inventory
pool; shared upstream capacity (molds, MOQs, rebate ladders,
credit lines); a shared transport bottleneck (lane, port pair, vehicle
class); and—routinely forgotten—a shared pocket of managerial
attention. When embarking on a new supply chain initiative, the
first task is to clarify its ultimate purpose and the boundaries
within which it will operate. A company might, for instance, be
interested in improving production scheduling or reorder quanti-
ties—or it might wish to unify several decision processes that have
historically run on fragmented spreadsheets. Yet in all cases, the
transformation must be made explicit at the outset: which deci-
sions will be altered, who will own them, and how these decisions
will be carried into daily operations?
One critical insight: an initiative aimed only at improving “vis-
ibility” or producing incidental metrics cannot, by itself, capture
lasting returns. All those visibility efforts—even the most sophis-
ticated dashboards—remain hollow if people in the organization
carry on with the same step-by-step, manual decisions that pre-
dated the initiative. Conversely, once the goal is stated as a system
of intelligence—one that regularly outputs and enacts tangible
actions like purchase orders or capacity allocations—every require-
ment becomes simpler to articulate. This focus on producing—and
trusting—a daily or weekly decision in a live environment forces
the company to align roles and resources around a deliverable
anchored from the start in the economic realities of the business.
A workable rule of thumb follows: If multiple stores draw from
one distribution center, include all those stores in one go; otherwise
the engine will arbitrage across spreadsheets rather than across
options. If two factories or two DCs (Distribution Centers) are
truly disjoint—no common inventory pool upstream, no shared
trunk lanes downstream, no shared buying team—stage them in
separate waves. If a supplier’s mold, MOQ, or rebate ladder binds
items together, bring the whole family into scope at once. If a
trunk lane or port pair is the bottleneck, scope every flow that
rides it. If the same planners must arbitrate the same budget or
credit line, keep their domain together: managerial attention is as
finite as pallets and hours.
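As a minimal sketch of this rule of thumb, the grouping below treats each candidate element of the perimeter (a DC, a plant, a supplier family) as a node and each shared binder (inventory pool, mold, trunk lane, budget) as a link; connected components then suggest the waves. All names and binders here are hypothetical; in practice they would come from the contract described in the next subsection.

# Sketch: derive scoping waves from shared binders (hypothetical data).
# Nodes are candidate perimeter elements (DCs, plants, supplier families);
# a link exists whenever two elements compete for the same binder.

from collections import defaultdict

# Hypothetical binder declarations: binder name -> elements that share it.
binders = {
    "upstream_pool_A": ["DC_North", "DC_East"],
    "trunk_lane_NE":   ["DC_North", "DC_East"],
    "mold_family_7":   ["Plant_1"],
    "buyer_budget_EU": ["DC_North", "Plant_1"],
    "port_pair_SH_RT": ["DC_South"],
}

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for members in binders.values():
    for other in members[1:]:
        union(members[0], other)

waves = defaultdict(list)
for element in {e for members in binders.values() for e in members}:
    waves[find(element)].append(element)

for i, members in enumerate(waves.values(), 1):
    print(f"wave {i}: {sorted(members)}")
# Elements sharing any binder land in the same wave; truly disjoint
# elements (here DC_South) can be staged in a separate wave.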
Deciding how broad or narrow the scope should be is no less im-
portant. An initiative might succeed in automating replenishment
for a single plant but inadvertently shift problems elsewhere—such
as creating bottlenecks downstream or mismatched pricing up-
stream. When the objective is sound daily decisions, looking at a
single product line or a handful of warehouses often proves mislead-
ing. While it may appear “safer” to start small, deep structural
interdependencies frequently render these localized pilot runs non-
representative. A narrowly defined project can be so constrained
that the company never sees how the initiative performs under the
true complexity of its operations. On the other hand, an overly ex-
pansive scope can sink the project under years of data-integration
delays. The right balance, in practice, captures all significant flows
that compete for the same resources—inventory, transport, or pro-
duction facilities—while avoiding optional detours into functions
with little impact on supply chain decisions.
Equally crucial is establishing a minimal set of well-defined
roles. Despite the myriad job titles found in a large supply chain,
only a handful of positions need to be responsible for supporting an
automated decision process. The Data Officer maintains the steady
extraction of raw transactional data—ideally an unfiltered dump—
so subsequent preparation can remain flexible and transparent. A
Scientist then takes ownership of the technical and analytical stack
leading from raw data to the final recommended decision. A day-to-
day Flow Manager—formerly a planner in pre-automation routines—
contributes contextual knowledge, challenging outputs that seem
suspect and clarifying on-the-ground constraints. Finally, a supply
chain executive stands ready to arbitrate when the initiative meets
roadblocks—ensuring that if a new distribution center or software
tool is introduced, the modeling behind the decisions is revised
accordingly rather than left to drift.
All these elements—explicit decision focus, well-chosen scope,
properly defined roles, and a finite but firm onboarding schedule—
pave the way for a smooth initiative. They also make it evident
that the object of interest is not a formula or a theoretical optimum,
but a living recipe, updated and rerun in a shifting environment
of suppliers, customers, and operational changes. Once that foun-
dation is set, the next step is to examine its core: the numerical
recipe at the heart of the system of intelligence, translating every
relevant piece of data into the concrete, everyday decisions that
steer the flow of goods.
In sum, framing a supply chain initiative means producing the
daily (or weekly) decisions needed to run the supply chain while
securing the “soft” elements (roles and leadership) that keep the
initiative aligned with corporate goals.
10.2.1 The numerical recipe
For the purposes of scoping the deployment, the numerical recipe is
best seen as a contract at the boundary of the initiative. It states in
code three items that scoping must settle upfront: the decisions the
engine may emit (and the ledgers they write), the options it is
allowed to consider (and those kept exogenous), and the shared
resources it prices internally versus reads as external shadow prices.
Everything else—forecasting stance, decision theory, debugging
“insane” lines—has been treated in earlier chapters and is not
repeated here.
First, the decisions: a scoped recipe names its emissions pre-
cisely. For replenishment: lines inserted into the purchase-order or
transfer-order tables; for production: start times and quantities
on a finite set of work orders; for pricing: posted price changes on
an agreed cadence. Emissions must be idempotent and stateless:
the same inputs produce the same outputs. The write-back can be
reconciled unambiguously by a decision identifier and a timestamp.
This narrow declaration anchors scope in reality: if a decision
cannot be written back to the system of record, it is out of scope.
Second, the options: scope fixes what the engine may vary and
what it must take as given. If the initiative covers DC-to-store
dispatch while pricing remains out of scope, the price is read,
not chosen; its effects enter the objective via the firm’s money
ledger. If overseas ordering is in scope but carrier selection is
not, transport modes arrive as costs and lead-time distributions,
not as choices. Stating admissible moves prevents a relapse into
spreadsheet improvisations that expand scope informally and erase
accountability.
Third, the shared resources: scope is sound only if every bind-
ing resource inside the perimeter is either decided here or priced
here. Inventory pools, mold families under an MOQ ladder, dock
throughput, port-pair capacity, buyer credit lines, and even man-
agerial attention are typical binders. When a binder straddles the
boundary, the perimeter must ingest and honor a shadow price
supplied by the neighboring domain. Absent that price, the engine
arbitrages across spreadsheets rather than across options. Con-
versely, once a binder is brought inside, the external shadow price
is retired; the coupling is internalized.
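A minimal sketch of such a contract, in Python and under assumed names, pins down the three items as plain data: which tables the engine may write, which levers it may vary, and which binders are priced internally versus read as external shadow prices.

# Sketch: the scope "contract" as plain data (all names are hypothetical).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ScopeContract:
    # Decisions the engine may emit, and the ledgers they write to.
    emissions: dict = field(default_factory=dict)
    # Levers the engine may vary; everything else is read as given.
    admissible_options: tuple = ()
    # Binding resources decided (and priced) inside the perimeter.
    internal_binders: tuple = ()
    # Binding resources straddling the boundary, read as shadow prices.
    external_shadow_prices: dict = field(default_factory=dict)

contract = ScopeContract(
    emissions={"transfer_order": "TO_LINES", "purchase_order": "PO_LINES"},
    admissible_options=("order_quantity", "release_date"),
    internal_binders=("dc_dock_capacity", "dc_labor_hours"),
    external_shadow_prices={"upstream_pool_A": 0.12},  # coins per unit held
)
print(contract)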
A scoped recipe also carries attribution. Its window of respon-
sibility—from the instant a commitment immobilizes a resource
until the next–next opportunity to reassign or liquidate it—is ex-
plicit, with terminal valuations and ratchets stated. Within that
window, the engine bears coin-denominated consequences (lost-
sales penalties, overage costs, congestion, markdowns) for its own
emissions; beyond it, costs are priced as terminal values rather
than chased indefinitely. This boundary prevents double counting
across initiatives and the familiar blame game between neighboring
teams.
Scoping requires halting and reversibility at the perimeter. The
engine must stop itself when inputs are stale, when exogenous
shadow prices go missing, or when dossier-level sanity checks fail.
Promotion to unattended mode depends on these halts being rare
and diagnosable. Reversibility is achieved through statelessness
and deterministic replays: yesterday’s run can be reconstructed
from versioned code and versioned extracts without pleading “data
drift” or “user overrides”.
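The halting side of the contract can be sketched as follows, with hypothetical thresholds and price names: the run refuses to emit when the extract is stale or a required shadow price is missing, and every run is keyed by the code version and the extract watermark so that it can be replayed identically.

# Sketch: halting checks before any emission (hypothetical thresholds).
from datetime import datetime, timedelta, timezone

def should_halt(extract_watermark: datetime,
                shadow_prices: dict,
                required_prices: tuple,
                max_staleness: timedelta = timedelta(hours=30)) -> list:
    """Return the list of reasons to halt; an empty list means 'proceed'."""
    reasons = []
    if datetime.now(timezone.utc) - extract_watermark > max_staleness:
        reasons.append("stale extract")
    for name in required_prices:
        if shadow_prices.get(name) is None:
            reasons.append(f"missing shadow price: {name}")
    return reasons

run_key = ("recipe@3f9c2a1",          # code version
           "2025-09-23T00:00Z")       # extract watermark
reasons = should_halt(datetime(2025, 9, 23, tzinfo=timezone.utc),
                      {"upstream_pool_A": 0.12},
                      ("upstream_pool_A",))
print(run_key, "halt" if reasons else "proceed", reasons)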
The data diet follows from scope. Tables are mirrored, in full
and schema-faithful, on a daily cadence (or faster if the decision
cadence demands it). No “helpful” upstream massaging is per-
mitted: filters, imputations, and joins that matter to the decision
live inside the recipe, where they can be audited alongside the
economics. Minimal whiteboxing—columns that expose the main
drivers of each emission in coins and constraints—also belongs
to the contract. This dossier lets finance and operations see at a
glance why a line is sane.
Two short examples make the boundary concrete. A DC-to-
store replenishment recipe scoped to a single DC emits transfer
orders for all stores fed by that DC, varies quantities and release
times, reads prices and planograms as exogenous, and internalizes
the DC’s labor and dock capacity. If another DC draws from the
same upstream pool, the recipe must either ingest a shadow price
for that pool or expand the perimeter to include the sibling DC;
anything in between invites cross-DC arbitrage through spread-
sheets. An overseas buying recipe scoped to a supplier family emits
purchase orders with lot construction, internalizes MOQ/rebate
ladders and the shared mold, reads carrier choices as priced in-
puts, and attributes downstream costs through a terminal value at
handoff to the import DC. Pricing or store allocation remains out
of scope until a subsequent wave.
Finally, scope is expected to evolve. The contract provides
the seam: as neighboring domains are brought inside, exogenous
shadow prices are replaced by endogenous decisions without rewrit-
ing the core. In deployment practice, this is how scope “clicks
together”: the border is not a rhetorical slide but executable code
that names emissions, admissible moves, binders, prices, and halts.
The next subsection examines how these borders interact with the
rest of the system.
10.2.2 The data pipeline
Scope becomes real only where it meets the applicative landscape.
Modern firms run on silos—ERP for transactions, WMS for lo-
cations, TMS for carriage, CRM for orders—each with its own
schema, cadence, and folklore. Scoping an initiative is therefore
not an abstract drawing of boundaries; it is the decision to cut a
clean, durable seam through those silos and to uphold it with a
data pipeline that is thin, honest, and reproducible.
The numerical recipe defined in the previous subsection sup-
plies the logical perimeter (emissions, admissible options, shared
binders). The pipeline is its informational counterpart. It should
mirror the tables required by that perimeter on the cadence implied
by the decisions. In practice, a daily closing bell—an explicit cutoff
(in UTC) after which extracts are taken—beats vague “near-real-
time” ambitions that blur provenance. Freshness is an economic
choice: accelerate only when the expected uplift in coins justifies
the added friction; otherwise, favor a clean, once-a-day snapshot
that everyone can replay identically. Partial extracts and “pilot
slices” are counterproductive: they mask scale effects, conceal se-
mantic quirks, and create a false sense of correctness on toy subsets.
The perimeter you intend to optimize is the perimeter you mirror.
Each extract must be immutable and self-describing. Package
tables with a small manifest that records the as-of watermark,
per-table row counts, min/max business timestamps, and a schema
fingerprint. Stores are append-only: yesterday’s files are never
edited; retroactive corrections appear as new facts in today’s dump.
This discipline—together with time-travel—guarantees bit-for-bit
reproducibility: a run can be replayed months later without plead-
ing “the data moved”.
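A sketch of such a manifest, with hypothetical fields and figures: the as-of watermark, per-table row counts, min/max business timestamps, and a schema fingerprint, written next to the extract files themselves.

# Sketch: write a small manifest next to a daily extract (hypothetical layout).
import hashlib, json
from pathlib import Path

def schema_fingerprint(columns: list) -> str:
    """Stable hash of the column names and types of a table."""
    return hashlib.sha256(json.dumps(columns).encode()).hexdigest()[:16]

manifest = {
    "as_of": "2025-09-23T00:00:00Z",   # daily closing bell, in UTC
    "tables": {
        "sales_orders": {
            "rows": 1_842_117,
            "min_ts": "2019-01-02", "max_ts": "2025-09-22",
            "schema": schema_fingerprint([["order_id", "int64"],
                                          ["sku", "str"],
                                          ["qty", "int32"],
                                          ["ts", "date"]]),
        },
    },
}

out = Path("extracts/2025-09-23/manifest.json")
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text(json.dumps(manifest, indent=2))
print("manifest written:", out)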
Deletions and late corrections demand explicit treatment. Pre-
fer full snapshots over clever deltas; when that is infeasible, normal-
ize any upstream change-data-capture into an append-only ledger
inside the perimeter (tombstones for hard deletes; versioned rows
for updates). The goal is uniformity: the recipe reasons over a sin-
gle, growing history of events, not over a moving target of in-place
edits.
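Where full snapshots are infeasible, the normalization step can be sketched as follows, with assumed column names: updates become versioned rows, hard deletes become tombstones, and nothing downstream is ever edited in place.

# Sketch: normalize change-data-capture events into an append-only ledger
# (all column names are hypothetical).

def normalize(cdc_events: list, as_of: str) -> list:
    """Map upstream CDC events to append-only ledger rows."""
    ledger_rows = []
    for ev in cdc_events:
        if ev["op"] == "delete":
            # Hard delete upstream -> tombstone row downstream.
            ledger_rows.append({"key": ev["key"], "as_of": as_of,
                                "tombstone": True, "payload": None})
        else:
            # Insert or update -> a new versioned row; nothing is edited in place.
            ledger_rows.append({"key": ev["key"], "as_of": as_of,
                                "tombstone": False, "payload": ev["row"]})
    return ledger_rows

events = [{"op": "update", "key": "PO-1017", "row": {"qty": 480}},
          {"op": "delete", "key": "PO-0993"}]
for row in normalize(events, "2025-09-23"):
    print(row)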
Access follows least privilege: a read-only service account pulls
extracts; the initiative never writes back into systems of record
through the pipeline. Minimize personally identifiable data; most
supply-chain decisions carry no legitimate need for it. Keep the
seam thin: extracts are facts, not opinions; transformations that
change decisions belong inside the recipe, where they are versioned
with the economics.
Historical depth should span at least one full replenishment
cycle and, whenever seasonality matters, two seasonal peaks. More
is better, but “enough to learn” suffices: an honest three to six
quarters is enough to let the recipe converge; backfills can extend
depth later without derailing deployment. External feeds—public
holidays, price indices, weather for short-haul perishables—enter as
first-class tables with their own cadence and provenance. Exploit
internal records first; external feeds pay only when their signal is
demonstrably stronger than the noise they introduce.
“Helpful” upstream massaging—bespoke filters, silent imputa-
tions, pre-joined views—turns the seam into fog. Transformations
that matter to decisions must live inside the recipe, where they are
versioned alongside economics and can be audited with the result-
ing emissions. Records remain hypotheses about reality; keeping
them raw preserves provenance and lets defects be traced back
to their source system rather than to an undocumented cleansing
script.
Because applicative silos will not be re-platformed to suit the
initiative, scoping must respect their couplings without attempting
to dissolve them in advance. When a binding resource sits inside
the perimeter, its constraints are internalized; when it straddles
the boundary, its influence arrives as a shadow price from the
neighboring domain. Attempting to “simplify” by deleting incon-
venient data—or by isolating a sliver that no longer competes for
shared inventory, capacity, lanes, or attention—reintroduces the
very arbitrage across spreadsheets the initiative is meant to retire.
Technologically, the pipeline should be boring: append-only
snapshots, immutable files, manifest-guarded extracts, and time-
travel to replay any past run bit-for-bit. This modest discipline
makes halting and reversibility—required by the recipe’s con-
tract—effective in practice. Storage is cheap; premature sum-
marization is not. The only worthwhile complexity sits where coins
change hands: inside the recipe, not in the plumbing that feeds it.
Once this seam is cut—faithful mirroring on one side, auditable
transformations inside the recipe on the other—scope can expand
without upheaval. New domains are brought inside by replacing
external shadow prices with endogenous decisions; nothing else in
the pipeline needs to be reinvented. The next subsection assigns
the few roles needed to keep this seam alive day after day.
10.2.3 Roles in the initiative
Once scope is clarified and focused on a daily production of de-
cisions, the next critical step is to define responsibility for each
part of the process. Although companies often maintain many
specialized roles across supply chain, only a few are necessary to
support an automated decision-making workflow. Too many roles
simply dilute the ownership of the final outcome. By contrast, a
small, well-defined set of roles ensures accountability and keeps
the flow from raw data to final decisions clear and traceable.
The Data Officer sets up and maintains the data pipeline.
This role sits at the intersection of IT and supply chain but re-
mains firmly rooted in IT’s discipline. The Data Officer’s main
objective is to deliver raw, unfiltered extractions from the systems
of record, such as the ERP, WMS, CRM, or other legacy applica-
tions. Delivering these datasets daily or weekly in reliable form is
harder than it looks: minor schema changes, partial upgrades, or
usage quirks routinely break extraction jobs. Once stabilized, the
pipeline requires minimal ongoing effort. Critically, this role should
refrain from attempting to “fix” or “interpret” the data. Whenever
a serious anomaly appears—like negative on-hand inventory or
missing transactions—the Officer should notify the relevant teams
but not overwrite or massage the underlying records. Thus the
initiative confronts the real data, not an undocumented transformation of it.
The Supply Chain Scientist owns the entire recipe from
raw data to daily decisions. This role is the logical core of the
initiative. While the Data Officer concentrates on extracting data,
the Scientist focuses on translating every cost driver, operational
constraint, or strategic preference into the recipe itself. Unlike a
typical data analyst or “data scientist”, the Supply Chain Scientist
is personally committed to making the process production-grade.
If a recommendation is blatantly incorrect, it is the Scientist’s
responsibility to track down the source of the error—be it a mis-
taken modeling assumption, an overlooked lead-time constraint,
or a mismatch with the company’s current financial objectives.
Over time, the Scientist codes new insights directly into the recipe,
keeping it aligned with shifting operational realities. Although
the Scientist uses statistical tools, optimization engines, or sim-
ulation techniques, his guiding principle is to deliver workable
daily decisions. The difference in mindset is crucial: dashboards or
performance metrics may drift out of relevance; daily, automated
decisions cannot.
Next comes the Executive Sponsor, who provides top-level
guidance and arbitration. Supply chain decisions inevitably reflect
the company’s broader economic calculus, and the Scientist will
sometimes need to make modeling assumptions—about costs, mar-
gins, trade-offs, or vendor terms—that only senior management
can validate. A single warehouse might request premium freight for
faster restock, for instance, but doing so may conflict with the goal
of reducing air shipments company-wide. The Sponsor clarifies
how trade-offs should be prioritized if there is disagreement. He
also ensures the project has the necessary political and financial
support. If the Scientist uncovers a new factor (like leftover prod-
uct warranties or legal constraints on cross-border shipments) that
calls for a deeper process change, the Sponsor’s direct involvement
helps clear organizational hurdles promptly.
Lastly, a Flow Manager (formerly a planner or inventory
manager in older processes) remains essential. Although the daily
decisions eventually become automated, real-world complexities
never vanish. The Flow Manager provides the day-to-day opera-
tional insights that data tables cannot capture—for example, which
supplier tends to deliver partial shipments, or which packaging
lines show subtle quality issues during peak months. In the first
months of dual-running the old spreadsheet and the new recipe side
by side, the Flow Manager spots decisions that look “insane” and
relays these observations. Over time, as automation proves stable,
the Flow Manager’s job transforms. Instead of micro-checking each
stock level, he focuses on broader relationships with customers
and suppliers, handles unexpected disruptions such as a flooded
warehouse or an overnight regulatory shift, and channels that
knowledge back to the Scientist. Free from endless spreadsheet
updates, he acts as a network-wide steward of the flow, heightening
resilience and responsiveness across supply chain.
In brief, ownership reduces to four verbs: the Data Officer
sources facts; the Supply Chain Scientist emits the daily commit-
ments—and halts the engine when confidence dips; the Executive
Sponsor arbitrates trade-offs and boundary disputes; the Flow
Manager monitors field flow and escalates anomalies.
These four roles fit within a surprisingly compact structure,
even in large corporations. One person may serve as Data Officer
across multiple projects, and a single Scientist can manage hun-
dreds of thousands of SKUs. Clarifying at the outset that each
responsibility maps to a single, readily identifiable domain reduces
cross-department friction and avoids the endless hand-offs and
partial accountabilities that often paralyze more ambitious (and
less focused) projects.
Once these roles are assigned, the path is set for a tangible,
roughly six-month onboarding cycle—from data extraction to a
stable, production-grade process.
10.3 Onboarding
Onboarding is the passage from scoped intent to unattended emis-
sions. Once the perimeter, the roles, and the seam of daily extracts
are in place, what remains is execution with explicit gates: give
the numerical recipe a daily rendezvous with reality; proceed only
when readiness criteria are met; and retire manual routines without
interrupting the flow.
This section does not revisit scope, roles, or economics; those
have been settled above. Its concern is cadence and control: what
must become true, in what order, for the system of intelligence to
take charge safely. Dates follow from criteria—the calendar is a
consequence, not a plan.
The pages that follow take this path: a brief timeline naming
the gates; the daily practice that gets us there; the mechanics of
progressive rollout and reversibility; and the stance for long-term
upkeep. With these pieces, the transition from spreadsheets to
unattended decisions becomes routine rather than heroic.
10.3.1 Timeline overview
Once scope and roles are in place, onboarding proceeds through
three stages that, in most firms, fit within roughly six months. This
horizon is an empirical expectation, not a promise: the calendar
reflects organizational cadence more than compute, and a slip of a
few weeks typically signals governance or scope issues rather than
“hard” technical limits.
The first stage ends only when daily data extraction is complete
and reliable. The perimeter of tables required by the numerical
recipe is mirrored schema-faithfully at a clear daily cutoff; extracts
are immutable and self-describing; past runs can be replayed bit-for-
bit; and upstream quirks are handled without “helpful” massaging
that would blur provenance. Halting heuristics are wired so that the
engine stops itself when inputs are stale or missing. In practice, this
stage is complete once the seam has run cleanly and reproducibly
for a sustained period, with defects traced and fixed at the seam
rather than patched downstream.
The second stage ends when the recipe ceases to produce insane
decisions. With the pipeline stable, the numerical recipe runs daily
next to the incumbent routine. Each emission is accompanied
by a dossier that whiteboxes its main economic drivers in coins.
Day after day, surprises are fed back into semantics, constraints,
or prices until glaring missteps vanish. The criterion here is not
perfection but the absence of insanity: the dual-run must sustain
zero insane lines, and any remaining differences from the manual
routine are matters of marginal economics, not logic errors.
The third stage ends when trust suffices for production use.
The very same recipe now writes back within a bounded perimeter,
guarded by halting heuristics and reversible by design: emissions
are stateless and idempotent; rollback and replay are routine; and
documentation enables another Scientist to diagnose and revert
within hours. Instrumentation is convincing enough for finance
and operations to audit decisions without meetings: dossiers are
legible, attributions are clear, and the window of responsibility is
explicit. After a short, clean unattended rollout over part of the
perimeter, promotion ceases to be a leap of faith and becomes a
formality.
These gates exist to prevent pilot purgatory. If a stage lingers,
escalate the cause—unfinished extracts, scope creep, or unowned
prices—instead of stretching the calendar. Conversely, rushing
past a gate without meeting its conditions merely displaces risk
into production. The pages that follow unpack the day-to-day
work inside each stage without rehearsing scope or roles already
settled.
10.3.2 Iterative deployment
Deployment operationalizes the experimental optimization method
introduced in Chapter Engineering: expose the numerical recipe
to the real seam every day; let falsification drive the edits; and
progress only once the resulting decisions are sane. The aim here is
practical: how this stance is conducted in production, not a reprise
of its theory (cf. Chapter Engineering, Experimental optimization,
and Whiteboxing).
Daily dual-run
Each day at the closing bell, run the recipe end-to-end on the
full perimeter, with the same cutoff and the same admissible
moves it will have in production. Emit decisions but do not write
them; archive both emissions and their dossiers; compare them
to the incumbent routine. The run is stateless and reproducible:
given the extract and the code version, the outputs are identical.
Halting heuristics remain active; any halt is treated as a seam
defect (missing tables, stale shadow prices, semantic drift) to be
removed before the next run. Dual-run is not a demo; it is the daily
laboratory where the recipe meets reality until it stops producing
surprises.
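A sketch of one dual-run iteration, with placeholder functions standing in for the real recipe and the incumbent routine: the recipe runs on the full perimeter, emissions and dossiers are archived, the divergences are logged, and nothing is written back.

# Sketch: one dual-run iteration (hypothetical functions stand in for the
# real recipe and the incumbent routine).
import json
from pathlib import Path

def run_recipe(extract_dir: str) -> list:
    # Placeholder: a real recipe reads the extract and returns decision lines,
    # each with a dossier of its economic drivers.
    return [{"decision_id": "TO-2025-09-23-0001", "sku": "SKU-42",
             "qty": 12, "dossier": {"expected_margin": 37.5, "holding_cost": 4.1}}]

def incumbent_routine(extract_dir: str) -> list:
    # Placeholder for the legacy spreadsheet output, loaded for comparison.
    return [{"decision_id": "TO-2025-09-23-0001", "sku": "SKU-42", "qty": 10}]

extract_dir = "extracts/2025-09-23"
emissions = run_recipe(extract_dir)
baseline = incumbent_routine(extract_dir)

archive = Path(extract_dir) / "dual_run.json"
archive.parent.mkdir(parents=True, exist_ok=True)
archive.write_text(json.dumps({"emissions": emissions, "baseline": baseline}, indent=2))

diffs = [(e["decision_id"], e["qty"], b["qty"])
         for e, b in zip(emissions, baseline) if e["qty"] != b["qty"]]
print("archived:", archive, "| divergences:", diffs)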
Zero insane decisions
“Insane” names any emission that violates a hard physical or legal
constraint; contradicts an elementary business fact; or carries a
manifestly negative expected return under the stated prices. The
absence of insanity—sustained over the whole perimeter—is the
gate out of dual-run and matches the second stage in the timeline
above. Marginal disagreements with the manual routine do not
block promotion; they are adjudicated by the ledger and settled in
the next section’s rollout.
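The three insanity tests named above can be sketched as follows, with hypothetical field names and thresholds; any hit blocks promotion and sends the line back to the Scientist.

# Sketch: flag "insane" emissions (hypothetical fields and thresholds).

def insanity_reasons(line: dict) -> list:
    reasons = []
    # 1. Hard physical or legal constraint.
    if line["qty"] > line["truck_capacity"]:
        reasons.append("exceeds vehicle capacity")
    # 2. Elementary business fact.
    if line["qty"] > 0 and line["sku_discontinued"]:
        reasons.append("orders a discontinued SKU")
    # 3. Manifestly negative expected return under the stated prices.
    if line["expected_margin"] + line["expected_penalty_avoided"] < 0:
        reasons.append("negative expected return")
    return reasons

line = {"qty": 12, "truck_capacity": 33, "sku_discontinued": False,
        "expected_margin": 37.5, "expected_penalty_avoided": 0.0}
print(insanity_reasons(line) or "sane")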
Evolving instrumentation
To make falsification fast, every emission carries a short dossier
that exposes its main economic drivers—in coins—together with
the few constraints that bind it. The dossier schema belongs to the
recipe and evolves with it; it is versioned, replayable, and written
by the same code that produces the lines. In dual-run, dossiers
let practitioners pinpoint the cause of a surprise in minutes; in
production, the same dossiers let finance and operations audit
decisions without meetings. The mechanics of whiteboxing were
developed earlier; here we keep only what accelerates deployment.
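A sketch of what one dossier row might contain, all column names hypothetical: the main drivers of the line expressed in coins, the constraints that bind it, and the code version that produced it.

# Sketch: a dossier row attached to one emission (hypothetical columns).
dossier = {
    "decision_id": "TO-2025-09-23-0001",
    "recipe_version": "recipe@3f9c2a1",
    "drivers_in_coins": {
        "expected_gross_margin": 37.5,
        "expected_stockout_penalty_avoided": 12.0,
        "holding_cost": -4.1,
        "handling_cost": -1.8,
    },
    "binding_constraints": ["dc_dock_capacity", "full_pallet_rounding"],
}
net = sum(dossier["drivers_in_coins"].values())
print(f"{dossier['decision_id']}: net expected contribution = {net:.1f} coins")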
When dual-run has held for a while with no insane lines and halt-
ing is rare and explained, the same recipe can begin to write back
within a bounded perimeter. The mechanics of that cutover—scope
selection, safety valves, and reversibility—are addressed next under
Progressive rollout.
10.3.3 Progressive rollout
Moving from daily parallel checks to active reliance on the system
of intelligence is an exercise in risk management. The transforma-
tion must be decisive enough that the company at large no longer
falls back on older manual methods at the first sign of trouble, yet
cautious enough to avoid large-scale disruptions. Many organiza-
tions hesitate at this juncture—maintaining the new system as a
perpetual pilot—but an orderly, firm transition keeps the initiative
out of limbo.
The first scope
A common way to begin is to select a tightly knit segment of the
supply chain and let automated decisions operate for that subset
alone. For instance, a single distribution center might follow the
daily recommendations for a specific category, or a manufacturing
facility might run the automated production schedule for a single
product line. Ideally, these products or facilities already have
stable data and well-understood constraints. By adopting the
recipe’s outputs in live operations (e.g., sending the suggested
purchase orders directly to suppliers), the organization can verify
that no “insane lines” appear under genuine operating conditions.
This initial transition typically lasts a week or two, long enough
to confirm that basic cost, inventory, and lead-time assumptions
translate accurately into daily actions.
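A sketch of the bounded write-back for this first scope, under assumed names: only lines inside the live perimeter are posted, each carries an idempotent decision identifier, and every other line still goes through the dual-run archive.

# Sketch: write back only the first-scope subset (hypothetical perimeter).
FIRST_SCOPE = {("DC_North", "beverages")}   # (site, category) pairs going live

def post_to_erp(line: dict) -> None:
    # Placeholder for the real write-back into the system of record;
    # the decision_id makes reposting the same line a no-op (idempotence).
    print("posted:", line["decision_id"], line["sku"], line["qty"])

def rollout(emissions: list) -> None:
    for line in emissions:
        if (line["site"], line["category"]) in FIRST_SCOPE:
            post_to_erp(line)
        else:
            # Outside the live perimeter: archived and compared, not posted.
            print("dual-run only:", line["decision_id"])

rollout([
    {"decision_id": "TO-0001", "site": "DC_North", "category": "beverages",
     "sku": "SKU-42", "qty": 12},
    {"decision_id": "TO-0002", "site": "DC_South", "category": "beverages",
     "sku": "SKU-77", "qty": 6},
])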
Once performance stabilizes, a broader portion of the network
can be brought in. For instance, if the first wave covered one dis-
tribution center, the next may encompass all distribution centers
of the same region. With every extension, the Scientist and the
Data Officer must ensure that the same data-pipeline architecture
scales to the added locations or product lines without new com-
plexities. In parallel, the Flow Manager confirms that no unique
local rule (such as a vendor’s strict full-pallet requirement) has
been overlooked. This pattern—select, confirm stability, then ex-
pand—repeats until all relevant corners of the original scope are
covered.
Manual fallback phase-out
The harder task is deciding how—and when—to retire the old
manual steps. Many companies have attempted “semi-automation”,
in which planners review every system-generated recommendation
and adjust parameters by hand at the first hint of a shortfall.
Prudent in appearance, indefinite double-checks quietly erode the
gains. Nominal automation devolves into a glorified suggestion
engine, ignored the moment numbers contradict intuition. Every
hour spent second-guessing or retyping recommended orders is
an hour not invested in tasks a machine cannot handle—such as
forging supplier relationships or analyzing fresh market signals.
Truly capitalizing on automation requires retiring the older
process from active use. This decision may seem to require a
leap of faith. But by then, the daily comparisons should have
shown that the new method rarely, if ever, produces glaring errors.
When anomalies occur—supplier lead times suddenly spiking, an
unannounced change to a packaging standard—correcting them in
the recipe proves more systematic and faster than the spreadsheet
patchwork. Because the logic is centralized in code, a single fix
corrects potential errors across SKUs and locations. This clarity
ultimately convinces planners and middle managers that adoption
saves time without ceding control.
Handling exceptions
A dynamic supply chain inevitably encounters cases the recipe did
not fully anticipate. Natural disasters might flood a warehouse,
or a supplier might default. The same automation that works
under normal conditions now needs a quick adjustment—either a
short-term override or a modeled constraint. Immediately after
the old manual process is switched off, everything that breaks feels
urgent. Surges in recommended purchase orders or an inability to
meet large batch sizes have to be tackled without the comfort of
the old spreadsheets.
The Scientist bears primary responsibility here. Because the
recipe is stateless—recomputing decisions fresh each day from
the entire dataset—changes can be introduced as soon as the
assumptions are clear. If a water-damaged facility must run at half
capacity, the Scientist can encode that limit the next morning. The
key is to integrate those exceptions into the unified code, rather
than let them reappear as local overrides in scattered spreadsheets.
Even after the crisis passes, the recipe retains that constraint,
ready if it recurs.
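A sketch of how such an exception enters the unified code rather than a local override, with hypothetical names and dates: the constraint is dated, so it expires on its own yet remains in the code base should the situation recur.

# Sketch: a dated, site-level exception encoded in the recipe (hypothetical).
from datetime import date

CAPACITY_OVERRIDES = {
    # site: (fraction of nominal capacity, start, end)
    "WH_Lyon": (0.5, date(2025, 9, 20), date(2025, 10, 15)),  # water damage
}

def effective_capacity(site: str, nominal: float, day: date) -> float:
    override = CAPACITY_OVERRIDES.get(site)
    if override:
        fraction, start, end = override
        if start <= day <= end:
            return nominal * fraction
    return nominal

print(effective_capacity("WH_Lyon", nominal=4000, day=date(2025, 9, 23)))   # halved
print(effective_capacity("WH_Lyon", nominal=4000, day=date(2025, 11, 1)))   # back to normal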
Measuring outcomes
As mechanical decisions roll out across the network, instrumenta-
tion expands in scope. Previously, instrumentation helped catch
erratic or incorrect lines; now it also assesses whether the predicted
gains in service, cost, or reactivity are materializing. The Scien-
tist—typically with finance and operations leads—compares actual
demand to forecasts and tallies realized margins and lead times
against modeled counterparts. That 360-degree view shows how
well the daily decisions hold up to real-world variability. Crucially,
these performance artifacts—such as accuracy measures or cost
comparisons—are intended primarily for the Scientist’s iterative
improvements, rather than fueling new layers of oversight. A new
gap between forecast and demand might signal a subtle seasonality
shift or a change in competitor pricing—an insight best folded back
into the recipe’s code. If those artifacts are shared widely across
departments, the organization risks returning to forecast-tweaking
by committee, diluting accountability.
The mechanical rollout rests on two concurrent forms of mea-
surement. First, the zero-insanity threshold: does the recom-
mended order or schedule remain obviously correct for the day?
Second, the longer-term actual-versus-projected checks: does the
approach reflect the business’s cost realities and revenue objectives?
Keeping both forms of instrumentation in the code base—visible
and revisable by the Scientist—lets the Scientist locate faulty
assumptions swiftly. The company retains a stable end-to-end
perspective, free of the confusion arising from partial trackers or
conflicting spreadsheets in separate departments.
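The second form of instrumentation can be sketched as a per-driver tally, with made-up figures: realized coins are compared against the coins projected when the decision was emitted, so that a faulty assumption surfaces by name.

# Sketch: actual-versus-projected tally per economic driver (hypothetical data).
projected = {"gross_margin": 37.5, "stockout_penalty": -0.0, "holding_cost": -4.1}
realized  = {"gross_margin": 31.2, "stockout_penalty": -8.0, "holding_cost": -4.3}

for driver in projected:
    gap = realized[driver] - projected[driver]
    print(f"{driver:18s} projected {projected[driver]:7.1f} "
          f"realized {realized[driver]:7.1f} gap {gap:+6.1f}")
# A persistent gap on one driver (here the stockout penalty) points to the
# assumption the Scientist should revisit first.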
Solidifying autonomy
Over several months of phased rollout and refined instrumentation,
the initiative coalesces into a daily routine. Staff who once scram-
bled through spreadsheets redirect attention to more nuanced tasks
or to immediate exceptions that inevitably arise in fluid markets.
Confidence in the automated system grows steadily. By the time
the entire scope runs under the new recipe, the manual approach
is not missed. Employees see that re-litigating every reorder decision
is unnecessary, while managers note that shifts in demand, supply
constraints, or even cost structures can be accommodated at the
recipe level swiftly and consistently.
Successful automation ignites a cultural shift. Once staff real-
ize that one piece of supply chain has truly become mechanical,
they begin identifying new areas where a similar approach applies.
Even if those expansions fall outside the original scope, they can
piggyback on the pipeline, instrumentation, and iterative process
that have already proven themselves. Such momentum spares
the organization from the stagnation common after conventional
software rollouts, where the system is “frozen” as soon as it goes
live. Here the system’s daily outputs remain open to refinement,
precisely because the capacity for quick iteration was baked into
the initiative’s structure from the beginning.
Transitioning to full automation is not the final word. The next
question is how to sustain and improve this new normal, ensuring
the recipe keeps pace with shifts in corporate strategy or market
conditions. While no one wishes to revert to spreadsheets once a
well-tested mechanical approach is in place, ongoing vigilance is
needed to prevent complacency. The following section explains how
to keep the initiative relevant amid continuous evolution, turning
each event—from everyday changes to black-swan disruptions—into
an opportunity to strengthen the automated process.
10.3.4 Long-term maintenance
A deployed system of intelligence is never static; however polished
its numerical recipe may appear, business conditions and opera-
tional realities keep evolving. New regulations may arise, pricing
structures may change, and lead times or service constraints may
shift in ways the automated recipe did not anticipate. An initiative
that stops adapting soon becomes as obsolete as the system it
replaced.
Maintenance, in this context, is not limited to software patches
or data extraction fixes. Above all, it is about preserving the
correctness and relevance of the economic logic encoded in the
recipe. If the company opens a new distribution center or negotiates
a new supplier contract, the costs and constraints underpinning
daily decisions must be revised to reflect these developments. In
practice, failing to maintain the recipe means either reverting to a
litany of manual overrides that undermine automation or letting
it produce decisions increasingly divorced from the company’s
objectives.
Relentless evolution
One critical reason for systematic upkeep is that every link is
constantly evolving—new SKUs are introduced, old SKUs retired,
marketing campaigns appear, and unexpected disruptions force
alternate routes. If the recipe stays locked in last year’s assump-
tions, it will miss fresh opportunities and fail to mitigate newly
introduced risks.
This variability is not a flaw of automation but a feature of real-
world markets. A robust initiative embraces change by scheduling
regular reviews of the recipe’s assumptions and surfacing anomalies
whenever the data pipeline detects large deviations from normal
conditions. A surge in freight costs might call for deeper modeling
of container capacities or a reevaluation of the favored transport
mode. Similarly, adding advanced manufacturing lines can alter
trade-offs: batch sizes once optimized can become wasteful if new
lines offer cheaper, faster setups.
Finance and operations
Coordinating updates to the recipe demands ongoing dialogue
between the Scientist and senior leaders in finance and operations.
Day-to-day maintenance can address minor issues—a new vendor
code or a recalibrated reorder policy—whereas significant changes,
such as a new discount structure or a different approach to working
capital, are inherently strategic.
For the recipe to remain aligned with the company’s current
financial realities, it must reflect any shifting cost allocations or
budgetary constraints. Because finance controls how capital is
valued, the Scientist looks there for clarity on how aggressively or
conservatively to scale inventory, how to treat intangible overhead
(such as brand damage from a missed shipment), and how to
weigh shipping delays against the expense of urgent overtime. An
annual or semiannual joint review ensures the code continues to
reflect the company’s strategic direction rather than drift into stale
patterns. If upper management wants to prioritize expansion into
new markets or slash air-freight usage by a given percentage, the
Scientist cannot guess these priorities and must be informed early
and precisely.
Stateless implementation
An enduring design principle for effective maintenance is to keep
the recipe stateless. Each day or week it recomputes every rec-
ommendation from raw data, carrying no hidden state forward.
This approach prevents errors from compounding over time. If an
upstream glitch inflates today’s stock levels, tomorrow’s run—once
the data is corrected—naturally reverts to normal decisions. The
system can “snap back” as soon as the data pipeline recovers,
without lingering ghosts of old calculations.
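The snap-back property amounts to a simple design constraint, sketched below with hypothetical names: the run is a pure function of the dated extract and the code version, so a corrected extract yields corrected decisions with no stored state to purge.

# Sketch: statelessness as a pure function of (extract, code version).
def daily_run(extract_date: str, code_version: str) -> list:
    # Hypothetical stand-in: the real recipe recomputes every decision from
    # the raw extract of that date; nothing is carried over from prior runs.
    extract = f"extracts/{extract_date}"       # read-only input
    return [{"extract": extract, "code": code_version, "sku": "SKU-42", "qty": 12}]

# Replaying yesterday with corrected data is just another call; no stored
# state needs to be found, edited, or purged.
print(daily_run("2025-09-22", "recipe@3f9c2a1"))
print(daily_run("2025-09-23", "recipe@3f9c2a1"))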
Stateless designs also make long-term maintenance easier. When-
ever the Scientist updates a cost function or introduces new con-
straints, the change takes effect immediately on the next run,
touching every SKU or site that needs adjustment. There is no
need to weed out older stored states or to worry that a previously
trained model might have internalized outdated signals. Recomput-
ing each decision may look demanding, but modern infrastructures
handle these volumes for supply chains of any size, especially once
data extraction is under control.
This stateless pattern shows its worth when the system must
handle surges in demand or abrupt policy shifts. If an entire region
experiences an unusual spike in orders, the next daily run can
incorporate it without being anchored to the prior day’s misjudg-
ment. The recipe is flexible: it adapts as soon as conditions evolve,
eliminating the temptation to rely on stale manual buffer rules
once used to shore up a fragile process.
Black-swan events
Beyond everyday variability lies the risk of severe, fringe disrup-
tions—warehouse fires, floods, pandemics, or sudden regulatory
embargoes. Such black-swan events upend normal cost structures,
capacity constraints, and distribution paths. While no daily recipe
can anticipate events far outside typical parameters, the organiza-
tion can rely on the iterative approach used during the initiative’s
creation. When disaster strikes, the Scientist encodes the new or
temporary constraints directly into the recipe: a site might run at
half capacity or be bypassed entirely; an emergency logistics route
might be assigned ten times the usual shipping cost. Once these
short-term adjustments are made, the system runs again, giving
an immediate view of how the new constraints reshape allocations
or reorder lines.
A black-swan event might also require deeper changes—building
alternative flows or drastically revising lead times for an entire
region. The principle remains: the recipe is the single point of
integration, so the newly discovered constraints or extreme costs
are introduced there. This approach is more resilient than ad hoc
firefighting through manual overrides scattered across spreadsheets
and departments. Each fix, once validated, becomes part of the
code base and ceases to rely on ephemeral human vigilance.
Crucially, an extraordinary crisis can become an opportunity to
deepen the company’s risk awareness. If a supplier default reveals
that 90% of a key component was single-sourced, the recipe can
be permanently updated with rules that prevent overexposure to a
single vendor. The same instrumentation used for daily checks can
surface such vulnerabilities. In time, lessons from turmoil elevate
the baseline, ensuring the system emerges more robust.
Long-term maintenance, therefore, is not only vigilance against
normal drift but also a structured response to sudden shocks. Both
daily updates and urgent fixes tie back to a single stateless recipe
that remains the core of an automated supply chain. Once a
company becomes accustomed to this cycle—small increments,
periodic recalibrations, and crisis-driven overhauls—it finds that
the automated flow stays current. The initiative proves more stable
and flexible than any patchwork of spreadsheets because it has a
built-in mechanism for continual renewal.
At this juncture, the initiative has a stable architecture, daily
operations run under automated decisions, and the company is
ready for both routine changes and big surprises. What remains is
to assess how this redefines roles, organizational structures, and
job descriptions: a well-maintained recipe does not merely replace
old processes; it transforms the supply chain workforce.
10.4 Organizational changes
Organizational shifts are unavoidable once automated decisions flow
daily through the company. Though the mechanics of a well-defined
scope and an iterative rollout may feel purely technical, people
across the business soon realize that many tasks—once handled
by planners or middle managers—now proceed with little human
intervention. Employees accustomed to gathering data, reconciling
conflicting indicators, and juggling last-minute overrides discover
that the new process leaves fewer gaps to patch. Some welcome
the relief and focus on thornier supply chain problems that resist
automation. Others sense a loss of relevance and resist the change,
citing alleged “exceptions” or insisting the system “cannot capture
every detail”.
Yet once laid out explicitly, these details can be encoded into
the recipe. If, for instance, a planner tweaked reorder quantities
for a vendor that consistently shipped partial pallets, that nuance
belongs in the code. Once added, the recipe no longer relies on
last-minute fixes. Freed from perpetual firefighting, employees
can extend the scope of the initiative or strengthen relationships
with partners. Over time, the company must acknowledge that
many day-to-day routines have changed. Traditional planners,
accustomed to scanning spreadsheets for hundreds of SKUs each
morning, find such manual checks redundant. Finance, which once
saw forecasts as loose “best guesses” that demanded caution in
budgeting, is now confronted with a system that commits capital
daily unless told otherwise. The IT department, historically over-
loaded with backlog requests for reports or mini-applications, finds
that a single well-engineered pipeline supplants many scattered
customizations.
Overcoming these frictions requires a clear articulation of why
the initiative demands not only a shift in software but also a
transformation in who does what. If the automated decisions are
trusted, the company no longer benefits from employing multiple
teams to replicate or override them. Rather than stewarding endless
spreadsheets, planning staff should enhance how each product is
sourced or stored, refine cost assumptions, and capture new signals
that would otherwise remain invisible to the algorithm. Similarly,
finance executives need not revisit the same line-by-line details if
the economic drivers in the recipe faithfully reflect the intended
constraints. Instead, their role is to adjust those drivers when
strategy changes or new resource limits apply.
This shift also calls for new forms of cooperation. A mechanical
system is both transparent and inflexible: it cannot quietly paper
over a mismatch between two departments. If a warehouse con-
trol rule contradicts a recent marketing push, the conflict appears
quickly in the daily outputs. Ultimately, resolving such contra-
dictions requires more than a polite meeting or an ad hoc fix. It
requires the Scientist—or whichever role now owns the recipe—to
incorporate the updated guidelines and then rerun the logic. For
that to happen without chaos, the organization must unite behind
a single automated process rather than let each team revive a
personal spreadsheet whenever it dislikes the result. Those who
cling to local workarounds will dilute the benefits of automation
for everyone.
The most visible change unfolds in the day-to-day planning
workforce. Previously, they rotated across SKUs or sites, adjusting
inventory levels, double-checking lead times, and manually inter-
vening when supply or demand changed abruptly. Now they can
engage more strategically. Instead of verifying hundreds of reorder
lines, they might confirm that new supplier contracts are coded
correctly or investigate an unexpected surge that the recipe flags
as an outlier. Their daily work becomes less about mechanical
data entry and more about relationship-building, discovering new
cost drivers, and anticipating shifts in the market. That evolution
can feel unsettling at first. Some employees may assume their
original role—manually handling each step—remains irreplaceable.
They may try to keep partial manual processes alive. But once the
system runs smoothly for a small subset and expands to a broader
scope, the recipe’s consistency soon outperforms scattered human
vigilance.
Although organizational changes can trigger discomfort, they
pave the way for the company to become more agile and more
aligned with genuine priorities. If a strategic initiative aims to lower
transport costs or reduce overall lead times, it can be implemented
directly in the recipe, then tested and measured on real orders—not
just debated in a meeting. Planners gain opportunities to shape
how constraints are modeled, rather than playing catch-up each
time demand shifts. Executives see a clearer cause-and-effect
between constraints they approve and the subsequent outcomes in
inventory or fulfillment metrics. As unproductive routines dissolve,
staff once absorbed in repetitive tasks can focus on forging stronger
ties with suppliers, gleaning meaningful signals from the market,
and ensuring that new product introductions or transitions align
with the daily mechanical approach.
These organizational adjustments form the backbone of a sup-
ply chain that thrives on rigorous, automated decisions. In the
sections that follow, we turn to the most direct impact on plan-
ners themselves, who evolve into “Flow Managers” within this
new framework. We then examine how the Scientist role becomes
central to sustaining and improving these decisions long after the
initial deployment.
10.4.1 Planner vs. Flow Manager
When daily decisions become fully automated, the traditional
role of the planner—poring over spreadsheets, adjusting reorder
quantities, and reconciling conflicting priorities—gradually loses
much of its daily purpose. In many ways, this shift is liberating.
Freed from the burden of chasing small data inconsistencies or
applying repetitive heuristics, the same person can step into the
higher-value role of Flow Manager. Rather than micromanaging
every SKU, they now survey the entire supply network, identify real-
world exceptions, and cement collaborative ties among suppliers,
carriers, and internal stakeholders.
Repetitive adjustments
Planners were typically tasked with aligning moving pieces—forecast
data, supplier lead times, and local constraints—by manually rec-
onciling spreadsheets and system-generated figures. They were the
“glue” holding processes together whenever the formal software
or the official forecast fell short. In an automated setting, how-
ever, the logic that once filled these spreadsheets is embedded in
the recipe that produces each day’s decisions. This recipe does
not merely generate theoretical numbers: it proposes or enacts
reorder lines, allocations, or capacity bookings, each grounded in
the company’s financial costs and constraints. Whereas before
a planner might spend hours each week scouring for suspicious
stock levels or last-minute shipping mistakes, the process now flags
outliers automatically, prompting targeted interventions rather
than universal manual checks.
As a result, the focus of the role rises to a higher level. Instead
of scanning hundreds or thousands of SKUs for reorder miscalcu-
lations, the ex-planner can invest his energy in more strategic or
more relational tasks. He can track subtle signals that numeric
models might miss, such as a supplier’s newly acquired certification
or early warnings that a carrier is struggling in certain regions. He
can also assess areas where the recipe might benefit from deeper in-
sights—such as incorporating real-time capacity data for a critical
production line or capturing new discount schedules from a primary
vendor. By supplying these discoveries back to the Scientist, the
Flow Manager ensures that the automated recipe evolves without
daily guesswork.
The state of consensus
This transformation often does away with the repetitive alignment
sessions—commonly known as Sales and Operations Planning
(S&OP)—in which teams formerly debated how to adjust forecasts
or handle uncertain capacity. In many companies, these sessions
devolve into monthly or quarterly rounds of nudging numbers, pro-
ducing a single consensus figure still disconnected from operational
realities. Such a process, at best, repeats known constraints and
departmental viewpoints; at worst, it masks conflicts and leads to
feeble compromises.
With a numerical recipe driving daily allocations or production
schedules, there is no protracted debate about which forecast is
“right”. All relevant constraints—lead times, cost structures, stock-
out penalties—are either in the system or not. When a department
raises a concern (for instance, a marketing plan that will push
demand spikes on short notice), the response is concrete: the recipe
must be amended to reflect the true economic stakes. One might
encode higher penalty costs for stockouts of certain promotional
items, or factor in premium freight for sudden surges. Rather
than negotiating a middle-ground forecast in weekly meetings, the
Flow Manager and the Scientist ensure that operational decisions
align automatically with updated models of costs and constraints.
This renders the old forecasting debates largely moot—every “fi-
nal figure” that shapes daily orders is anchored to the company’s
economic logic, not a compromise hammered out in a conference
room.
Real exceptions
Because the numerical recipe self-corrects in response to new data,
daily overrides for individual SKUs become unnecessary. The
Flow Manager’s vigilance instead focuses on genuinely abnormal
events—cases so far beyond the recipe’s rules that they would
produce decisions at odds with reality. For example, if a key
supplier closes a plant for two weeks after an unforeseen technical
failure, the manager identifies a major disruption. Instead of
overriding hundreds of reorder lines, he elevates the matter: “Our
lead time from Supplier A is effectively doubling this month, with
partial shipments only.” The Scientist codes the change, the recipe
reruns, and the entire system readjusts accordingly—from revised
reorder quantities to different carriers, if needed. In the former
manual approach, such an event would trigger days of frantic email
threads and one-off spreadsheet changes in multiple regions. Now
it is handled as a single change in the logic.
This shift in focus reduces friction across the organization.
The Flow Manager no longer needs to defend, line by line, how
he reached certain reorder amounts. Instead, he can examine
the broader shape of demand spikes or supply constraints. If
certain items systematically underperform service expectations,
the question is not “why did you reorder too little last week?”
but rather “how can we reframe the cost drivers or lead-time
assumptions so that the recipe captures this item’s volatility?”
True issues get resolved swiftly; old patterns of “we’ll just push up
the forecast” fade away.
Strategist, liaison, and auditor
Over time, the rebranded role of Flow Manager stands at the inter-
section of three core missions. First, he becomes a strategist—no
longer stuck in day-to-day reorder monotony, he refines the under-
lying financial logic, highlights future expansions (new warehouses
or product lines), and proposes broadening the automated scope.
Second, he serves as a liaison—communicating with sales, mar-
keting, and procurement to gather developments the automated
system has not yet captured. Finally, he acts as an auditor—spot-
checking the system’s outputs for outliers, investigating anomalies,
and verifying that the cost or lead-time assumptions still match
real-world conditions.
None of these missions resembles the classic routine of adjusting
safety stock in a spreadsheet every morning. That old pattern is
replaced with continuous improvement cycles. Each cycle expands
either the coverage of the automation (taking on new product
families, channels, or constraints) or the depth of the model (in-
troducing better cost approximations for shipping, new rules for
batch production, or more nuanced penalty structures for delayed
deliveries). The ex-planner, freed from clerical tasks, is key to
unearthing which expansions matter most.
The wider team
When the Flow Manager devotes energy to strategic liaison work,
the former roles of “senior planners” or “production coordinators”
must likewise adapt. In organizations that cling to older structures,
multiple staff may still attempt to replicate daily decisions offline.
This duplication only reintroduces confusion—no two spreadsheets
mirror each other perfectly. But in organizations that embrace
the shift, employees who once specialized in manual reordering
can become domain experts. They may concentrate on specific
complexities, such as ensuring hazardous-materials shipping follows
precise regulations, or forging stronger lead-time improvement deals
with logistics partners.
Moreover, while S&OP or forecast-collaboration meetings might
vanish in their traditional form, some communication channels re-
main. The difference is that these channels no longer revolve
around “tweaking” forecasts. Instead, they focus on bridging the
recipe’s logic with the front lines of the business—discussing new
store expansions, pending product line retirements, unexpected
raw material surcharges, or brand-protection strategies that, if
not coded, lead to suboptimal daily outputs. Each conversation
is grounded in the same integrated logic that drives every deci-
sion, and participants talk about adjusting real drivers instead of
haggling over whose forecast is correct.
In sum, as automation takes hold, the planner’s role rises from
daily micromanagement to that of an empowered Flow Manager.
By assuming accountability for capturing operational truths—both
subtle and dramatic—the ex-planner-turned-manager ensures that
the mechanical decision process remains accurate, timely, and ever-
improving. Freed from spreadsheet chores, his insights expand
to serve the entire network, from supply partners to in-house
manufacturing, forging deeper synergies than the old routines ever
permitted.
10.4.2 The Supply Chain Scientist
The Supply Chain Scientist carries a unique responsibility: translat-
ing the organization’s evolving economic realities into a numerical
recipe that steers the daily flow of goods. Once the company
commits to replacing large swaths of manual planning with auto-
mated decisions, the Scientist becomes the process’s linchpin. This
role stands apart from traditional business analytics or forecast-
ing positions because it mandates both end-to-end ownership of
the recipe and direct accountability for profitability. Instead of
merely improving visibility or producing numerical artifacts, the
Scientist aims to deliver correct, self-adjusting decisions, trusted
in production day after day.
Bridging with IT
From a technical standpoint, the Scientist sits at the frontier of the
company’s systems and data. Yet it would be a mistake to treat the
Scientist as part of the IT function. The Scientist must be free to
focus on modeling supply chain decisions, not on wrestling with a
litany of data-integration chores that IT already specializes in. The
Scientist does rely on a stable pipeline of raw records—unfiltered
extracts of transactions, inventories, supplier terms, and the like.
However, once that pipeline is set up, the Scientist’s mission pivots
to coding new insights and constraints that reflect supply chain
conditions. The IT group, for its part, remains the caretaker of
data extractions and infrastructure.
This division of responsibilities benefits both sides. The Sci-
entist does not worsen the IT backlog with endless demands for
specialized transformations; the data officer in IT delivers neutral,
daily snapshots of the source systems, letting the Scientist handle
all refinements in the recipe. Meanwhile, the IT department can
concentrate on its broader operational mandate—maintaining the
stability of key applications and ensuring that every upstream
database and data feed remains healthy. Once a problem surfaces,
such as a table rename in the ERP or a failed overnight export, the
two sides cooperate quickly: IT resolves the pipeline glitch, and the
Scientist resumes normal operations with the updated data. There
is no friction or confusion about who “owns” the transformation
logic or which changes were introduced midstream.
On finance
A corollary of the Scientist’s mandate is the need to integrate finan-
cial constraints deeply into daily decisions. Cost drivers—whether
freight rates, working-capital limits, or margin targets—cannot
remain abstract ideas floating in slide decks. They require explicit
numerical treatment so that every allocation roots the trade-offs
in genuine monetary realities. The Scientist thus cooperates with
finance not out of formality but out of necessity: if upcoming
promotional campaigns alter expected margins, or if new capital
constraints preclude particular inventory levels, the recipe must
be updated accordingly.
These exchanges with finance take place more regularly than
might be assumed. During the initiative’s rollout, the Scientist
typically holds a series of short meetings where he presents how
current cost assumptions—such as storage overhead or penalties
for expedited freight—translate into daily decisions. Finance might
spot hidden inaccuracies, such as an overlooked handling fee for
bulk deliveries or a missed opportunity to factor discount tiers
more finely. Correcting these details can shift the solution from a
naive local optimum to decisions that better reflect the company’s
real P&L impact. This is how the automation remains financially
“sharp”—no budget line or profit center is left to guesswork.
Over time, even after the recipe matures, major operational or
strategic shifts still prompt fresh rounds of modeling. A high-level
directive—such as lowering the cost of capital or diminishing expo-
sure to a class of risks—flows through finance to the Scientist, who
translates it into revised penalty functions, discounts, or sourcing
rules. The effect appears in the daily outputs soon after, keeping
the company’s finances and day-to-day supply chain harmonized.
On leadership
Although the Scientist is neither the CEO nor the Executive Spon-
sor, he does hold a pivotal seat at the table whenever top leadership
alters strategic priorities. As soon as the new direction crystal-
lizes—entering a new country, switching to nearshoring, favoring
premium service for certain product lines—the numerical recipe
must encode those priorities. Without this prompt alignment, the
mechanical decisions remain stuck in outdated logic.
A direct, brief conversation with top leadership once or twice
a year can suffice. Its purpose is not to pore over micro-level
code changes. Rather, it confirms that the Scientist understands
the larger picture. If the company aims for faster expansions yet
stricter capital usage, the Scientist might reflect this by tightening
inventory budgets on slower-moving items or by introducing steeper
penalties for using emergency freight. The recipe’s daily outputs
thus shift where they will influence the bottom line. Because
the code is stateless, no stale state lingers to hamper the transition.
Within a week of leadership’s new directive, the supply chain might
already be producing purchase orders aligned with those strategic
imperatives.
This pattern of swift adaptation demonstrates how strategic
coherence emerges from the supply chain’s mechanical foundation:
once the code’s assumptions change, every relevant decision shifts as
well. Where a manual process once required months of incremental
retraining for dozens of planners, a single update here instantly
scales across the entire product range or site network.
Beyond two Scientists
In many midsize companies, a single Supply Chain Scientist—plus
a second for redundancy—can cover entire decision streams. Yet as
the initiative grows—adding more product lines or diving deeper
into advanced production or distribution problems—additional
Scientists may be needed. Managing a larger team requires a more
rigorous approach to the recipe’s structure.
The first principle is to keep as much shared logic as possi-
ble in a unified codebase. Each Scientist might handle distinct
sub-areas—one focuses on procurement constraints, another on
production scheduling—yet all collaborate on a single blueprint.
Just as software engineers rely on version control and code reviews,
Scientists adopt similar practices to coordinate daily changes and
ensure an improvement in one domain does not inadvertently break
constraints in another.
Crucially, Scientists remain supply chain experts first and fore-
most. They are not hired to pursue abstract machine-learning
innovations. If they set up code reviews, the goal is not to create a
specialized IT pipeline but to keep daily decisions consistent and
error-free. Each Scientist learns the rationale behind major lines
of code—the “why” as well as the “what”. A master manual
5
can
prove invaluable, tracking the strategic motivations for each rule.
When one Scientist departs or a new hire arrives, knowledge trans-
fer is smoother and spares the group from rediscovering intricacies
of cost assumptions or corner-case constraints.
By handling expansions in this manner, the Scientist’s team
can scale gracefully while preserving the daily iteration pace. No
matter how large the business becomes, every aspect—from new
discount structures to refined lead-time predictions—can be tackled
by the relevant Scientist and then merged, in an orderly way, into
the master recipe.
Strict focus
Although they code extensively, Scientists are not subordinate
software developers. Their success is measured by the real impact
of daily decisions—lower inventory misalignments, higher margins,
or faster adaptation to abnormal conditions. They do not vanish
into IT’s backlog, and they are not a generic “data science” team
chasing ephemeral accuracy benchmarks. Instead, they retain a
sharp focus on actionable changes that reorder the flow of goods
to better serve strategic and financial goals.
This grounding in the tangible separates them from depart-
ments that traffic in dashboards or once-a-quarter analyses. The
Scientist must repeatedly take small, purposeful steps with code
whenever the environment shifts, verifying each day’s results in
production. The synergy with IT is essential but bounded at
the data-extraction boundary. The synergy with finance persists,
ensuring that cost drivers retain credibility even as the market
moves. And the synergy with senior leadership tightens, allowing
the automated process to reflect any new direction the company
adopts swiftly.
By internalizing these realities, companies realize that the Scien-
tist’s role is neither ephemeral nor disposable after the “automation
project” ends. Just as the Flow Manager replaces the manual plan-
ner in day-to-day tasks, the Scientist replaces a tangled web of
partial solutions with an evolving, codified approach. The necessity
of strategic updates never ceases, and the Scientist stands ready
to embed each shift into the mechanical logic. This continuous
interplay is how a supply chain remains perpetually relevant and
linked to real economic drivers.
The Scientist, in short, is a new breed of contributor to corpo-
rate operations—one whose orientation is neither purely technical
nor purely managerial, but a binding of the two. Their authority
is drawn not from any formal managerial hierarchy but from the
everyday accuracy and adaptability of their code. In the next
section, we examine in greater depth the common pitfalls that be-
fall “data science” in corporate contexts, and why genuine Supply
Chain Scientists must go beyond superficial number-crunching to
attain lasting success.
10.4.3 The greater company
Every initiative that reshapes supply chain decisions, no matter
how precisely it is scoped at the outset, inevitably presses on the
corporate fabric as a whole. The numerical recipe cannot operate in
isolation—its daily outputs must reflect the constraints, objectives,
and cost structures of multiple departments, many of which have
traditionally maintained local processes disconnected from broader
concerns. While the earlier sections showed how to structure the
initiative around a Supply Chain Scientist, the Data Officer, the
Flow Manager, and the Executive Sponsor, the rest of the company
must also learn to interact with—and ultimately trust—this new
decision-making process. In the absence of a unifying framework,
local, short-term pressures from sales, marketing, procurement, or
other areas risk overriding or undermining the mechanical output.
Yet the real strength of the recipe lies precisely in gathering
every relevant driver and expressing it daily, without hidden bias
or fragmented oversight. For that to happen, each department
must relinquish the temptation to maintain its own confidential
spreadsheets and shadow processes. It must, instead, state its
constraints and cost structures in the open so the Scientist can
encode them directly. This is where upper management, and in
particular the top executive or CEO, must intervene decisively. The
CEO is not, and cannot be, the final arbiter of economic drivers;
such details are too fine-grained for the top office. Rather, the
CEO is the ultimate arbiter of which department owns each driver,
and of ensuring that these owners cooperate with the Scientist
rather than attempt to circumvent the system. If finance asserts
that certain capital costs must be set at a specific rate, those costs
become the recognized standard. If procurement stipulates that
a particular strategic vendor is to be spared from cuts in order
volumes, that choice is modeled explicitly. The policy must come
from the relevant department, but once confirmed, it ceases to
exist as an unwritten rule and is anchored in the daily decision
logic.
This arrangement can feel jarring to departments that previ-
ously adjusted their numbers on a whim. When a brand manager
finds that a popular product does not appear in the upcoming
order plan, he may fear losing revenue or market share. In the
old approach, he could scramble to override the reorder quantity
in a spreadsheet, whether or not it truly benefited the company’s
bottom line. In the new system, if the brand manager believes
that omitting the product causes reputational damage that far
outweighs any short-term profit concerns, the correct path is to
escalate this factor so its financial cost or penalty is coded into the
recipe. The goal is not to “blame” the mechanical process for ignor-
ing a brand imperative; it is to embed that imperative—expressed
as a penalty for stockouts or a premium for brand-building avail-
ability—in the economic drivers. From then on, the daily decisions
that once led to friction will naturally align with the brand man-
ager’s priority, so long as that priority remains truly important to
the company’s overall strategy.
Similar tensions arise with marketing when a promotion is
omitted from the final plan, or when a featured product proves
marginally profitable and is assigned a low reorder priority. If the
marketing team has evidence that a promotion will raise cross-
selling of other items, it must show how the additional revenue or
brand equity translates into an economic payoff. Once the added
payoff is stated explicitly, the Scientist incorporates it into the
code. Those marketing concerns no longer get handled through last-
minute overrides or “final forecast adjustments”; they become part
of the daily mechanical process—an open, documented variable
that can be revised if market feedback shows the promotion did
not generate the expected cross-selling. This open articulation of
what were once tribal or intangible factors forces each department
to decide whether a local desire is a passing hunch or a vital
economic truth. It is, in effect, a game-theoretic dynamic: every
department sees that partial cooperation (such as whispering an
inflated demand to secure inventory) no longer yields an advantage.
The final recipe checks all constraints daily; if a cost or benefit
is not declared, it is not recognized. Top management’s role is
to ensure each domain states its costs and gains honestly, rather
than manipulating demand figures to secure resources at others’
expense.
Procurement, likewise, may initially feel threatened when the
daily recipe dictates exact reorder lines and timing, effectively
replacing the manual negotiations some buyers believed were their
primary value. Yet the procurement function remains indispensable
in revealing which vendors have reliable quality, which suppliers
impose hidden conditions, and which may merit sacrifice in favor
of new sourcing strategies. The procurement team does not need
to overshadow the system or revert to side deals; it gains more
influence by stating vendor constraints clearly. Are there mini-
mum batch sizes, special inbound tariffs for partial shipments, or
vendor loyalty discounts? Each nuance is a piece of the economic
puzzle. Once coded, it shapes the recipe’s outputs, so procurement
influences daily decisions through stable, rational rules rather than
repeated manual rework.
The same logic applies to engineering teams and product de-
signers who want to incorporate new components or adjust bills
of materials. In the past, a design modification could blindside
planning, causing last-minute shortages when the newly specified
part was overlooked. Now the mechanical daily process immedi-
ately flags any unknown part, revealing that the new item lacks
cost references or lead times. The design team can no longer slip
changes through unnoticed; it must collaborate with the Scientist
to define how the new component alters production constraints or
overhead. This enforced collaboration spares the company from
silent supply mismatches that once appeared “inevitable”.
Even finance—long accustomed to viewing supply chain as a
cost center apart from brand or marketing priorities—is pressed
to clarify how liquidity constraints and working-capital targets
feed into daily reorder levels. If finance wants certain lines to run
with minimal buffer stock, or signals that inventory valuations are
changing, those updates translate into immediate code changes.
The daily outputs no longer merely reflect a forecast and a stan-
dard margin; they express finance’s up-to-date stance on capital
usage. When a top executive wonders why certain SKUs have
been cut back, there is a direct line from finance’s constraints to
the automated decisions—and from the Scientist’s code back to
finance’s constraints. No arbitrary overrides are needed and no
endless forecast meetings are held; everyone sees the cause and
effect.
In such an environment, the question of blame rarely arises.
If an unexpected shortfall in a promoted SKU hits the shelves,
marketing will challenge the numerical recipe—not to undermine
it, but to check whether the promotion’s impact was correctly
included. The Scientist, in turn, checks whether marketing truly
disclosed all relevant information—such as incremental brand risk
or cross-selling gains. If not, the remedy is a new or revised cost
assumption. Over time, these cross-department interactions evolve
from friction-laden skirmishes into constructive updates to the
recipe’s parameters. This model of collaboration thrives only if
top management, from the CEO downward, endorses the process
and stands firmly against reintroducing manual manipulations
whenever a department is dissatisfied with an outcome.
Once each department sees that alignment comes from adjusting
shared assumptions—not from ignoring or overriding mechanical
outputs—it realizes the mechanical decisions have become the
single source of truth for daily operations. That single source is
not rigid or unwieldy; it adapts as soon as a cost, constraint, or
margin driver proves relevant. It refuses to incorporate half-baked
ephemeral guesses. This dynamic can be disconcerting in the
initial months, as employees discover they can no longer quietly
fix an inconvenient number in their spreadsheets. Yet after a
full cycle—often a quarter or two—people recognize that fewer
emergencies arise, fewer turf wars erupt, and the supply chain feels
more coherent.
The initiative becomes more than a supply chain improvement;
it is a mechanism for continuous cross-department synergy. Every
domain within the company—marketing, procurement, engineering,
finance, and operations—sees that the best way to secure its
priorities in the daily flows is to articulate them openly and then
to trust the mechanical process to execute them consistently. The
CEO’s role is to ensure each domain truly owns the costs or benefits
it claims and to endorse the Scientist’s final code as the channel
through which those claims reshape decisions. Anything less breeds
duplication or pushback that dilutes the new system’s gains.
Over time, a culture emerges in which no one fights the recipe’s
daily numbers so long as they reflect properly stated drivers. If
marketing finds brand costs understated, it raises the issue for-
mally. If procurement sees a vendor’s reliability has improved, it
updates either the penalty for stockouts or the expected lead times.
Everyone gradually grasps that game-like maneuvers—such as in-
flating forecasts—no longer yield an advantage, because the recipe
cross-checks everything with actual data. The shared impetus is
to refine numerical assumptions, not sabotage them.
This shift toward a single, daily, mechanical orchestration may
appear dramatic in companies long reliant on departmental auton-
omy. The benefits include sharp reductions in firefighting, deeper
synergy across brand, marketing, and operations, and a stead-
ier link between strategic ambitions and on-the-ground execution.
The moment each department accepts that it must shape the in-
puts—rather than override the outputs—the initiative achieves the
company-wide collaboration essential to long-term success.
Chapter 11
Stagnation
Supply chain, as treated in this book, is an intent executed through
decisions; the narrow claim of this chapter is that, as a discipline,
supply chain has largely stagnated. Not because goods fail to
move—they do—but because firms’ choices of what to move, when,
where, and how much have not kept pace with four decades of
computation. The dominant posture remains clerical: people
reconcile exceptions produced by software that records facts and
narrates the past but rarely places bets.
By stagnation we mean operational stasis masked by motion.
Ledgers are digital and ubiquitous; reports refresh by the minute;
data warehouses swell; acronyms and dashboards proliferate. Yet
the core mechanics of allocation are much as they were in the mid-
1980s: point forecasts treated as certainty, buffers tied to arbitrary
service targets, reorder surrogates wired into weekly cadences, and
a daily swirl of approvals. Computers serve as systems of records
and reports, and where “automation” appears, it is usually alert
theater—lists that hand the hard cases back to humans—rather
than a system of intelligence that issues unattended, auditable
commitments.
This stasis exacts a cost. It systematically suppresses what
moves money: fat-tailed variability, couplings across time and
space, and the option value of waiting. Averages stand in for
distributions; local performance indicators stand in for economics;
Goodhart’s law quietly turns measures into targets. The result is
a clerical equilibrium that delivers continuity through buffers and
overtime, yet lets coins leak in ordinary times and falters under
stress.
None of this is inevitable. Earlier chapters have assembled the
missing instruments: probabilistic views of the future instead of
point surrogates; coin-denominated valuations at firm scope; and
systems of intelligence that recompute within frames, price trade-
offs, and write decisions back unattended—guarded by halting
heuristics, whiteboxed dossiers, and experimental optimization
under dual-run. When these tools are adopted, the same assets
produce better flows because the decisions change.
What follows is both synthesis and indictment. The field did not
merely fail to accelerate with modern computation; it settled into
a stable, mediocre equilibrium sustained by incentives that reward
narratives over emissions: academic puzzles, vendor rebranding
of records and reports as “planning”, consulting ceremonies, and
internal “data science” that stops at insights. The organizational
trap—supply chain’s second-class status and subordination to
central IT—anchors the arrangement.
This double image—a flow that keeps delivering and a discipline
that stands still—sets up the paradox that follows.
11.1 The paradox of success
Supply chains are celebrated as dependable. Goods cross borders
daily; store shelves are rarely bare for long; companies post solid
results while expanding their networks. Judged by visible outcomes,
the field appears vigorous and resilient.
Yet this outward success masks a deeper stalemate. Over
decades of extraordinary gains in computing, most organizations
have treated computers as ledgers and dashboards rather than
decision engines. What passes for “automation” is usually a cascade
of exceptions for humans to reconcile; what passes for “planning”
remains a brittle blend of deterministic forecasts, reorder surrogates,
and manual overrides—merely reskinned with modern interfaces.
Continuity is bought less with engineered intelligence than with
buffers, redundancy, and clerical vigilance.
This is the paradox: reliable delivery at scale coexists with
intellectual stasis. Other domains have converted cheap compu-
tation into unattended decision making; supply chain has largely
repackaged yesterday’s routines. The much-touted innovations
are cosmetic—new acronyms for old artifacts, new screens for old
habits—while the core remains a daily, human-mediated reconcili-
ation of numbers.
The cost of this stasis seldom appears in a day’s snapshot
because the flow does “deliver”. It becomes conspicuous dur-
ing shocks—abrupt demand shifts, transport disruptions—when
processes designed for compliance rather than adaptation to un-
certainty falter. More damaging is the quiet, compounding bleed
in ordinary times: overgrown safety stocks, underused options,
and thousands of hours spent curating spreadsheets instead of
improving decisions. Expensive software is acquired, but it rarely
elevates decision quality; its “automation” devolves into alert the-
ater that assigns the hard parts back to people. More insidious
than any visible miss is the mounting opportunity cost: every day
that variability is treated as a nuisance rather than priced and
exploited is a day when surplus that modern computation could
harvest simply evaporates.
Earlier chapters outlined remedies that already exist: proba-
bilistic views of the future, economics-anchored valuations, and
systems of intelligence that emit unattended, auditable decisions
daily—continuously improved through experimental optimization.
The aim here is not to re-argue those instruments but to explain
why a field that could adopt them largely has not. The next section
traces the roots of this equilibrium—academic fashions, vendor
economics, consulting theater, and corporate governance—that
keep a demonstrably central function stable and mediocre.
11.2 Real-world complexity
Invoking “complexity” has become ritual apology for mediocre prac-
tice. The problem is not that supply chains are complicated—our
first chapter already distinguished the essential from the acciden-
tal—but that most organizations flatten what matters economically
into a handful of surrogates. Point forecasts stand in for uncer-
tainty; a single “service level” impersonates profit; average lead
times erase the tails that do the damage. The result is not a model
of the flow but a tranquilized caricature of it.
Simplification is not a vice when it preserves the asymmetries
that move money; it is a fault when it erases them. Variability and
optionality are the raw materials of profit. When a representation
suppresses fat tails, correlations, batching, or ratchets that bind
tomorrow’s options, it no longer guides decisions—it conceals their
stakes.
What follows is not another taxonomy of failure but a con-
crete vignette. A small, operational episode makes it sharp. It
also bridges to the method that consistently breaks stalemates in
the wild: experimental optimization (see the chapters Decisions,
Engineering, and Deployment).
11.2.1 Operational vignette
Consider a mundane episode. A national retailer operates two
regional DCs, each with a nominal receiving capacity of forty
containers per day. Purchasing works on a weekly cadence keyed to
supplier MOQs and rebates; the planning system computes reorder
points from point forecasts and average lead times; warehouse KPIs
track dock utilization and turns; stores are judged on “on-shelf
availability”. No villain, only sensible local goals.
Every Monday, sixty to seventy containers arrive at DC West.
The gates saturate by noon; trucks idle; demurrage mounts; a frac-
tion of the freight rolls to Tuesday; the tail spills into Wednesday.
Meanwhile, store shelves run patchy mid-week on long-tail SKUs
that were “in transit” in bulk on Monday yet physically receivable
only on Wednesday. No dashboard lies: service looks fine on a
weekly average; the rebate ladder was captured; dock utilization
peaked; inventory turns hold. Yet coins leak everywhere—idle
trucks, overtime, transient stockouts—while the organization con-
gratulates itself for “hitting targets”.
Nothing subtle is required to fix the picture; what is required
is to admit the picture. In dual-run, a thin numerical recipe was
introduced: lead times as distributions with fat right tails; a priced
shadow for dock saturation that rises nonlinearly beyond forty
containers per day; supplier MOQs as penalties rather than edicts;
and, crucially, a daily assessment of orders without an obligation
to order daily. The engine could emit “do nothing yet” whenever
the option value of waiting exceeded the expected, risk-adjusted
return.
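
The following sketch, in Python and with invented figures, illustrates
the ingredients just listed: a shadow cost on dock saturation that
rises nonlinearly beyond forty containers, an MOQ expressed as a
penalty rather than an edict, demand carried as scenarios with a fat
right tail, and an explicit comparison between ordering now and doing
nothing yet. It is a toy, not the recipe that was deployed.

import random

DOCK_CAPACITY = 40  # containers per day, nominal receiving capacity

def dock_shadow_cost(containers_planned: int) -> float:
    # Congestion cost in money: negligible below capacity, rising steeply above it
    # (demurrage, overtime, rolled freight). The quadratic form is a hypothesis.
    excess = max(0, containers_planned - DOCK_CAPACITY)
    return 150.0 * excess ** 2

def moq_penalty(qty: int, moq: int, per_unit_shortfall: float = 2.0) -> float:
    # The supplier MOQ as a soft penalty rather than a hard edict.
    return per_unit_shortfall * max(0, moq - qty) if qty > 0 else 0.0

def expected_margin(qty: int, demand_samples: list[float], unit_margin: float) -> float:
    # Averaged over demand scenarios: only what actually sells earns the margin.
    return unit_margin * sum(min(qty, d) for d in demand_samples) / len(demand_samples)

def decide(qty: int, containers: int, moq: int,
           demand_samples: list[float], waiting_value: float) -> str:
    # Daily assessment of one candidate order, with "do nothing yet" as a real option.
    payoff = (expected_margin(qty, demand_samples, unit_margin=3.0)
              - dock_shadow_cost(containers)
              - moq_penalty(qty, moq))
    return f"order {qty}" if payoff > waiting_value else "do nothing yet"

# Demand scenarios with a fat right tail (hypothetical figures).
random.seed(0)
demand = [80 * random.lognormvariate(0, 0.6) for _ in range(1000)]
print(decide(qty=500, containers=48, moq=600, demand_samples=demand, waiting_value=50.0))  # saturated dock
print(decide(qty=500, containers=35, moq=600, demand_samples=demand, waiting_value=50.0))  # dock has slack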
On day one, the halt tripped on insanity—a planned Monday
inbound of forty-eight containers for a forty-container dock; dossiers
exposed the drivers—priced in coins—and the defect was removed:
weekly batches were replaced by staggered releases that preserved
rebates while flattening the inbound. Within four weeks of dual-run,
demurrage and receiving overtime fell markedly; mid-week shelf
gaps on the long tail diminished without raising working capital;
rebate capture persisted. No heroics, no grand reorganization:
the same assets, the same suppliers, the same stores—only the
decisions changed.
The lesson is not that “Mondays are bad”. It is that the real
complexity was economic and temporal, not algebraic. Averages
hid tails; a single service metric hid heterogeneous payoffs; hard
“constraints” became elastic once priced. Computers did nothing
human planners could not, in principle, do—they simply did it for
thousands of micro-options every day, without fatigue, surfacing
the right frictions in money.
11.2.2 Local maximum
The episode shows why mediocre equilibria persist. Each function
perched on a defensible hill: purchasing on rebates and MOQs;
warehouses on utilization; stores on availability; finance on turns.
The hills were real; the mountain lay elsewhere. Attempts to
“optimize” any one hill in isolation merely displaced pain. This
is the wickedness described in the first chapter: objectives couple
across time and space, and measures become targets the moment
livelihoods depend on them.
Escaping a local maximum requires no manifesto—only a safe,
mechanical way to try better decisions and keep them when they
pay. Experimental optimization provides exactly that: run side-
by-side against end-of-day extracts; halt fast on insanity; expose
drivers in coins; promote only on measured uplift; revert cheaply.
Because the method prices trade-offs at firm scope, it dissolves
many made-up “constraints” into penalties and turns departmental
vetoes into tunable, auditable parameters.
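
One hedged reading of that loop, reduced to a skeleton, looks as
follows; the function names are placeholders for whatever the actual
implementation supplies, and the point is only the order of
operations: run side by side, halt on insanity, price the difference
in coins, promote or revert.

def dual_run_cycle(load_extract, incumbent_recipe, candidate_recipe,
                   sanity_checks, value_in_coins, promote, revert):
    # Run the candidate side by side with the incumbent on one end-of-day extract.
    extract = load_extract()                 # yesterday's authoritative snapshot
    baseline = incumbent_recipe(extract)     # the decisions currently trusted
    trial = candidate_recipe(extract)        # the decisions under the proposed change

    # Halt fast on insanity: any failed check stops the candidate before it ships.
    for check in sanity_checks:
        ok, reason = check(trial, extract)
        if not ok:
            revert(reason)                   # cheap, because the recipe is stateless
            return "halted: " + reason

    # Expose the difference in coins; promote only on measured uplift.
    uplift = value_in_coins(trial, extract) - value_in_coins(baseline, extract)
    if uplift > 0:
        promote(candidate_recipe, uplift)    # the candidate becomes the incumbent
        return f"promoted: uplift of {uplift:,.0f} in coins"
    revert("no measured uplift")
    return "kept incumbent"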
In practice, the barrier is seldom physics; it is narrative. As
long as averages and single KPIs narrate success, the equilibrium
looks acceptable. Replace the narration with daily, auditable
commitments priced in coins, and the supposed “constraints” reveal
themselves for what they are: choices. The next section examines
why the institutions meant to sharpen those choices—academia,
vendors, consultants—so often teach averages and KPIs instead.
11.3 Why the equilibrium holds
Bad ideas need no shocks to survive. A supply chain can stay
serviceable while its methods lag. The equilibrium persists because
incentives are muted and misaligned: each actor involved in shaping
the discipline earns a quiet rent from keeping decisions human-
mediated and deniable, while no one is paid in coins for unattended
decision quality.
Such equilibria endure because they clear a low bar. Trucks
still move, rebates are captured, dashboards stay green, and no
one must stake a career on a bolder alternative. In the absence
of an existential shock, the local gains of inaction often outweigh
the uncertain gains of rewiring decisions. As long as the apparatus
works well enough, pressure to change remains muted.
The mechanics are banal. Long feedback loops dilute respon-
sibility; dashboards substitute targets for economics; pilots are
staged where failure cannot bite; “exceptions” divert the hardest
cases back to humans, thereby insulating brittle logic from falsi-
fication. In such a setting, mediocrity compounds quietly. What
passes for prudence—alerts before action, manual “validation”,
consensus around a single forecast—merely preserves a clerical
loop that neither prices uncertainty nor learns from it.
Around this loop sits a network of interests that damps pres-
sure to change. Academia is rewarded for puzzles and publication,
not for decisions that survive contact with docks and cut-offs;
it polishes stylized models that are easy to grade and hard to
operationalize. Software vendors make their margins on seats,
storage, and upgrades of systems of records and reports; genuine
automation would shrink seats and erase screens. Consultants sell
choreography—workshops, matrices, “alignment”—whose deliver-
ables leave code and economics untouched; the engagement ends,
the rituals remain, and the spreadsheet flow resumes. Corporate
data-science teams, measured by projects and dashboards, stop
at insights; ownership of the coin-denominated commitment sits
elsewhere, along with the incentive to rework assumptions when
reality disagrees. Inside the firm, supply chain remains second-class
and subordinate to central IT: budgets favor ledgers that never fail
over engines that must decide; risk is defined as platform deviation
rather than economic loss.
Prices signal these incentives as clearly as any market does.
Vendors bill for access, not uplift; consultants for effort, not gains;
academics for novelty, not resilience; internal teams for artifacts,
not emissions. Goodhart’s law then does the rest: once a met-
ric promises career safety, the organization optimizes the metric.
“Forecast accuracy” improves by shortening horizons and padding
buffers; “service level” rises with inventory that nobody prices;
“adherence to plan” is celebrated while trucks idle at a saturated
dock.
The vignette above is typical, not exceptional. Nothing in
physics prevented the weekly inbound pile-up; it persisted because
the measures governing each silo—rebates, utilization, availabil-
ity—were local, while the costs they displaced—demurrage, over-
time, mid-week gaps—were borne elsewhere. Where no one is
explicitly accountable for the firm-wide economics of the decision,
clerical equilibria harden into custom. The silence of profits and
losses at the level of the micro-choice leaves narratives—“best
practice,” “industry standard,” “one set of numbers”—to fill the
void.
The sections that follow examine the main sustainers of this
equilibrium. Academia’s attachment to publishable puzzles, ven-
dors’ rebranding of records and reports as “intelligence” while
outsourcing intelligence to users through alert theater, consultants’
buzzworded ceremonies that stop short of code, and corporate data
science’s insulation from ownership all contribute to a stable but
mediocre state. The organizational trap—supply chain’s second-
class status and subordination to central IT—locks the arrangement
in place. None of this requires malice. It suffices that incentives are
decoupled from unattended, auditable, coin-denominated decisions.
Until that coupling is restored, the equilibrium holds.
11.4 Who sustains it
The stalemate endures not through malice but through dispersed
incentives. Four constituencies, each pursuing reasonable aims on
its own terms, collectively sustain a stable but mediocre equilib-
rium: academia, enterprise software vendors, consultants, and
corporate data-science programs. Each constituency supplies
something the others demand—publishable puzzles, marketable
platforms, confidence-building ceremonies, and dashboards of “in-
sights”—while leaving the daily act of allocation where it has long
resided: with human clerks reconciling exceptions.
The lens adopted here is adversarial, not hostile. Earlier chap-
ters have already set out what workable progress looks like: unat-
tended, auditable emissions of decisions priced in coins. Against
that yardstick, we can examine how each constituency, by following
its own incentives, perpetuates practices that never clear the bar.
We begin with academia, whose stated mission—advancing
knowledge—should have made it the natural antidote to stagnation.
Instead, through a persistent misframing of supply chain, it has
too often furnished the rhetoric of science without the substance
of unattended decisions.
Academia
Universities and journals promise science; the factory floor receives
spreadsheets. The gap is not a matter of taste but of framing. As
argued in Epistemology, supply chain is applied economics: it prices
options under variability to allocate scarce resources profitably.
Recasting it as “applied mathematics”—a catalog of tidy puzzles
with conveniently fixed assumptions—yields papers that read well
and travel poorly. What matters operationally is not a theorem
about an idealized reorder point; it is whether unattended software
can turn records into auditable, coin-denominated commitments
that survive contact with docks, cut-offs, rebates, and fat-tailed
lead times.
This misframing explains why mainstream academic outputs
so rarely register in practice. The canonical artifacts—point fore-
casts, Gaussian safety stocks tied to service targets, deterministic
plans—suppress exactly what moves money: tails, correlations,
batching, and ratchets that bind tomorrow’s options. When those
artifacts are installed in software, they reappear as alert theater:
the brittle logic delegates the difficult cases back to humans, who
become the de facto intelligence. The organization then congrat-
ulates itself for “following best practice” while planners continue
the clerical reconciliation computers were meant to end.
The puzzle filter compounds the problem. A publishable paper
is easiest to produce when the world is simplified until a closed
form or a small solver can be shown to “optimize” a toy. One
more constraint is added here, one more index there; results are
reported on synthetic instances or sanitized extracts. Yet nothing
in this workflow compels the authors to expose their premises to
falsification in the wild. In business, knowledge earns trust only
when it places bets—emits decisions unattended—and improves
profit. Absent emissions, the bar is not met, no matter how elegant
the mathematics looks in isolation.
The neglect of data semantics and software engineering is
equally telling. As detailed in Information, modern flows are
mediated by overlapping systems of records and reports, where
field meanings drift and where “derived data” (forecasts, ABC
tags, buffers) contain no new information. A workable system of
intelligence—the engine that issues unattended decisions—must
therefore (i) read authoritative but messy records, (ii) carry un-
certainty without flattening it, (iii) price trade-offs in coins at
firm scope, and (iv) write commitments back, with dossiers fit
for audit. Most academic treatments airbrush this junction away.
They assume clean, stationary inputs and stop where the work
becomes decisive: semantics, economics, and code that runs every
day at scale.
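
As a sketch of what that junction demands, the following Python stub
maps the four requirements onto explicit types and functions; the
names are hypothetical and the figures are placeholders, but the
shape is the point: records in, scenarios carried, trade-offs priced
in money, commitments written back with their dossiers.

from dataclasses import dataclass, field

@dataclass
class Records:
    # (i) Authoritative but messy extracts: transactions, stock levels, supplier terms.
    tables: dict[str, list[dict]] = field(default_factory=dict)

@dataclass
class Scenarios:
    # (ii) Uncertainty carried as scenarios, not flattened into a point forecast.
    demand: list[float] = field(default_factory=list)
    lead_time_days: list[float] = field(default_factory=list)

@dataclass
class Commitment:
    # (iv) A decision written back with its dossier: the drivers, priced in money.
    sku: str
    quantity: int
    drivers_in_coins: dict[str, float]

def price_in_coins(qty: int, scenarios: Scenarios) -> dict[str, float]:
    # (iii) Trade-offs priced in money at firm scope; the figures are placeholders.
    samples = scenarios.demand or [0.0]
    expected_sales = sum(min(qty, d) for d in samples) / len(samples)
    return {"margin": 3.0 * expected_sales, "storage": -0.02 * qty}

def emit(records: Records, scenarios: Scenarios, sku: str, qty: int) -> Commitment:
    # The engine reads records, carries scenarios, prices the line, writes it back.
    drivers = price_in_coins(qty, scenarios)
    return Commitment(sku=sku, quantity=qty, drivers_in_coins=drivers)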
The right test bench has long been available. Intelligence,
Decisions, and Deployment laid out a minimal discipline: dual-run
against full-scope end-of-day extracts; halting heuristics that stop
the engine when confidence drops; whiteboxed dossiers showing the
monetary drivers of each line; promotion only on measured uplift;
reversibility through statelessness and time-travel. A scientific
contribution that claims operational merit should instantiate itself
within that harness—so that failures surface as “insane lines”
and are removed by revising costs, constraints, or semantics, not
by exception lists and meetings. Papers that do not cross this
threshold are chronicles or conjectures, not supply chain knowledge.
Curricula mirror the literature. Master’s programs continue
to center on time-series gadgets from the 1980s, Gaussian safety
stocks tethered to arbitrary service targets, and classroom ABCs.
Programming and data engineering are treated as accessories;
enterprise semantics and probabilistic economics scarcely appear.
Graduates learn to supervise spreadsheets and packaged “planning”
modules, not to author numerical recipes that convert records into
unattended emissions. The best student, sensing the ceiling, drifts
toward other STEM fields where code, data, and falsification govern
status. The remainder staff the clerical loop that the literature
presupposes.
None of this condemns mathematics. Durable mathematical
contributions do exist in our trade—but they present themselves
as subroutines inside an economic and engineering frame: queueing
results that calibrate congestion costs; linear-programming compo-
nents that assemble a truck; concentration bounds that regularize
demand tails. The error is to mistake the subroutine for the system.
An optimizer with the wrong objective, wrong semantics, or no
unattended pathway is not an advance; it is a polished detour.
A workable academic agenda is close at hand. Treat supply
chain as applied economics first; let mathematics, statistics, and
computer science serve that end. Deliver artifacts that run: code
that ingests semantically faithful extracts; price schedules in coins
rather than “service targets”; probabilistic forecasts evaluated by
proper scoring rules and uplift, not by retrospective fit; numerical
recipes shipped whiteboxed so practitioners can audit each commit-
ment. Above all, accept the discipline of experimental optimization:
iterate in the field until insanity disappears, then publish what
survived. Citations may follow; if not, the coins will.
11.4.1 Software vendors
Enterprise vendors sustain the equilibrium not by conspiracy but
by economics and engineering fit. As argued in Information, there
are three distinct software families: systems of records (authori-
tative ledgers and workflows), systems of reports (narrations of
the past), and systems of intelligence (engines that issue unat-
tended, auditable decisions). Most vendors sell records or reports,
then repackage add-ons as “planning” or “optimization” to chase
growth. The category confusion is not semantic; it is operational.
A CRUD-first ledger cannot host heavy analytics without harming
latency; a reporting stack built to reshape aggregates cannot carry
uncertainty or price trade-offs in coins. When records or reports
masquerade as intelligence, the flow reverts to clericalism.
The dominant pattern is alert theater. “Automation” takes the
form of an exception list that delegates the hard cases back to
humans. Users—recast as human coprocessors—manually reconcile
thin rules against fat-tailed realities. Nothing in this posture
improves the software’s intelligence; it merely hides the absence
behind modern interfaces. The incentives align: vendors bill by
seats, storage, and modules, not by uplift; genuine unattended
decisions would reduce seats and screens. In Intelligence we noted
the culture clash: records vendors optimize CRUD productivity
and uptime; systems of intelligence optimize economic decision
quality under uncertainty. The former has no incentive to build
the latter.
Hype cycles amplify the misfit. Every few years, a wave of
“AI-driven planning”, “digital twins”, or “control towers” promises
a clean break from spreadsheets. Startups are acquired and folded
into suites; the add-on inherits the host’s record/report DNA and
reappears as a configurable ruleset with alerts. The combinatorics
remain unaddressed; variability is flattened into averages; costs
are proxied by arbitrary service targets. The result is continuity
under new acronyms.
Analyst quadrants stabilize the story. Because the research
economy is funded by the vendors it rates, superficial features
and fashionable labels are rewarded. The list of “leaders” ro-
tates slowly; the criteria privilege breadth of catalog and refer-
enceability over unattended decision quality. On the client side,
RFP theater—critiqued in Deployment—produces adverse selec-
tion: questionnaires canonize yesterday’s artifacts, demos reward
choreography, and the winner is the supplier best at answering
questions that never should have been asked. When the stack goes
live, the exception list returns.
Operationally, the failure modes are consistent. Category confu-
sion creates resource contention (optimizers inside the ledger starve
workflows) and information loss (derived artifacts are ingested as
if they contained new information). Semantic drift—tolerable in
a ledger when humans supervise—becomes lethal when fed to a
decision engine that nobody lets decide. Where a system of in-
telligence would recompute from raw, authoritative records and
write back priced, auditable commitments, the packaged “planning”
tier consumes pre-digested numbers and emits suggestions that
planners must edit to stay safe.
None of this requires bad faith; it is the equilibrium implied
by the business model. Vendors price access, not coins; buyers ask
for screens, not unattended emissions. Thus, a stable mediocrity
persists. The remedy is not a new quadrant but a different contract
and a different architecture: keep records, reports, and intelligence
strictly disjoint; run the decision engine as a stateless, replayable
asset that ingests raw records, carries probabilistic inputs, prices
trade-offs in money, and writes commitments back with dossiers
that expose their drivers. Select suppliers through adversarial
market research rather than RFPs; pay in proportion to unattended
decision scope and measured uplift. Where these conditions are met,
alert theater dies of disuse; where they are rejected, spreadsheets
resurface—an outcome that should be read as the diagnostic it is.
11.4.2 Consultants
Consultancies thrive by promising change that preserves what their
clients are least willing to disturb: the applicative landscape, the
org chart, and the daily rituals that assign accountability without
owning outcomes. In the equilibrium described earlier, vendors sell
records and reports under “planning” labels; consultants complete
the loop by supplying the rhetoric and choreography that let brittle
tools pass for progress. The deliverable is seldom a decision but a
narrative: maturity models, quadrants, roadmaps, and ceremonies
that rebrand yesterday’s routines.
The literature that underwrites these engagements is volumi-
nous and shallow at once. It catalogs horizons, capabilities, and
archetypes; it partitions the world into neat tiers and dial settings;
it multiplies 2×2 matrices whose corners invite an infinity of pages.
Such segmentations are not knowledge; they are staging. They
evade the economic question—what allocation of scarce resources
improves profit under variability—and substitute a taxonomy that
can be taught and sold. As argued across this book, classifications
earn their keep only when they predict; most consulting taxonomies
do not.
Inside the engagement, buzzwords supply momentum. “S&OP
alignment”, “integrated business planning”, “demand-driven”, “dig-
ital twin”, “control tower”, “resilience by design”—none is intrinsi-
cally absurd, but all are routinely deployed as covers for unchanged
decision loops. The work product is workshops, cadence charts, and
“one-number” committees. The same pattern reappears: exception
lists grow, spreadsheets metastasize, and the hardest cases—fat
tails, binders, ratchets, incoherent semantics—are politely routed
back to the very clerks the software failed to relieve.
This cycle repeats because it is safe. Reorganizing meetings
does not threaten the ERP; renaming phases does not expose
premises to falsification; publishing a “target operating model”
does not force a single coin-denominated commitment to be placed
without human arbitration. The engagement pleases every con-
stituency: executives receive a story of progress; middle man-
agement retains vetoes through ritual checkpoints; vendors keep
modules in place; the consultant departs with references. The
change in the flow of goods is negligible.
A workable standard exists to separate choreography from
substance. In Information, Intelligence, Decisions, and Deployment
we set the bar: daily, full-scope dual-run on authoritative extracts;
promotion only on measured uplift in coins. By that yardstick, most
consulting interventions are non-events: they emit no unattended
decisions, carry no probabilistic futures, price no trade-offs, and
accept no halting responsibility. They narrate; they do not allocate.
The economic test is simple. If a proposition cannot be run
tomorrow night against the client’s end-of-day extracts, audited
in coins, and—if it survives—paid for in proportion to uplift, it
belongs to theater, not practice. When consultants are willing to
sign for unattended emissions and accept reversibility and uplift as
the basis for fees, they cease to be consultants in the conventional
sense; they become co-authors of the numerical recipe. This is
rare, not because the work is mystical, but because the prevailing
business model rewards ceremonies over code.
The “consultant cycle”—periodic transformations that rename
the same routines—will persist until buyers invert the incentives.
The remedy, already outlined in Deployment, is adversarial market
research: have vendors and would-be advisors draft and critique
each other’s problem statements, demand back-of-the-envelope
TCOs and peer lists, and pay only for unattended scope and
measured uplift. Where the engagement produces emissions, keep
it; where it produces quadrants and cadences, let it lapse. The next
section shows how a similar equilibrium reproduces itself inside
the firm under the banner of “data science”—insights without
ownership standing in for decisions.
11.4.3 Corporate data science
Inside firms, “data science” has largely reproduced the same equilib-
rium sustained by academia, vendors, and consultants: it produces
insights without taking ownership of emissions. Since the early
2010s, teams have multiplied dashboards, pilots, and forecasts; yet
few organizations can point to daily purchase orders, allocations,
or prices issued unattended by code and written back to ledgers.
The work looks modern, but it stops short of placing bets in the
world.
Three traits recur. First, deliverables remain forecast-centric
and KPI-centric. Error bars, accuracy scores, and tidy graphs
abound, while the firm-wide money ledger—the only common
yardstick—is absent. Uncertainty is described, not priced. Second,
the engineering junction is elided. Models are built atop derived
artifacts (ABCs, buffers, “cleansed” tables) instead of authoritative
records; field semantics drift; halting conditions are undefined.
When a recommendation is brittle, an “exception” hands the
hard case back to a planner. Third, incentives reward slides and
proofs-of-concept rather than unattended commitments. “Decision
support” becomes a convenient euphemism for preserving human
arbitration over the very decisions software should make.
The result is insight theater that complements vendors’ alert
theater. Planners end up reconciling brittle rules against fat-
tailed realities by hand. The combinatorics that justify com-
puters—millions of micro-options each day—are pushed back to
people. Clericalism, now animated by dashboards, persists.
The alternative is well within reach and has been sketched
across this book. Replace the notion of an insight factory with a
Supply Chain Scientist who owns a numerical recipe: a compact
program that ingests authoritative records, carries probabilistic
views of demand, lead times, and reliabilities, prices trade-offs in
coins at firm scope, and emits unattended decisions. The Scientist
is accountable for emissions, not for decks. This posture forces
tacit assumptions into the open and binds modeling to economics.
The operational vignette above (flattening the Monday pile-up
by pricing dock saturation and admitting “do nothing yet” as
an option) is typical. No reorgs, no new warehouses: the same
assets deliver better because decisions changed. This is what
day-to-day ownership accomplishes. As soon as dock capacity
carries a shadow price, MOQs become penalties rather than edicts,
waiting competes economically with buying, and the weekly rhythm
stops dictating losses. A system of intelligence turns such reasoning
into unattended practice; a data-science dashboard does not.
Accountability also imposes proportion. Once costs, constraints,
and ratchets are encoded end-to-end, marginal gains in a sub-model
are pursued only when they move coins at the emission boundary.
Fashionable metrics (forecast accuracy, service level) lose their
talismanic status; what matters is uplift in risk-adjusted operating
profit after write-back. Collaboration then shifts naturally: finance
publishes capital charges and valuations; procurement publishes
rebates, MOQs, and reliabilities; operations publishes capacities
and cutoffs. The recipe binds these inputs nightly under dual-run,
so learning compounds where it pays.
This is how the internal equilibrium breaks. Data science stops
narrating the flow and starts running it—under safeguards that
make automation safe to trust: daily replayable extracts, state-
less computation, whiteboxed dossiers, and reversible emissions.
The chapters Information, Intelligence, Decisions, and Deployment
have already set the harness; here the point is institutional: until
someone is paid for unattended decision quality, “data science”
will continue to feed clerical loops. The next section arms read-
ers with rejection filters for the literature that nourishes those
loops—materials that look busy yet fail the tests of prediction,
pricing, and falsification.
11.5 The organizational trap
External constituencies—academia, vendors, consultants—help ex-
plain why the equilibrium holds, but the decisive brake is internal
to firms. Supply chain retains second-class status, de facto subordi-
nated to corporate IT. Reputation, bureaucracy, and ticket latency
interlock until the organization accepts clerical vigilance in lieu of
engineered intelligence. The remedy is not another committee: a
structural reallocation of ownership and a thin seam of data that
frees flow from platform vetoes.
11.5.1 Second-class status
Supply chain is praised as “mission-critical”, yet treated as pe-
ripheral. Because trucks keep moving and shelves are stocked,
leadership infers that only incremental polishing is warranted. The
appearance of continuity becomes an alibi for underinvestment in
code and the people who write it. A self-fulfilling loop follows:
prestigious recruits aim elsewhere; the department inherits routines
rather than authors decisions; shadow spreadsheets proliferate to
compensate for fragile “planning” add-ons; the resulting clericalism
confirms the original prejudice that nothing better is possible.
The reputation gap is old—logistics as the dull chore after
the “real” work of product and brand—and it still shapes staffing
and budgets. What makes the loop particularly tenacious is its
intellectual veneer: singular, noneconomic KPIs, such as a “98% service
level” or “forecast accuracy”, masquerade as objective. When
livelihoods depend on such surrogates, buffers inflate, exceptions
are deferred to humans, and the ledger glows even as coins leak. The
department looks busy and safe, even as it refuses the probabilistic
economics that would measure trade-offs in money and automate
the mundane.
11.5.2 Bureaucracy and ticket latency
Where intelligence is manual, bureaucracy thrives. Alerts and
exceptions are routed through rituals—monthly “one-number” con-
claves, pre-allocation councils, approval cascades—whose true func-
tion is to distribute accountability rather than to improve decisions.
Each ritual adds a layer of process without reducing uncertainty;
the combinatorics remain unchanged, only now filtered through
more hands.
Ticket latency completes the trap. Any substantive change—an
added constraint, a revised cost schedule, a subtle semantic cor-
rection—requires filing tickets against central systems. Weeks
turn to months as priorities are triaged by platform risk, not by
expected economic uplift. By the time a ticket clears, the business
has moved on and the patch is already stale. What should be a
daily, reversible code edit inside a decision engine is recast as a
cross-department project; the status quo prevails by default.
Compromise therefore drifts toward the smallest step all parties
can tolerate. “Machine-learning” modules are grafted on as dash-
boards that defer hard cases to an exception list; ERP upgrades
unify fields but leave the decision rules untouched. The facade
changes; the clerical loop persists.
11.5.3 Subordination to IT
Corporate IT’s charter is uptime, standardization, and compliance.
This is proper for systems of records and reports. It is ill-suited to
systems of intelligence. When “planning” is hosted inside the ledger,
heavy analytics contend with workflows; when it sits atop reporting
views, uncertainty is flattened and decisions devolve into alert
theater—users as human coprocessors. Budgets follow the charter:
capital flows to CRUD platforms; decision automation is purchased
as a module and delivered as configuration plus exceptions.
Risk is inverted: deviation from the platform is treated as
danger; economic loss from inferior decisions is treated as business
as usual. Gatekeeping then hardens the boundary conditions:
derived artifacts are treated as authoritative, schema changes are
anathematized, and any tool that cannot be absorbed by the suite
is rejected on principle. Under such governance, even sincere
attempts at “advanced analytics” resolve into dashboards feeding
the clerical loop.
11.5.4 Patch accretion
Half-measures accumulate. A temporary workbook to smooth a
rebate ladder, a bespoke report that reconciles two masters, a
per-vendor exception in a buyer’s tool—each acquires dependents
who rely on its outputs. Over time the patchwork gains defenders,
and removing any one part appears riskier than leaving all in place.
What began as a stopgap hardens into custom.
Patch accretion scatters accountability. No one owns the eco-
nomic quality of the whole; each team owns a sliver and a story.
This is wickedness in miniature: the problem mutates as each
patch adapts to its neighbors, and survival is mistaken for success.
The cure is architectural rather than exhortatory. A seam
that makes facts portable and a stateless recipe that turns those
facts into priced, auditable commitments dissolve patches into
parameters. With daily dual-run, veto power retreats; patches are
either encoded properly or retired.
11.5.5 Seam and ownership
Two structural moves break the trap.
First, establish a seam: a faithful daily extract covering the
systems of records. It is boring on purpose: append-only snapshots
with manifests; enough history to cross a full cycle; no “helpful”
upstream massaging; least-privilege reads. With it, the decision
engine can be stateless and replayable. Tickets that once sought to
alter ledgers become code changes against a private copy; semantics
are audited empirically rather than negotiated by email. Latency
shrinks from months to hours, and reversibility becomes routine.
Second, assign ownership of the numerical recipe to supply
chain. Roles follow naturally (see Deployment): a Data Officer runs
the seam; a Supply Chain Scientist authors the recipe; an Executive
Sponsor arbitrates prices and boundaries; a Flow Manager brings
field sense and escalates anomalies. Vendors and consultants are
judged by emissions and measured uplift, not by screens.
Absent such a seam and such ownership, nobody is even posi-
tioned to ask—let alone answer—the only question that matters
operationally: why are we not using modern computation to handle
uncertainty and allocate scarce resources more profitably?
These two moves demote bureaucracy and platform vetoes
from governors to parameters. Once a seam exists and ownership
is clear, the department is no longer a second-class custodian of
other people’s software; it becomes the author of its economics in
code. The rest of this chapter arms the reader with filters to reject
materials that cannot survive such a harness. If a proposition
cannot run nightly against the seam, and defend its emissions in
coins, it belongs to theater, not practice.
11.6 Rejection filters
The bottleneck today is not a shortage of material but its sur-
plus. More than a million papers and tens of thousands of books
parade under the “supply chain” banner; the pile grows faster
every year while day-to-day practice remains spreadsheet-centric.
The simplest explanation is that most of this production lacks the
properties that make knowledge operational—properties that let
unattended software place profitable, auditable bets every night.
This section offers three filters to discard what will not help: trivia,
authorities, and puzzles. They are not a taxonomy of the literature;
they are a reader’s toolkit for protecting attention.
11.6.1 Trivia
Trivia catalogs the world—lists, matrices, horizons—without show-
ing why the classification is essential, not accidental. Ockham’s
razor applies: do not multiply categories beyond necessity. Here,
necessity means power to predict and economize in coins. A seg-
mentation earns a place only if learning it reduces uncertainty about
future outcomes and improves the engine’s unattended allocation
decisions.
The periodic table clarifies the standard: chemistry’s grid is
not a decoration; it compresses causality. Atomic number orders
the elements so behavior in reactions becomes predictable; the
table is a compact, falsifiable theory of matter. By contrast, much
planning jargon is mapmaking without geology. Splitting “plan-
ning” into strategic, tactical, and operational horizons says little
unless the split predicts different money-denominated choices un-
der uncertainty; otherwise the trichotomy is mere scenery. Likewise
for product ABCs, “fast, slow, non-moving”, “local, regional, na-
tional” sites, or the immortal 2×2 quadrants¹ where “autocratic
vs. consensus-seeking” crosses “think-then-act vs. act-then-think”².
These devices are easy to teach and infinitely recombinable; their
predictive content is typically nil.
The test is blunt: if adopting a classification does not change
emissions—purchase orders, transfers, prices—it is trivia. In the
earlier operational vignette, nothing about the weekly inbound
pile-up required a new quadrant or horizon. The fix came from
pricing dock saturation, replacing averages with distributions, and
admitting “do nothing yet” whenever the option value of waiting
dominated. A handful of prices and distributions moved coins; a
shelf of segmentations would not.
¹ It’s always 2×2 matrices.
² See “Leadership styles”, page 149, Dynamic Supply Chains (2010), John
Gattorna.
11.6.2 Authorities
You’re in no position to lecture the public about anything.
Opening monologue of the Golden Globes (2020), Ricky
Gervais.
Prominence is not proof. Executive quotes, celebrity forewords,
and brand-name anecdotes entertain; they do not decide. Treat
the appeal to authority as a red flag unless it crosses three bridges:
falsifiability, counterfactuals, and emissions. Falsifiability means
the claim could turn out wrong once encoded; counterfactuals
require the speaker to state what would likely have happened
otherwise; emissions require the claim to be precise enough to alter
unattended commitments tomorrow night.
Large firms complicate the matter. Success attaches to many
causes—product, brand, capital structure—while supply chain
remains secretive by design. Even where the chain truly underpins
success, the public spokesman may not be its author; outsized
contributions are unevenly distributed, while many linger doing
very little³. And those who are responsible have incentives to
keep effective methods tacit or to recast them as palatable rituals.
Counting quotes is not the road to knowledge.
The operational standard is narrow and fair. A proposition
from a famed operator or vendor earns attention only once it can
be written as code that ingests authoritative extracts at day’s close,
emits, and defends each line in coins. Until then, file the story as a
hypothesis and resist building processes around it. Authority may
inspire; it may not substitute for falsification.
11.6.3 Puzzles
Academic supply-chain literature tends to optimize what is tidy to
write, grade, and publish—stylized puzzles with fixed assumptions.
Puzzles can be delightful and, at times, useful as subroutines. They
³ Any CEO knows that half of his employees are doing nothing of value, but
he doesn’t know which half.
become harmful when presented as supply-chain knowledge absent
the junction where knowledge must live: semantics, economics,
and code that runs every day. The issue is not mathematics; it is
misframing. Supply chain is applied economics under variability; a
paper that never places bets—never emits unattended decisions—is
not yet in the arena.
Two questions separate a subroutine from a system. First, can
the proposal ingest the original records without laundering meaning
through derived artifacts? Second, does it carry uncertainty and
price trade-offs all the way to the emission boundary? If the answer
to either is “no”, the contribution remains a puzzle. The correct
harness has been described throughout this book: daily dual-run
on full-scope extracts; halting heuristics; dossiers that expose the
top monetary drivers of each line; promotion only on measured
uplift; reversibility through stateless, replayable computation. A
method that cannot survive this harness is not “early stage”—it is
untested.
This filter is not hostile to theory; it is hospitable to reality.
Queueing results that calibrate congestion costs, programming
blocks that assemble a truck, concentration bounds that regularize
fat tails—these are welcome once embedded in a numerical recipe
that moves coins. Mathematical elegance without unattended
emissions is mere storytelling in symbols. The literature will
modernize when papers ship code that passes the same economic
test we require of production: under the same constraints and
audit trail, does it raise risk-adjusted operating profit?
11.6.4 Using the filters
Applied consistently, these filters compress a reading list to mate-
rials that change emissions. Trivia falls to Ockham’s razor unless
it demonstrates predictive, money-denominated power; authorities
are accepted only once their claims can be falsified under dual-run;
puzzles advance only when they attach to unattended, auditable
decisions. This is why so much activity coexists with little progress:
most of what circulates fails these tests while looking busy. The
remedy is not cynicism but discipline: price uncertainty, demand
emissions, and let coins—not credentials—arbitrate.
11.7 Looking onward
Looking onward, the point is not another manifesto but a change of
posture. The yardstick proposed throughout this book is practical
and narrow: a supply chain advances when, every night at the
closing bell, software emits unattended, auditable commitments
priced in coins—or halts itself and explains why. Everything else is
staging. By this measure, progress is neither a quarterly ceremony
nor a new screen; it is the steady replacement of clerical reconcilia-
tion with a mechanical flow that carries uncertainty explicitly and
prices trade-offs at the scale of the firm.
The path requires no revelation—only two durable moves that
place responsibility where it can compound. First, make facts
portable through a seam: schema-faithful, immutable extracts
that let the decision engine be stateless and replayable. Second,
condense the firm’s economics into a numerical recipe that ingests
those extracts, carries probabilistic views of the future, assigns
penalties and rewards in money, and emits unattended decisions.
Whiteboxed dossiers that accompany each emission expose the
few drivers that mattered in coins. With this discipline, improve-
ment becomes routine: when reality falsifies an assumption, the
recipe changes, replays, and resumes. Errors do not accumulate;
knowledge does.
People do not vanish; their work acquires leverage. Planners
become stewards of frames and prices rather than line-editors of
suggestions. Finance publishes capital charges and valuations that
the engine must honor; procurement publishes rebates, MOQs,
and reliabilities as prices rather than vetoes; operations publishes
capacities and cut-offs as auditable constraints. Leadership ar-
bitrates trade-offs in money, not in slogans, and owns the rare
occasions when the engine halts. The test is the same for everyone:
did the emissions raise the firm’s risk-adjusted rate of return?
Alternatives—alert theater, plan theater, insight theater—can
be kept alive indefinitely because trucks still move and averages
still comfort. Their cost is largely invisible in a day’s snapshot yet
crushing over years: options neglected, tails mispriced, attention
spent grooming artifacts that carry no information. The choice
facing mature firms is therefore not between stability and risk, but
between a clerical equilibrium that quietly taxes every flow and a
mechanical discipline that makes uncertainty pay.
Before the constructive program, we must be blunt about
what stands in the way. The next subsection names the habits
that guarantee clerical equilibria no matter the software or the
rhetoric. Only after discarding them does the better path—already
sketched in earlier chapters and entirely compatible with existing
ledgers—become straightforward to adopt.
11.7.1 What must stop
If progress is to be more than a slogan, three entrenched perfor-
mances must cease. Each has a pedigree; each flatters professional-
ism; and each reliably preserves clerical equilibria while computers
idle.
First, alert theater must stop. Software that styles itself “au-
tomated” yet emits lists of exceptions for humans to reconcile is
not automation; it is an interface for postponing decisions. A
system that cannot write back unattended and expose its drivers
in coins reduces planners to reconciling exceptions by hand—and
calls it progress. The criterion is simple and non-negotiable: either
the program issues commitments to the ledgers—orders, alloca-
tions, prices—or it is a reporting tool and should be judged (and
purchased) as such. Anything that requires daily line-editing to
be “safe” belongs to the past. Seats, screens, and alerts are no
substitute for emissions.
Second, plan theater must stop. Monthly consensus rituals
around a “one set of numbers” flatten uncertainty, with ignorance
masquerading as precision. Forecast accuracies, service-level tar-
gets, and other plan-adherence indicators are not economics; they
are numerically phrased etiquette. They reward buffers, not judg-
ment; compliance, not profit. Where edicts are declared “hard con-
straints”, they typically mask unpriced preferences; where averages
stand in for futures, tails are ignored until they bite. The remedy is
not decorum but prices: treat most limits as penalized constraints
in coins; let “do nothing yet” compete with “commit now” through
an explicit option value of waiting; replace plan adherence with
the only test that matters—under the same constraints and audit
trail, did the emissions raise the firm’s risk-adjusted rate of return?
Absent this discipline, plans will continue to look impeccable while
trucks idle at saturated docks.
Third, insight theater must stop. Dashboards, control towers,
and “digital twins” that never place a bet are ornaments. So are
pilots that live forever in sandboxes, “visibility-only” programs that
move no levers, and procurement rites (RFPs) that select vendors
on choreography rather than unattended decision scope. Inside
the firm, the mirror image is a data-science function that produces
slides and prototypes yet refuses ownership of write-backs. The
standard is again mechanical: nightly dual-run on faithful extracts;
stateless, replayable computation; halting on insanity; promotion
only on measured uplift; reversibility by design. Anything less
narrates the flow; it does not contribute to it.
These three performances—alerts in lieu of decisions, plans in
lieu of prices, insights in lieu of emissions—persist because they
are comfortable and deniable. They end the moment leadership
demands coin-denominated unattended decisions rather than seats
and ceremonies. Only then does the constructive program that
follows have room to work at all.
11.7.2 Towards a better way
The better way is prosaic in routine, radical in consequence. It
treats tomorrow’s uncertainty as priced raw material and turns the
daily close into a disciplined reckoning. From authoritative records,
a compact numerical recipe recomputes the firm’s admissible moves,
carries probabilistic views of the future, attaches coin-denominated
payoffs and penalties, and either writes back unattended commit-
ments or halts itself when confidence drops. Nothing else qualifies
as progress. Everything that matters—options, tails, couplings,
ratchets—is expressed in code that can be audited, replayed, and
revised without ceremony.
Once this discipline is in place, the day takes on a different
texture. Insanity disappears first: dock saturation is no longer a
surprise averaged away in weekly dashboards; “MOQs as edicts”
dissolve into priced trade-offs; an explicit option value lets “do
nothing yet” compete fairly with “commit now”. Trust accumulates
not in meetings but through whiteboxed dossiers that explain, line
by line, in coins, why an order was placed, a transfer withheld, or
a price nudged. When reality falsifies an assumption, the engine
halts; the premise is corrected; the run replays; the flow resumes.
Wrongness becomes cheap and reversible; rightness scales.
Two brief vignettes show this in practice. A seasonal fashion
label opens with generous bets but replaces push plans with policy:
assortments are evaluated as options; early sell-through updates
a basket-level probabilistic view; markdowns become a reified
function rather than after-the-fact remorse. The engine prices
stockout pain against obsolescence in one ledger, lets late binding
prevail where half-life is long, and writes unattended allocations
and price moves each night. The visible effects are not theatrical:
fewer mid-season transfers, smaller end-of-season fire sales, and
steadier gross margin. Beneath the surface, the gains come from
admitting tails, pricing them, and letting postponement win when
it should.
In civil aviation MRO, a rotable pool is governed as a portfolio
of options. Unscheduled removals are modeled as fat-tailed arrivals;
AOG exposure carries a shadow price; cannibalization, lease, repair,
and buy are admissible moves ranked by expected risk-adjusted
return. Many nights the most profitable action is to wait; on others,
the engine pre-positions stock against a rising risk shadow, not a
calendar. What used to be a chain of emails becomes unattended
dispatch and purchase orders. Each carries a dossier that exposes,
in coins, why a part flew to this tail number, why another stayed
on the shelf, and why a lease beat a repair given turnaround
dispersion. Fewer crises are “resolved” by heroics because fewer
are manufactured by earlier edicts.
Institutional consequences follow naturally. Roles tilt from
arbitration to authorship: planners become stewards of frames and
prices; finance owns capital charges and valuations the recipe must
honor; procurement publishes rebates, MOQs, and reliabilities
as schedules rather than vetoes; operations states capacities and
cut-offs as auditable constraints. Vendors and advisors are paid for
unattended scope and measured uplift, not for seats, screens, or
ceremonies. Budgets move toward the only asset that compounds
this century: a small codebase that writes the firm’s economics
down and improves under experimental optimization.
None of this demands revelation. The instruments needed to
achieve unattended decisions have already been laid out. What
remains is a change of posture: accept that every commitment is
a bet whose quality
can be stated in coins ex ante and audited ex post, and organize
work so that software places these bets at the cadence reality
allows.
The book opened by insisting that supply chain is an intent,
not a thing, and closes by making that intent executable. When
a firm can explain each nightly commitment in coins and stop
itself when it cannot, it has crossed the line that separates theater
from mastery. The rest is slow compounding of knowledge—seeing
a little farther each evening and foreseeing enough to deserve
tomorrow’s return.
Annex A
Technical notes
A.1 Aperiodic rate of return
This annex formalizes the aperiodic rate of return referenced
in Chapter Economics. The intent is practical: each option (a
micro-decision that commits scarce resources) should be ranked
by the speed at which its tied-up capital is repaid and begins to
compound. Because supply-chain cash flows seldom align with
calendar periods, the option itself selects its most informative
horizon.
We consider a discrete time grid $t = 0, 1, \ldots, T_{\max}$ (days, weeks,
or months). Let $s(t) \in \mathbb{R}_+$ and $r(t) \in \mathbb{R}_+$ denote, respectively,
the present-valued spending and revenue at time $t$ associated with
the option. (If discounting is desired, convert future amounts to
present value first; see the next section.) Define the total capital
the option requires—valued at $t = 0$—and the cumulative revenue
up to any cutoff $T$:
$$S = \sum_{t=0}^{T_{\max}} s(t), \qquad R(T) = \sum_{t=0}^{T} r(t).$$
The aperiodic rate of return is then
$$\mathrm{RoR}_{\text{aper}} = \max_{0 < T \le T_{\max}} \frac{1}{T}\left(\frac{R(T)}{S} - 1\right).$$
Interpretation. Slide a cursor along the option’s cash-flow timeline.
At each date $T$, compute the per-unit-time return you would have
realized if you had closed the position then: cumulative inflow $R(T)$,
divided by the total capital $S$ the option demands, minus one,
scaled by $1/T$. The metric keeps the peak value; the corresponding
$T$ is the option’s implicit horizon. Two options with very different
calendars become commensurable because each is judged at its
own best repayment point.
Units. If $t$ is measured in days, $\mathrm{RoR}_{\text{aper}}$ is a daily rate. Annual-
ization—when desired—follows compounding on the chosen grid
(e.g., $(1 + \rho)^{365} - 1$ for a daily rate $\rho$). For small $\rho$, multiplying by
the number of periods is a coarse approximation; compounding is
safer once rates are large.
Worked example. A buy today of 100 coins ($S = 100$) yields
two sell-throughs: 60 coins at day 15 and 60 coins at day 30. The
per-day speeds are
$$\rho(15) = \frac{1}{15}\left(\frac{60}{100} - 1\right) = -0.02\overline{6}, \qquad \rho(30) = \frac{1}{30}\left(\frac{120}{100} - 1\right) = 0.00\overline{6}.$$
The maximum occurs at $T = 30$ days with $\rho \approx 0.0067$ per day;
compounded over a year this is roughly $(1 + 0.0067)^{365} - 1 \approx 10.3$
(i.e., about 1,030% per annum). Such magnitudes are commonplace
at the item level because they reflect fast inventory rotations rather
than portfolio-wide returns.
Two caveats. First, a positive bias: the metric assumes that the
released capital at $T$ can be redeployed immediately into an equally
attractive opportunity. In practice, matching opportunities may
arrive with a delay. Second, a negative bias: revenues beyond $T$
are ignored for the sake of measuring speed. In reality, unless goods
are scrapped, later inflows will still materialize and lift the ultimate
profit. Both effects are acceptable—indeed useful—because the
purpose here is not to settle the final P&L of the option, but to
prioritize scarce capital toward the fastest compounding uses.
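
For readers who prefer executable form, a minimal Python sketch
of the metric follows; the function name and the list-based cash-flow
encoding are illustrative, not part of the text’s formalism:

    def aperiodic_ror(s, r):
        """Aperiodic rate of return for one option.

        s, r: present-valued spending and revenue, indexed by grid
        step t = 0 .. T_max (same day/week/month grid as the text).
        Returns (best_rate_per_step, implicit_horizon_T).
        """
        S = sum(s)                        # total capital, valued at t = 0
        best_rate, best_T = float("-inf"), None
        for T in range(1, len(r)):
            R_T = sum(r[: T + 1])         # R(T): inflows up to the cutoff
            rate = (R_T / S - 1.0) / T    # per-unit-time return at T
            if rate > best_rate:
                best_rate, best_T = rate, T
        return best_rate, best_T

    # Worked example above: 100 coins out at t = 0,
    # 60 coins back at day 15 and 60 more at day 30.
    s = [100.0] + [0.0] * 30
    r = [0.0] * 31
    r[15], r[30] = 60.0, 60.0
    rate, horizon = aperiodic_ror(s, r)
    print(rate, horizon)              # ~0.00667 per day, T = 30
    print((1.0 + rate) ** 365 - 1.0)  # ~10.3, i.e. about 1,030% per annum

The quadratic rescan of the inflows keeps the sketch close to the
formula; a production version would carry a running cumulative sum.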
A.2 Discounted cash flows
When we discount cash flows, we merely restate the earlier dis-
cussion of time preference in coin and on a discrete time grid. A
coin received later is worth less today because it cannot be used,
buffered, or reinvested in the meantime. Discounted cash flow
(DCF) converts every future amount into “today-coins” and, once
this conversion is done, the aperiodic rate of return of the previous
section can be applied without any further change.
Fix a time grid $t = 0, 1, \ldots, T_{\max}$ (days, weeks, or months). Let
$k$ denote the firm’s cost of capital per grid step; if the cost of funds
varies with time (credit line tiers, temporary squeezes), write $k_t$.
The discount factor that brings an amount at date $t$ back to $t = 0$ is
$$D(t) = \prod_{\tau=1}^{t} \frac{1}{1 + k_\tau} \qquad \text{(for constant } k,\; D(t) = (1 + k)^{-t}\text{)}.$$
Let $\tilde{r}(t)$ and $\tilde{s}(t)$ be the undiscounted revenue and spending
at time $t$ (with immediate amounts recorded at $t = 0$). Their
present-valued counterparts are
$$r(t) = D(t)\,\tilde{r}(t), \qquad s(t) = D(t)\,\tilde{s}(t).$$
The discounted cash flow of the whole option (the sequence it
entails) is then
$$\mathrm{DCF} = \sum_{t=0}^{T_{\max}} \big(r(t) - s(t)\big) = \sum_{t=0}^{T_{\max}} D(t)\big(\tilde{r}(t) - \tilde{s}(t)\big).$$
This sum is an absolute valuation in today-coins (when positive, it
creates wealth; when negative, it destroys value).
Connection to the aperiodic rate of return. In the previous
section, $r(t)$ and $s(t)$ were explicitly defined as present-valued
flows. Thus, to rank options by speed, first discount as above,
then plug the present-valued series into the aperiodic rate formula
unchanged. In symbols, keep
$$S = \sum_{t=0}^{T_{\max}} s(t), \qquad R(T) = \sum_{t=0}^{T} r(t),$$
and evaluate
$$\mathrm{RoR}_{\text{aper}} = \max_{0 < T \le T_{\max}} \frac{1}{T}\left(\frac{R(T)}{S} - 1\right).$$
Discounting enforces time preference; the aperiodic rate then re-
ports how fast the present-valued outlay is repaid and begins to
compound.
Choosing $k$. Use the firm’s marginal cost of liquidity on the grid
you compute with. If you work with an annual rate $K$ on a daily
grid, convert it to a per-day step by $k = (1 + K)^{1/365} - 1$ (mutatis
mutandis for weeks or months). When liquidity is piecewise or
time-varying, encode it as a path $\{k_t\}$; the definition of $D(t)$
already accommodates this. In finance it is common to “pad” $k$
to reflect uncertainty; in supply chain we avoid this double
counting—uncertainty is handled explicitly via the probabilistic
treatment of returns later in the annex.
Worked micro-example (daily grid). Spend 100 coins at $t = 0$;
receive 60 coins at day 15 and 60 coins at day 30. With an
annual cost of capital of 10%, the daily step is $k = (1.10)^{1/365} - 1$.
The discount factors are $D(15) = (1.10)^{-15/365} \approx 0.996$ and
$D(30) = (1.10)^{-30/365} \approx 0.992$. The DCF is
$$-100 + 60\,D(15) + 60\,D(30) \approx 19.3 \text{ coins}.$$
Feeding the present-valued series into the aperiodic metric gives
$S = 100$ and $R(30) = 60\,D(15) + 60\,D(30) \approx 119.3$, hence
$$\mathrm{RoR}_{\text{aper}} \approx \frac{1}{30}\left(\frac{119.3}{100} - 1\right) \approx 0.0064 \text{ per day},$$
which compounds to roughly $(1 + 0.0064)^{365} - 1 \approx 9.4$ (about
940% per annum), slightly below the undiscounted figure of the
previous section. The precise figure is not the point; the method
is. Discount first to express everything in today-coins; then com-
pare options by how fast those present-valued inflows repay the
present-valued outlay.
Practical notes. Units must match: $t$ is measured in the same
step as $k$. If payments are staged (e.g., a deposit at $t = 0$ and a
balance at shipment), treat each as its own $s(t)$. While interest
rates matter, inventory rotations typically dominate in supply
chain; discounting’s role here is to restore comparability across
options with different calendars so the aperiodic rate can do its
ranking job cleanly.
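
A minimal sketch of the discount-then-rank pipeline, reusing the
aperiodic_ror sketch from the previous section; the helper name
and the 365-step daily grid are illustrative choices:

    def discount(flows, annual_rate, steps_per_year=365):
        """Present-value a list of flows indexed by grid step t."""
        k = (1.0 + annual_rate) ** (1.0 / steps_per_year) - 1.0  # per-step rate
        return [f / (1.0 + k) ** t for t, f in enumerate(flows)]

    s_raw = [100.0] + [0.0] * 30       # spend 100 coins at t = 0
    r_raw = [0.0] * 31
    r_raw[15], r_raw[30] = 60.0, 60.0  # inflows at days 15 and 30

    s = discount(s_raw, 0.10)          # everything in today-coins
    r = discount(r_raw, 0.10)
    dcf = sum(r) - sum(s)              # ~19.3 coins
    rate, horizon = aperiodic_ror(s, r)  # sketch from section A.1
    print(dcf, rate, horizon)          # ~0.0064 per day at T = 30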
A.3 Informational entropy
This annex distills the few pieces of Shannon’s theory used in Chap-
ter Information, in a form directly usable for inventory examples.
We stay with discrete variables and base-2 logarithms so units are
bits (changing the log base merely rescales the unit).
Consider a discrete random variable $X$ that takes states $x$ with
probabilities $p(x)$. The information revealed when we learn that
the state is $x$—the self-information—is
$$I(x) = -\log_2 p(x) \text{ bits}.$$
A certain outcome ($p = 1$) carries 0 bit; rarer outcomes carry more
bits.
The entropy of $X$ is the average self-information:
$$H(X) = \mathbb{E}[I(X)] = -\sum_x p(x) \log_2 p(x) \text{ bits}.$$
Entropy depends only on the probabilities, not on how we name
or store the states. It can be read as the average number of fair
yes/no questions needed to identify the state. Two immediate
corollaries suffice for our purposes:
- If $X$ has $n$ equally likely states, then $H(X) = \log_2 n$.
- If $X_1, \ldots, X_k$ are independent, the joint uncertainty adds:
$$H(X_1, \ldots, X_k) = \sum_{j=1}^{k} H(X_j).$$
(When variables are correlated the joint entropy is below that sum;
we will assume independence here to keep the illustration simple.)
We will model each product’s stock level as a discrete variable
and, as a first pass, treat different products as independent. This
lets us compute a store’s entropy as “100 times” the per-product
value. The next subsection applies these formulas to two contrast-
ing stores—a luxury watch shop and a suburban depot—and makes
the contrast quantitative. For a compact, accessible primer on the
wider theory, see James V. Stone, Information Theory: A Tutorial
Introduction (2019).
Entropy of two stores
We now make precise the contrast used in the original chapter.
Model each SKU’s on-hand as a discrete random variable and, as
a first pass, treat SKUs as independent.¹ The store-level entropy
is then the sum of the per-SKU entropies.
Luxury watch shop. Each SKU has at most one unit on hand,
with a 95% probability of being available and a 5% probability of
being out of stock. Thus a single SKU is binary with
p(1) = 0.95, p(0) = 0.05.
Using the binary entropy,
$$H^{(1)}_{\text{watch}} = -0.95 \log_2(0.95) - 0.05 \log_2(0.05) \approx 0.2864 \text{ bits}.$$
For 100 independent SKUs,
$$H_{\text{watch}} = 100 \cdot H^{(1)}_{\text{watch}} \approx 28.64 \text{ bits}.$$
Suburban depot. For each SKU, assume the on-hand can be
any integer from 0 to 9,999 with equal probability. A uniform law
on $n$ states has entropy $\log_2 n$, hence per SKU
$$H^{(1)}_{\text{depot}} = \log_2(10{,}000) \approx 13.2877 \text{ bits},$$
¹ Independence is a simplifying assumption. Correlations—e.g., shared capac-
ity shocks—lower the joint entropy relative to the sum of per-SKU entropies;
they do not overturn the order-of-magnitude gap shown here.
and for 100 SKUs
$$H_{\text{depot}} = 100 \cdot H^{(1)}_{\text{depot}} \approx 1328.77 \text{ bits}.$$
Numerically, the depot carries about $H_{\text{depot}} / H_{\text{watch}} \approx 46.4$
times more inventory information than the watch shop.
Read in “twenty-questions” terms, fully specifying the watch
shop’s entire 100-SKU stock state takes on average about 29 yes/no
answers; the depot takes about 1,329. The uniform assumption
slightly overstates entropy relative to realistic, peaked stock distri-
butions, but the gap remains of the same order. This calculation un-
derwrites the chapter’s claim that two assortments of identical size
(100 SKUs) can have radically different informational weight—one
measured in a few dozen bits, the other in kilobits—hence the
necessity of software for anything beyond the simplest flows.
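
The two store computations reduce to a few lines; a minimal
Python sketch (the helper name is illustrative):

    import math

    def entropy(probabilities):
        """Shannon entropy, in bits, of a discrete distribution."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0.0)

    # Luxury watch shop: each SKU is binary, 95% in stock, 5% out.
    h_watch_sku = entropy([0.95, 0.05])            # ~0.2864 bits
    h_watch = 100 * h_watch_sku                    # ~28.64 bits

    # Suburban depot: each SKU uniform over 0..9,999 units on hand.
    h_depot_sku = entropy([1.0 / 10_000] * 10_000) # log2(10,000) ~ 13.29 bits
    h_depot = 100 * h_depot_sku                    # ~1,328.77 bits

    print(h_watch, h_depot, h_depot / h_watch)     # ratio ~46.4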
A.4 Topological sort
A topological sort arranges a collection of items so that every
prerequisite appears before the item that depends on it. The
input is a directed graph whose nodes are items (tasks, tables,
components) and whose arrows encode “must come before”. If the
graph has no directed cycle, such an order exists; if there is a cycle,
no ordering can honor all prerequisites.
Supply-chain practice is full of such precedence structures. A
bill of materials requires sub-assemblies before the assembly; a
daily numerical recipe must ingest records before deriving features,
forecast before valuing options, and value options before emitting
purchase orders or transfers. In a system of intelligence, this
structure is made explicit as a directed acyclic graph (DAG) and
executed in that order at each run.
Intuition. Start with everything that has no prerequisite; place
it in a worklist $S$. Repeatedly pick one item $n$ from $S$, output
it, and relax its outgoing arrows: each successor $m$ has one fewer
prerequisite; when that count reaches zero, $m$ joins $S$. If, in the
Algorithm 1 Kahn’s topological sort
Require: Directed graph $G = (V, E)$ where an edge $(u \to v)$ means
$u$ before $v$.
Ensure: Either an order $L$ that respects all edges, or error if a
cycle exists.
1: Compute $in[v] \leftarrow |\{\, u \mid (u \to v) \in E \,\}|$ for every $v \in V$ ▷ number of prerequisites
2: $S \leftarrow$ queue of all $v$ with $in[v] = 0$ ▷ no prerequisites
3: $L \leftarrow$ empty list
4: while $S$ is not empty do
5:   remove a node $n$ from $S$ ▷ use a stable key for reproducibility if desired
6:   append $n$ to $L$
7:   for all edges $(n \to m) \in E$ do
8:     $in[m] \leftarrow in[m] - 1$
9:     if $in[m] = 0$ then
10:      insert $m$ into $S$
11:    end if
12:  end for
13: end while
14: if $|L| < |V|$ then
15:   return error ▷ the remaining nodes participate in at least one cycle
16: else
17:   return $L$
18: end if
end, some items were never output, they mutually depend on one
another—a cycle.
Worked supply-chain example. Consider a tiny bill of materials:
Screw and Panel have no prerequisites; Frame requires Panel;
Assembly requires Screw and Frame. One valid order is:
Screw, Panel, Frame, Assembly.
Running the algorithm, $S$ initially holds Screw and Panel; after
outputting Panel, the in-degree of Frame drops to zero and it joins
$S$; after Screw and Frame have been output, Assembly can be
produced. The exact order among simultaneously eligible items is
irrelevant to correctness but matters for reproducibility.
Correctness and complexity. Every edge $(u \to v)$ is honored
because $v$ can only be output after its last prerequisite is removed,
which happens only after $u$ has been output. Conversely, if an
item is output it had no unmet prerequisite at that moment. The
method runs in linear time $O(|V| + |E|)$ with adjacency lists and
a single pass over edges, and uses linear space. In production
systems, make outputs deterministic by taking items from $S$ in
a fixed key order (e.g., $(site, sku, step)$); the same inputs then
reproduce bit-for-bit the same order, which supports auditing and
time-travel replays.
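
A runnable Python rendition of Algorithm 1 with the stable-key
refinement discussed above; the function name and the heap-based
worklist are illustrative choices:

    import heapq

    def topological_sort(nodes, edges):
        """Kahn's algorithm. `edges` lists (u, v) pairs meaning u before v.

        Returns an order honoring every edge, or raises ValueError when
        a cycle exists. Eligible nodes are taken in sorted key order so
        the same inputs reproduce the same output (auditing, replays).
        """
        indegree = {v: 0 for v in nodes}          # number of prerequisites
        successors = {v: [] for v in nodes}
        for u, v in edges:
            successors[u].append(v)
            indegree[v] += 1
        ready = [v for v in nodes if indegree[v] == 0]
        heapq.heapify(ready)                      # stable: smallest key first
        order = []
        while ready:
            n = heapq.heappop(ready)
            order.append(n)
            for m in successors[n]:
                indegree[m] -= 1
                if indegree[m] == 0:
                    heapq.heappush(ready, m)
        if len(order) < len(nodes):
            residual = [v for v in nodes if indegree[v] > 0]
            raise ValueError(f"unresolved nodes (cycle present): {residual}")
        return order

    # The bill-of-materials example above:
    nodes = ["Screw", "Panel", "Frame", "Assembly"]
    edges = [("Panel", "Frame"), ("Screw", "Assembly"), ("Frame", "Assembly")]
    print(topological_sort(nodes, edges))
    # ['Panel', 'Frame', 'Screw', 'Assembly']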
Operational use in a system of intelligence. The daily numerical
recipe is a DAG: ingest raw extracts, normalize, derive features, fit
or refresh probabilistic models, compute valuations in coins, then
emit idempotent write-backs (POs, transfers, prices). The runtime
performs a topological sort of that computation graph and executes
nodes accordingly. A detected cycle is not a corner case to be
“overridden”; it is a breach of semantics that must halt the engine
(as argued in Chapter Intelligence) and surface a small witness
cycle to fix the logic or the data contract. Producing such a witness
is straightforward: after the algorithm stops with $|L| < |V|$, follow
predecessor links within the residual subgraph until a node repeats;
the loop thus revealed is the minimal explanation the operator
needs.
A.5 Four-layer perceptron
A four-layer multilayer perceptron (MLP) is the smallest useful
specimen of what the “deep learning” section described in words:
a fixed circuit made of linear maps separated by simple non-linear
gates. It is a good vehicle for demystifying the mechanics because
every step reduces to ordinary linear algebra.
We revisit the handwritten-digit toy task (MNIST) used earlier;
see Figure 6.2 in Chapter Intelligence. Each gray-scale image is
a 28×28 array rescaled to numbers in $[0, 1]$ and flattened into a
vector $x \in \mathbb{R}^{784}$. The network computes scores for the ten digits
$0, \ldots, 9$, then turns those scores into probabilities. The predicted
digit is the one with the highest probability.
Forward pass (inference)
The network stacks three affine transforms separated by rectifiers.
Using column-vectors and applying non-linearities element-wise,
the computation is:
$$z_1 = W_1 x + b_1, \quad a_1 = R(z_1),$$
$$z_2 = W_2 a_1 + b_2, \quad a_2 = R(z_2),$$
$$y = W_3 a_2 + b_3.$$
Here $W_1 \in \mathbb{R}^{256 \times 784}$, $W_2 \in \mathbb{R}^{128 \times 256}$, $W_3 \in \mathbb{R}^{10 \times 128}$ are weight
matrices; $b_1 \in \mathbb{R}^{256}$, $b_2 \in \mathbb{R}^{128}$, $b_3 \in \mathbb{R}^{10}$ are bias vectors. The
non-linearity is the rectifier
$$R(u) = \max(0, u),$$
applied coordinate-wise. Rectifiers are not only sufficient for ex-
pressiveness; they are also cheap to evaluate on silicon, which is
why they displaced older sigmoid-shaped gates.
The output $y \in \mathbb{R}^{10}$ collects one score per digit. To obtain
class probabilities we apply the soft-max map:
$$p_k = \frac{e^{y_k}}{\sum_{j=0}^{9} e^{y_j}} \qquad \text{for } k = 0, \ldots, 9.$$
The classifier reports $\arg\max_k p_k$; the full probability vector $p$
is useful during training because it quantifies uncertainty and
supports a smooth loss.
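
A minimal NumPy sketch of this forward pass; the random input
stands in for a flattened MNIST image, the He-style initialization
anticipates the training subsection, and all names are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(u):
        return np.maximum(0.0, u)      # rectifier, element-wise

    def softmax(y):
        e = np.exp(y - y.max())        # shift for numerical stability
        return e / e.sum()

    # Randomly initialized parameters with the shapes from the text.
    W1 = rng.normal(0.0, np.sqrt(2.0 / 784), (256, 784)); b1 = np.zeros(256)
    W2 = rng.normal(0.0, np.sqrt(2.0 / 256), (128, 256)); b2 = np.zeros(128)
    W3 = rng.normal(0.0, np.sqrt(2.0 / 128), (10, 128));  b3 = np.zeros(10)

    def forward(x):
        """Three affine maps separated by two rectifiers."""
        a1 = relu(W1 @ x + b1)
        a2 = relu(W2 @ a1 + b2)
        y = W3 @ a2 + b3               # one score per digit
        return softmax(y)              # class probabilities

    x = rng.random(784)                # stand-in for a flattened image
    p = forward(x)
    print(p.argmax(), p.sum())         # predicted digit; probabilities sum to 1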
Training by stochastic gradient descent
Training chooses all weights and biases so that the network assigns
high probability to the correct digit on images it has not seen. Let
$(x^{(i)}, t^{(i)})$ denote the $i$-th labeled example, with $t^{(i)} \in \{0, \ldots, 9\}$
the true class. The standard loss for a mini-batch $B$ is the average
cross-entropy
$$L_B = -\frac{1}{|B|} \sum_{i \in B} \log p_{t^{(i)}}(x^{(i)}),$$
optionally augmented by a small quadratic penalty on the weights
(weight decay) to curb overfitting. Modern frameworks compute
the gradient of $L_B$ with respect to every parameter by automatic
differentiation. One step of stochastic gradient descent (SGD) then
nudges parameters in the direction that reduces the loss:
$$\theta \leftarrow \theta - \eta\, \nabla_\theta L_B,$$
,
where
θ
denotes the collection of all W
,
b
and
η >
0 is the
learning
-
rate. Repeating this simple loop—sample a mini
-
batch,
run the forward pass, evaluate the loss, apply the gradient up-
date—gradually lowers the loss on held
-
out images until improve-
ments stall. Variants (momentum, adaptive learning rates) refine
the same idea; none alter the basic picture.
Two practical choices matter for stability and speed but do not
complicate the story. First, initialize weights so that activations
neither blow up nor vanish as they propagate: with rectifiers,
a zero-mean Gaussian scaled like $\mathcal{N}(0, 2/n_{\text{in}})$ works well, $n_{\text{in}}$
being the fan-in of the layer. Second, process examples in small
contiguous groups (“mini-batches”) to exploit hardware parallelism;
inference and training then reduce to a handful of dense matrix
products (GEMMs) plus cheap rectifiers.
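
For concreteness, a sketch of one SGD step with hand-written
gradients, on a mini-batch of one for brevity; it assumes the
parameters and helpers of the previous sketch are in scope, whereas
production code would rely on automatic differentiation as noted
above:

    def sgd_step(x, t, lr=0.01):
        """One SGD step on a single example: cross-entropy on softmax."""
        global W1, b1, W2, b2, W3, b3
        # Forward pass, keeping activations for the backward pass.
        z1 = W1 @ x + b1; a1 = relu(z1)
        z2 = W2 @ a1 + b2; a2 = relu(z2)
        y = W3 @ a2 + b3; p = softmax(y)
        # Backward pass: gradient of -log p[t].
        dy = p.copy(); dy[t] -= 1.0           # softmax + cross-entropy shortcut
        dW3 = np.outer(dy, a2); db3 = dy
        da2 = W3.T @ dy; dz2 = da2 * (z2 > 0) # rectifier gates the gradient
        dW2 = np.outer(dz2, a1); db2 = dz2
        da1 = W2.T @ dz2; dz1 = da1 * (z1 > 0)
        dW1 = np.outer(dz1, x); db1 = dz1
        # Nudge every parameter against its gradient.
        W3 -= lr * dW3; b3 -= lr * db3
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
        return -np.log(p[t])                  # loss before the update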
Sizes, costs, and what is really computed
Counting only the weights, the 784→256→128→10 MLP carries
$$784 \times 256 + 256 \times 128 + 128 \times 10 = 234{,}752$$
trainable numbers; the three bias vectors add a negligible $256 +
128 + 10$ on top. One forward pass is “three matrix products and
two rectifiers”. This mechanical view—linear algebra punctuated
by a simple gate—is the correct mental model. There is no hidden
magic; the network is a programmable circuit whose parameters
are tuned by gradient descent.
A.6 Mathematical optimization
This annex complements the discussion “The optimization paradigm”
in Chapter Decisions. Its purpose is practical: to restate the
vocabulary of that section—marginal commitments, feasibility
tests, priced penalties, probabilistic forecasts, and economic objec-
tives—in a minimal mathematical form that admits unattended,
repeated resolution.
From decisions to a model
A decision is a finite set of marginal commitments. Let $A =
\{a_1, \ldots, a_n\}$ denote the admissible micro-moves for the present
cycle (buy one more unit, allocate one more unit to store $s$, schedule
one more stop, and so on). Selecting a move is recorded by $x_i \in
\{0, 1\}$; the vector $x \in \{0, 1\}^n$ encodes the whole commitment. The
frame makes $A$ explicit and attaches to each $a_i$ the data needed
to test feasibility and to value consequences.
Feasibility is expressed as tests that a candidate $x$ must pass.
Some limits are absolute—legal prohibitions, physical impossi-
bilities—and appear as hard tests that reject $x$ outright. Most
operational limits tolerate relaxation at a price; rather than forbid-
ding those violations we price them. Let $g_k(x, \omega)$ be a measurable
quantity the firm wishes to restrain (dock load on a day, missed
window minutes, overtime hours), computed under a realization $\omega$
of the uncertain future. A priced penalty $C_k(g_k)$, stated in coins,
turns that preference into a term of the objective. Keeping only
truly hard limits as tests keeps the engine flexible and honest;
priced penalties make unlike nuisances commensurable on one
ledger.
The objective is money-denominated. Within the window of re-
sponsibility attached to the present decision class, each micro-move
$a_i$ earns coin flows (margins, rebates, salvage) and incurs flows
(procurement, handling, carrying, obsolescence, congestion, late
fees). Under a probabilistic view of futures $\Omega$, the engine evaluates
the expected value of $x$,
$$J(x) = \mathbb{E}\left[\sum_{i=1}^{n} x_i\, v_i(\Omega) - \sum_k C_k\big(g_k(x, \Omega)\big)\right],$$
where $v_i(\Omega)$ is the net coin contribution of $a_i$ over the window once
all ordinary costs are charged. Risk adjustments—e.g., heavier
pricing of tail losses—live in the $C_k$ terms rather than as abstract
percentages. When ranking small alternatives that tie up capital
for different durations, Chapter Economics recommends comparing
by aperiodic rate of return; in practice, this means computing both
the expected coin delta and the tied-up capital and then reporting
speed with the metric defined earlier in this annex. The solver
can maximize $J$ and, for emitted marginals, report the associated
aperiodic speeds.
Uncertainty is carried in the inputs, not hidden in the objective.
The random element $\Omega$ collects the distributions the firm main-
tains for demand, lead times, reliabilities, returns, and any priced
exogenous factor. The objective $J$ is deterministic given those
distributions. This separation keeps causality and blame legible
when outcomes disappoint: either the frame omitted options, or
prices were misguided, or forecasts were miscalibrated.
Penalties versus chance bounds
Two formalisms encode “keep this under control”. A chance con-
straint requires
$$\mathbb{P}\big[g_k(x, \Omega) \le \tau_k\big] \ge 1 - \alpha_k$$
for some threshold $\tau_k$ and tolerance $\alpha_k$. A priced constraint
assigns a coin cost $C_k(g_k)$ to all values of $g_k$. The former hides
economics inside a dimensionless $\alpha_k$ that must be tuned elsewhere;
the latter states trade-offs directly in money and aggregates natu-
rally with other terms. In supply-chain operations—where most
limits can be overrun at a cost—the priced form is the workhorse;
pure chance bounds are best kept as guardrails for rare, safety-
critical fronts.
A convenient choice is a convex, rising penalty, often piecewise
linear, that approximates the firm’s nuisance curve. For dock load
$q$ beyond a soft capacity $K$, with sharp pain after $K + \Delta$, one writes
$$C(q) = c_1 (q - K)_+ + c_2 (q - K - \Delta)_+$$
with $c_2 \gg c_1$. The exact numbers are economics; once stated, the
solver finds least-bad compromises across heterogeneous frictions.
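
A sketch of such a penalty in Python; the capacities and coin rates
are illustrative placeholders, not calibrated values:

    def dock_penalty(q, K=40.0, delta=10.0, c1=50.0, c2=500.0):
        """Convex piecewise-linear penalty C(q) = c1(q-K)+ + c2(q-K-delta)+.

        In practice K, delta, c1, c2 restate the firm's nuisance curve.
        """
        return c1 * max(q - K, 0.0) + c2 * max(q - K - delta, 0.0)

    print(dock_penalty(38))  # 0.0    : under soft capacity, no pain
    print(dock_penalty(45))  # 250.0  : mild overrun, priced at c1
    print(dock_penalty(55))  # 3250.0 : sharp pain beyond K + delta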
Worked micro-model: filling a container
An importer must raise a full-container order. Each candidate
unit $a_i$ carries a volume $w_i$ and an expected, window-bounded net
coin $m_i(\Omega)$ that already includes procurement, handling, holding,
obsolescence, salvage, and shortage effects. The container capacity
is $W$ in the same unit as $w_i$. Overfilling is possible but painful at
rate $\lambda$ per excess unit.
A faithful objective is
$$\max_{x \in \{0,1\}^n} \mathbb{E}\left[\sum_i x_i\, m_i(\Omega) - \lambda \left(\sum_i x_i w_i - W\right)_+\right],$$
subject to hard tests such as legal incompatibilities or forbidden
co-loadings. This is a penalized knapsack under uncertainty. A sim-
ple, effective resolution aligns with the chapter’s marginal stance:
at each step compute a shadow price for capacity, pick the ad-
missible unit with the highest expected marginal $m_i / w_i$ above that
shadow price, recompute the shadows, and stop when the next
unit fails the economics-of-waiting cut-off (expected, risk-adjusted
aperiodic return below capital plus option value of delay). Because
valuation depends on the set in the box and not on the ceremony
of its selection, this greedy accumulation targets what matters
economically; the window and terminal valuations ensure clean
attribution.
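
A deliberately simplified sketch of this greedy accumulation, with
a static shadow price standing in for the recomputation step and
illustrative numbers throughout:

    def fill_container(units, W, lam, cutoff):
        """Greedy marginal filling of a penalized knapsack.

        units: list of (name, w_i, m_i), with m_i the expected net coins
        of the unit (already window-bounded, as in the text). `lam`
        prices overfill per excess volume unit; `cutoff` is the
        economics-of-waiting threshold in coins per volume unit.
        """
        chosen, volume = [], 0.0
        candidates = sorted(units, key=lambda u: u[2] / u[1], reverse=True)
        for name, w, m in candidates:
            # Shadow price of capacity: free space is free, overfill costs lam.
            shadow = 0.0 if volume + w <= W else lam
            if m / w > shadow + cutoff:     # marginal economics test
                chosen.append(name)
                volume += w
            else:
                break                       # best remaining unit fails; stop
        return chosen, volume

    units = [("A", 2.0, 30.0), ("B", 1.0, 9.0), ("C", 3.0, 21.0), ("D", 2.0, 4.0)]
    print(fill_container(units, W=5.0, lam=8.0, cutoff=1.0))
    # (['A', 'B'], 3.0): unit C would overfill, and its margin per volume
    # no longer clears the overfill shadow plus the waiting cut-off.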
Worked micro-model: smoothing inbound
A warehouse faces random supplier drifts and a finite number of
receiving doors. Let $d_t(x, \Omega)$ be the number of arrivals on day $t$
implied by $x$ under $\Omega$. Overtime, driver detention, demurrage, and
disruption are captured by a convex penalty $C_{\text{dock}}(d_t)$. A daily,
coin-denominated objective over horizon $T$ is
$$\max_{x} \mathbb{E}\left[\sum_i x_i\, m_i(\Omega) - \sum_{t=1}^{T} C_{\text{dock}}\big(d_t(x, \Omega)\big)\right],$$
again with only truly hard limits enforced as tests. The optimizer
advances or defers orders until the expected return of the next
admissible batch exceeds the shadow cost of capital plus the option
value of waiting, while the convex $C_{\text{dock}}$ spreads load without
brittle edicts.
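
A Monte Carlo sketch of evaluating this objective for a candidate
$x$, reusing dock_penalty from the earlier sketch as $C_{\text{dock}}$; the unit
encoding, the jitter model, and every number are illustrative
stand-ins:

    import random

    def expected_smoothing_objective(x, units, T, n_scenarios=1_000, seed=7):
        """Monte Carlo estimate of the inbound-smoothing objective above.

        units[i] = (m_i, due_day_i, jitter_i): expected net coins, nominal
        arrival day, and supplier drift in days. x[i] is the 0/1 commitment.
        """
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_scenarios):
            arrivals = [0] * (T + 1)       # d_t: arrivals per day
            payoff = 0.0
            for x_i, (m_i, due, jitter) in zip(x, units):
                if x_i:
                    day = min(T, max(1, due + rng.randint(-jitter, jitter)))
                    arrivals[day] += 1
                    payoff += m_i
            payoff -= sum(dock_penalty(d, K=2, delta=1, c1=5.0, c2=50.0)
                          for d in arrivals[1:])
            total += payoff
        return total / n_scenarios

Wrapped in a search over candidate $x$ vectors (or the marginal
advance-or-defer loop described above), this estimator is exactly the
kind of stateless, replayable computation the daily run executes.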