Mechanical Sympathy: The Missing Ingredient in Supply Chain Software
When I started working on supply chain problems nearly twenty years ago, I expected the hard part to be physics: trucks, ships, pallets, containers, production lines. Instead, I found myself wrestling with screens that took several seconds to refresh, overnight batches that routinely spilled into the next morning, and “optimization” engines that had to be simplified until they were barely better than a spreadsheet.
The real bottleneck was not steel or concrete. It was software. More precisely, it was our collective indifference to the machine that runs that software.
In my book Introduction to Supply Chain, I argue that modern supply chains are, above all, decision-making engines under uncertainty. The quality of those decisions depends on the quality of the software that prepares them, and that software, in turn, is constrained by the underlying computing hardware, from CPU caches to memory bandwidth.
Over the last decade, I have come to believe that supply chain practice cannot progress much further unless we develop what I call mechanical sympathy: an instinctive feel for how computation actually behaves, and a willingness to design our methods around that reality instead of ignoring it.
What I Mean by “Mechanical Sympathy”
The phrase comes originally from motor racing. A good driver is not just skilled at following the racing line; they also “feel” what the car can and cannot do. They avoid brutal maneuvers that overheat the brakes, they listen to the engine, they sense when the tires are about to let go. They sympathize with the machine.
In computing, the equivalent is understanding that a modern server is not a magical, frictionless calculator that performs “operations per second” in the abstract. It is a physical object with peculiarities: tiny caches that are fast but small, main memory that is slower, disks that are slower still, and networks with latency you cannot wish away. The difference between respecting those constraints and ignoring them routinely amounts to gains of one or two orders of magnitude in performance.
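To give a sense of scale, here is a minimal sketch, assuming Python with NumPy, that performs the same arithmetic twice: once over contiguous memory, once through a randomly shuffled gather. The work is identical; only the access pattern changes, and on typical hardware the contiguous pass wins by a wide margin because caches and prefetchers reward locality. Exact ratios depend on the machine.

```python
# Same arithmetic, two access patterns. The contiguous pass cooperates with
# caches and prefetchers; the shuffled gather defeats them.
# (NumPy assumed; the array size is arbitrary.)
import time
import numpy as np

values = np.random.rand(20_000_000)              # ~160 MB of 64-bit floats
shuffled = np.random.permutation(len(values))    # a random visiting order

t0 = time.perf_counter()
total_sequential = values.sum()                  # one contiguous pass over memory
t1 = time.perf_counter()
total_shuffled = values[shuffled].sum()          # the same sum, via a random gather
t2 = time.perf_counter()

print(f"sequential: {t1 - t0:.3f}s  shuffled gather: {t2 - t1:.3f}s")
```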
We have a striking example in the history of deep learning. The early neural network literature was obsessed with biological inspiration: neurons, synapses, learning rules that sounded like neuroscience. The breakthrough came when researchers abandoned any pretense of imitating the brain and instead optimized ruthlessly for GPUs and numerical stability. Rectified linear units, mini-batches, dense layers, mixed-precision arithmetic: many of these ideas are less about neuroscience and more about accommodating the hardware. The field took off the moment it showed mechanical sympathy for silicon.
Supply chain is not image recognition, of course. But it is every bit as computational, and it has just as much to gain from respecting the machine.
Supply Chain as a Computational Problem
Supply chains are often described in terms of physical flows: factories, warehouses, trucks, and stores. What is less obvious at first sight is that behind every pallet that moves, there are dozens of small decisions: what to buy, what to produce, how much to ship, where to store, at what price to sell, when to mark down, which order to prioritize.
Each of those decisions is made under uncertainty. Demand may spike or collapse. Lead times slip. Suppliers fail. Transportation capacity evaporates overnight. If we want to make those decisions systematically better than human guesswork, we need algorithms that consider many scenarios, not just a single forecast; that compare options in terms of expected profit, not just service level; that propagate constraints through the entire network instead of treating each warehouse or store in isolation.
All of this is computationally expensive. A probabilistic approach that keeps track of thousands of plausible futures and evaluates decisions against each of them will do far more arithmetic than a simplistic safety-stock formula. A network-level view that couples inventory, pricing, and capacity planning will push far more data through the machine than a local reorder-point calculation.
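To make the asymmetry concrete, here is the baseline written out as a minimal sketch, assuming Python; the function name, the variable names, and the service-level constant are illustrative rather than taken from any particular system. It amounts to a handful of multiplications per SKU, which is exactly why it fits comfortably inside a sloppy stack. A probabilistic alternative, sketched later in this article, multiplies that work by thousands of scenarios per decision.

```python
# The simplistic safety-stock baseline: a point forecast plus a buffer.
# A few arithmetic operations per SKU, and nothing about the shape of the
# demand distribution beyond its mean and standard deviation.
# (Names and the service-level constant are illustrative.)
import math

def reorder_point(mean_daily_demand: float,
                  demand_std_daily: float,
                  lead_time_days: float,
                  z_service: float = 1.65) -> float:
    """Expected demand over the lead time, plus a normal-theory safety stock."""
    safety_stock = z_service * demand_std_daily * math.sqrt(lead_time_days)
    return mean_daily_demand * lead_time_days + safety_stock

print(reorder_point(mean_daily_demand=12.0, demand_std_daily=5.0, lead_time_days=9.0))
```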
This is where mechanical sympathy becomes decisive. If the underlying software and hardware stack is sloppy, if every query triggers dozens of round trips to a transactional database, if the algorithms are written in ways that defeat caching and parallelism, then these richer methods simply do not fit within the available time window. You end up shrinking the problem until it fits the stack, instead of fixing the stack so that the machine can address the real problem.
When I see a company whose replenishment “optimization” must start at 6 p.m. to finish by 6 a.m., I know that the organization will never seriously experiment with alternatives. Every new idea becomes a multi-week project, because just getting one run takes half a day. The economics of experimentation collapse.
The Mainstream View: Hardware as an Afterthought
If you look at mainstream supply chain textbooks, you will find extensive discussions of network design, sourcing strategies, contracts, inventory policies, transport modes, and performance metrics. You will also find chapters on “information technology” describing ERP systems, planning tools, and integration. What you will almost never find is a serious discussion of how the underlying computing actually behaves.
IT is treated as a neutral enabler. The message, roughly, is that once you have chosen your software and integrated your data, you can focus on process design and managerial levers. The inner life of the machine – how memory is laid out, how data is stored, how computation is scheduled – belongs to vendors and technicians.
The same mindset pervades most enterprise software in our field. Planning systems are built on top of transactional databases that were originally designed for booking orders and updating stock levels, not for crunching billions of probabilistic scenarios overnight. The architecture is usually organized around screens and forms that create, read, update, and delete records. Additional “optimization modules” are bolted on: a forecasting engine here, a routing solver there, an inventory heuristic somewhere else.
From the outside, this looks modern enough: web interfaces, cloud deployment, APIs, maybe even “AI” in the marketing brochure. Under the hood, however, the computational core is often starved. Calculations are dispatched through layers of abstraction that fragment the data, scatter the work, and defeat the hardware’s strengths. The result is software that struggles to keep up with today’s data volumes despite running on machines thousands of times faster than those of thirty years ago.
Niklaus Wirth famously quipped that software is getting slower more quickly than hardware is getting faster. In supply chain, you can see this directly: it still takes multiple seconds to open the “details” page for a single product-location combination in many large systems, even though the underlying hardware should be able to scan millions of such records in the same time. We have managed to consume almost all the progress of the hardware in layers of inefficiency.
Once inefficiency is baked into the architecture, the consequences are not merely technical; they are doctrinal. If your platform cannot afford to keep track of thousands of scenarios for each SKU, then you will favor methodologies that only require a single point forecast. If your engine cannot evaluate the financial impact of decisions at the network level, you will gravitate toward local rules and key performance indicators that can be computed in isolation. “Theoretical” limitations quickly become “practical” dogma.
What Changes When You Care About the Machine
What happens if we flip the story? Suppose we start from the assumption that the machine matters.
First, we begin to design data flows that respect locality. Instead of scattering information across dozens of tables and asking the database to stitch it back together for every computation, we arrange data so that a single pass over memory provides everything an algorithm needs. This alone can shift performance by a full order of magnitude.
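As a toy illustration, assuming Python with NumPy and invented column names: when the demand history is held as columnar arrays keyed by SKU, every per-SKU aggregate falls out of a single contiguous pass, with no round trips needed to stitch records back together.

```python
# Locality-friendly layout: columnar arrays instead of per-SKU queries.
# One contiguous pass over memory computes every per-SKU total at once.
# (NumPy assumed; the tiny arrays and the column names are illustrative.)
import numpy as np

sku_id   = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2])                  # one entry per sale line
quantity = np.array([3, 1, 4, 2, 2, 5, 1, 1, 3], dtype=np.float64)

totals = np.bincount(sku_id, weights=quantity, minlength=sku_id.max() + 1)
print(totals)   # one total per SKU, without a single database round trip
```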
Second, we favor batch and vectorized operations over chatty, row‑by‑row processing. The hardware is extraordinarily good at doing the same operation over and over on large arrays. It is terrible at answering thousands of tiny, unrelated questions that require jumping around in memory and across the network. When the analytical part of a supply chain system is expressed as a coherent program rather than as a swarm of form-driven transactions, it becomes much easier to harness this strength.
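The gap is easy to demonstrate, again assuming Python with NumPy and invented data: the same question answered row by row and then in one batch. On commodity hardware the batch form is typically around two orders of magnitude faster, and real transactional systems add network round trips on top of the interpreted loop.

```python
# The same calculation expressed two ways. The row-by-row version mirrors
# form-driven, transaction-style code; the batch version hands the whole
# array to the hardware at once. (NumPy assumed; data is synthetic.)
import time
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
on_hand = rng.integers(0, 500, size=n).astype(np.float64)
reorder_point = rng.integers(0, 300, size=n).astype(np.float64)

t0 = time.perf_counter()
slow = [on_hand[i] < reorder_point[i] for i in range(n)]   # one tiny question per record
t1 = time.perf_counter()
fast = on_hand < reorder_point                             # one vectorized comparison
t2 = time.perf_counter()

print(f"row-by-row: {t1 - t0:.3f}s  vectorized: {t2 - t1:.4f}s")
```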
Third, we look at the entire decision pipeline, from raw data to the quantities that appear on purchase orders and pick lists, as something that can and should be profiled, optimized, and re‑engineered over time. We stop treating “the model” as a black box and start treating the recipe as software. This is precisely what allowed fields like computer graphics and scientific computing to evolve from slow prototypes into industrial tools: engineers spent years shaving off inefficiencies at every layer.
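Concretely, treating the recipe as software can start with nothing more exotic than the standard profiler. The sketch below, assuming Python and with placeholder stages standing in for real data loading, forecasting, and optimization steps, shows the decision pipeline as ordinary code whose hot spots can be measured rather than guessed at.

```python
# A decision pipeline as profilable code (standard-library cProfile).
# The three stages are placeholders for real loading, forecasting, and
# optimization steps; the point is that the whole recipe can be measured.
import cProfile
import random

def load_history(n=200_000):
    return [random.random() * 10 for _ in range(n)]            # stand-in for raw data

def build_scenarios(history):
    return [[d * random.uniform(0.5, 1.5) for d in history[:1000]] for _ in range(50)]

def optimize_orders(scenarios):
    return [sum(s) / len(s) for s in scenarios]                # stand-in for the decision step

def decision_pipeline():
    return optimize_orders(build_scenarios(load_history()))

if __name__ == "__main__":
    cProfile.run("decision_pipeline()", sort="cumulative")     # which stage dominates?
```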
The direct benefit is speed. A computation that used to take six hours might now take six minutes or six seconds. But the deeper benefit is not the raw number; it is what the speed enables. If your team can run a hundred variants of a replenishment policy in a day instead of one variant per week, they will explore ideas that were previously unthinkable. They will refine models in response to disruptions, test alternative assumptions, and gradually push the frontier of what is economically achievable.
This Is Not Geek Vanity, It Is Economics
Some readers may worry that all of this sounds like an engineer’s obsession with elegance for its own sake. Who cares how many cache misses an algorithm makes, as long as the shelves are stocked and the trucks leave on time?
The answer is that inefficiency is not free. In the age of the cloud, you pay for it twice.
You pay directly, in the form of oversized infrastructure. If your software requires ten times more compute and storage than necessary to produce the same decisions, you will pay ten times more to your cloud provider or for your hardware – or you will accept ten times longer runtimes, which is simply another form of cost.
You also pay indirectly, in the form of weaker decision logic. Because inefficient systems struggle to compute sophisticated policies at scale, vendors simplify the mathematics to fit the machine. They reduce probability distributions to single numbers. They decouple processes that are tightly linked in reality so that they can be calculated in separate passes. They hide crude approximations behind glossy dashboards. You may never see the shortcuts, but you will feel them in the form of excess safety stock, missed sales, and brittle reactions to shocks.
Mechanical sympathy, by contrast, lets you invest your computational budget where it matters most: in exploring uncertainty and trade‑offs. An efficient system can afford to simulate thousands of futures and pick decisions that maximize expected profit while controlling risk, instead of relying on rules of thumb. It can afford to recalculate frequently, so that decisions stay fresh in the face of new data. It can afford to keep the entire network in view instead of myopically optimizing each node in isolation.
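Here is a minimal sketch of what simulating thousands of futures and picking a risk-aware, profit-maximizing decision can look like, assuming Python with NumPy; the demand model, the prices, and the risk threshold are invented for illustration. The workload is a full decision-by-scenario matrix: trivial for a system with mechanical sympathy, prohibitive for one without it.

```python
# Evaluate every candidate order quantity against thousands of simulated
# futures, then pick the quantity with the best expected profit among those
# whose downside stays acceptable. (NumPy assumed; all numbers are invented.)
import numpy as np

rng = np.random.default_rng(0)
unit_price, unit_cost, salvage_value = 25.0, 14.0, 4.0

demand_scenarios = rng.gamma(shape=4.0, scale=20.0, size=10_000)   # plausible futures
candidate_orders = np.arange(0, 301)                               # decisions to compare

# Profit of every decision under every scenario (a 301 x 10,000 matrix).
sold = np.minimum(candidate_orders[:, None], demand_scenarios[None, :])
leftover = candidate_orders[:, None] - sold
profit = unit_price * sold + salvage_value * leftover - unit_cost * candidate_orders[:, None]

expected_profit = profit.mean(axis=1)
downside = np.quantile(profit, 0.05, axis=1)            # a crude tail-risk measure

acceptable = downside > -200.0                          # keep decisions with a bounded worst case
best_order = candidate_orders[acceptable][expected_profit[acceptable].argmax()]
print(best_order, expected_profit[acceptable].max())
```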
In that sense, mechanical sympathy is not a technical preference; it is an economic stance. It says: we will not waste scarce computational resources on gratuitous overhead; we will spend them on the calculations that actually change our cash flows.
What I Expect from Supply Chain Leaders
None of this means that every supply chain executive must become a systems programmer. But I do believe that leadership teams need a basic sense of scale and direction. You do not have to design engines to know that a truck that consumes three times more fuel than its peers for the same route is a problem. You do not have to be a chip designer to recognize that a system that takes hours to perform a calculation that could reasonably be done in seconds is a problem.
When you select software or a vendor, you should feel comfortable asking questions such as:
How often can we realistically recompute all key decisions for the entire network?
What happens if we double the number of SKUs or locations – do runtimes double, or do they explode?
How much hardware does this platform require to process our data, and how confident are we that those requirements will not balloon in two years?
If the answers are evasive, or if nobody in the room can even estimate orders of magnitude, something is wrong. In the LokadTV conversation on computational resources, I compared this to basic geography: you do not need to know the exact elevation of every mountain, but you should know whether a given road crosses the Alps or a flat plain.
Likewise, internal teams should be encouraged to see their analytical work as code that can be profiled and improved, not as static “models” that are either right or wrong. The ability to reason about performance is part of professional competence, just as the ability to reason about unit economics is part of finance.
A Different Kind of Sympathy
Mechanical sympathy is ultimately a form of respect.
It is respect for the machine: acknowledging that our servers are not infinite, that their quirks and limits are real, and that we ignore them at our peril.
It is respect for the people who rely on those machines: planners who need timely, trustworthy recommendations; managers who need room to experiment; executives who must commit capital based on what the software tells them.
And it is respect for the discipline of supply chain itself. If we claim that our field is about making better decisions under uncertainty, then we cannot shrug at the quality of the machinery that produces those decisions. We owe it to ourselves – and to the companies that trust us – to treat computational resources as seriously as we treat warehouses and fleets.
The mainstream view has been to treat hardware and software internals as a backstage concern, something that “IT will handle.” I believe that view has reached the end of its usefulness. The more our supply chains depend on algorithms and data, the more we need to bring the machine into the spotlight.
Mechanical sympathy does not mean turning every planner into a programmer. It means cultivating, at every level, an informed curiosity about how our tools really work – and a refusal to accept slowness, opacity, and waste as inevitable.