Enterprise software vendors like to call their platforms the “digital backbone” of the firm. In the supply chain world, this backbone usually means ERP, with a supporting cast of MRP, WMS, TMS, CRM, and similar systems. For many organizations, the natural next step seems obvious: if ERP already runs the transactions, why not let it run the decisions as well? Add some planning modules, a forecasting engine, a few AI features, and the supply chain should more or less take care of itself.

[Figure: abstract diagram separating the ERP backbone from the decision engine]

After nearly two decades spent building software that does exactly the opposite of this, I have reached a rather impolite conclusion: the usual transactional vendors are structurally ill-suited to ever deliver satisfying supply chain decision systems. This is not a complaint about their competence. It is about the category of software they sell, the economic incentives they face, and the expectations the market has placed on them.

I explore this argument at length in my book Introduction to Supply Chain, but I want to tease out one thread here: why “where the brain lives” in your software landscape matters more than most of the literature is willing to admit.

What we are really asking software to do

Modern supply chains are not just about moving boxes efficiently. They are about placing bets under uncertainty, every single day.

Should we buy one container or three? Delay a promotion or bring it forward? Ship scarce stock to Europe or the US? Accept this supplier’s minimum order quantity, or push back and risk a stockout? Each decision is a small wager with asymmetric consequences. Some mistakes cost a little; others cost a lot and keep costing for months.

Fifty years ago, humans could plausibly keep most of these bets in their heads, assisted by ledgers and spreadsheets. Today, even mid-size companies manage tens of thousands of SKUs, hundreds of locations, volatile lead times, and demand patterns that react to prices, weather, marketing, competitors, and supply disruptions. The combinatorics are simply beyond unaided human judgment.

So we ask software to help. But “software” is not a single thing. In a typical company, three very different kinds of systems coexist:

First, there are the transactional systems: ERP, MRP, WMS, TMS, OMS. Their job is to capture everything that happens: orders, receipts, shipments, invoices, movements in and out of locations. They are glorified ledgers plus workflows. Their primary virtues are reliability, traceability, permissions, and auditability. If a shipment goes out twice or a payment disappears, everything else becomes moot.

Second, there are the reporting and analytics systems: BI tools, dashboards, KPI portals, spreadsheets fed by data warehouses. Their job is to summarize and visualize what has happened, and sometimes what might happen. They are built for humans to inspect: charts, tables, exception lists, variance reports.

Finally, there is something that is mostly missing in practice: a true decision engine. By this I mean software whose primary job is to take all those raw facts, understand their patterns, evaluate options, and then emit concrete, auditable decisions: purchase orders, allocation moves, production schedules, price changes, assortment changes. These decisions should be produced routinely and unattended, and then written back into the transactional systems as if a very consistent, very fast planner had done the work.

The mainstream view tends to blur these three roles together. If the ERP “backbone” already integrates data and runs the processes, surely it can also plan and optimize. If not, another all-purpose platform will: a “digital core,” a “control tower,” or a “unified planning suite.” In the literature you will routinely find ERP described as the core of business information processing and the backbone of the enterprise, with supply chain planning and execution presented as natural extensions of that same platform.

On paper, this is appealing. In practice, it is precisely where things start to go wrong.

Why transactional vendors are in the wrong business to build the brain

To understand the misalignment, it is useful to step back from branding and look at what ERP and its cousins actually are.

At their heart, these systems are large, multi-user databases with strict rules about who can change what, and in what sequence. They are optimized for many small, simple, well-structured operations happening at once: book this order, post that invoice, confirm this receipt. When ERP textbooks call them the “backbone” of the enterprise, they mean it literally: they carry the nerves and blood vessels of day-to-day operations.

A genuine decision engine, by contrast, is not optimized for thousands of lightweight clicks. It is optimized for heavy thinking.

It needs to ingest large quantities of data, often across several systems. It needs to explore many possible futures, not just extrapolate a single forecast. It needs to evaluate options according to economic criteria, not just service levels or lead time targets. It needs to perform computations that are occasionally expensive: probabilistic models, simulations, combinatorial optimizations. And it needs to do all this in a way that can be audited later: why did we buy twenty pallets of this item on that day, instead of fifteen or none?
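To make this workload concrete, here is a minimal sketch of what “exploring many futures and evaluating options economically” can look like for a single SKU. The normally distributed demand, the cost figures, and the candidate quantities are all illustrative assumptions; a real engine would fit its demand model to history and score far richer action spaces.

```python
import random

# Illustrative economics for a single SKU; every figure here is an assumption.
UNIT_COST = 8.0          # purchase price per unit
UNIT_PRICE = 14.0        # selling price per unit
HOLDING_COST = 0.5       # cost of carrying one leftover unit over the horizon
STOCKOUT_PENALTY = 3.0   # estimated goodwill loss per unit of unmet demand
ON_HAND = 120            # current stock position

def simulate_demand(n_scenarios: int) -> list[int]:
    """Draw demand scenarios; a real engine would fit this to actual history."""
    return [max(0, round(random.gauss(mu=150, sigma=40))) for _ in range(n_scenarios)]

def expected_outcome(order_qty: int, scenarios: list[int]) -> float:
    """Average economic result of ordering `order_qty`, across all scenarios."""
    total = 0.0
    for demand in scenarios:
        available = ON_HAND + order_qty
        sold = min(demand, available)
        leftover = available - sold
        unmet = demand - sold
        total += (sold * UNIT_PRICE
                  - order_qty * UNIT_COST
                  - leftover * HOLDING_COST
                  - unmet * STOCKOUT_PENALTY)
    return total / len(scenarios)

scenarios = simulate_demand(10_000)
best_qty = max(range(0, 301, 10), key=lambda q: expected_outcome(q, scenarios))
print(f"Most profitable order quantity under these assumptions: {best_qty}")
```

Even this toy version has the runtime profile described above: it chews through thousands of scenarios for a single item, which is exactly the kind of work you do not want competing with invoice postings for the attention of your transactional database.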

This is not a question of programming language or clever optimization. It is a different runtime profile, different engineering trade-offs, and different forms of accountability. Placing this kind of engine inside the same environment that runs your daily invoices and warehouse screens is like mounting a jet engine inside a city bus. It is not simply overkill; it actively interferes with the bus doing its job.

From there, several structural problems emerge.

The first is operational. If you try to run heavy analytics and large optimization jobs directly on your transactional platform, you either throttle the analytical work to avoid slowing down operations, or you risk compromising response times when humans are trying to get work done. The more you succeed at embedding “intelligence,” the more you degrade the system’s primary mission: never lose a transaction and never block a user.

The second is semantic. Modern companies rarely run on a single monolith. They have ERP for finance and core operations, WMS for warehouses, TMS for transportation, CRM for customer interactions, e‑commerce platforms, sometimes specialized systems for manufacturing or retail. Each of these systems has its own vocabulary and its own version of the truth. Yet a useful decision engine must see across all of them. The natural tendency of ERP vendors, however, is to treat their own schema as the center of the universe. Everything else is either mapped into it in a lossy way or ignored.
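To illustrate what “seeing across all of them” means in practice, here is a small sketch of the normalization step a decision engine has to perform before it can reason about anything. The field names and records are invented for illustration, not taken from any particular ERP or WMS schema.

```python
from dataclasses import dataclass

@dataclass
class StockPosition:
    """Canonical view used by the decision engine, independent of source systems."""
    item: str
    location: str
    on_hand: int

# Hypothetical extracts: each system names and shapes the same facts differently.
erp_rows = [{"item_id": "A-1001", "plant": "DC-PARIS", "qty_on_hand": 120}]
wms_rows = [{"sku_code": "A1001", "warehouse": "PARIS-01", "units": 118}]

def from_erp(row: dict) -> StockPosition:
    return StockPosition(item=row["item_id"].replace("-", ""),
                         location=row["plant"],
                         on_hand=row["qty_on_hand"])

def from_wms(row: dict) -> StockPosition:
    return StockPosition(item=row["sku_code"],
                         location=row["warehouse"],
                         on_hand=row["units"])

positions = [from_erp(r) for r in erp_rows] + [from_wms(r) for r in wms_rows]
# Even in this toy case the two systems disagree on identifiers, location codes,
# and quantities; those discrepancies are signals the engine must handle
# explicitly, rather than silently trusting whichever system it was born in.
```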

The third is economic. Large transactional vendors are mostly paid for licenses, seats, modules, and storage; in the cloud era, add subscription tiers and consumption. They are not paid as a function of how many decisions they can safely automate or how much cash they put back into your P&L. If anything, a truly effective decision engine would reduce the number of “planner” seats and simplify many workflows. This is not what their pricing model is designed to reward.

Overlay this with the certification ecosystem and the picture becomes even clearer. The APICS/ASCM CPIM program, for example, is explicitly presented as the global standard for planning and inventory management, and training material openly notes that ERP systems embed business rules derived from this body of knowledge.

In other words: standard planning logic is assumed to live inside the ERP. Vendors are expected to encode it; practitioners are expected to configure and follow it. The mission is to align practice with the canon, not to rethink the software architecture or the economics of decisions.

Finally, there is culture. Building record-keeping systems tends to attract, reward, and promote a particular engineering mindset: one that prizes stability, backward compatibility, and coverage of edge cases. Over decades, these systems accrete layers of functionality, modules, customizations, and integrations. They become extremely capable, but also extremely complex. Adding yet another planning module or AI feature is culturally easier than removing anything. The result is an ever-growing mass of configuration screens and parameters, woven into workflows that are already fragile.

Asking this machine to reinvent itself as a sharp, opinionated decision engine is optimistic at best.

The mainstream narrative, in its own words

If you read vendor material, white papers, and much of the academic and professional literature, you will see the same story repeated with minor variations.

ERP is presented as a unifying platform that integrates all data and processes, “harmonizing” procurement, inventory, order processing, and distribution while embedding analytics and planning tools. It is described as the software foundation of the business, the “digital core” that underpins everything from finance to manufacturing to the supply chain. On top of this core, we are promised increasingly advanced features: demand forecasting powered by predictive analytics, scenario modeling, integrated business planning, and more.

In parallel, the operations management canon provides the mental models: the CPIM body of knowledge, S&OP frameworks, safety stock formulas, MRP logic. These are codified as best practices, exam syllabi, and maturity models. The implied promise is simple: if you implement the right ERP, configure it according to these standards, and train your planners accordingly, your supply chain will behave.

If reality does not match the promise, the usual explanations are familiar: data quality, change management, project scope, lack of executive sponsorship, insufficient training. The remedy is always more of the same: more careful implementations, more disciplined processes, more complete adoption of the standard body of knowledge.

Notice what is rarely questioned: the assumption that the “brain” of the supply chain should live inside the very same systems that book orders and print invoices.

What actually happens on the ground

On the ground, most organizations end up with a pattern that is depressingly consistent across industries and geographies.

The transactional systems do their job reasonably well. Orders flow, stock moves, invoices go out, financial closes happen. There are always quirks and frustrations, but by and large the ledgers hold.

Around this core, reporting proliferates. BI tools connect to data warehouses, which extract from ERP and its satellites. Teams build dashboards, scorecards, cockpits, and control towers. Planners spend a growing fraction of their time navigating these screens, reconciling discrepancies between them, exporting data to spreadsheets, and re-importing “adjusted” numbers.

Planning and optimization modules do exist in many of these systems. They generate forecasts, propose replenishment quantities, raise alerts. Yet most of the heavy lifting remains manual. Forecasts are overridden. Suggested orders are “reviewed” one by one. Exception lists grow long enough that no one can reasonably clear them in a working day, so people develop local heuristics: trust these indicators, ignore those, always favor this vendor, never touch those items.

Automation mostly takes the form of conditional logic inside the transactional systems: if availability is above a certain level, release this order; if below, park it in an exception queue. From a distance, this can look like intelligent behavior. Up close, it is brittle rules plus human coping strategies.
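For the sake of concreteness, here is a caricature of that kind of embedded “automation”; the threshold and field names are invented.

```python
# A caricature of the conditional logic typically embedded in transactional
# systems: a fixed threshold, no economics, and a growing exception queue.
RELEASE_THRESHOLD = 0.95  # arbitrary availability cut-off, tuned once, rarely revisited

def route_order(order: dict, availability: float, exception_queue: list) -> str:
    if availability >= RELEASE_THRESHOLD:
        return "released"
    exception_queue.append(order)   # a human is expected to deal with it, someday
    return "parked"

queue: list[dict] = []
print(route_order({"order_id": 4711, "sku": "A1001"}, availability=0.91, exception_queue=queue))
print(f"Exceptions waiting for a human: {len(queue)}")
```

Nothing in this logic knows what the parked order is worth, what the delay costs, or how long the queue already is; those judgments are deferred to whoever eventually works the list.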

I sometimes call this situation “automation as paperwork”: the systems are elaborate, busy, even impressive, but they rarely carry full responsibility for a decision that has real financial weight. There is always a human expected to click “OK” somewhere, even if that click has become a ritual.

This is not what a mature decision engine looks like.

A different separation of concerns

If we take seriously the idea that supply chain is a daily practice of economic decision-making under uncertainty, then we should design the software landscape to reflect that.

In that landscape, transactional systems keep doing what they do best: they remain the authoritative record of what has happened and what is currently committed. Their schemas and workflows will continue to evolve, and they will remain critical. But we stop asking them to be clever.

Reporting systems keep doing what they do best: they help humans see, understand, and discuss what is going on. Good dashboards and analytics will remain invaluable. But we stop confusing visualization with optimization.

Then, separately, we introduce a dedicated decision engine.

This engine receives regular feeds of raw data from all relevant transactional systems: orders, stock positions, capacities, lead times, prices, constraints. It does not care whether this data originates in ERP, WMS, TMS, or something else. It reconstructs a coherent view of the world from these facts, explicitly acknowledges uncertainty in demand and supply, and evaluates alternative actions according to their financial consequences: expected margin, risk of stockout, cost of obsolescence, and the opportunity cost of constrained resources.

The output of this engine is not a dashboard. It is a stream of proposed actions: buy this much of that SKU from that supplier on that date; ship these pallets from this warehouse to that one tonight; increase the price of this item by two percent; phase out that variant over the next month. Each action is accompanied by enough context that a human can audit it if necessary: what data it used, what patterns it inferred, what trade-offs it made.

Crucially, these actions are written back into the transactional systems using well-defined interfaces. The purchase order is created in ERP as if a planner had typed it. The stock transfer appears in the WMS as if a warehouse manager had initiated it. The price change lands in the pricing system with a clear validity date.
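As a rough sketch of what such an output contract could look like, here is one possible shape for a proposed action and its write-back. The field names and the write_back_to_erp function are illustrative assumptions, not the interface of any particular system.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProposedAction:
    """One concrete, auditable decision emitted by the engine."""
    kind: str                 # e.g. "purchase_order", "stock_transfer", "price_change"
    payload: dict             # what to do: SKU, quantity, supplier, dates...
    rationale: dict = field(default_factory=dict)  # data used, trade-offs, expected economics

action = ProposedAction(
    kind="purchase_order",
    payload={"sku": "A1001", "supplier": "ACME", "qty": 240, "order_date": str(date.today())},
    rationale={
        "expected_margin": 1450.0,
        "stockout_risk": 0.07,
        "obsolescence_cost": 35.0,
        "inputs": ["sales_history", "supplier_lead_times", "open_orders"],
    },
)

def write_back_to_erp(action: ProposedAction) -> None:
    """Hypothetical write-back: in practice this would call the transactional
    system's own interface (API, flat file, staging table) so the order
    appears exactly as if a planner had typed it."""
    print(f"Creating {action.kind} in ERP: {action.payload}")

write_back_to_erp(action)
```

The point is not the particular fields, but that every decision carries enough context to be audited after the fact, and that the transactional systems only ever see ordinary, well-formed transactions.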

Humans do not disappear from the loop. Their role changes. They become stewards of the numerical recipe: they decide which costs to include, how to value risk, what constraints are real, and which scenarios are acceptable. They review and refine the behavior of the engine, rather than micromanaging every line it produces.

This architecture sounds exotic only if you have spent too much time in the ERP-centric narrative. From the perspective of software engineering and economics, it is simply a clean separation of concerns: ledgers for facts, dashboards for understanding, engines for decisions.

How this confronts the mainstream view

Seen from this angle, the mainstream narrative is not “wrong” so much as incomplete.

It is perfectly reasonable to want an integrated transactional backbone. Companies benefited enormously from the move away from fragmented accounting, inventory, and manufacturing systems toward integrated ERP. It is also reasonable to want good reporting, and to standardize terminology and processes across the profession. Initiatives like CPIM have played an important role in giving people a shared language and basic toolkit.

Where I part ways with the mainstream is in the implicit assumption that, if we keep enriching the transactional backbone with more planning modules, more forecasting features, more analytics, and more configuration options, we will eventually arrive at effective, automated supply chain decisions.

I do not believe this convergence will happen.

As long as the “brain” is expected to live inside systems whose primary mission and business model are record-keeping, role-based workflows, and generic best practices, we will remain stuck in a pattern of impressive interfaces and timid automation. We will continue to see organizations where planners spend their days massaging lists of alerts rather than shaping the economic logic that generates those alerts.

The alternative is not to throw away ERP or ignore established methods. It is to treat them for what they are: necessary infrastructure and valuable professional heritage, but not the place where the real decisions should be made.

Once we accept this distinction, a more honest conversation becomes possible. We can ask our transactional vendors for excellent ledgers and clean interfaces, rather than “AI-powered optimization.” We can ask our BI teams for simple, truthful views of reality, rather than animated dashboards that pretend to be control rooms. And we can hold our decision engines—and the people who build and operate them—to a higher standard: nightly, unattended decisions, accountable in cash, continuously improved.

This is the direction we have pursued at Lokad for many years. It is not the only path, but it is one that starts from a simple observation: when your supply chain is effectively bounded by software, then where you put the brain of that software makes all the difference.