Supply chains involve a patchwork of enterprise software. These software layers have been gradually, and sometimes haphazardly, rolled out over the last four decades1. The venerable EDI (electronic data interchange) may sit next to a blockchain prototype. These systems predominantly operate the mundane, but essential, aspects of supply chains: production, storage, transport, billing, compliance, etc.

How many people does it take to change a supply chain light bulb?

These systems were not put in place with the intent to offer a clean data environment for R&D purposes. This single fact explains why most forecasting initiatives, and more generally most data science initiatives, fail in supply chain. By way of anecdotal evidence, it’s usually faster to physically move all the goods held by a warehouse to another site than it is to migrate all the IT plumbing to a new site.

As a consequence of this complexity, the roll-out of “modern” supply chain initiatives invariably involves too many specialists. For a sizeable company, the typical supply chain project involves:

  • The consultant who steers the project and assists the top management.
  • The IT infrastructure specialist who assesses the risks involved with the extra IT plumbing.
  • The database administrator who identifies the relevant tables in the relevant systems.
  • The ETL specialist who engineers the pipeline that ensures the data logistics.
  • The IT consultant who provides an extra pair of hands with the finicky IT parts.
  • The project coordinator who interfaces the IT folks with the supply chain folks.
  • The business analyst who builds most of the reports for the management.
  • The data scientist who will handle the predictive modelling part.
  • The vendor tech support who navigates the bugs of the tech being introduced.
  • The vendor salesperson who manages expectations and upsells “stuff” along the way.
  • The supply chain practitioner who represents the “customer’s voice”.
  • The supply chain executive who champions the initiative.

However, having many specialists on the case creates its own series of problems. Nobody, not even top management, really gets what is going on. The IT parts are invariably opaque for everyone but the IT folks. Meanwhile, IT is struggling so much and on so many fronts - not just supply chain - that they have very little bandwidth left to work out the fine print of the problems they are attempting to solve. Finally, data science exacerbates the problem with yet another discipline that is mostly opaque to consultants, IT and supply chain practitioners alike.

Furthermore, third parties - consultants, IT companies and tech vendors - all have agendas of their own, which are not aligned with the company’s. There is money to be made in ensuring some extra friction2 at every stage of the process. This allows things to start with a thin provisional budget, which “surprisingly” happens to grow steadily over time as more and more resources are poured into the initiative.

Part of the complexity listed above is irreducible, but another part is fairly accidental. As the old joke goes, every CEO knows that half of their company is doing nothing of value; they just don’t know which half.

In this regard, Lokad’s strategy, as a tech vendor, has been to tackle this accidental complexity head-on. The gist of it is simple: dramatically reduce the number of specialists involved. One person, namely the supply chain scientist, tackles the whole IT pipeline, which starts with raw input data and ends with finalized supply chain decisions. The supply chain scientist bears the full responsibility for everything that happens along the pipeline - smart bits included, like machine learning.

Classic enterprise software is not compatible with supply chain because a ‘configurator’ isn’t expressive enough to cope with the sheer diversity of problems faced by supply chains. A programming language3 is needed. Unfortunately, generic programming languages, like Python, are not compatible with the role of the supply chain scientist. The skill bar is too high, and those roles, within the company, devolve into software engineering roles. There is nothing wrong with having software engineers; it’s just that supply chain expertise then has to be re-introduced through specialists who are not software engineers. Soon enough, most of the roles listed above are part of the initiative.

However, for the supply chain scientist to wear this many hats, a dedicated programming environment is needed: one that lets the scientist tackle the challenges of a supply chain’s predictive optimization with the least amount of fuss4. Lokad’s technological answer to this problem was Envision, a domain-specific language.
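To make the shape of such an end-to-end pipeline concrete, here is a toy sketch in Python. This is illustrative only, not Envision: the data, the moving-average forecast and the reorder logic are all hypothetical. The point is the shape that the text describes - one short script going from raw records to a finalized decision, with no hand-offs between specialists.

```python
# Toy "raw data to decision" pipeline compressed into a single script.
# Hypothetical data and logic, for illustration only.

sales_history = {  # units sold per SKU over the last 4 weeks (raw input)
    "SKU-A": [12, 15, 11, 14],
    "SKU-B": [40, 38, 45, 42],
}
stock_on_hand = {"SKU-A": 10, "SKU-B": 90}
lead_time_weeks = 2
safety_factor = 1.5  # crude buffer against demand uncertainty

def reorder_quantity(sku: str) -> int:
    """Forecast demand over the lead time, then net it against stock."""
    weekly_forecast = sum(sales_history[sku]) / len(sales_history[sku])
    demand_over_lead_time = weekly_forecast * lead_time_weeks * safety_factor
    return max(0, round(demand_over_lead_time - stock_on_hand[sku]))

# The finalized supply chain decision: how much to purchase, per SKU.
decisions = {sku: reorder_quantity(sku) for sku in sales_history}
print(decisions)  # → {'SKU-A': 29, 'SKU-B': 34}
```

A real pipeline would of course ingest millions of records and use far better models, but the single-author, single-script structure is the point being made above.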

The concept of Envision is rooted in the idea that it’s better to be approximately correct than exactly wrong. One expert who can hold the whole supply chain situation in their mind is vastly more likely to produce a sensible solution than 10 experts, each one being only familiar with a facet of the situation. Moreover, the solution produced by a single mind - compared to the solution produced by a committee - is almost always simpler and easier to maintain.

In most engineering fields, the upside of having a committee working on the problem outweighs the extra friction introduced by the very existence of the committee. However, in supply chain, this is rarely the case. The end-to-end consistency of the strategy, obtained as the product of a single mind - or at least, a few of them - tends to trump most of the ‘local’ optimizations that a committee invariably delivers. Aligning supply and demand is fundamentally a system-level challenge5.

The prime value of the supply chain scientist is to operate at system-level, encompassing the whole supply chain, from the raw electronic records up to the strategy devised by the company’s top management. However, far from being a loner, the scientist gets a lot of help. IT facilitates access to the relevant data (without trying to preprocess the data). Operations document the processes in place, the operational constraints and the various overheads. Marketing clarifies the opportunity costs that cannot be read from the accounting books, e.g. stock-out costs. Top management crystallizes the vision, clarifying what it is that the scientist should be optimizing in the first place.

In the end, supply chain decisions6 are not the product of a “system” where responsibility is diluted over many people, frequently dozens. Those decisions, all of them, are the product of the numerical recipes implemented by the supply chain scientist, a single mind, who takes ownership of their performance with regard to the company as a whole. This person is fallible, but they get a lot of help, including peers ready to take over if the need arises. In my experience, this is the only way to even start optimizing a supply chain, even if any sizable committee will invariably bury all observers under KPIs, charts and reports in an attempt to prove the contrary.

  1. To get a glimpse of what engineering supply chain software may look like a couple of centuries from now, I recommend A Deepness in the Sky (1999), one of the very best books by Vernor Vinge. The advent of programmer archeologists as an established profession might even happen in our lifetime. ↩︎

  2. Frequently, the extra friction starts even before the supply chain initiative itself. Having consultants “helping” the company with RFI and RFQ processes will double both delays and budgets with near certainty. ↩︎

  3. This need for programmability is fulfilled nowadays by Microsoft Excel. The vast majority of present-day supply chains are run through spreadsheets, even when fancier systems like APS (advanced planning and scheduling) are supposedly in place. ↩︎

  4. Many IT concepts are better off abstracted away from the supply chain scientists. For example: object-oriented programming, text encoding, package management, network management, disk management, memory management, Linux administration, database administration, disaster recovery, API protocols, distributed computing, multithreading, injection attacks, side-channel attacks, etc. ↩︎

  5. Russell Ackoff illustrates system-level thinking with the design of a car. If the CEO of a car manufacturer were to ask their staff to identify, for every car part, the best part found on the market (the best brake pads, the best axles, …), putting together all those parts would not even result in an actual car. The parts would not fit. The “best” part only makes sense when considering the car as a whole, not in isolation. ↩︎

  6. How much to buy? How much to produce? When to increase / decrease the price? How much stock to move around? Etc. ↩︎