Why Practitioners Are Right to Ignore This “AI Era” Vision for Supply Chain
When forty‑odd professors and industry figures publish a “vision statement” for supply chain in the age of AI, one might expect something that helps an actual supply chain professional make better decisions on Monday morning.
The paper I have in mind is Supply Chain Management in the AI Era: A Vision Statement from the Operations Management Community by Maxime Cohen, Tinglong Dai, Georgia Perakis and thirty‑nine co‑authors. It announces, in its abstract, that the operations management (OM) community has “an important role and responsibility” not only in shaping how AI transforms supply chains, but in ensuring that the supply chains behind AI are “sustainable, resilient, and equitable.” It then develops a five‑layer framework—intelligence, execution, strategy, human, infrastructure—and surveys a large body of OM and AI literature through that prism.
On paper, this sounds auspicious. In practice, it is an almost perfect illustration of why supply chain practitioners are right to ignore the bulk of academic production in our field.
In my recent book Introduction to Supply Chain, I define supply chain as the mastery of options under uncertainty in the flow of physical goods and argue that, in a market economy, the practical goal of supply chain is to raise the firm’s risk‑adjusted rate of return on every scarce resource it touches—capital, capacity, time, goodwill. All the usual desiderata—higher service levels, shorter lead times, greener transport, happier employees—matter only insofar as they contribute to long‑run profit expressed in hard currency. Supply chain is not moral philosophy; it is applied economics that survives or perishes by the ledger.
Read against that yardstick, this “vision statement” ticks nearly every warning box I have come to distrust in academic writing about supply chain: supra‑economic virtue signaling, frameworks that do not touch actual decisions, a blizzard of self‑referential citations, and the persistent belief that another layer of time‑series modeling plus modern AI will somehow redeem the planning paradigm that has already failed practitioners for decades.
Let me unpack why.
Supra‑economic Virtues and the Ethics of Other People’s Balance Sheets
The most revealing sentence of the whole paper appears in the abstract:
“The OM community has an important role and responsibility to lead in shaping not only how AI transforms supply chains, but also how the supply chains that enable AI are designed to be sustainable, resilient, and equitable.”
The conclusion repeats the same trio of virtues, declaring that OM should guide us toward supply chains that are “more intelligent, equitable, and sustainable.”
Notice what happens here. Before telling us what supply chains are for, the authors tell us what adjectives they should satisfy: sustainable, resilient, equitable. No explicit economic objective is ever stated. Profit, capital productivity, risk‑adjusted return—these appear, if at all, only indirectly. The paper simply assumes that “efficiency” and “resilience” float alongside a set of favored moral goals, and that it is the duty of the OM community to push all of them at once.
In Chapter 4.4.5 of my book, “Supra‑economic goals”, I use that term—supra‑economic—for precisely this pattern: appeals to ends that allegedly outrank “mere” monetary considerations and thus justify overriding the discipline of prices, costs, and opportunity costs. Sometimes the tone is moralistic (“the firm should advance a social agenda beyond serving customers”); sometimes apocalyptic (“imminent catastrophe demands immediate sacrifice of profitability”). In both cases, the move is the same: economic calculation is quietly demoted, while the author’s preferred concern is elevated above it.
The problem is not that sustainability or equity would be unimportant. The problem is that scarcity does not disappear just because we invoke them. Every pallet, man‑hour and coin devoted to one objective is withheld from another. As I put it in the book: invoking a higher purpose “does not dissolve scarcity; it simply relabels trade‑offs … there is no alternative to profits and losses.”
If carbon emissions matter, they must enter the calculus as costs—through carbon prices, regulations, customer behavior, or brand risk—so that alternative decisions can be compared in a common unit. If equity matters, we must say whose equity, at what price, and with what consequences, again in a way that can be reflected in decisions and audited later. Otherwise, we are merely decorating the discussion with adjectives.
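To make this concrete, here is a minimal sketch, in Python, of what “entering the calculus as a cost” looks like for a transport choice. Every number is invented; in practice the carbon price would come from taxes, regulation, or an internal estimate of brand risk, and the delay cost from the firm’s own working‑capital and service economics.

```python
# Minimal sketch: folding a carbon concern into a transport decision through
# an explicit carbon price. Every figure below is invented for illustration.

CARBON_PRICE_EUR_PER_TON = 90.0  # assumed internal carbon price
DELAY_COST_EUR_PER_DAY = 60.0    # assumed cost of one extra day of lead time

options = {
    "air":   {"freight_eur": 4200.0, "co2_tons": 3.10, "lead_time_days": 2},
    "ocean": {"freight_eur":  950.0, "co2_tons": 0.35, "lead_time_days": 35},
}

def total_cost_eur(opt):
    """All-in cost in EUR: freight, plus priced carbon, plus priced delay."""
    return (opt["freight_eur"]
            + opt["co2_tons"] * CARBON_PRICE_EUR_PER_TON
            + opt["lead_time_days"] * DELAY_COST_EUR_PER_DAY)

for name, opt in options.items():
    print(f"{name:>5}: {total_cost_eur(opt):7.0f} EUR all-in")

# Once both options are expressed in EUR, the trade-off is auditable:
# raise the carbon price and observe exactly where the decision flips.
```

Nothing about this is sophisticated, and that is the point: once the concern is priced, it can be debated, audited, and revised; as an adjective, it can only be proclaimed.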
Yet the AI Era vision paper is content to declare that supply chains “must” be sustainable and equitable, without ever specifying what these words mean operationally, who pays for them, and how much. In the healthcare section, for instance, we are told that delivery supply chains must operate under “stringent safety and equity constraints.” As a matter of ethics, that sounds reassuring; as a matter of supply chain, it is empty. How safe is “safe enough”? Equity for which patient groups, at what cost in foregone throughput, and compared to which alternatives? No numbers, no prices, no trade‑offs.
Worse, the paper presents these supra‑economic aims as a responsibility of the OM community over other people’s balance sheets. It is one thing for a parliament to set taxes or safety standards after democratic debate. It is quite another for academics to tell managers that they have an obligation to design “equitable” supply chains with no explicit quantification of whom they are redistributing from and to. The former is politics; the latter is, at best, paternalism and, at worst, a quiet invitation to betray the fiduciary duty owed to shareholders, bondholders, employees, and customers who may not share the same priorities.
Once you accept that any cause can claim supra‑economic priority, there is no limiting principle. As I note in the book, history is littered with firms that enthusiastically aligned themselves with causes later recognized as disastrous—from openly discriminatory hiring to charitable support for eugenics—armed at the time with an impressive “scientific consensus.” In each case, economic calculation was subordinated to supra‑economic rhetoric; in each case, resources were squandered that could have been used to serve customers better.
Supra‑economic virtue signaling is not a harmless flourish. It is an ethical failure in its own right, because it clouds judgment about trade‑offs while spending resources that are not the authors’ to allocate. A “vision” for supply chain that begins and ends with such signaling teaches the next generation of practitioners that they should optimize for adjectives rather than for the hard‑currency consequences their decisions will have.
Frameworks, Layers, and the Appearance of Depth
The second hallmark of this paper is its love of frameworks and references.
After the abstract, the authors announce that they will structure their discussion around five “layers” of interaction between AI and supply chain management: intelligence, execution, strategy, human, and infrastructure. Each layer then gets its own section, and the rest of the paper is organized around this classification.
There is nothing inherently wrong with taxonomy. The question is always: what decisions change because we now have this particular taxonomy rather than another? If, tomorrow, we collapsed the five layers into three, or split them into eight, would a single purchase order, transfer, or price be different? The authors never attempt to answer this. The framework acts as a filing cabinet for pre‑existing ideas; it does not become an instrument of choice.
Practitioners have seen this movie before. In Introduction to Supply Chain I devote a few pages to how “planning” became the marketing banner for enterprise systems in the 1990s, even when they contained little more than time‑series forecasting and basic safety‑stock formulas. ERP vendors, followed by APS vendors, rebranded generic record‑keeping as “integrated planning,” then “advanced planning,” and, more recently, “digital twins” and “control towers.” The terminology changed; the spreadsheets and clerical workflows underneath did not.
The five‑layer architecture in this paper feels like another turn of that wheel. It creates the impression of depth, but there is no evidence that it leads to different decisions, better automation, or improved economics. A taxonomy that does not alter what happens on the warehouse floor or in the replenishment run is, from a practitioner’s standpoint, ornament, not progress.
The same applies to the reference list and the way it is used. The paper emphasizes that it came out of an “extensive collaborative process” involving 42 researchers, practitioners, and technology leaders, many of whom also contribute to the authors’ forthcoming book AI in Supply Chains: Perspectives from Global Thought Leaders. The references then lean heavily on that same circle: multiple citations of Cohen, Dai, Perakis and their co‑authors, as well as a cluster of recent working papers and in‑press articles by the author team.
Again, there is nothing illegitimate about citing your own work. The problem is that the sheer breadth of the list is presented as a kind of evidence in itself. Practitioners are treated to a parade of titles—“How machine learning will transform supply chain management,” “Using AI to detect panic buying,” “Large language models for supply chain optimization”—without being told how any of these pieces performs when applied to full‑scope, messy corporate data, when it emits unattended decisions, and when it is judged on actual profit and loss.
If you run a network of factories and warehouses, you do not care how many papers exist on a topic. You care whether there is a numerical recipe you can deploy on your records, under your constraints, that will make tomorrow’s purchase orders, transfers, and prices better in cash terms than yesterday’s. For that, one well‑documented field implementation, with full economic results and clear limitations, is worth more than a dozen vision statements and fifty citations.
The AI Era paper offers the former in only glancing, anecdotal form. A section on “Optimal Machine Learning” (OML) mentions two Fortune‑150 case studies where a consulting firm allegedly improved service levels and reduced inventory costs. The reader gets no baseline, no counterfactual, no detail about the total capital employed or the risk profile before and after. In other industry “spotlights”, we are told that JD.com built a strong analytics team and used AI to explain forecasts to management, or that humanitarian organizations can use AI for better pre‑positioning of stocks. All of this might be true; none of it rises above the level of a marketing brochure.
From the outside, it looks like a closed circuit: a circle of authors citing one another and their students in support of a framework they already agreed upon, with the occasional practitioner story sprinkled on top. For academics, this may be how a field signals activity. For practitioners, it signals that nothing here will help them decide how much to buy next week.
AI, Forecasting, and the Old Planning Equilibrium
The heart of the paper—the “intelligence layer”—is devoted to AI itself. Here, the authors describe how machine learning improves forecasting, how reinforcement learning can be used for inventory control, how an emerging paradigm called “decision‑focused AI” embeds optimization objectives into the loss function, and how large language models (LLMs) might provide natural‑language interfaces and “agentic reasoning via chain‑of‑thought” for complex supply chain problems.
Much of this is technically accurate in a narrow sense. Machine learning can, indeed, incorporate many features; reinforcement learning can, indeed, learn policies under simulation; LLMs can, indeed, parse and generate text around optimization models. The issue is not whether these tools exist; it is whether their use, as framed in the paper, addresses the actual structural weaknesses of the planning paradigm in supply chain.
It does not.
Forecasting is a good example. The authors write that machine learning “improves forecasting accuracy,” and that advanced demand predictions can rely on “hundreds of dynamic variables coming from both internal and external datasets.” Later, in their discussion of decision‑focused AI, they acknowledge that traditional “predict‑then‑optimize” pipelines can misalign prediction and decision, and propose training models directly on downstream decision costs.
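The misalignment they concede is real and easy to exhibit. In the following sketch, with invented numbers, two forecasts sit at the same squared‑error distance from demand and are therefore indistinguishable to an accuracy metric, yet one is several times costlier than the other once under‑ and over‑stocking carry asymmetric prices.

```python
# Minimal sketch of the predict-then-optimize misalignment, with invented
# numbers: two forecasts at the same squared-error distance from demand
# lead to very different cash outcomes once the costs are asymmetric.
import numpy as np

rng = np.random.default_rng(1)
demand = rng.poisson(100, size=50_000)  # assumed demand distribution

cu, co = 9.0, 1.0  # assumed per-unit stock-out and carrying costs

def expected_cost(stock):
    """Expected cash cost of committing to a given stock level."""
    return np.mean(cu * np.maximum(demand - stock, 0)
                   + co * np.maximum(stock - demand, 0))

for forecast in (95, 105):  # both 5 units off the mean: identical MSE
    print(f"stock at {forecast}: expected cost {expected_cost(forecast):.1f}")

# Roughly 65 vs 25: accuracy treats the two forecasts as equals;
# the ledger does not.
```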
All of this proceeds as if the fundamental problem of supply‑chain forecasting were lack of sophistication in the time‑series models. It is not.
In the book, I devote an entire section to why the time‑series paradigm is structurally ill‑suited to business decisions. A time series collapses a history of transactions into a sequence of numbers indexed by time buckets. That representation is lossy in ways that matter. Two demand structures can produce identical weekly sales series—one where a thousand independent customers each buy one unit per week, and one where a single large account buys all thousand units. In the first case, demand collapses slowly; in the second, it can collapse overnight. The weekly time series does not distinguish them, but the inventory risk is radically different.
Similarly, a product that sells ten units per week could be ten small baskets or one large basket. The time series is identical; the sensible stock position differs by a factor of four or more. Time‑series forecasting, however sophisticated, cannot recover information that the aggregation itself has destroyed. It is not a matter of adding more features or deeper networks; the representation is wrong for the decision.
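A quick simulation, with an invented churn rate, makes the asymmetry tangible: both structures emit an identical flat series of one thousand units per week, and even decay at the same average rate, yet only one of them can drop to zero in a single week.

```python
# Minimal sketch, with an invented churn rate, of how aggregation into a
# weekly time series destroys the risk structure of demand.
import numpy as np

rng = np.random.default_rng(2)
weeks, churn = 52, 0.01         # assumed weekly churn probability per account
survive = (1 - churn) ** weeks  # per-account survival over one year
n_runs = 100_000

# Structure A: 1,000 independent accounts buying 1 unit per week each.
# Structure B: a single account buying 1,000 units per week.
# Both emit the SAME flat weekly series of 1,000 units.
a = rng.binomial(1000, survive, n_runs)      # units still flowing after a year
b = 1000 * rng.binomial(1, survive, n_runs)

for name, x in (("A", a), ("B", b)):
    print(f"{name}: mean {x.mean():5.0f} units/week, "
          f"P(demand collapses to zero) = {(x == 0).mean():.2f}")

# Same series, same average decay; only B can vanish overnight.
```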
The paper never engages with this structural critique. It simply assumes, as countless papers before it have, that better time‑series forecasting is a central bottleneck in supply chain and that machine learning is the natural answer. The brief nod to decision‑focused losses is incremental: the models now optimize a more relevant loss function, but they are still trained on the same impoverished object.
Worse, when the paper does touch specific decision criteria, it reaches for the usual suspects: service levels and inventory costs. OML is praised for “significantly” improving service levels and reducing inventory costs in case studies. The underlying economic question—how much capital should be committed to which options, under which risk profile—is never formulated explicitly.
In the book, I rebrand safety stocks as “hazardous stocks” and note that the formulas behind them supply a litmus test for gross incompetence in supply chain. These formulas hinge on choosing a target service level—say, 95%—and treating that percentage as if it had an intrinsic connection to profit. It does not. Service level is a surrogate for a cash trade‑off between stock‑out pain and carrying cost. Unless we price both sides and compute the trade‑off explicitly, targeting “95%” or “97%” is numerology. As I also remark, service level has become a classic “escapee” KPI: a proxy that has broken free of its economic roots and now commands the organization, while nobody is forced to state actual prices.
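The cash trade‑off is not hard to write down; the classic newsvendor critical ratio does it in one line. A minimal sketch, with invented costs, shows how far the economically implied service level can drift from the ritual 95%:

```python
# Minimal sketch of deriving the service level FROM prices (the classic
# newsvendor critical ratio), instead of decreeing "95%" by fiat.
# The costs below are invented for illustration.

def implied_service_level(stockout_cost, carrying_cost):
    """Stock up to the point where the marginal unit's expected stock-out
    saving equals its expected carrying cost: the critical ratio."""
    return stockout_cost / (stockout_cost + carrying_cost)

print(implied_service_level(20.0, 5.0))  # fat margin, cheap stock  -> 0.80
print(implied_service_level(2.0, 6.0))   # thin margin, perishable  -> 0.25

# The economically sound target swings wildly with the prices. A flat 95%
# that prices neither side is numerology, not optimization.
```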
The AI Era paper never questions this KPI culture; it embeds AI inside it. Forecasting is improved; inventory policies may be adjusted; service levels become a little higher and inventory a little lower—and we are told this is progress. No mention is made of risk‑adjusted rates of return, of how options are valued against a working‑capital constraint, or of how model performance is judged at the boundary where recommendations are written back into the ERP and money actually moves.
The treatment of large language models is another example. The paper suggests that LLMs “promise to make advanced planning tools more accessible” and can provide natural‑language interfaces that “democratize access to advanced decision‑making tools.”
In the book, I argue that language models generally consume orders of magnitude more computation than specialized algorithms performing the same job and are unlikely to be competitive for numerical data processing. Their rightful role in supply chain is narrow: speeding up the writing and upkeep of numerical recipes and documentation, and extracting features from unstructured text. Using them as forecasting engines is explicitly misguided: they are “ill‑suited to time-series forecasting—or numerical work of any kind” and perform poorly, at high cost, compared with basic statistical models.
The vision paper, again, leans into the fashion: LLMs become “agentic” problem‑solvers that can help tune reinforcement‑learning policies and reason via chain‑of‑thought about complex supply‑chain decisions. There is no serious discussion of numerical reliability, cost, or the basic point that stochastic text generators are a very poor foundation for unattended commitments involving millions of dollars of inventory.
Stripped of its AI gloss, what the paper offers is the same planning equilibrium that has dominated for decades: forecasts as time series, plans as bundles of time series, service levels as talismans, humans validating the results. AI is invited to sit atop this stack as an enhancer, not to challenge its premises.
Why Practitioners Will (and Should) Look Away
None of this would matter much if the paper were merely an academic exercise. But it is explicitly pitched as a guide for practitioners and educators. Its authors conclude with calls to researchers, industry leaders, and universities, asking them to build curricula around human–AI collaboration, to develop governance frameworks for “ethical” AI deployments, and to design supply chains that enhance “resilience, productivity, and social welfare.”
The difficulty is that the underlying mental model never leaves the comfort of the seminar room.
There is no insistence that techniques be tested on full‑scope, messy corporate data, emitting unattended decisions and being judged against a cash‑denominated baseline. There is no insistence that supra‑economic concerns be translated into prices, regulations or quantified risks before they are allowed to override profit. There is no insistence that frameworks be justified by the concrete changes they induce in decisions—what is bought, moved, and priced—not by the number of slides they can fill.
In Chapter 6.2 of my book, when discussing general intelligence and the role of software in supply chain, I point out that many published models treat the crucial design choices—objective, constraints, admissible options—as implicit. They operate within neat, bounded puzzles while leaving the messy part, the part entrepreneurs actually get paid for, off‑stage. The remedy is conceptually simple, if hard in practice: state the economic objective in monetary terms, enumerate admissible options, define halting conditions, and then decompose the work into bounded subproblems machines can solve.
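For illustration, here is a minimal sketch of that recipe under invented figures. It is not a production implementation, merely the shape of one: every quantity is priced, every option is enumerated, and the loop halts for stated economic reasons.

```python
# Minimal sketch of that recipe, with invented figures: a monetary objective,
# an enumerated option space (one more unit of some SKU), and explicit
# halting conditions, all expressed in the same currency.
import math

HURDLE_RATE = 0.002  # assumed weekly cost of capital (~10% per year)

skus = {  # hypothetical two-SKU catalog
    "A": {"mean_demand": 20.0, "margin": 12.0, "unit_cost": 30.0, "carrying": 0.15},
    "B": {"mean_demand":  3.0, "margin": 40.0, "unit_cost": 80.0, "carrying": 0.40},
}

def p_sell(mean_demand, stock):
    """P(demand > stock) under a toy Poisson demand model."""
    cdf = sum(math.exp(-mean_demand) * mean_demand**k / math.factorial(k)
              for k in range(stock + 1))
    return 1.0 - cdf

def marginal_return(sku, stock):
    """Expected cash return, per euro invested, of buying one MORE unit."""
    gain = p_sell(sku["mean_demand"], stock) * sku["margin"]
    cost = sku["carrying"] + sku["unit_cost"] * HURDLE_RATE
    return (gain - cost) / sku["unit_cost"]

stocks, budget = {name: 0 for name in skus}, 2000.0
while True:
    affordable = [n for n in skus if skus[n]["unit_cost"] <= budget]
    if not affordable:
        break  # halting condition 1: out of cash
    best = max(affordable, key=lambda n: marginal_return(skus[n], stocks[n]))
    if marginal_return(skus[best], stocks[best]) <= 0:
        break  # halting condition 2: no remaining option earns its capital
    stocks[best] += 1
    budget -= skus[best]["unit_cost"]

print(stocks, f"unspent: {budget:.0f} EUR")
```

Note what is absent: no service‑level target and no accuracy metric, only a ranking of options by return per euro and a stopping rule stated in cash.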
The AI Era vision statement does not do that. It begins from unpriced adjectives, piles on a classification, surveys a literature mostly written by its own authors and their peers, and then calls for more of the same under the banner of AI. It is eloquent, earnest, and, for anyone trying to run a supply chain, almost entirely beside the point.
That is why practitioners ignore this kind of work. Not because they are anti‑intellectual, but because they have learned, often the hard way, that frameworks without objective functions, forecasts without an honest discussion of representation limits, AI without an economic yardstick, and ethics without prices all converge to the same place: impressive slide decks, modest pilot projects, and no durable uplift in the rate of return of the business.
If academia wants to matter again in supply chain, it will have to reverse the pattern illustrated so clearly by this paper. Start with economics, not adjectives. Translate concerns—environmental, social, or otherwise—into explicit trade‑offs instead of moral slogans. Judge models by their performance on messy data, under real constraints, with unattended decisions and money at stake. Accept that time‑series planning is, for many problems, a dead end, and that AI is not magic fertilizer for a flawed paradigm.
Until then, practitioners are not merely justified in ignoring such vision statements. They are being prudent.