Every few weeks, I meet an executive who is still “deploying the ERP.” The program started under a previous leadership team. The budget has become a number no one wants to repeat out loud. And the post‑mortem, already half-written, points fingers at the vendor, the integrator, the “change resistance,” or the uniqueness of the business. That story is so common that it has become background noise.

[Image: hourglass of money before a sprawling ERP maze]

The uncomfortable truth is that most ERP deployments are slow and expensive for structural reasons. Not because every team involved is incompetent. Not even because the requirements were unclear. They are slow and expensive because the economic and technical incentives that shaped ERPs over decades make them slow and expensive as a category.1

In my book Introduction to Supply Chain (Chapter 5, “Information”, section “Systems of records”), I spend time on a surprisingly neglected idea: the software that records transactions is both indispensable and fundamentally limited. Its job is to be a reliable ledger—nothing more. When companies forget that, they start paying “strategic” prices for what is, at its core, a transactional clerical machine. That misunderstanding is where the trouble begins.

The entity jungle and the vendor treadmill

Let’s start with what an ERP is in practice. It is a transactional system, built on top of a relational database, that tracks the company’s resources and routine operations through many modules, many screens, and a large catalog of “entities” (products, customers, invoices, orders, and so on).1

Now, here is the dynamic that quietly ensures that ERP deployments will remain painful.

An ERP vendor does not build software “for you.” They build software for the sum total of their client base, over decades. The product grows by accretion. Every big client comes with a handful of “must-have” concepts, exceptions, workflows, regulatory quirks, and vocabulary that will be requested as first-class features. Vendors can add; they almost never remove. There is no commercial upside in pruning obscure entities that some legacy client depends on. The result is a product that expands relentlessly, one entity at a time, one module at a time.1

That matters because complexity does not scale linearly. Supporting twice as many features costs far more than twice the engineering effort, and the internal “semantic cohesion” of the product degrades over time.1

On the first deployment, this growth is disguised by the sales motion. ERP vendors sell by modules; client companies adopt by modules. One module now, another in six months, another next year. The company gets the illusion of incremental progress, and the integrator gets to stretch the effort over phases.1

Upgrades are where the illusion collapses.

Even if you stick with the same vendor, upgrades are notoriously hard and often stretch into many months, sometimes years.1 The technical reason is not mysterious: the migration is rarely a simple one-to-one copy. The meaning of entities evolves. A table that once meant “customer orders” gets repurposed to also represent returns; the “fix” becomes negative quantities, extra flags, additional joins, special cases, and downstream assumptions that quietly break.1

This is how you get the real killer: the mapping problem.

The bulk of the labor is not “installing” software. It is building a correspondence between two incompatible dictionaries of the business: the old ERP’s semantics and the new ERP’s semantics. And because both semantics are shaped by vendor history, not by your company alone, the number of concepts you must reconcile reflects the vendor’s accumulated client base. Your company pays to navigate a maze built by everyone else.
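To make the mapping problem concrete, here is a minimal sketch in Python. The entity names and rules are invented for illustration, not taken from any real ERP: it translates one legacy "order line," where returns were encoded as negative quantities (the kind of repurposing described above), into a target schema that treats sales and returns as distinct entities. A real migration is thousands of such rules, most of them discovered the hard way.

```python
# Illustrative only: entity names and rules are invented, not from any real ERP.
# Old schema: one "order_lines" table where returns are orders with qty < 0.
# New schema: separate "sale" and "return" entities.

def migrate_order_line(old_row: dict) -> dict:
    """Map one legacy order line onto the new schema's semantics."""
    qty = old_row["qty"]
    if qty < 0:
        # In the legacy system, a return was an order with a negative quantity.
        return {"entity": "return", "sku": old_row["item"], "qty": -qty}
    return {"entity": "sale", "sku": old_row["item"], "qty": qty}

legacy = [
    {"item": "A-100", "qty": 3},
    {"item": "A-100", "qty": -1},  # actually a return, per tribal knowledge
]
migrated = [migrate_order_line(r) for r in legacy]
```

The code is trivial; the cost is in knowing that the rule exists at all, which is exactly the knowledge locked inside the old dialect.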

Customization compounds the problem. When the client and its integrator add bespoke entities, bespoke screens, bespoke workflows—things that were not viable for the vendor to adopt—those custom pieces become a local dialect. Later, when the vendor refactors (even slightly), the company is forced into a second, much uglier mapping: the old custom dialect to the new standard dialect, plus whatever new customization is required to patch the gaps.

This is not an accident. It is what you get when a product’s commercial success depends on continuously expanding its scope while maintaining backward compatibility for a diverse client base. It is the slow-motion inevitability of an entity jungle.

When one system tries to do three jobs

The second structural problem is even more pervasive: companies routinely ask the ERP to do three different jobs at once.

They want the ERP to record transactions (the ledger), to provide analytics and dashboards (the “mirror”), and to produce decisions (the “brain”). Vendors happily encourage this because bundling is profitable: once the ERP is framed as “the platform that runs the company,” every additional module becomes a “must-have.”2

Yet, from a software design perspective, these jobs pull in opposite directions.

Transaction processing is about fast, reliable updates and strict consistency. Analytical processing is about scanning large datasets, aggregating, slicing, and exploring. These are different workloads, often implemented with different data models and different performance trade-offs; the industry has had names for this distinction for decades (OLTP vs OLAP).3
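The two workloads can be seen even in a toy sqlite3 session (table and column names are made up for illustration): the transactional side touches a few rows at a time under strict consistency, while the analytical side scans and aggregates the whole table. At toy scale both are instant; at enterprise scale they want different data models, different indexes, and different hardware.

```python
import sqlite3

# Toy illustration: one database, two very different workloads.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")

# OLTP-style: small, targeted writes, committed under strict consistency.
with conn:  # implicit transaction, committed on exit
    conn.execute("INSERT INTO orders (customer, amount) VALUES (?, ?)", ("acme", 120.0))
    conn.execute("INSERT INTO orders (customer, amount) VALUES (?, ?)", ("acme", 80.0))
    conn.execute("INSERT INTO orders (customer, amount) VALUES (?, ?)", ("globex", 45.0))

# OLAP-style: full scans and aggregations over many rows at once.
report = conn.execute(
    "SELECT customer, COUNT(*), SUM(amount) FROM orders GROUP BY customer ORDER BY customer"
).fetchall()
```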

When you try to do both with one monolith, you create a system that is perpetually at war with itself. The “fat” operations—reports, analyses, pseudo-planning features—compete with mundane transactional operations. Users experience it as slowness, timeouts, nightly batches, fragile integrations, and the eternal refrain that “the system can’t do that.” Meanwhile, the company pays a premium for the privilege of embedding contradictions into the architecture.

It does not help that “ERP” is a misleading name. Historically, the “P” for planning reads like a promise of intelligence. In reality, planning is at best a secondary concern for ERPs, and predictive analytics tends to be at odds with their core transactional design.1

My June 2024 article on enterprise software argued that the industry’s chronic waste comes from blurring these categories and paying extravagant sums for the wrong things. The basic point, without jargon, is simple: a ledger is not a strategy engine, and asking it to be one produces complexity without delivering the upside.2

If you want a vivid confirmation that I am not alone in thinking this way, look at how the major cloud vendors explain data systems: they explicitly recommend using transactional systems to reliably store and update records, and analytical systems to generate reports and perform complex analysis. They do not pretend one database mode excels at everything; they describe complementary roles.3

ERP vendors sell the opposite dream: one system to do it all. The dream is lucrative. The hangover is paid for by the client.

The trap called “customization” and the work called “migration”

Third-party literature and vendor documentation converge on a point that practitioners already know in their bones: customization and migration are where projects go to die.

A ScienceDirect paper on ERP customization describes it as a “double-edged sword,” noting that while customization can improve fit, it also brings increased implementation costs, increased complexity, and expenses for subsequent upgrades.4 That is the academic version of what integrators quietly price into every proposal: every deviation from the standard path is a future tax.

Even when we stay within the orbit of mainstream vendors, the story is the same. Microsoft’s own guidance for Dynamics 365 implementations states bluntly that “Migrating data is a complex and time-consuming process,” and then proceeds to list the kinds of work involved: analyzing sources, defining scope, mapping and transforming fields, loading, testing, verifying and validating data quality.5

Read that list carefully. It is not exotic technology. It is not “innovation.” It is the industrialization of mapping and reconciliation.

This is precisely why ERP programs expand into multi-year affairs: the work is not building new software; it is translating a living business into a sprawling, ever-shifting, vendor-shaped schema, then repeating the exercise during upgrades. The more the ERP tries to be the ledger, the mirror, and the brain, the more that translation becomes a perpetual project rather than a one-time exercise.

A saner alternative: build a small ledger, on purpose

At this point, the obvious objection appears: “Fine. ERPs are messy. But we need sophisticated features. We need planning. We need optimization. We can’t just build this ourselves.”

That objection contains the same category mistake as the ERP pitch.

If you restrict the scope to the ledger—the plain transactional layer that captures the business semantics and supports routine workflows—the problem becomes radically simpler. Not easy in the sense that it requires no thought, but simple in the sense that it is mostly a disciplined CRUD application.
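A "disciplined CRUD application" really is as plain as it sounds. As a sketch only (the entity and field names are invented, and a real ledger would persist to a database rather than memory), the entire shape of the records layer fits in a few dozen lines:

```python
import itertools
from dataclasses import dataclass

# Illustrative sketch of a ledger-style records layer: create/read/update/delete
# over entities whose semantics match the company, not a vendor's client base.

@dataclass
class Invoice:
    id: int
    customer: str
    amount: float
    status: str = "open"

class Ledger:
    def __init__(self):
        self._rows = {}  # invoice id -> Invoice
        self._ids = itertools.count(1)

    def create(self, customer: str, amount: float) -> Invoice:
        inv = Invoice(id=next(self._ids), customer=customer, amount=amount)
        self._rows[inv.id] = inv
        return inv

    def read(self, invoice_id: int) -> Invoice:
        return self._rows[invoice_id]

    def update_status(self, invoice_id: int, status: str) -> None:
        self._rows[invoice_id].status = status

    def delete(self, invoice_id: int) -> None:
        del self._rows[invoice_id]

ledger = Ledger()
inv = ledger.create("acme", 120.0)
ledger.update_status(inv.id, "paid")
```

The point is not that this toy is production-ready; it is that nothing in the ledger's job demands the thousands of vendor-shaped entities an ERP drags along.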

And this is where 2026 is genuinely different from 2015.

Coding agents now exist that are specifically designed to take a codebase and a set of tasks, and to produce working changes at speed. Anthropic’s Claude Code is an agentic coding tool that lives in the terminal and helps produce code through natural language commands. OpenAI’s Codex has been positioned as a software engineering agent that can handle tasks such as writing features, fixing bugs, and proposing pull requests. xAI provides coding-oriented Grok models that can be used with code editors via common agent workflows.

These tools are not magic. A randomized controlled trial published by METR in July 2025 even found that, in a particular setting (experienced developers working on familiar open-source repositories), AI tool usage made developers slower on average, because time shifted from writing to reviewing and correcting.6 That result is an important reminder: “agentic” does not mean “automatic.”

However, it also clarifies where the advantage is strongest: when the work is repetitive, well-scoped, and semantically constrained—exactly what a deliberately boring ledger should be.

Here is my recommendation, stated plainly: for any sizeable company—say, above €50M of annual revenue—the default stance should be to build the ledger layer in-house rather than buying an ERP monolith and paying a decade of rent to its ecosystem.

Why?

Because the dominant cost driver of ERP “modernization” is schema mapping across thousands of vendor-shaped entities and years of accumulated semantic drift. If you start from scratch, you can build a records layer aligned with your own semantics, not a compromise designed to fit every company in the vendor’s portfolio. In practice, the resulting data model is often one or two orders of magnitude smaller. Not because your business is trivial, but because you stop paying for the semantic debris left by other clients.

The rollout also changes shape. Instead of a heroic cutover after years of specification, you can iterate like a sane engineering team. You import a fresh snapshot of the legacy data. You let employees interact with the new system. They report mismatches between the software and the real workflows. You adjust the code. You re-import. You repeat until the ledger fits. This approach is simply the disciplined version of what Microsoft’s guidance already implies: migration needs mapping, testing, validation, and rehearsal.5
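The iterate-until-it-fits loop is mostly mechanical comparison. Here is a sketch of the reconciliation step (record keys and values are invented): compare a fresh legacy snapshot against the new ledger and surface every mismatch, so that each re-import shrinks the list of discrepancies instead of hiding them until cutover day.

```python
# Illustrative: diff a fresh legacy snapshot against the new ledger, producing
# a mismatch report to drive the next fix-and-reimport iteration.

def reconcile(legacy_snapshot: dict, new_ledger: dict) -> list:
    issues = []
    for key, old in legacy_snapshot.items():
        if key not in new_ledger:
            issues.append(f"missing in new ledger: {key}")
        elif new_ledger[key] != old:
            issues.append(f"value drift for {key}: {old!r} -> {new_ledger[key]!r}")
    for key in new_ledger.keys() - legacy_snapshot.keys():
        issues.append(f"unexpected in new ledger: {key}")
    return sorted(issues)

legacy = {"PO-1": 100, "PO-2": 50}
imported = {"PO-1": 100, "PO-2": 55, "PO-3": 10}
issues = reconcile(legacy, imported)
```

Run it after every import; the rollout is done when the report is empty and the employees stop filing mismatches of their own.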

Yes, it requires coding skills. That skill can be internal or outsourced, but it must be real. It also requires what I often call mechanical sympathy: enough understanding from leadership to steer a technical project without being hypnotized by vendor theatrics. The analogy I have used elsewhere is that great pilots are not mechanical engineers, but they know enough about the machine to get the best out of it.7

This is the part many companies try to avoid. They would rather outsource the thinking along with the implementation. They prefer to buy the illusion of safety.

But outsourcing the ledger of the company to a monolith whose incentives are to grow, bundle, and lock in is not safety. It is the slowest possible way to buy software you will eventually need to replace anyway.

If you want planning and optimization, treat them as separate concerns, layered on top of a clean ledger. Don’t force the ledger to become the optimizer. Don’t drag optimization into the migration project. First make the records accurate, fast, and boring. Then—only then—build decision systems that can be improved without having to rewrite the entire corporate memory every time.

ERP deployments are slow and costly because the ERP category evolved under incentives that make them slow and costly. The way out is not better project management inside the same trap. The way out is to stop asking a ledger to be an oracle, and to stop paying a vendor’s historical baggage as if it were your own.