00:00:00 SAP failures trigger this deep dive
00:02:15 Billions lost on ERP—just the tip of the iceberg
00:06:15 Legacy design mistakes haunt modern systems
00:10:20 Land grabbing logic behind ERP sprawl
00:12:26 CRUD applications and commoditization of ERPs
00:16:30 HANA was SAP’s strategic pivot to analytics
00:20:38 Why tabular databases fail at reporting
00:26:23 Columnar databases: pros and hidden costs
00:30:21 Betting on RAM is the gamble that failed
00:34:31 Performance collisions in hybrid databases
00:40:41 Hybrid software fails at everything
00:42:33 System of intelligence: a third paradigm
00:46:59 Early ERPs falsely marketed as intelligent
00:52:33 Why S/4HANA can’t be all things at once
00:58:43 Google’s failure with CRUD shows cultural mismatch
01:05:02 Programmability is key for decision systems
01:10:38 Economic failure of the all-in-one approach
01:16:30 Why ERP modernization is so slow and costly
01:22:06 Optimize decisions—not just processes
01:27:24 Advice on building your own records layer
01:34:13 Open-source and the true cost of commodity tech

Summary

In an insightful discussion between Conor Doherty of Lokad and CEO Joannes Vermorel, they delve into the financial setbacks associated with SAP’s software implementations. Vermorel elucidates the fundamental problems of merging systems of records, systems of reports, and systems of intelligence into a single ERP solution, which leads to inefficiencies and conflicts. They highlight the costly failures experienced by companies like DHL and Lidl due to SAP’s flawed strategies, particularly the integration of HANA. Vermorel advocates for specialized systems and open-source solutions like PostgreSQL, which offer robust functionality at lower costs, steering companies away from complex, ineffective software amalgamations.

Extended Summary

In light of recent revelations of substantial financial losses tied to the implementation of SAP software by major corporations, Conor Doherty, Director of Communication at Lokad, sat down with Joannes Vermorel, CEO and Founder of Lokad, for an in-depth discussion. Their conversation, spanning the challenges of enterprise software design and the pitfalls of ambitious yet flawed SAP strategies, offers a rich and enlightening analysis.

Vermorel explains the nuanced distinctions in software systems used by companies: systems of records, systems of reports, and systems of intelligence. The fundamental issue arises when companies attempt to amalgamate these three distinct systems within a single software package, inevitably leading to conflicts and inefficiencies. This scenario is exemplified by SAP’s incorporation of fundamentally incompatible elements within its ERP solutions, such as HANA, resulting in significant strategic and operational setbacks over time.

The examples of SAP-related failures are staggering. As Conor recounted, companies like DHL, Lidl, Spar, and Asda have reported losses totaling hundreds of millions of dollars due to their SAP ERP implementations. These losses are only a fraction of the larger economic stagnation experienced by these organizations during the prolonged periods dedicated to system upgrades. Vermorel points out that such ventures freeze modernization efforts and stifle other digital advancements, dramatically inflating the true costs of these projects.

The core of the problem, Vermorel argues, dates back to essential decisions made by SAP decades ago. SAP originally focused on systems of records—essentially sophisticated electronic ledgers that aimed to cover comprehensive aspects of company operations. This “land-grab” strategy led to extensive scopes but also resulted in a commoditization of CRUD (Create, Read, Update, Delete) operations. By the early 2000s, a shift towards systems of reports began, leading to sophisticated yet cumbersome analytics tools.

One of the critical choices SAP made was integrating HANA, a columnar, in-memory database designed to enhance analytical capabilities but unsuitable as a foundational database for transactional operations. Vermorel details how this strategic blunder has had widespread repercussions, slowing down core processes and creating performance issues that required extensive and costly over-engineering to manage.

The interview touches upon the inherent conflict between the design needs of different software systems. Systems of records require fast, high-integrity transactional processing, while systems of reports demand extensive data analytics capabilities. Combining these with systems of intelligence, which aim to automate decision-making through programmability, further complicates the software architecture. This dilemma is likened to trying to build a vehicle that is both an excellent plane and a fantastic boat—an endeavor doomed to result in a subpar hybrid.

Conor brings up the role of mechanical sympathy in choosing software solutions—essentially understanding software’s inherent characteristics and constraints, like the differences between tabular and columnar databases. Such basic knowledge could help steer companies away from costly decisions.

Vermorel’s upbeat note highlights the promise of open-source solutions like PostgreSQL. He advocates for leveraging these accessible and robust systems, suggesting that their modest cost and high functionality offer a viable path to circumventing the pitfalls illustrated by SAP’s missteps.

In conclusion, Vermorel and Doherty’s discussion underscores an essential caution: while ambitious software solutions promise to unify diverse functionalities under one roof, such integrations often lead to undue complexity and performance issues. Firms should instead consider specialized systems tailored to their specific needs, benefiting from the rich landscape of open-source offerings that deliver engineering excellence without exorbitant costs. The conversation serves as a guiding framework for companies to navigate their digital transformation while avoiding historic errors that have proven economically detrimental.

Full Transcript

Conor Doherty: Welcome back to Lokad. In light of the recent news that some very large companies lost some very large amounts of money on their SAP implementations, Joannes and I decided to sit down and discuss, well, what went wrong exactly. Joannes describes the challenges behind designing enterprise software as well as his perspective on the differences between systems of records, systems of reports, and systems of intelligence. As Joannes argues, when companies try to produce all three in one piece of software, problems start to emerge.

Now as always, if you like what you hear, subscribe to the YouTube channel and follow us on LinkedIn. And with that, I give you today’s conversation. So Joannes, thank you for joining me in the Black Lodge. I should set the table a little bit for you.

In the intro, I acknowledged that one of the reasons we’re here is to discuss some very large companies that lost very large sums of money. Now, that’s a very broad statement. How we really came about having this conversation was a post that went somewhat viral on LinkedIn from Anthony Miller, a friend of the channel, where he collated a lot of fairly damning figures related to, in many cases, multi-billion dollar companies reporting hundreds of millions of dollars of losses which they ascribed to their SAP implementation.

So, just to quote or paraphrase, I should say, DHL apparently lost over $370 million trying to implement an SAP and IBM solution. Lidl in Germany—now before someone corrects me in the comments, it is Lidl in Ireland, it’s Lidl on the continent but it’s Lidl in Ireland—they cut their losses after spending half a billion euros. So, they pulled the plug on the implementation after half a billion euros.

SPAR, the Dutch chain, I believe, claimed to have lost $109 million in sales due to their SAP implementation. So, a consequence of the implementation. And ASDA reported $25.5 million worth of stock inconsistencies between their SAP ERP and their WMS. Again, there are others, but the point here is to say that that’s over a billion dollars or a billion euros worth of losses. So, we’re not talking about chump change here.

So, I come to the question: Joannes, what exactly is going wrong with these enormous companies and their SAP implementations?

Joannes Vermorel: Obviously, the common denominator to all those problems is picking the wrong vendor, SAP. But also, I think, unfortunately, those numbers are only the tip of the iceberg. My own casual observation is that the losses are orders of magnitude higher; these are merely the ones that people are acknowledging. And they don’t really acknowledge the real costs.

For example, they acknowledge the fact that they paid money for something that didn’t work. What they don’t acknowledge is that the process usually took many years. For many of those projects, the company was frozen in time for half a decade, sometimes a decade, during which it could do nothing but focus on the ERP upgrade.

So, not only did you waste hundreds of millions, but in fact that is just the small part, because what you effectively did was pause all the modernization, all the digitalization, all the ongoing transformation you could otherwise have pursued, because you had to take care of your ERP upgrade. And I’ve seen that many, many times: companies go through a multi-year process where they pause everything for this one thing to happen. The cost is absolutely gigantic.

I mean, it is obviously a testament to the quality of those companies that they even survive this process because, let’s be real, in the software industry, anybody who paused their own development for half a decade would just be dead. They would be replaced by things that are three generations of technology beyond them. So, kudos to those companies for surviving. It means they have absolute mastery of their game, to last that long with such a dysfunctional process.

The reality is that the outlook is absolutely terrible. And if we go behind this common denominator and we want to examine the root cause, because blaming SAP does not shed light on the case, I think the root cause is a series of mistakes that have been committed a long time ago, like literally decades ago.

Conor Doherty: Business decisions or do you mean software mistakes? When you say mistakes, what do you mean?

Joannes Vermorel: I mean strategic design mistakes, committed by people at SAP decades ago. Those failures can be traced back to them. And that’s interesting, because when you analyze these failures, the blame game begins: “The integrator wasn’t good” or “The change management was not done correctly in the company” or “The IT department of the company was not up to the level it should have been,” or this or that. You know, excuses, excuses, excuses.

And indeed, every single failure looks kind of unique because it’s a very specific mess every single time. But again, those are not satisfying explanations. I think if you want to really understand why all these issues arise, it’s really something that is very specific to those big enterprise software vendors. There are clear root causes that can be traced back to decisions made decades ago. Now we are just seeing the consequences unfold.

The audience might not realize it, but when you operate a software company, you have to live with your sins, with your past mistakes, for a very long time, possibly forever. And that’s very strange, because you would think that software is completely mutable, that you can change everything you’ve done. The reality is different.

When it comes to architectural decisions, if you make mistakes, you can be stuck with them indefinitely. Those mistakes come back to haunt you forever, and they poison everything you’re doing. People see the symptoms—all the problems—but don’t necessarily trace them back to the root cause, which is frequently so old that people don’t see it.

Conor Doherty: Well, SAP, for the record, is an enormous company that goes back more than 50 years. So when you’re talking about root causes and strategic design choices from decades ago, living with those errors in 2025, those are some pretty extreme claims which require extraordinary evidence. Can you give me an example of what a strategic design choice from the 1970s might be that is haunting an SAP ERP in 2025?

Joannes Vermorel: So, SAP started in the late 70s and they are what I describe as systems of records. That’s what people call ERP, CRM, WMS, all those software pieces, which are fundamentally the electronic counterpart of something that is happening in the company. Those systems of record, there is no intelligence; they are, I would say, fancy ledgers. If we go back in the past, the vendors who did succeed were the ones who did the most land grabbing.

What do I mean by land grabbing? If you start to manage inventory, you need to manage employees, then you need to manage orders, then payments, then procurements, clients, suppliers, partners, etc. The idea is that you have to go and grab the land of having complete coverage—a coverage that is as extensive as possible for your records. Going back in time, let’s say the 80s, this land grabbing competition was going on. The reality was that for any given company that starts to adopt a dominant vendor (at the time it could be SAP, Oracle, or IBM), you would have a winner-takes-all effect within the company.

If you’re a client company operating software from a vendor, as soon as you start dealing with this vendor, you will pretty much push all your business toward it. Why? Because at the time, engineering distributed software was prohibitively hard. It meant that if everything software-wise did not live on the same mainframe, you were screwed. In theory, you could do networking (we are in the 80s), and some banks were doing that at the time, but it was extremely complicated and extremely costly.

Realistically, the only economically viable solution to manage inventory and suppliers and connect the dots was to put all of that into the same system. Thus, to win, a vendor had to cover everything. That’s ultimately what made the vendors that succeeded: the ones that developed what are called ERP and CRM. Those big mega systems with massive coverage were the ones that succeeded. What does that imply? It means you need to cover as much land as possible, and you end up touching everything.

Now, that doesn’t create much incentive for hyper-consistency or hyper-integrity. It’s really about grabbing as much land as possible, as fast as possible. In the process, the software industry realized that CRUD (Create, Read, Update, Delete) apps are complete commodities. If you want a system of records, a CRUD app, it is very simple.

You need a relational database and a framework that provides a series of views for every entity, offering a user interface to execute CRUD operations on all entities. You can create a client, update a client, delete a client, and so on, and you can do that for clients, suppliers, invoices, etc. Very quickly, those vendors realized this thing is a complete commodity. The land-grabbing phase pretty much ended in the 2000s. By 2000, not everything was covered—there were new areas emerging, for example, e-commerce.
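The CRUD pattern Vermorel describes here can be sketched in a few lines. The following is a minimal, hedged illustration (the table, column names, and helper functions are invented for this example, not taken from any vendor’s schema):

```python
# Minimal sketch of a system of records: one relational table plus generic
# create/read/update/delete operations. The "clients" table and its columns
# are illustrative assumptions, not from any real ERP schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT, country TEXT)"
)

def create_client(name, country):
    cur = conn.execute(
        "INSERT INTO clients (name, country) VALUES (?, ?)", (name, country)
    )
    return cur.lastrowid

def read_client(client_id):
    # Returns the (id, name, country) row, or None if it does not exist.
    return conn.execute(
        "SELECT id, name, country FROM clients WHERE id = ?", (client_id,)
    ).fetchone()

def update_client(client_id, name):
    conn.execute("UPDATE clients SET name = ? WHERE id = ?", (name, client_id))

def delete_client(client_id):
    conn.execute("DELETE FROM clients WHERE id = ?", (client_id,))

cid = create_client("Acme", "DE")
update_client(cid, "Acme GmbH")
print(read_client(cid))   # the updated row
delete_client(cid)
print(read_client(cid))   # None once deleted
```

Multiply this pattern across suppliers, invoices, orders, and every other entity, and you have the skeleton of a system of records, which is precisely why such apps became a commodity.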

At the time, it wasn’t there; you needed to add that. There is constantly new stuff, but most of the land had been grabbed, which is a problem. Vendors like SAP saw that commoditization was coming on strong. Those systems of records are easy and should be dirt cheap—dirt cheap. So, how are you going to maintain your profits and margins if you are selling a technology that is a complete commodity?

That’s where they saw a silver lining: systems of reports. In the mid-90s, a company called Business Objects emerged, creating the first large-scale success in selling OLAP (Online Analytical Processing) technology—a system of reports. This was a wild success. Business Objects was acquired by SAP in 2007. People realized that maybe the value lies more in the systems of reports. At the time, it seemed fancy.

Clearly, with electronic records, once you have all there is to know about your clients in the database, your value is kind of limited. You are supposed to provide electronic counterparts to the business, and once you have a satisfying electronic representation of the business, there isn’t much to add. There are only so many things you can add to describe a product in your database, a client, etc. Bottom line, there was this drive toward analytics. SAP realized that and decided to go all-in on this second stage, starting in 2010 with HANA.

That would be, for me, the crux of most problems today. I think it comes from this decision to go for HANA. Going back to that decision, at the time SAP still had a strategic problem: its dependency on third-party databases. In the 2000s, they relied dominantly on Oracle databases, a little bit on Microsoft and IBM DB2, but dominantly Oracle. This meant that the ERP at the time, SAP ECC, and all their suites (with many products and acquisitions) depended on a third-party database.

That was a problem because a big portion of the value was going to another software company. With commoditization, competing in a shrinking market was problematic. SAP decided to roll out its own database layer, code-named HANA. Clearly seeing that they wanted to push hard in this analytical direction, they aimed for a system that is columnar and in-memory. This decision alone, back in 2010, set the error in motion.

The S/4HANA ERP was only released in 2015. Once they had their new database system, they needed a few years to rebuild their own ERP technology on top of this new database. But if we go back in time, I think the crux of the error that is unfolding right now can be traced back to HANA. Now, we have to understand a little bit more. It’s a little complicated, so I will take some time to explain what is going on here.

For a system of records, what you need is a tabular database—that is, a traditional database. So what is a tabular database? It’s a database that has tables where the data is organized row by row. That’s one thing you have to keep in mind with computer systems: locality of reference is extremely important for performance. It means that when you want to access a collection of data with good performance, all of this data should be located in one place in the system.

So let’s say you want to update a supplier. The supplier has many pieces of information—its name, its location, its certification, this, that. I mean, you can think of all the attributes of the supplier entity inside your system. It’s going to be a table. There will be maybe a table called “Suppliers,” and this table will have dozens, possibly a hundred, columns. So if you organize your database in a tabular fashion, it means that when you pick a supplier, it will display the supplier.

All the information from a given supplier is kind of collocated at the same place inside your system. And if you want to update, same thing. And if you have a tabular representation of your data, it is very straightforward to add a line, remove a line. Again, those things are nicely collocated. It works great. So for tabular databases, for systems of record, they are just perfect.

But they are complete crap when it comes to analytics. That’s a problem. The crappiness of tabular databases at analytics is the number one reason why Business Objects and all those BI, business intelligence, tools succeeded in the first place. In the ’90s, they started to provide what is called OLAP: cubes, hypercubes that live next to the database and are much more convenient for analytics.

And why is that? Just think of it. Let’s say you have an orders table, and this table has one column that is the amount, expressed, for the sake of simplicity, in dollars. But the orders table otherwise has 100 columns. Now you want to compute how much in sales dollars you had last year. The reality is that to compute this turnover, which is just a sum of the amounts column, a tabular database will have to parse pretty much the entire table. Your system will go through every single row, but it will not be able to single out the amount column, because it sits in the middle of all the rest.

So to compute this addition of this one column, you end up looking at the entire table, which might be hundreds of times more information than the one figure that you want for every line, which slows down the system massively. Now, there is a solution, and it’s to organize the database along a columnar setup. So what does that mean? That instead of grouping or packing data line per line, you pack it per column.

Suddenly, if you want to pick a column and do an operation like, “I want to sum everything that is in this column,” it becomes super fast because you have access to this column in isolation. You don’t need to have the order ID, the client ID, the product ID, and whatever all those other columns that you would find in the orders table. You will single out the one column that you want. Same thing if you want to do a selection for date, you will be able to pick the date column and just apply your filter.

It’s going to be orders of magnitude more efficient. That’s very cool. And by the way, this columnar setup has a history: when people started doing enterprise analytics, they started with hypercubes, those OLAP technologies. But by the end of the 2000s, people had realized that columnar databases were just better. So there was a phase where Business Objects was dealing with hypercubes, and the technology very quickly morphed into a superior version of that: columnar databases.
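The read-side trade-off Vermorel describes can be sketched in plain Python. This is a hedged toy model, not a real database engine: lists of dicts stand in for a row store, and a dict of lists stands in for a column store; field names and values are invented for illustration.

```python
# Toy model of row-oriented vs column-oriented layouts for an "orders" table.
# In a row store, summing one field means walking every full record; in a
# columnar layout, the amounts sit contiguously and can be scanned directly.
rows = [
    {"order_id": i, "client_id": i % 7, "amount": 10.0, "year": 2024}
    for i in range(1000)
]

# Row store: every record must be touched just to extract one field,
# dragging all the other columns along with it.
total_row_store = sum(r["amount"] for r in rows)

# Column store: the same data, packed per column.
columns = {
    "order_id": [r["order_id"] for r in rows],
    "client_id": [r["client_id"] for r in rows],
    "amount":    [r["amount"] for r in rows],
    "year":      [r["year"] for r in rows],
}

# Summing is now a scan of one contiguous array, independent of how many
# other columns the table carries.
total_column_store = sum(columns["amount"])

print(total_row_store, total_column_store)  # both 10000.0
```

In a real engine the difference is about memory locality and I/O, not Python iteration, but the shape of the access pattern is the same: the row store reads a hundred columns to use one, while the column store reads only the one it needs.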

Okay, so columnar databases are much better for analytics. For all those systems of reports, that’s fantastic news. For the audience, by the way, an example of this columnar approach would be Spark, an open-source engine built around columnar data processing.

It offers incredible scalability and is very, very efficient in terms of performance. Now, that doesn’t mean this thing comes without a trade-off. The trade-off is that a columnar database is complete crap for a system of records. By design, that hurts on two fronts. First, if you organize your data in columns, then whenever you want to update a row, you end up touching many columns. You have to identify the correct position in every column, and if you have 100 columns, that means 100 distinct places in your system to update.

While in the past, with your tabular database, you could do just one update; it would be local. But here, your data is organized in columns, so if you want to update a record, it will be spread over many columns. It’s going to be orders of magnitude less efficient. Again, there is no free lunch here: either you organize your data as a tabular system, which is great for systems of records, or you go columnar, which is great for systems of reports. You can’t have it both ways.
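The write-side cost can be sketched the same way. Again a hedged toy model, with invented column names: updating one record in the columnar layout touches one position in every column array, while the row layout touches a single contiguous record.

```python
# Toy model of write amplification in a columnar layout: one logical record
# update fans out across every column array, versus one local write in a
# row store. Sizes and column names are illustrative.
n_columns = 100
n_rows = 10

# Columnar layout: one list per column.
columnar = {f"col_{c}": [0] * n_rows for c in range(n_columns)}

def update_record_columnar(row_idx, values):
    # One write per column: 100 disjoint locations for a 100-column table.
    touched = 0
    for col_name, value in values.items():
        columnar[col_name][row_idx] = value
        touched += 1
    return touched

# Row layout: one dict per record.
row_store = [{f"col_{c}": 0 for c in range(n_columns)} for _ in range(n_rows)]

def update_record_row(row_idx, values):
    # The whole record lives in one place; a single local update suffices.
    row_store[row_idx].update(values)
    return 1

new_values = {f"col_{c}": 42 for c in range(n_columns)}
print(update_record_columnar(3, new_values))  # 100 locations touched
print(update_record_row(3, new_values))       # 1 record touched
```

Counting touched locations is of course a simplification, but it mirrors the asymmetry in the transcript: the columnar engine pays per column on every transactional write, which is exactly the workload a system of records generates all day long.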

SAP decided to go all-in with HANA, which was a columnar database. I think that decision alone spelled doom for everything they were doing at the foundational layer, which is the system of records.

Conor Doherty: Okay, just to jump in there, because that was a lot to cover, and it was good, thank you. But just in case, to situate myself as someone who might be listening to this now, they might say: “Are you seriously maintaining that DHL and Spar and Asda lost, in total, a billion dollars’ worth of inventory, or just money, trying to implement an ERP, just because of columnar-versus-tabular design errors? That’s the root cause of the problem?” Is that the claim being made?

Joannes Vermorel: To a large extent, yes. I mean…

Conor Doherty: Back that for me.

Joannes Vermorel: Yes, the thing is that when you have critical architectural mistakes, they tend to spiral out of control, causing thousands of side effects. Just because the consequences are super chaotic and complex doesn’t mean the root cause is itself complicated and chaotic. An example would be the rotation of the Earth: the Earth’s axis is tilted relative to its orbit around the Sun, which causes a super complicated system of seasons, with hot and cold periods, winds, and tons of things that are just a consequence of this simple tilt.

So you can have a root cause that is extremely simple, but when you look at the consequences, they are incredibly complex and varied, and yet it can be traced back to something very simple.

Conor Doherty: I agree, and that’s a lovely astronomical example.

Joannes Vermorel: Here, this means that by taking a decision that was positive for their analytics but detrimental for their system of records, they created a system that is incredibly sluggish. We can also see the sort of gamble SAP took back in 2010. They didn’t only make it columnar; they also made it in-memory. That was another mistake. Making it in-memory means they said all the data is going to live in DRAM, essentially the sort of RAM that you have in servers.

The thinking at the time was that DRAM would become impossibly cheap and so much faster in the future that the problem would take care of itself. Software companies tend to say, “We have a performance problem now, but if we take the right decisions, the progress of the hardware industry will nullify this problem for us in the future.” They believe that if computing hardware becomes a hundred times faster or better along the right dimension, then whatever performance problem they have now might be gone within a few years.

Microsoft was notorious for doing that for a long time. They would release software that would barely work on any machine, then a few years down the road everything was faster, and the software would run just fine. The problem is that since 2010, RAM has not progressed nearly as much over a decade and a half. It did progress, but just a little compared to everything else, especially compared to other forms of data storage such as SSDs.

RAM barely nudged cost-wise, speed-wise, latency-wise, and everything, but solid-state drives (SSDs) improved by a factor of over a thousand during the same period. So they went all-in on hoping this technology would improve by orders of magnitude, but it didn’t, and most likely it won’t for plenty of reasons. Other things are progressing like crazy in computer science.

Those graphics cards—GPUs used for AI—are progressing enormously. There are plenty of other areas progressing enormously, but this one didn’t. And the problem is that the sacrifice you make if you build your database as a columnar one and run a system of records on it is that your latency is just going to be horrible.

Things are going to be slow pretty much by design. Latency was already a bottleneck with a tabular design, but here you’re making it much, much worse. As a consequence, you will be firefighting all the time to mitigate problems that are caused in the first place by your incorrect architecture.

Conor Doherty: Well, that presents the perfect transition, because you were just about to take the words out of my mouth. If you just apply what you’ve described—the system of record and the system of reports—the system of record, your ERP: in a concrete use case, you scan barcodes and the system is updated, so you know how much of a thing you have in your warehouse or in your store at any given time. Okay, cool. That requires fairly low latency, I would presume?

Joannes Vermorel: Absolutely.

Conor Doherty: Okay, cool. Now, what is it about the design of that system of record that comes at the cost of a good system of reports, which is just for business reporting purposes, essentially? Where is the tug-of-war? Because you’re telling me there’s a tug-of-war, but I don’t understand. You want your transactional core to be hyper-snappy when it comes to scanning the barcode. So you scan the barcode and it beeps; the thing is acknowledged; everything is okay; the barcode does exist in your system. That’s the beep you get: the system acknowledges that everything is recorded. We’re good. Move forward. Perfect.

Joannes Vermorel: Now the problem is, first, if you go for a columnar setup, this one operation, which is just acknowledging that you scanned something, is going to be much slower by definition, because it involves dozens of pieces of information that have to be connected across many columns that are disjoint in your system.

So you have that as the first hurdle which makes having such a snappy system much more difficult by design because, again, columnar databases are great for large-scale analytics. They are not snappy. There is nothing in those databases that is really designed for low latencies. This is not what they have been designed for. The very design is kind of antagonistic to that.

Now the problem is compounded by another thing: you are stating that your database layer will be the very same one used for your system of records. That means the things managing your inventory, employee clocking, everything where you want absolute precision. And you also want extreme responsiveness when somebody is clocking in and badging.

You don’t want the system to say “Give me 20 seconds while I figure out if you’re an actual employee and if you can actually badge today.” No! You want this thing to be super snappy. Badge, beep, immediately! Otherwise everything kind of slows down.

Now the thing is, it is the very same system, because you’ve engineered it as a columnar database with the explicit intent of doing analytics. You use the same system of records as the system of reports. What’s going to happen is that people are going to run reports. You say, “Okay, we have this system that is literally designed so that we can do fancy analytics. Let’s do fancy analytics!”

But what is the consequence of fancy analytics? The consequence is that you run operations where you have to process very large amounts of data, and that is completely antithetical to low latency. You have this database core that is shared among all the processes. It has to be shared, because otherwise you don’t have integrity. People don’t have the same vision of what the current stock level is.

If you don’t have the same vision on what is the stock level, it means that somebody on the website can order a unit that you don’t have because somebody else already picked this unit and there were delays in propagating this information. The e-commerce believes that there is still one unit left when there is none. That’s a big mess!

So you need to have this integrity so that you don’t end up with plenty of stupid problems. Now your relational database resource is being shared between stuff that is very light and needs to be super fast, and stuff that is super heavyweight—your reports, where you say “Analyze all the clients who made more than three purchases last year, per category,” and so on.

So you have requests that are computationally intensive, going through a sizable portion of the entire data that you have. This is the antagonism. That’s the whole point of doing analytics. Analytics is not about fetching this SKU or this client. Analytics is about scanning every single client to have statistics on this and that.
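The two access patterns can be caricatured in a few lines of illustrative Python (the client data is invented): a system of records fetches one record by key, while the analytical question has no choice but to walk every record.

```python
from collections import Counter

# System of records: fetch one record by key -- cheap and latency-critical.
clients = {
    "C-001": {"purchases": 5, "category": "tools"},
    "C-002": {"purchases": 1, "category": "garden"},
    "C-003": {"purchases": 4, "category": "tools"},
}
one_client = clients["C-002"]  # a badge scan or a stock lookup works like this

# System of reports: "clients with more than three purchases, per category".
# There is no way around scanning every single record.
per_category = Counter(
    c["category"] for c in clients.values() if c["purchases"] > 3
)
print(per_category)  # Counter({'tools': 2})
```

With three records both are instant; with a billion records the keyed lookup is still instant while the scan monopolizes the shared machine, which is the collision described above.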

I will scan my entire sales history over the last few years to analyze trends. So you have, in the same system, a lot of operations that are very greedy. And again, how do you mitigate the fact that this completely damages your latency?

I mean, the answer is you throw hardware at the problem. What sort of hardware do you need to throw at the problem? Well, you need to throw memory because you decided back in 2010 that it would be a columnar in-memory system. Tough luck!

If the world had taken a different direction, if DRAM had gotten a thousand times cheaper and faster, that would be the perfect choice. But it didn’t happen and most likely it won’t happen. I mean, this technology—DRAM—is still progressing but nowhere as fast as most of the other technologies in computer hardware.

It's one of the technologies progressing the slowest, especially on latency, where it has barely moved for almost two decades and where you should not expect any improvement anytime soon.

There will be plenty of improvements, but not on this front. So people went all in, like a poker player going all in, but with cards that were not good at all. That's pretty much what happened in 2010, and we see the consequences unfolding in very diverse ways today.

Conor Doherty: Again, I want to try and summarize as much of that as I can; correct me where I'm wrong. The design choices that one makes in order to excel at the first class of software, the enterprise software you describe as systems of records, are antithetical to the design choices you would make if you wanted to excel at an analytical piece of software such as a system of reports.

And trying to do both—hybridizing, as you said—trying to do both simultaneously results in essentially a tug-of-war where you might do both things, but you will do them less well than you would have if you did them individually.

Joannes Vermorel: More or less, yes. Exactly. I mean, that’s the problem: you can’t have something that is both a very good boat and a very good plane. It’s either or. If you try to do both, it’s possible, but it’s going to be a crappy boat and a crappy plane.

Conor Doherty: Well, because I am familiar with the blog where you first introduced this concept, I know it's going to get a little trickier in a moment. But before we do, let's summarize the ground we've covered. You use the analogy of sprinting and weightlifting, and I like that, because analytics is about taking everything in at once; weightlifting works well as an analogy.

You can be a very, very good weightlifter. If you want to lift 200, 300, or 400 kilos off the ground, you need to be large to do that. It’s a power-based move. Running 100 meters in under 10 seconds? You’re not doing that if you weigh 150 kilos; it’s just not happening. So you can be very good at running aerobic exercise, or you can excel and be world-class at anaerobic weightlifting exercises. You can’t be both, and if you try to be both, you don’t really excel at either one.

Joannes Vermorel: Yes, that's exactly right. Well, the thing is, and you mentioned you're familiar with where I first introduced this perspective, there's actually a third category in this classification of enterprise software. We've covered systems of records, the ERPs recording lists of data, units of stock on hand, and systems of reports, the BI tools, essentially, for analytical and presentation purposes. But there is a third class of software that we haven't touched yet.

Conor Doherty: What is that?

Joannes Vermorel: The third one is systems of intelligence. The interesting thing is that a system of intelligence, unlike the first two classes, aims to directly make decisions. That’s it. The interesting thing is that if you look at the history of enterprise software from the very start in the late ’70s, all enterprise software has always, no matter what they were effectively doing, been marketed as systems of intelligence, which is kind of very puzzling.

So it doesn’t matter if what you’re selling is a system of intelligence or not. As long as you’re targeting enterprises, it will be marketed as a system of intelligence—better decisions for you.

Conor Doherty: Yes, and that is very puzzling.

Joannes Vermorel: Let's look, for example, at inventory management software. Effectively, this thing is just a ledger. It just counts how much you have in stock. When you pick something, it decrements your stock. When you put something back, it increments the stock. That's it. It doesn't do anything besides being an electronic reflection of your stock.

And yet those products were invariably marketed as "You will have fewer stockouts and fewer overstocks," which has nothing to do with the ledger itself.

You could say that you will have more efficiency while bookkeeping your inventory. You may have fewer clerical errors. But to say that you're going to have better inventory decisions? Why would you think that? The software doesn't do that. It just tells you what you have, and lets you make your decisions, maybe in a slightly better way.

But it’s like you can play chess on a real chessboard or on a computer chessboard. Certainly, if you want to keep track of all your past games, the electronic version is more convenient, but the fact that it’s on a computer will not make you a better chess player. Most likely, it might even be a distraction that actually makes you a worse chess player.

It’s not a given that the decisions you’re making with a computer are automatically better just because you are interacting with a computer instead of doing it with a pen on paper. It may happen, but it’s not designed to do that logically. I would even argue that my own casual experience is that when you ask people to really think about something, a computer screen is kind of a distraction as a rule.

I would say if you want to really have good decisions, I’m not sure a screen with a large amount of clutter would help focus and really think it through.

Conor Doherty: That's a super bold claim. Just to be clear, we're not dismissing the value of a system of records. The point, as I understand it, is: don't mistake your system of records and its capabilities for the capabilities of a piece of software that is explicitly engineered to produce decisions.

Joannes Vermorel: To defend those early enterprise software vendors, it was all extremely fuzzy at the time. The classification that is super clear to us now, systems of records (CRUD apps), systems of reports (business intelligence tools and modern analytics), and systems of intelligence (prediction and optimization capabilities, with the idea of literally automating decision-making processes), was just super unclear then.

They were trying to do all three, and in the 70s the very first inventory management systems were being marketed exactly like that: "We will directly automate all the inventory decisions." There is even a twist hiding in the term "management" itself.

When people were saying inventory management in the 70s, they meant all the sorts of things a manager would do, and a manager would not just be doing bookkeeping. He would also make inventory decisions. So people said when they thought of inventory management, they thought of something that would do the whole package—keep track of inventory and make all the relevant decisions. It turned out not to be the case.

So we have subdivided the domain into inventory management, inventory optimization—optimization belonging to the system of intelligence. But at that time, it was super unclear. The same thing with analytics—you can put things on display, but people had not realized how large the databases would grow and all the challenges you would have to produce statistics from them.

Having statistics was already possible from the very start. Those systems typically ran during the night, going once through the entire database to collect statistics. So even with a tabular database it was kind of possible: you could have nightly batches collecting the statistics you were looking for, but it was very inconvenient because it was very slow.

You had to prepare your logic ahead of time, and once you had your logic, you had to wait until the nightly batch for it to be executed. You would only realize you had a typo in your code the next day. Operationally, it was a nightmare. It was possible, but the amount of friction was just over the top.

That's why tools like Business Objects developed what are essentially OLAP technologies, with hypercubes, where you could get analytics within seconds. This was a complete game changer: suddenly you didn't have to implement something and wait until tomorrow to see if you got it wrong. You could iterate at 100 times the speed you could before.
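The OLAP idea can be sketched as follows (illustrative Python; the sales facts and dimensions are invented): one full pass, like the old nightly batch, builds a small hypercube of pre-aggregated cells, and every subsequent slice-and-dice question is answered without rescanning the raw data.

```python
from collections import defaultdict

# Raw facts, as a tabular system of records would store them.
sales = [
    {"region": "EU", "product": "hammer", "qty": 3},
    {"region": "EU", "product": "saw", "qty": 2},
    {"region": "US", "product": "hammer", "qty": 5},
    {"region": "US", "product": "hammer", "qty": 1},
]

# The nightly-batch step: one full pass builds a (region, product) hypercube.
cube = defaultdict(int)
for row in sales:
    cube[(row["region"], row["product"])] += row["qty"]

# Afterwards, any cell or slice is answered without touching the raw facts.
print(cube[("US", "hammer")])                             # 6
print(sum(v for (r, _), v in cube.items() if r == "EU"))  # 5
```

The expensive scan is paid once; interactive exploration then runs against the much smaller cube, which is what turned day-long iteration loops into seconds.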

Conor Doherty: Earlier you mentioned SAP S/4HANA, which was released in 2015. At the time of recording this conversation, it is almost ten years old; I think it was ten years to the day, two days ago. Having reviewed the marketing material around it, and using your classification, it was presented as having all the capabilities of a system of records, a system of reports, and, through its AI-driven decision-making, a system of intelligence. You've already covered the problems created by trying to combine systems of records and systems of reports. What happens when you try to do all three of those systems under one hood?

Joannes Vermorel: Yeah, I mean, here we are trying to be a train, a boat, and a plane at the same time, so it’s even worse. You know, it’s like entering the Frankenstein territory where it’s going to be really ugly. The reality is that everything is kind of geared against you as a software vendor if you try to do all three. Just pause for a second to realize what it implies exactly.

A system of records you would typically charge by operator: per seat, per user. That's the way these things are sold today; it's the typical metric. If you are doing a system of intelligence, that doesn't make sense, because your very intent is to eliminate users: you want the system to make good decisions on its own. So obviously, the better you are, the fewer users your client needs. Ultimately, the client company would keep just a handful of users to supervise the decisions and be done with it. If you're charging per user, this is not going to work; there is a deep antagonism in the incentives. But that's just a detail; there are plenty of other problems.
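The pricing antagonism is easy to put into numbers. This toy calculation uses invented figures, not anyone's actual pricing: under per-seat billing, the vendor's revenue collapses precisely when its system of intelligence succeeds at removing users.

```python
def per_seat_revenue(users: int, price_per_seat: float) -> float:
    # A system of records bills every operator who touches the screens.
    return users * price_per_seat

# Before automation: 200 planners each need a seat (invented numbers).
before = per_seat_revenue(200, 1_500.0)

# A good system of intelligence automates the decisions away, leaving a
# handful of supervisors -- and gutting the vendor's own revenue.
after = per_seat_revenue(5, 1_500.0)

print(before, after)  # 300000.0 7500.0
```

The better the intelligence, the smaller the seat count, so a per-seat vendor is financially punished for making its own product more effective.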

The problem is that the sort of software engineers you need are just not the same. By the way, I think one software company that suffered massively from the opposite problem was Google. Google was built around things that were very much in the camp of systems of intelligence. What was the intelligent decision made by Google at the time? Search: here is my query, identify what is most relevant out of a billion websites. That's a decision being made, and it has to be made very cleverly. Google made their fortune by making those decisions extremely well, at least better than their competition. They built an entire workforce on the idea of very smart engineers tackling very difficult problems, because the defining characteristic of decisions worth automating is that they are very difficult.

That's why you want to delegate them to a computer. If they were not difficult, they would not even qualify as decisions; if something is a matter of simple arithmetic, you say, "Okay, it's just a basic calculation, nothing to see here, move on." Google hired tons of capable engineers to build those fancy systems. But when it came to developing systems of records, which are mundane, boring, and repetitive, they failed.

If you look at the history of Google, every single thing that was mundane was thrown out of the window. They had Google Reader, an RSS feed reader, which was widely adopted. It was simple, a CRUD app, but a CRUD app is not enough for Google, so they threw it away. They had dozens of similar products where, if it was a CRUD app (CRUD in the sense of Create, Read, Update, Delete), they released it but could not maintain it, because it was beneath them.

This is a real problem when you have your entire company geared on developing systems of intelligence: your engineering workforce is not keen on doing tedious, repetitive work. Conversely, if you’ve been living for decades doing systems of records, your workforce is just used to crafting thousands of screens, acknowledging “Okay, this software will need 5,000 or 20,000 distinct features, all individually super easy with no challenge whatsoever,” but they still need to be done.

In terms of engineering workforce and culture, it’s completely different. My perception is that SAP ended up with a workforce geared toward this mindset of land grabbing—let’s have a zillion engineers who can do a zillion screens and a zillion features, all of them super simple individually. Then they try to leap to systems of intelligence, and it failed super badly.

One area where you can see people who are good at systems of intelligence is that they achieve technological breakthroughs, and it leaks out in the form of open-source projects. Google, for example, may have failed to maintain CRUD apps, but it succeeded in releasing valuable pieces of open-source technology, very intricate and difficult to engineer, thanks to its talented workforce. If you look at SAP, by comparison, I am not sure I could name even a single open-source project of interest that emerged from SAP.

Conor Doherty: Well, it occurs to me that we’ve spent quite a bit of time delineating the structure of systems of records and systems of reports, but we haven’t actually covered what exactly the architectural design choices—the strategic architectural design choices—would be for a system of intelligence and why that would be incompatible with the other two architectures. For example, Lokad is a system of decisions, a system of intelligence; what is it about its makeup that means trying to square that with a system of records would be problematic, to put it lightly?

Joannes Vermorel: I would say a system of intelligence for its foundations has some similarities with a system of reports. You want this columnar representation; that makes a lot of sense for analytics. But then, what you want out of a system of intelligence is very quickly programmability. If you want the flexibility to engineer a decision-making process, you need something extremely expressive. By the way, that’s why Excel spreadsheets are frequently used to support decision-making processes; with Excel, you get a programming language, which is very important.

Excel formulas can be made arbitrarily complex, and if you’re fancy, you can even do programming through VBA or Python. Excel is programmable, and that’s why it’s frequently used as a tool to support systems of intelligence. You need programmability, which is a completely different game compared to systems of reports where it should be accessible to everybody. You are in the territory of WYSIWYG (What You See Is What You Get); you have fancy interfaces, which is the world of systems of reports.
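As a flavor of what "programmability" buys, here is a toy reorder rule in Python (the probabilities and the economics are invented, and this is not Lokad's actual logic): an expressive, arbitrary decision rule of the kind a spreadsheet formula can encode but a fixed report cannot.

```python
def reorder_qty(demand_probs, unit_margin, carrying_cost):
    """demand_probs[k] = probability of selling at least k+1 units.

    Keep stocking one more unit while its expected margin still
    exceeds the expected cost of carrying it unsold.
    """
    qty = 0
    for p_sell in demand_probs:
        if p_sell * unit_margin > (1 - p_sell) * carrying_cost:
            qty += 1
        else:
            break
    return qty

# Probability of selling the 1st, 2nd, 3rd... unit (invented tail probabilities).
probs = [0.95, 0.80, 0.50, 0.20, 0.05]
print(reorder_qty(probs, unit_margin=10.0, carrying_cost=4.0))  # 3
```

The point is not this particular rule, but that the decision logic is a program: tomorrow it can incorporate supplier lead times, price breaks, or anything else, which no WYSIWYG reporting interface can express.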

Conor Doherty: It also occurs to me that there’s an economics perspective here that we haven’t touched on yet. And again, you’re a man of economics, a man of Thomas Sowell.

It occurs to me that whether or not it is an intelligent decision, in software terms, to try and buy a three-in-one solution, where you have your system of records, reports, and intelligence all together, we can leave to one side. Is there not an economic argument that, as a set of trade-offs, it might be cheaper to buy one piece of software that at least alleges to do all three things, rather than three separate subscriptions to three separate pieces of software, three separate problems, and having to train everyone to use three separate things?

Joannes Vermorel: Yeah, I mean, that's literally the story of the F-35, the American jet fighter whose program is on track to be the most expensive weapons program ever. The problem is that those costs are not linear.

If you say, “I want my plane to be super good in close fight, super good in long-range operation, and stealthy, and it can take off vertically,” you end up with something where every single time you add a feature, you’re doubling the price of the whole thing.

So in the end, you end up with one plane that costs as much as maybe ten other planes: one for long-range operation, one for dogfighting, one for stealth, one for vertical takeoff, and so on, when you don't necessarily need all of that in one airframe. So, you see, it's not linear. You can't just fold all those capabilities into one design.

Now, you could say that if SAP had decided to have three completely independent systems—one dealing with the system of records, one dealing with the system of reports, the third one dealing with the system of intelligence—and you put Chinese walls between all those divisions and they operate independently, that could have worked. But it is not what was done.

The temptation was way too strong to pull everything together, including third-party solutions that were acquired. The mix is just toxic. You take products that mix very badly, and you get something that is dysfunctional. I think that’s what we are seeing right now.

Going back to the timeline, we had HANA in 2010. It took them five years to roll out S/4 on top of that. Most of those failures were literally a decade in the making. We are looking at things that took a decade to unfold.

We are looking at HANA failures that are just becoming publicly visible. It’s only the tip of the iceberg. Many companies’ opportunity costs are just crazy because it takes so many years to upgrade. We’re talking about five-plus-year upgrades, which is just downright bananas.

It’s insane. Look at the fact that ChatGPT and generative AI did not exist five years ago. You are going to be so late to the battle. On top of that, very bad decisions were made, such as being all-in on in-memory systems, which means overspending on infrastructure.

If you have tons of hardware, way more than you should have, it means you need more system administrators, more people in IT to manage all of that. Everything becomes incredibly complex. It’s not only more complex but slower as well. Those things compound. It’s much more costly, slower, you need more people, and it makes the opportunity cost grow even worse.

Then people become scared in the management layer because there is so much at stake. It slows down the process even further. It’s like a vicious circle. The problems are just becoming visible. Companies do their utmost to make sure that internal screw-ups are never public knowledge because it’s not good press. You do not want to advertise your own incompetency. If you can deal with that behind closed doors, it’s much better.

Think about how extreme the problem must be for a company to end up airing all that laundry in public. In almost all situations these failures are not disclosed; the exception is when the company is publicly traded and against the wall, where not disclosing would be classified as fraud.

Conor Doherty: Well, you’ve mentioned opportunity cost a couple of times. You’ve mentioned the opportunity cost of being sucked into an implementation. Suddenly, you are stuck in that process, barring you from pivoting to something else. It also has the ballooning cost associated with people maintaining it.

There's actually another dimension to this, and I've written down here "perfectibility". In terms of opportunity cost, take all three of those classes of software individually; theoretically they are at their best when kept separate, because you have a separate system of records, a separate system of reports, and a separate system of intelligence.

You try to perfect each one. There’s an upper limit to how perfect a system of records can be. Once you have 100% accuracy, that’s it. For example, how many bottles are on the table? There are two. That’s it. You can’t make it any better than that.

Once you have latency at 50 milliseconds, it’s inconsequential; people don’t notice it anymore. A system of reports—how good can a dashboard look? We can debate aesthetics, but there’s an upper bound on how much time and money you want to invest before experiencing diminishing returns.

A system of intelligence, as you said, involves decisions. In Lokad’s philosophy, that means considering the return on investment for every decision made. Theoretically, it occurs to me that there isn’t a perfection point where you can say, “This is the best decision I could ever make.”

It's not necessarily perfectible. You can continuously improve an algorithm to make better decisions. Am I wrong?

Joannes Vermorel: No, but we also have to dispel a misconception. Mathematicians would say, "Once you reach the optimal, you're the best." By definition, once you have the optimal, it cannot be made any better. And for any given mathematical puzzle, if you say, "This is it, this is the puzzle," then there is an optimal solution for that puzzle.

But where it is incorrect to think like that in business is that the choice of puzzle is very arbitrary. You know, you can decide, “Okay, I have maybe what is optimal for this framework, but I can maybe reinvent my own business and then I will have a new framework that gives me even higher returns.” So, the options that you have are not static. You decide what you are even willing to do, and within the things that you’re willing to do, there might be an optimal, but fundamentally, your possibilities are endless.

The only upper limit, the way I see it, is human ingenuity itself. An "optimal" is meaningless in a sense, because an optimum only exists within formal limitations. I draw a line in the sand and say: within this area that I've delimited, yes, my optimum might be this, and maybe I can even prove that mathematically.

But the reality is that it’s just a line drawn in the sand. You can go wherever you want. So, yes, the bottom line is that, indeed, when you think in terms of business decisions, there is no limit besides the intelligence and ingenuity of people who are going to be working on the case.

Now, I believe that the software industry realized that super early on. You know, that’s why, right from the 70s, all the vendors were pushing for benefits expressed in terms of better decision-making. They were literally advertising a system of intelligence because they realized that the problem of having an electronic representation of your inventory was finite. Once it was done right, what would you do?

It turned out that people went through stages. First, you had text interfaces, then you had graphic interfaces like with desktop—the desktop, the fat client generation of the 90s. Nowadays, you have web apps and now mobile apps. So, it turned out that even at the interface level, user interface, there was a series of transitions.

Even if the challenge was finite, it turned out that software companies were still busy just upgrading from the text terminal to the graphic interface to the web app to now mobile app. But the thing is that it was very clear to everybody that this thing would end somehow, in the sense that it would not grow indefinitely to become something extremely large.

That's why, very early on, a lot of vendors, including SAP, branded themselves around those better decisions, even if in practice their software had nothing to do with generating better decisions.

Conor Doherty: Well, it occurs to me that we’ve talked a lot of theory, and that’s good. Well, theory mixed with some practice here, but certainly, I want to try and make it a bit more practical. I want to frame the question to you like this: At the start of the episode, I mentioned DHL, Asda, Spar. There were other examples from Anthony Miller’s post.

If you were in the boardroom today with the decision-makers, the big wigs, the big players, and they said, "Okay Joannes, we tried this and it did not work. What should we do next?", what would your advice be?

Joannes Vermorel: My advice is very simple. Systems of records were completely commoditized a decade ago, in fact more than a decade ago. So, let's take it step by step. Your database layer is going to be, let's say, PostgreSQL. It's open source, and it's excellent.

That was the thing back in 2010 when SAP said, “We had so many problems, we see Oracle as a strategic threat.” What they didn’t realize is that Oracle was insignificant. The real challenger was actually open source.

So, the relational databases that won today are effectively the open-source databases. Relational databases are now a complete commodity, to the point that the very best products are open-source projects, and PostgreSQL is a mile ahead of Oracle and the other proprietary offerings.

So, here we are in a situation where a big vendor, SAP, who wanted to contain another vendor, Oracle, ended up developing a technology that ends up being inferior to an open-source product—which is just PostgreSQL.

And how do I know that it’s inferior? I mean, just look on Hacker News or discuss what startups nowadays are doing. I’ve audited a three-digit number of startups. I’ve never seen a startup use a non-open-source relational database during the last decade. Never.

They would only use open-source databases. I’ve never seen a startup use, for example, Oracle Database. Never. So, that gives you the sentiment that people who know tech, who deal in tech, do not make this choice. First, I would say to this room of executives, “For the database layer, you need to pick one of the excellent open-source options that are on the market.”

That can be PostgreSQL. These solutions are excellent, and if you don't do that, the opportunity loss is massive, because what you get if you pick one of those databases is literally a million-plus software engineers who are already competent with those technologies, compared to proprietary, obscure, walled-off technologies that are much more difficult to operate. That was the first problem.

Then, what do you need on top? You need essentially a framework to roll out a CRUD app on top of the database. There are tons of those frameworks, and again, they're available as open source. That would be Django if you're in Python, or the ASP.NET MVC framework on the Microsoft stack, which is also open source. The list goes on; these frameworks exist for all the major stacks: Python, Java, .NET.
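As a flavor of how thin the commoditized CRUD layer really is, here is a minimal sketch in plain Python over SQLite (the table and class names are invented). Frameworks like Django generate essentially this pattern, plus the screens, validation, and permissions on top:

```python
import sqlite3

class CrudTable:
    """Create / Read / Update / Delete over one table: the whole pattern.

    The table name is trusted here; a real framework would validate it.
    """

    def __init__(self, conn, table):
        self.conn, self.table = conn, table
        conn.execute(
            f"CREATE TABLE IF NOT EXISTS {table} "
            "(id INTEGER PRIMARY KEY, payload TEXT)"
        )

    def create(self, payload):
        cur = self.conn.execute(
            f"INSERT INTO {self.table} (payload) VALUES (?)", (payload,)
        )
        return cur.lastrowid

    def read(self, row_id):
        row = self.conn.execute(
            f"SELECT payload FROM {self.table} WHERE id = ?", (row_id,)
        ).fetchone()
        return row[0] if row else None

    def update(self, row_id, payload):
        self.conn.execute(
            f"UPDATE {self.table} SET payload = ? WHERE id = ?", (payload, row_id)
        )

    def delete(self, row_id):
        self.conn.execute(f"DELETE FROM {self.table} WHERE id = ?", (row_id,))

items = CrudTable(sqlite3.connect(":memory:"), "items")
item_id = items.create("10 hammers")
items.update(item_id, "9 hammers")
print(items.read(item_id))  # 9 hammers
```

A system of records is thousands of screens over exactly this kind of plumbing, which is why the frameworks, the tooling, and the labor pool are all commodities.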

Because it’s completely commoditized, either you pick for your system of records a vendor that is already open source—those exist—or you just roll your own, and it’s not even that expensive. The thing is that when you end up with a five-year project to do an ERP upgrade, you would be much better off just rolling out your own replacement piece by piece.

In six months, you could have a series of modules that are already being upgraded and replaced if you just do it internally or with some help of an IT company. Again, the nice thing about having those systems of records being completely commoditized is that it is super straightforward to find people who can do it. It is super straightforward to outsource these things—it is also super straightforward to do it internally if you want.

The bar is very low; it is very cheap. When you look at all the tech-savvy companies of this world—the Zalando of this world—they have developed their own ERP; they’re just fine with it. If you look at Amazon, they are doing the same; JD.com is doing the same. All the people who are digital natives saw that as an obvious solution for the system of records.

Records are completely commoditized. In the end, those products should certainly not be expensive, and if they are, your plan B should be, "Okay, I will just do it myself." If you ask a painter to paint a wall in your house and they tell you it will cost 2 million euros for four square meters, you say, "Okay, I'm just going to do it myself." Yes, you would prefer not to do the painting yourself, but we're not talking about sending rockets into space; we're talking about something fairly straightforward.

Conor Doherty: Well, again, because I take copious notes, as you can see, the very next question I wanted to ask is this. Having delineated the three classes, you're back in that room with the executives, and our potential bias is clear: Lokad is a vendor in and of itself. How do the costs compare across these classes? Because you've commented multiple times in the last few minutes that you could basically do this yourself.

I mean, if you're technologically savvy, if you're digitally native, to use the earlier term, you could design your own system of records. Presumably, then, it should cost less than a system of reports or a system of intelligence. How do you allocate budget in that regard?

Joannes Vermorel: I mean, clearly, systems of records are much simpler than the rest. Again, they were the ones that came first. The technology is thoroughly commoditized, and open source provides all the pieces that you need. Developing a CRUD app is the poster child of what integrated development environments are for.

So the tooling is excellent and the frameworks are excellent. You have tons of engineers with massive productivity, even more so with LLMs nowadays. This is the sort of thing where generative AI just rocks: writing tons of unintelligent code. Those tools are super good at that.

Conor Doherty: So $20 billion, that’s what we should charge.

Joannes Vermorel: Yeah, that should be quite cheap for companies. As a baseline: if you end up paying more than a tenth of what you paid for a similar system 10 years ago, it is a ripoff. That should be your benchmark. You should be saying, "We are going to do it again, but at a tenth of the budget." And that's not even pushing it.

I think considering the evolution of the technology, it’s a starting point. I’m pretty sure that if you were to go into crazy, aggressive—let’s say Elon Musk takes on the case—it would be, “I’m going to do it for 100 times cheaper.” But I would say that your next ERP project should be a tenth of the cost of what you spent 10 years ago. I think it’s a reasonable baseline, at least for systems of records.

For the system of reports, the technology is fairly well packaged. The problem is that it usually ends up being very costly because the company wants a bazillion reports. The cost lies in the bespoke implementation of an unlimited number of reports, a crazy high number of reports.

So, when you have a system of reports, at some point people would say, “Oh, but I would like to have some reports that are out-of-the-box, ready-made templates.” And then the vendor would say, “Yeah, no problem, how much do you want?” And then they list, and you end up with thousands of reports. If you do that, then it becomes incredibly costly just because you’re asking for so much.

So here, it should be quite cheap because the technology is quite packaged, but more than systems of records. It is more difficult, so it’s the sort of thing where I would not advise rolling your own. But again, with open source technologies like Spark, it’s not that difficult. Although, it is more difficult than systems of records.

Here, I think if you want to keep the budget tight, you need to keep your demands under control. There is this temptation, especially in large companies, to request an endless list of reports that nobody is ever going to consume. It turns out to be super costly because all those people have to look at those numbers once in a while, and it doesn’t really make sense.

So keep it short and focused, and the cost will not be very high. It should actually be much smaller than your baseline ERP cost from 10 years ago. I said that the new ERP should be almost a tenth of that. The system of reports should be a fraction of this cost, no more than 20% of the cost of this next generation ERP.

That’s what we’re talking about, something very small. Ultimately, reports should not be costing millions or tens of millions to report basic descriptive statistics about your business. There is no point in that. The technology has progressed enough so that it should be very cheap.

Conor Doherty: When you first proposed the categorization of the three classes, or classification of the three types of enterprise software, I can't recall the exact ratio, but it was something like 90% of your IT budget should go to systems of intelligence.

Joannes Vermorel: The division that I proposed was 20% for systems of records, 5% for systems of reports, and 75% for systems of intelligence. That is what I suggest as a heuristic for the quasi-totality of businesses.

In contrast, what is happening today is 75% for ERP, 20% for business intelligence or systems of reports, and only 5% for systems of intelligence. I say these numbers are wrong. We need to swap them. Why should you spend the bulk of your money on systems of intelligence?
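To make the two splits concrete, here is a small sketch applying both heuristics to a hypothetical IT budget; the 10,000,000 figure is invented for illustration:

```python
# The two budget heuristics from the discussion, applied to a
# hypothetical 10,000,000 IT budget (figure invented for illustration).
budget = 10_000_000
today    = {"records": 0.75, "reports": 0.20, "intelligence": 0.05}  # typical split
proposed = {"records": 0.20, "reports": 0.05, "intelligence": 0.75}  # suggested swap

def allocate(split, budget):
    """Turn fractional shares into absolute amounts."""
    return {k: round(share * budget) for k, share in split.items()}

print(allocate(today, budget))     # intelligence gets 500,000
print(allocate(proposed, budget))  # intelligence gets 7,500,000
```

The swap is literal: the shares for systems of records and systems of intelligence trade places, a fifteenfold increase for the layer that actually drives decisions.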

Obviously, Lokad is a system of intelligence, so of course I would say the audience should be spending a lot of money there. But the reality is that this is exactly where the return on investment is. That's where you can have a big fat payback if the decisions are done right. Systems of reports are mostly about making sure that you're not on fire.

It’s looking back to see if you are compliant with your own processes, if you are on track. But fundamentally, no company ever got massively successful by just looking backward. That’s the problem with systems of reports. You’re looking in the rearview mirror.

If you want to win a race, you don’t win a race by having your eyes on the rearview mirror. At some point, you need to look in front of you.

Conor Doherty: It occurs to me that I do actually have a laptop in front of me, and this one has Wi-Fi. The blog that I was referring to is actually called “The Three Classes of Enterprise Software.” It was from June last year.

Just to clarify the numbers, you recalled correctly how the quasi-totality of companies typically divide their IT budget: 75% for systems of records, after which you have playfully put "wrong" in parentheses; 20% for systems of reports, also "wrong"; and 5% for systems of intelligence, with "completely wrong" at the end of that.

Your inverted proposal is 75% for systems of intelligence, 5% for systems of reports, and 20% for systems of records. Now, I highly recommend reading the blog. It's very good.

Now, last question before we start wrapping up, but it's one that occurs to me. It's a point that you've made before in your lectures, and I've made it a few times in conversations here: the role of mechanical sympathy. We don't have to speak for very long on it, but it occurs to me that it helps to have a basic familiarity with, let's say, the parameters or architectural design choices the tool you're looking to buy would need in order to excel.

If you're somewhat familiar with that, you can at least get a sense of, "This might not be for me." So, again, take even just the difference between columnar and tabular databases: if you understand what a system uses under the hood and what you want that system for, that might lead you to immediately reconsider a potentially expensive choice.
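To make the columnar-versus-tabular distinction concrete, here is a toy Python sketch (data and field names invented) of the access-pattern difference: a row store walks whole records even when a report needs one field, while a column store scans that one field as a contiguous array. In-memory Python lists only mimic the layouts, but the cost asymmetry is the same one that matters on disk and in cache:

```python
# Toy illustration of row-store vs column-store access patterns.
# Data and field names are invented for illustration.
rows = [{"sku": f"SKU-{i}", "qty": i % 10, "price": 1.0 + i % 5}
        for i in range(100_000)]

# Tabular (row-oriented): aggregating one field still touches every
# full record, dragging the unused fields through memory with it.
total_row = sum(r["qty"] for r in rows)

# Columnar: the same aggregate reads one contiguous column.
qty_col = [r["qty"] for r in rows]  # the column, stored once, contiguously
total_col = sum(qty_col)

assert total_row == total_col  # same answer; the cost profile differs
print(total_row)  # 450000
```

Reporting workloads aggregate a few columns over many rows, which is why columnar engines dominate there, and why the same layout is a poor fit for the record-at-a-time updates of a transactional system.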

Joannes Vermorel: Yes. And again, let's go back to the sort of mistakes that SAP made around 2010. HANA was released in 2010, so they had probably been working on it for a few years already. And at the time, I suspect it looked like a good idea: "Okay, let's assume that those latencies that we have inside computer systems are just going to keep improving, and that memory will keep getting much cheaper."

Because if we look at the big difference between 2010 and, let's say, 15 years earlier, 1995: during those 15 years, memory had gotten radically cheaper and radically more plentiful. The first computer I used, running Windows 95 in 1995, had something like 8 megabytes of memory.

And then, by 2010, running Windows 7 or something, I suspect I had 8 gigabytes. So memory had grown by a factor of a thousand over this time period, and latency had been divided by something like a factor of 50.

So if it had continued on this track, that would have been incredible. It would mean that, you know, I had 8 GB on my computer in 2010. If 15 years later it had improved the same way, I would have had 8 terabytes on my computer today. This is not what I have, of course, on my computer. Nobody has 8 terabytes on their computer.
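The arithmetic behind this extrapolation is simple enough to spell out; a small sketch, using binary multiples, so the growth factor comes out as 1024 rather than the round "thousand" of the conversation:

```python
# The extrapolation described above: memory grew roughly a thousandfold
# between 1995 (8 MB) and 2010 (8 GB). Had the trend held for another
# 15 years, a 2025 machine would carry about 8 TB.
mem_1995_mb = 8
mem_2010_mb = 8 * 1024                      # 8 GB, expressed in MB
growth = mem_2010_mb / mem_1995_mb          # 1024x over 15 years
mem_2025_mb = mem_2010_mb * growth          # projected, not actual
print(growth, mem_2025_mb / (1024 * 1024))  # 1024.0 8.0  (i.e. 8 TB)
```

Betting a database architecture on that projection is precisely the gamble that did not pay off: actual desktops in 2025 carry tens of gigabytes, not terabytes.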

But you see that if the technology had progressed as it did in the 15 years before, that is what people would have had, along with a similar reduction in latency. The problem is that the speed of light is such a pain; physics is not playing nice with you. You have this speed of light which is in the way.

But nevertheless, I think they made this tragic mistake. And from that, so many things had to be overengineered on top, because this is the original sin. Because of it, you need so many cover-ups. When you have a massive design problem like that, you can try to duct-tape your way out of it. You can, but everything is going to be clunky.

And so when you face performance issues, you can solve them by sticking hardware on top. Yes, you can obviously start buying super-expensive hardware, that would be step one. But then you can overengineer like crazy certain stuff so that it’s not too dysfunctional. Because fundamentally, with enough engineering, you can mitigate some of those problems.

But still, SAP wants to cover everything, with the ambition that I was describing. It's one thing to want to be the system of record for one domain, but they want to be the system of record for everything. Which means the surface area where you can have performance problems, just because of this choice, is absolutely gigantic.

And I believe that what we're seeing is things imploding in slow motion. That's what we're seeing. It takes a lot of time. It took five years to release an ERP on top of HANA, that's S/4HANA. And then it took years more to sell this ERP to their first customers.

Because again, when you want to sell enterprise software, you don't close the deal in six months; it frequently takes several years. Then it takes a half-decade cycle to implement. And then maybe another half decade is your plan for the upgrade.

That is crazy. But the reality is that you start to fail because it's not taking half a decade, it's taking a full decade. So we are just witnessing failures caused by bad decisions that were taken a long time ago, decisions that trace back decades. I think we will see those problems keep popping up, but it's not new. It just reflects mistakes that were made a long time ago.

And now, the only thing that we can really suggest is: don't repeat ancient mistakes. These were mistakes by SAP, and by the people who adopted SAP thinking it would turn out right.

And the interesting thing is that those mistakes were made such a long time ago that people might be thinking, "Oh, but nowadays it should be fine. They got their act together. This is solved." Because obviously, as a vendor, if I were to pitch such a product, I would say, "Dear prospect, rest assured that we have learned from our mistakes. Those things have been corrected. We have learned so much. They will not happen again."

I would say: not quite, because the problem is still at the very core of the tech. HANA is still front and center in all the SAP offerings. As long as that is the case, you're stuck. You have to unwind all of that before you can possibly enter saner territory.

Nowadays, unless they change course and just remove HANA, go for PostgreSQL or whatever is open source, and then slice and dice their offering so that they could have lean apps that are tightly decoupled. Integrating things over the internet is fairly easy nowadays, so they could have a super modular design.

Unless they start doing that, the original sins are still there; the ambition of being the system of everything plus the wrong engine at the core of the system.

Conor Doherty: Well, to try and end on an up note, are there any constructive pieces of advice you'd share with people so they can sidestep the, air quotes, landmines that some of these enormous companies have stepped on?

Joannes Vermorel: I mean, the real positive outlook, and that’s where it makes me very sad when I saw those mistakes, is that the open-source community has delivered something that is incredibly great. PostgreSQL is a marvel of engineering. This is open source. This is magic.

We live in an era where, for an incredibly low price (it's not exactly free; you need to pay for your internet connection, of course), you can have incredibly well-engineered systems representing hundreds of engineer-years of work by some of the most brilliant engineers of our era. This is incredible, and you can get it essentially for free. So, that's my take.