00:00:00 Introduction to AI investments by governments
00:04:42 OpenAI’s success with ChatGPT
00:06:29 Difficulty in deploying capital in software
00:07:22 Risk of predicting future AI models
00:12:35 DeepSeek’s cost-effective AI model
00:14:22 Y Combinator’s widespread investment approach
00:15:27 AI investment attracting non-AI companies
00:17:12 Overabundance of funds in AI market
00:18:29 SoftBank’s hit or miss investment strategy
00:19:52 France’s failed investment strategies
00:22:32 AI progress is geographically dispersed
00:23:53 Mega investments in AI are a distraction
00:25:02 Governments’ lack of understanding of AI
00:27:00 Incremental progress in AI over 50 years
00:30:18 Control over AI may not grant superiority
00:31:40 Argument against super intelligence
00:33:54 Contributions come from diverse sources
00:39:26 Regulation prevents job creation
00:46:31 Innovation won’t happen in government programs
00:50:50 Pay attention to domain-specific innovations

Summary

In this episode of LokadTV, Conor Doherty and Joannes Vermorel discuss the recent AI investments announced by governments, including the Trump Administration’s $500 billion and the European Union’s 200 billion Euro commitments. Vermorel critiques these large-scale investments, arguing they are often inefficient and wasteful, with taxpayers bearing the costs. He emphasizes that successful innovations typically arise from focused, independent efforts rather than bureaucratic consortiums. Vermorel also questions the vague objectives of these investments and their impact on job creation, particularly in countries with regulatory issues. He advises focusing on specific, actionable innovations rather than state-led initiatives.

Extended Summary

In this episode of LokadTV, Conor Doherty, Director of Communication at Lokad, and Joannes Vermorel, CEO and Founder of Lokad, delve into the recent surge of AI investments announced by various governments. The discussion centers around the implications and effectiveness of these massive financial commitments, particularly the $500 billion investment announced by the Trump Administration and the European Union’s response with a 200 billion Euro investment.

Joannes Vermorel offers a critical perspective, drawing on France’s historical approach to state-led strategic investments, known as “État stratège.” He argues that such large-scale investments are often inefficient and prone to waste. Vermorel points out that while these investments include significant private sector contributions, the reality is that taxpayers will bear the brunt of the costs through concessions like tax breaks.

The conversation highlights the inherent challenges in deploying large sums of money efficiently, especially in the AI sector. Vermorel emphasizes that successful market innovations typically do not emerge from bureaucratic consortiums but rather from focused, independent efforts. He cites examples like the iPhone and ChatGPT, which were not products of consortiums but of singular, dedicated entities.

Conor Doherty provides context, noting that the Trump Administration’s $500 billion investment includes $100 billion upfront, with the rest promised over time. Similarly, the European Commission’s investment includes both public and private funds. Despite these clarifications, Vermorel remains skeptical about the effectiveness of such investments, arguing that they often result in bureaucratic inefficiencies and fail to deliver meaningful advancements.

The discussion also touches on the ultimate goals of these investments, which remain vague and undefined. Vermorel criticizes the lack of clear objectives, suggesting that terms like “ethical AI” and “sustainable AI” are nebulous and do not provide a concrete direction for development.

Vermorel further argues that the AI field is characterized by rapid, unpredictable advancements, making it difficult to forecast future needs and technologies. He points out that the market is already saturated with investments in data centers from major companies like Microsoft, Google, and Amazon, questioning the necessity of additional government-led investments.

The conversation shifts to the broader implications of these investments, particularly in terms of job creation. Vermorel challenges the notion that such investments will create jobs, especially in countries with low unemployment rates like the US. He argues that the real issue in countries with higher unemployment rates, such as France, is regulatory friction, not a lack of investment in AI.

Conor Doherty brings up the perspective of Anthony Miller, who similarly criticizes the overregulation in France and its impact on the startup environment. Vermorel agrees, noting that the people most affected by unemployment in France are those with low education and skills unrelated to AI.

In conclusion, Vermorel advises supply chain directors and IT directors to remain focused on specific, actionable innovations within their domains rather than getting distracted by these large-scale, state-led investments. He predicts that meaningful advancements in AI will continue to emerge from independent efforts rather than government consortiums.

Overall, the episode provides a critical examination of the recent AI investment frenzy, questioning the effectiveness and long-term impact of such massive financial commitments. Vermorel’s insights offer a cautionary perspective on the role of government in driving technological innovation, emphasizing the importance of focused, independent efforts in achieving meaningful progress.

Full Transcript

Conor Doherty: Welcome back to LokadTV. So Joannes, today we’re here again to discuss honestly the topic that just keeps on giving these days: AI. In particular, we’re sitting here because there appears to be something of an international frenzy in terms of AI investment involving genuinely eye-watering amounts of both private and huge amounts of public funding into AI infrastructure. So, top of the hour, what’s your hot take on all of this?

Joannes Vermorel: What I saw in the news is the Trump Administration announcing a few days ago a $500 billion investment. And then, I think it was just two days ago, the European Union responded by saying, “Well, if they do that, then we will have our 200 billion Euro investment in AI.” My general take on that is, being French is quite relevant because the interesting thing is that France has been playing this game for decades. In fact, it has a name for it in French, it’s called “État stratège”, like the strategist state. Based on the track record of France playing this sort of game, I can say with high confidence that the quasi-totality of this money will be wasted, period.

Conor Doherty: Not to cut you off, but I do feel the need to jump in immediately to provide just a little bit of context here. Because I don’t want to simplify it to the point where it sounds like President Trump or Ursula von der Leyen are spending almost a trillion dollars of public money. Just to clarify, I do have some details on screen. As of the time of speaking, which is the 13th of February, Trump clarified that the $500 billion is mostly private sector investment. That’s Stargate, the joint venture between OpenAI, Oracle, and SoftBank, which is a Japanese company. Ursula von der Leyen, President of the European Commission, announced 200 billion Euros, with 150 billion being private and 50 billion coming from public funding. So the pattern is very clear: enormous sums of both private and public money, but it’s not just public money being spent here, to be clear.

Joannes Vermorel: Yes, but again, France has been playing these sorts of games for decades. It’s always the same pattern. They justify it by saying they have private investors, but the reality is that concessions will be made in the form of tax breaks or plenty of other things. So the cost is real and will be heavily concentrated on taxpayers. The reason those projects fail is that it is extremely difficult to deploy large amounts of money efficiently. Either those projects were already going to happen, in which case it’s just a matter of announcement: you take a company that was already going to invest and claim that investment as part of your grand strategy for your country. But then, if it was going to happen anyway, why do you need to put this investment into a basket and say, “Oh, it’s our strategic investment”?

The reality is that usually you end up in very bureaucratic settings with consortiums where you have so many companies banding together. Just look at what generally succeeds on the market. What we see is almost invariably not consortiums. The iPhone is a great success, but it was not the product of a venture with Google, Facebook, and whoever. It was a great success, but it was not some sort of consortium.

Same thing with OpenAI: it was wildly successful with ChatGPT, but ChatGPT was not the product of a consortium. The list goes on. Generally speaking, when I look at these sorts of patterns, “État stratège” has been a game played for decades, more like half a century, in France. You take whatever the buzzword of the day happens to be, then you get around the table with influential names and big companies. You add tons of public money to justify the fact that private companies will make their money. You give tons of concessions, tax breaks, and whatnot. You end up with a game that is highly asymmetrical, because the funds promised by the administration will very frequently materialize, while the private participants will just pull back.

What may very well happen is that when you have the Trump Administration saying $500 billion, a potential scenario is all the private investors just pull back if things start to be nonsensical, and yet billions of public money end up being spent.

Conor Doherty: I should point out that from my read of the situation, I think it was $100 billion promised upfront by the Trump Administration, with the promise of $400 billion more. To be fair, anyone familiar with large signings in football knows that $200 million is not paid upfront for a player; it’s partitioned out.

Joannes Vermorel: The thing is that software is really not an area where it’s easy to deploy capital, and it’s maddeningly difficult if you want to deploy such a large amount of money. That’s why they’re framing it as infrastructure. But even infrastructure, what does that even mean? Chips? You would say, “Oh, we need chips.” Okay, but which sort of chips? It depends on the algorithm. We have the LLMs of 2025, and those LLMs are very different from the sort of machine learning models that were in vogue five years ago. What makes you think that five years from now, the models in vogue will have the same sort of requirements as the ones we have today? It’s a very risky proposition.

Conor Doherty: To be fair, again, my understanding is it’s not only chips. The Stargate Alliance aims to build at least 20 data centers, most of them in Texas. The European Commission promised to build 12 AI hubs and a bunch of supercomputers. But to me, the question then is, before we talk about why these projects fail, which they may very well, my question is the antecedent question. What is the ultimate goal? What are people trying to accomplish? We can talk about why it fails, but what is the ultimate goal of building all of these data centers and supercomputers and AI hubs and deploying all of this money, however poorly? What is the ultimate goal there?

Joannes Vermorel: On paper, it’s super vague: “Let’s become an AI superpower.” But what does it even mean?

Conor Doherty: Exactly, that’s what I’m asking.

Joannes Vermorel: That’s the problem. Nobody knows. These communications are super vague. You end up with, “Oh, an AI superpower means having access to the most ethical AI.” Okay, what does that mean? The most sustainable AI? I have no clue what it means. Communications are just incredibly vague. It’s always the case. Whenever France has tried that with “État stratège,” the communications are always, by necessity, extremely vague, because you’re bringing together companies that are extremely diverse, that don’t have the same strategies, that have very little in common. You’re bringing Oracle and SpaceX together and expect those people to have anything in common. This is nonsense.

I think a large amount of participants know that it’s complete nonsense, but when you have a third party who promises to pour billions of dollars or Euros on you, why not? It would be a mistake to not just say, “Okay, if you’re willing to waste such a large amount of money, at least waste it on me.”

If we go into specific challenges like supply chain, it’s even less clear that it makes any sense at all. If we talk about AI in supply chain, we could give it a very specific meaning, which would be the automated execution of supply chains for all the mundane decision-making processes that happen in supply chains. That would be a clear thesis statement of what you want. We have extremely labor-intensive decision-making processes: when do I order, how much do I order, do I move my price up or down, do I move this inventory here or there, do I increase capacity or decrease capacity of all the things that may present a capacity limit.

Fine, you can certainly mechanize that with something akin to AI somehow. But again, is the bottleneck of that access to computational power? Do you need a lot of computational power? Do you need data centers? Is there any sign that the data centers we have are truly the bottleneck? Because when it comes to investing in data centers, there has been no shortage of investment. Microsoft, Google, Amazon have been relentlessly investing in data centers to cover the globe with data centers.

My take is that if Amazon thinks they need more data centers, I’m perfectly okay with that. They would just invest and produce more data centers. Same thing for Microsoft, same thing for Google, and all the rest. What really baffles me is the idea that interfering with this process with these mega investments is just going to make those markets more efficient. For me, that’s a very misguided take, especially for something like AI that is so multi-dimensional. It’s very difficult to see exactly what is missing, what the market will look like, what the technologies will look like five years from now. It is extremely fuzzy. It’s not like you invest a billion dollars in this and you will get the results that you want. The situation is much more fuzzy than that.

Conor Doherty: Talking about throwing lashings of money, be it public or private, at AI infrastructure, it’s quite interesting in the context of DeepSeek. For example, DeepSeek’s model not only uses less energy than OpenAI’s o1 model, but was apparently produced at a fraction of the cost, something like 25 to 30 times cheaper. Now, we can get into the weeds about how it was produced, whether it was plagiarized, but that’s not the point. The point is that it was produced for significantly less than it took to produce the o1 model. And what was the response to that? Well, take half a trillion dollars and throw it at the problem.

Joannes Vermorel: Yes, and the interesting thing is that Mistral, a French company, had made pretty much the same sort of claims as DeepSeek about a year ago, saying, you know what, those LLMs we have right now are quite wasteful; we can most likely achieve comparable results, if not better, with just a tiny fraction of the computing resources. That was already Mistral’s position, but suddenly, when it was a Chinese company doing the exact same thing, the markets lost their minds.

So my take was, well, it’s nice to see a market correction. But fundamentally, it illustrates that, generally speaking, with software technologies, progress is extremely chaotic. It’s very difficult to know years in advance what will succeed, and very difficult to know how to deploy capital effectively. That’s why, by the way, it’s very interesting to look at very successful startup incubators like, say, Y Combinator. They have literally pioneered the approach of spreading money super wide: taking thousands of startups, deploying a little bit of capital, like half a million dollars per company, and seeing what pops up. This sort of approach seems to be what generally works in the software industry, as opposed to picking a champion and pouring billions on that champion. Historically speaking, look at the investors who have played this mega-investment approach, like SoftBank: SoftBank suffered massive losses on WeWork. They started with a $100 billion fund and wasted something like tens of billions on WeWork.

Conor Doherty: Well, WeWork wasn’t even a software company.

Joannes Vermorel: I mean, they were presented as such, but it was essentially a rental agency. But that’s the problem. When you say we want to invest in AI and here are billions of dollars, I would say hundreds of billions of dollars for that, rest assured that every single company is going to present itself as an AI company. To my knowledge, for example, Oracle has nothing to do with AI. They have not been a contributor in this field. I can’t even be sure if they have ever done anything remotely relevant in the field of machine learning. But now they’re branding themselves as an AI company. And I’m pretty sure there will be plenty of other companies that have very weak track records in this area that will join the bandwagon. I’m pretty sure if I were to guess, I would say probably Salesforce will join them. Again, that’s the problem.

When we look at AI, the question is what is missing? Clearly, there are tons of things that are missing. If we want to apply that to AI for supply chain, there are tons of things that are missing. But capital, really, after years of quantitative easing where we had literally the easiest access to capital in probably human history, I do not think that, at least as an entrepreneur, the lack of funds was really the problem. What I see generally speaking when I look at the AI market is an overabundance of funds. Even when I just look at the private funds, if you pile on top of that public ones, you’re just going to double down on the problem of overabundance of cash.

Conor Doherty: Generally speaking, you’ve drawn a parallel there to startups in the sort of venture capitalist haze or frenzy that descends when people are around tech companies. I don’t know if you meant it explicitly, but were you applying that same framework to what governments like Japan, France, and the US are doing? You’re drawing a parallel there to the kinds of behavior that saw enormous tech companies, individual tech companies, burn hundreds of millions and billions. Are you equating these actions?

Joannes Vermorel: No, what I’m saying is that deploying capital is extremely difficult, and it is extremely difficult even for private companies to spend it properly. Spending is easy, but spending profitably is extremely difficult. When you look at even private firms like SoftBank, at best you can say it’s hit or miss. Looked at in aggregate, it’s not clear those funds succeed; that’s a point that Warren Buffett has made many times. Most of the time, those funds end up underperforming the market. With just a basic ETF, you would get better returns.

Now my worry is this: it is already very difficult to deploy capital in AI, and it is even more difficult to deploy gigantic amounts, because the larger the pot you want to deploy, the more challenging it becomes. And now you compound the problem by having a series of governments creating all sorts of bureaucratic nightmares waiting to happen, with consortiums and whatnot, which historically have been proven time and again to deliver a lot of waste. If state strategies were a sure path to riches, France would be the richest country on Earth. It’s nowhere close. Those things have failed invariably, all of them.

Conor Doherty: Well, if I can just jump in on that point, because when you’re talking about financial waste, obviously that’s our métier, understanding what you get, what the ROI is for every dollar spent. To stitch this together with a previous comment you’ve made related to supply chain: you previously gave the example of the three systems of enterprise software. The first one is the system of records, and that’s essentially your ERP. You made the point before that, in most companies, people dedicate approximately three-fourths of their IT budget to what is essentially a glorified ledger, a spreadsheet in essential terms. Spending tens of millions of euros on what amounts to spreadsheets. And your position there, and I’m summarizing it, is that that’s not particularly wise. Okay, but some amount of money needs to be spent on that, because you do need the thing, and you made the point that maybe 5% is appropriate. Well, taking that same financial lens, obviously governments need to spend some amount of money on AI infrastructure. Half a trillion is maybe overkill, but what’s an appropriate amount?

Joannes Vermorel: But I really challenge the idea that governments should spend any money on topics like that in the first place. It seems like a recipe for tons of tax money to be wasted. I’m not paying my taxes in the US, so that part doesn’t affect me, but apparently the EU is catching up and wants to waste tons of European tax money on this. And there, I will be impacted.

For me, what is very strange is why you would think that governments should even be involved in this in the first place. Is this really something that needs government support? If you look at the entire history of computer science, especially when it comes to elusive things like artificial intelligence, none of those things ever emerged from bureaucratic entities run by governments. The progress is very incremental and incredibly dispersed geographically. We have seen with DeepSeek that a team of quants in China may push the state of the art forward, and next time it can be another team in Germany, in Sweden, anywhere.

And again, I challenge the idea that what we need is a lack of funds because clearly there is an abundance of funds. If you think that states should get involved because there is not enough funds, as an entrepreneur in the software business, I would say think again. I am personally contacted probably like five times a day by venture capitalists who want to invest in Lokad. Lacking funds is not the problem. You have all the capital that you want, but for me, even considering myself with some degree of expertise, if you give me even a hundred million on AI, it is very unclear where this money should even be invested. It is a very difficult proposition. So let’s compare that to a hundred billion dollars. That’s a problem. So my take is that those sort of mega investments are like a distraction. They are going to be a lot of waste, but there are also going to be a lot of distractions for all the parties involved.

Conor Doherty: Just to tease apart the idea of whether or not states should be involved, or whether or not they have an interest in this. There is a point to be made, and again, I am not an expert; I teach philosophy and I work in marketing at Lokad. However, I have read Nick Bostrom’s book “Superintelligence,” which is about 10 or maybe 11 years old. When he wrote it, he outlined the potential risks. I know we’re not talking about superintelligence, so bear with me. The idea, though, is that most people don’t know what superintelligence is. Most people don’t know how close we are or are not to that. What they know is…

There is still the potential, as outlined by Bostrom, of the devastating impact of being on the wrong side of a nation that has that. So my question is, how much of this is essentially a state-level fear of not developing this infrastructure sufficiently relative to potential enemies?

Joannes Vermorel: But what makes you think that the government or anybody in those coalitions of governments has any clue to answer this question? AI is not something simple like building a gigantic wall to defend ourselves, something very tangible with a clear goal. We’re talking about something incredibly elusive. Just imagine replacing AI with a contest to write the best novel—most poetic, most interesting, most captivating. Do you really think that pouring billions into this challenge would mechanically produce the best novel? No. You would certainly get a lot of participants, but it would immediately devolve into a bureaucratic nightmare with no chance of producing something beautiful.

AI is incredibly elusive. Part of the problem with large language models (LLMs) is that we don’t really understand the nature of their limitations. If we understood what is truly blocking us from achieving general artificial intelligence, we would have a clear development path. The problem is that, over the last 50 years, every generation of machine learning models has revealed something fundamental that was misunderstood about intelligence. Every revolution has been incremental, letting us realize there was something profound about intelligence we were missing.

By closing those gaps, progress has been made, and nowadays we have things giving us spectacular results. But the community in general—experts, researchers in AI—finds it extremely fuzzy where we should go. You’re throwing things at the wall to see what sticks, and you’re doing that blindfolded. If anything, pouring money into the problem can make things worse. OpenAI was arguably quite distracted because they had too much money and focused on mega models, so they were a bit late to the party when it came to making the same LLMs but much leaner.

There is this myth that just because you can see a direction, like AI being the future, anyone has a clue on how to reach that. The interesting thing about the market is that you will have thousands of people taking risks, trying stuff, and ultimately, from this gigantic competition, something will emerge as a winner. That’s very good. When you have state strategy, you end up with things like France’s Minitel. France tried to invent their own internet, which was run by the state, and it was a complete catastrophe.

The idea that the state can steer humanity forward can only happen on problems that are extremely well understood and straightforward, where a pure brute force approach will work. But if you have something multi-dimensional, it’s very difficult. You might end up with a situation where AI essentially becomes open source, with no value. Your AI core could be open source and free, and all the value would be built on what you do with this AI. It’s not clear that having complete control over AI will grant any superiority to anybody. Just think of it like mathematics. Imagine one country produces all the mathematicians who prove all the theorems. This country would be a mathematical superpower, but once those theorems are proven, everyone can enjoy the results. Being a mathematical superpower doesn’t translate into actual power or wealth. There is another error of reasoning: if you make the breakthrough, you capture the added value. Not necessarily. Just like being a mathematical superpower doesn’t equate to leveraging all this knowledge to do something that makes countries richer.

Conor Doherty: What you’ve essentially outlined is the argument Bostrom made against superintelligence. I know we’re not discussing superintelligence, but prior to achieving superintelligent AI, which he posited would be somewhere in the middle of this century, there are steps that must be completed. Whoever has that is essentially a superpower. If you have the monopoly on a valued skill set, you are a superpower. Similarly, whoever progresses the farthest in AI could be the most powerful, and that might influence some decisions.

Joannes Vermorel: Yes, but that is pure speculation. There will be no monopoly. Once the community reaches a certain level of understanding, what was an exclusive superpower or competency of a few people becomes a commodity. Right now, AI developments seem to follow the same patterns seen in computer science over the last 50 years. It’s very incremental, with tons of diverse contributors. Contributions come from thousands of places. You have many papers providing substantial contributions, but the person doing this contribution might only have one significant paper.

Humanity will keep progressing in AI just as in mathematics, but there is no monopoly. Nobody possesses the knowledge of mathematics. Some countries have more mathematicians than others, but does it confer any real superpower, expressed in quality of living, access to material goods, and whatnot? Absolutely not. It’s very unclear that these investments will capture that. We are investing in hardware for the next 5 to 10 years for models when we don’t know what those models will look like. There is a significant chance that mistakes will be made and money will be wasted. If Microsoft invests in more data centers and gets it wrong, it will be the shareholders taking that loss. The idea that bureaucrats will deploy hundreds of billions of dollars or euros efficiently on AI is a distraction.

Conor Doherty: Again, I’m not an economist, but I’ve read a decent amount about decision-making theory and how if you compare people’s perceptions of small numbers and big numbers, they are radically different. For example, if I were to say to you, CEO of the company, last year we spent $12,000 on coffee in the breakroom, you might think that’s an insane amount of money just because coffee is a quotidian thing, a daily thing. Should it be $10,000? Should it be $12,000? Should it be $5,000? I’m going to investigate that. But if I said your ERP upgrade will cost you $250 million, it sounds about right. I don’t know how much that should cost. Similarly, to build 20 data centers, it’s going to cost half a trillion dollars. There’s this tyranny when it comes to big numbers. I think that’s even more compounded when you have people involved in the decision-making process who might not have a lot of mechanical sympathy, an understanding of what’s going on under the hood. Then they’re told, “Here is essentially a blank check,” because half a trillion dollars is effectively a blank check. Write whatever number you think is appropriate to build that. My question then is, how realistic is the expectation that that money can be deployed sensibly and will be of any benefit to the public in terms of job creation?

Joannes Vermorel: In my opinion, the expectation should be extremely low on this front. If you look at job statistics in the US, they show quasi-full employment, if you put aside people who went to jail. So the idea that you will create jobs when you already have full employment is strange. You might say, “Oh, we are going to have much better jobs,” but we have to ask whether that’s a realistic take. If someone is working in a pizzeria, you might have demand for a database administrator, and that job would be better paid. But if the person is still working in the pizzeria, making pizzas, rather than taking the much better-paid job administering databases, it’s probably a matter of competency.

For me, the argument of job creation is completely orthogonal, especially in countries like the US that have very low unemployment rates. It’s an argument of dubious merit to justify any investment. If we go to Europe, where we have higher unemployment rates, the reality is that most of that unemployment is caused by regulation. If what prevents people from having a job is a piece of regulation, then the proposition that a massive investment in something else is going to solve this problem is incorrect. Those problems are completely independent. Until you fix the regulation that is preventing people from having a job in the first place, you’re not going to have those people employed.

For the audience not familiar with Europe: in many countries, like France, Spain, and Italy, it is almost impossible for companies to let go of their employees, to fire them. As a result, companies have to be extremely conservative when it comes to hiring. That creates a lot of friction, and to a large extent, unemployment can be explained by these frictions. Countries that do not have them, like Switzerland, have a much lower unemployment rate. So for me, the idea that an investment is going to create jobs is a very weak proposition, especially when it’s tax money funding those jobs. It means you take money from regular people on one hand to give it to other people on the other. When it’s a private investment, it creates opportunity. But when we talk about public funds, it is money taken from taxpayers and handed to other people at the end of the day.

Conor Doherty: If I can jump in, something humorous happened while you were making that point, and it builds on it. Wanting to be accurate, I went to look up something that Anthony Miller, a friend of the channel, had said on LinkedIn. He has a fantastic blog at Wiser LogTech, which I highly recommend. When I opened LinkedIn, at the top of my feed was Anthony Miller posting about this exact topic. That was a fresh post, but the one I wanted to draw attention to was from a few days ago, and I’d like your perspective on it. He made a very similar point about France in particular; it applies broadly across European states, but certainly to France, since we’re all based here, him included. His point was that this 200 billion in funding, however it gets divided across states, won’t necessarily lead to job creation in France, because France, as far as startups are concerned, is overregulated and perhaps a bit inhospitable to the startup environment. I’m curious, do you share that level of skepticism?

Joannes Vermorel: Yes, I do. If you look at who is unemployed in France, the answer is not people with IT or computer science skills. Those people are all employed. Among people with valuable market skills, especially anything AI-related or AI-adjacent, we have essentially 100% employment. So who are the most unemployed people in France? Essentially young people with low education. Unemployment is around 20% for people below 25 years old, against roughly 7% overall, according to Google. You also have to take into account that in France, we have some 200,000 people studying sociology at any given time.

France is producing a vast number of people who are qualified for almost nothing. Will AI solve this problem? I don’t think so. If we ask why those people are unemployed, the answer is that they’ve spent five years learning sociology, which does not give them any skill that can realistically be used in any company for any purpose. France suddenly investing in AI is not going to solve that. If you were not employed before because your skills had no market value, AI doesn’t fundamentally change that.

If we talk about job creation, what people have in mind is creating jobs for people who don’t have jobs. The reality is that people with decent skills in France are employed at close to 100%.

Conor Doherty: And to be clear, anyone who has those skills in Europe, particularly in France, is often poached by startups in America if they’re sufficiently fluent in English. There’s a brain-drain effect.

Joannes Vermorel: If we discuss job creation, the point is that if you’re just creating a job for someone who already has a job, you’re merely shuffling people laterally. The real question would be: are we going to solve the situation of people who are unemployed? My answer is that there are tons of people in Europe who are unemployed, but the reasons have nothing to do with AI or what AI can do for Europe, the US, or the world. Even if AI is very successful, it will not change anything for those people, for the same reason that people in the US who have been convicted and jailed multiple times struggle immensely to get a job. Even if the country is immensely prosperous, the fate of those people is not going to be fundamentally different just because we have an incredibly good AI doing a lot of valuable things for companies.

Conor Doherty: I don’t want to put words in your mouth, but as a summary of that position, are you effectively of the opinion that from the average taxpayer’s perspective in Europe, there’s no value add to this?

Joannes Vermorel: There is no value added for Europe, and I don’t think there is for the US either. If we look more specifically at supply chains, I don’t think supply chain directors look to those programs as the place where innovation will happen. They would just be a gigantic waste of resources. Those consortiums will be wasteful and bureaucratic, and they will not spearhead the next generation of AI.

Those consortiums can do even more damage than just wasting tax euros and tax dollars. They are also a gigantic distraction that can distort the perception of company executives, fooling them into believing that this is where innovation will happen and where value will be created. My take, and I would be willing to bet money on it, is that this is not where innovation will happen. Innovation will keep happening, just not there.

Conor Doherty: Well, it’s interesting because often when I try to press you to make predictions about what will happen, you keep the cards very close to your chest. You’ve been very firm in your prognosticating today.

Joannes Vermorel: Yeah, I mean, the interesting thing is that if you want me to predict where innovation will take place, I don’t know. But we can still rule out a few places. Is the next AI revolution coming from North Korea? Unlikely, very unlikely. Is it going to come from governmental bureaucrats? Very unlikely, just like North Korea. So, you see, it’s not because I cannot make a precise prediction that I can’t rule out a few things that are extremely unlikely when you look at history.

Conor Doherty: So, 12 months from now, you don’t think that Europe will have built anything, let alone achieved enormous progress?

Joannes Vermorel: It can, but if it does, it will not be thanks to those investments. Innovation can be very erratic, and it can happen pretty much anywhere. Some countries have a lot of very skilled manpower. For example, Switzerland is a hotspot for this sort of talent. The odds that such a company would emerge in Switzerland are not bad.

Conor Doherty: Tax havens?

Joannes Vermorel: Yes, but not just tax havens. For example, ETH Zurich is an excellent university, and there are plenty of other places with excellent technical universities; France has many. My take is that the primary candidate remains the US, simply because they have the most momentum, the biggest communities, and the most experts. But even if progress keeps coming from the US, my prediction is that it will have nothing to do with those mega investments steered by the federal administration. Success will most likely happen in the US because they have dominated the field for decades, but will that success be the consequence of those mega investments involving the federal government? I generally do not think so.

Conor Doherty: All right, well, we’ve been going for almost an hour, so I’m going to start wrapping up. In terms of closing thoughts, what would you like to say before we finish?

Joannes Vermorel: To supply chain directors or IT directors watching us, don’t get distracted. Those things are just a waste of time and, unfortunately, a waste of your money, but you can’t do anything about that part because it’s your taxes.

Conor Doherty: We are not advocating anything illegal.

Joannes Vermorel: But at least what you can do is make sure it’s not a complete distraction for you. My suggestion is to pay attention to what is happening, but don’t get distracted by those mega investments. Most likely, nothing will come out of them. Stay attentive to things that are very specific to your domain and that seem to be gaining traction in doing something real for your use cases, instead of chasing superintelligence and whatnot.

Conor Doherty: All right, Joannes, I don’t have any other questions. Thank you very much for your time. I’ve enjoyed the conversation, and I hope others have as well. Thank you for watching, and we’ll see you next time.