00:00:00 Introduction to quantum physics and its role in existing technologies.
00:01:03 Olivier Ezratty’s journey into quantum computing and his extensive research.
00:04:16 Launch of the Quantum Energy Initiative for environmentally conscious quantum technology development.
00:06:11 Differences between quantum physics in current technologies and future quantum computing.
00:08:51 The non-existence of nothingness and vacuum fluctuations in quantum physics.
00:10:32 Vacuum and ether in quantum physics.
00:11:52 Enterprise software and mechanical sympathy.
00:14:16 Quantum advantage threshold and uncertain progress.
00:16:19 Importance of understanding quantum technologies.
00:18:43 Quantum technologies’ potential applications.
00:20:24 Introduction to quantum sensing and its applications.
00:21:19 Quantum communications for security and power improvement.
00:24:01 Quantum sensing for precision measurements in various fields.
00:26:36 Beneficial uses of quantum gravity sensors on satellites for geodesy studies.
00:28:15 Importance of holistic perspectives in understanding quantum technology.
00:30:11 Discussing quantum supremacy and its limitations.
00:32:02 Explanation of classical bits and their role in computing.
00:33:10 Introducing qubits and their differences from classical bits.
00:35:04 Delving into the mathematical aspects of qubits.
00:37:33 Explaining the power of qubits and their exponential growth in information space.
00:40:01 Clarifying misconceptions about quantum computing.
00:43:45 Quantum computing and Big Data challenges.
00:45:54 Addressing noise in quantum computing: shallow algorithms and error correction.
00:47:46 Current state of quantum computing and IBM’s latest 433-qubit system.
00:49:53 Exploring error correction in quantum computing.
00:51:37 Discussing the possibility of using noisy operations in machine learning.
00:52:59 Reviewing the limitations of quantum machine learning.
00:57:25 Temperature control in superconducting qubits and silicon qubits.
00:59:49 Comparing ion trap qubits and topological qubits.
01:00:53 Neutral atoms, laser cooling, and magneto-optical trap technology.
01:03:31 NV centers and potential room temperature quantum computing.
01:05:46 Discussing complexity in the field of quantum technology.
01:07:58 Approaching trust and identifying reliable sources in quantum technology.
01:10:30 Discussing examples of unique silicon qubit technology.
01:12:35 Comparing quantum computing to enterprise software supply chain.
01:14:37 Role of serendipity in meeting and learning from scientists.
01:16:36 Tips for navigating and deciphering scientific papers.
01:22:47 Intrinsic goodness of forecasting and difficulty in measuring it.
01:24:00 Complexity of scientific publications and understanding them.
01:25:17 Openness and obfuscation in the quantum computing ecosystem.
01:28:01 The role of market analysts and potential biases in the field.
01:33:46 Discussing a good mix in research teams for innovation.
01:34:54 Quantum computing and its timeline for development.
01:37:56 Challenges in predicting the future of quantum computing.
01:39:41 The importance of staying educated in the rapidly changing field of quantum computing.
01:40:33 Personal projects in the field.
01:43:15 Discussing diverse ways of working and contributing to the ecosystem.
01:44:22 The value of writing exercises for personal and organizational growth.
01:45:37 Techniques for organizing and updating content, including maintaining databases.
01:48:00 Suggestions for CEOs and CTOs to understand Quantum Computing and its potential applications.
01:50:28 Recommended formats for learning about Quantum Computing, such as conferences and YouTube presentations.


Quantum technology expert Olivier Ezratty discusses with Joannes Vermorel the potential of quantum computing, communication, and sensing. Quantum computing aims to harness quantum phenomena like superposition and entanglement to perform tasks beyond classical computers’ capabilities. Quantum communication has applications beyond security, such as the quantum internet and distributed quantum computing. Quantum sensing can measure physical properties with unprecedented precision. Despite progress in the field, there is still a significant gap between theoretical knowledge and practical implementation. The timeline for widespread adoption remains uncertain, with experts estimating 10-15 years before quantum technology reaches its full potential.

Extended summary

In this interview, host Joannes Vermorel, founder of Lokad, discusses quantum computing and enterprise software with Olivier Ezratty, a quantum technology expert. Ezratty has worked in the field for more than two decades and authored a comprehensive report (Understanding Quantum Technologies - more than 1000 pages) on quantum technologies.

Ezratty first became interested in quantum computing after learning about Google, NASA, and D-Wave’s collaboration on a computer that could perform tasks 100 million times faster than a regular laptop. He initially aimed to deliver a simple one-hour conference on the topic, but his work eventually culminated in the creation of an extensive 1,100-page book on quantum technologies. Ezratty has since become involved in various roles within the field, including teaching, government work, advising, and launching the “Quantum Energy Initiative” to address the environmental impact of quantum technologies.

In discussing the development of quantum computing, Ezratty highlights the role of quantum physics in existing technologies. While all current technology is based on quantum physics, quantum computing aims to harness different phenomena from the field. Three specific mechanisms central to quantum computing are superposition of quantum states, entanglement, and the ability to control individual nanoparticles. These mechanisms have not been utilized in the same way in previous technologies.

The interview also touches on the nature of “nothingness” in the context of quantum physics. Vacuum fluctuations, in which particles are created and destroyed, demonstrate that nothingness does not exist, and particles are always in motion due to these fluctuations.

In the realm of enterprise software, there has been a general disinterest in computing hardware, as it has been expected to improve exponentially without any changes from software vendors. This attitude persists, despite the slower progress of quantum computing compared to classical computing. The ultimate goal for quantum computing is to reach a “quantum advantage” or “threshold,” where quantum computers can perform tasks that classical computers cannot do efficiently. The timeline for achieving this threshold remains uncertain.

Quantum technologies can be categorized into different paradigms, including quantum computing, quantum communication, and quantum sensing. Each paradigm has its own timeline for potential implementation, with some possibly making an impact in less than five years, while others may take 10 to 20 years. It is essential for people involved in technology and industry to stay educated on these developments to understand their potential impact.

Quantum computing aims to enable computations that cannot be done classically, potentially faster, better, and with less energy consumption. Quantum communication, on the other hand, has applications beyond improving security. It can help create a quantum internet and enable distributed quantum computing. Furthermore, quantum communication can lead to more precise quantum sensors, which can significantly improve the precision of various measurements.

Quantum sensing can measure various physical properties such as gravity, pressure, temperature, time frequency, and magnetism with much greater precision than currently possible. While quantum sensors may be bulkier than existing IoT sensors, their increased precision can have numerous applications, such as detecting what’s beneath the ground, identifying tunnels, finding water sources, and even military applications like detecting nuclear submarines.

There are also positive applications for quantum sensing, such as putting a quantum gravity sensor on a satellite to study the Earth’s movement and the impact of climate change. Scientific progress has often been driven by the availability of new classes of sensors, and quantum sensing has the potential to open up new avenues for research and understanding.

Ezratty then explains the concept of quantum supremacy, a term coined by John Preskill in 2011. Quantum supremacy refers to a situation where a quantum computer can perform a calculation that is impossible for classical computers to achieve in a reasonable amount of time. However, the current quantum supremacy achieved by Google and others is not performing calculations as we are used to in enterprise software. Instead, it is more of a random number generator with no real data input or output. When Google attempted to use its quantum system for useful calculations, it could only utilize 15 of its 53 qubits. These 15 qubits can be emulated more efficiently on a personal laptop.

The discussion then shifts to the fundamental building block of classical computing: the bit. A bit is the smallest unit of information, represented as either a 0 or 1. In contrast, a qubit, the fundamental unit of quantum computing, can be described as both a mathematical and physical object. Physically, a qubit is a two-level system (TLS) that can exist in two energy levels simultaneously, thanks to the quantum properties of superposition. Mathematically, qubits are represented by two complex numbers (coefficients) that describe their superposed state.
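To make the two-coefficient description concrete, a single qubit state can be sketched numerically. The snippet below is a minimal Python/NumPy illustration (not tied to any quantum SDK): it stores the two complex amplitudes, checks that they are normalized, and derives the measurement probabilities for outcomes 0 and 1.

```python
import numpy as np

# A qubit |psi> = alpha|0> + beta|1> is described by two complex
# coefficients whose squared magnitudes sum to 1.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
state = np.array([alpha, beta])

# Normalization: |alpha|^2 + |beta|^2 must equal 1.
norm = np.sum(np.abs(state) ** 2)

# Measuring the qubit yields 0 or 1 with these probabilities.
p0, p1 = np.abs(state) ** 2
```

Here the equal-magnitude amplitudes give a 50/50 superposition; any other normalized pair of complex numbers is an equally valid qubit state.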

The power of quantum computing comes from the fact that the information space handled by qubits grows exponentially with each additional qubit. This is in contrast to classical computing, where adding bits has a linear effect on memory size. For example, a system with 100 qubits can handle an information space of 2^100 complex numbers, which is significantly larger than classical systems can manage.
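The contrast with classical memory can be quantified directly. The sketch below (assuming 16 bytes per complex amplitude, i.e. two 64-bit floats) computes how much classical RAM a full n-qubit state vector would need; the exponential blow-up is why even ~50 qubits are out of reach for brute-force simulation.

```python
# Classical memory needed to hold the full state vector of n qubits:
# 2**n complex amplitudes, here assumed to take 16 bytes each.
def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    return (2 ** n_qubits) * bytes_per_amplitude

# 30 qubits already require 16 GiB of RAM; each extra qubit doubles it.
gib_for_30 = statevector_bytes(30) / 2**30
```

By the same formula, 100 qubits would require 2^100 amplitudes, vastly more storage than exists on Earth.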

The two speakers also touch upon the Schrödinger equation, which describes the wave-like behavior of quantum objects such as qubits. When the two waves corresponding to the two energy levels of a qubit are combined, they form a third wave. This phenomenon is central to the concept of superposition in quantum mechanics.

Ezratty explains that there are two main advantages of quantum computing: speed and space. Quantum computers can explore a vast computational space and solve complex problems that scale exponentially with the number of variables. However, the speed advantage comes from the algorithms used and the ability to reduce the number of operations required for computation compared to classical computing.

Another point of discussion is the difficulty in feeding data into a quantum computer. This is due to the slow nature of quantum gate operations and the limitations of current quantum systems. Ezratty mentions that hybrid algorithms, which combine classical and quantum computing, are being used to address this issue.

Noise is another significant challenge in quantum computing. Current qubits generate a considerable amount of error, and error correction is necessary to make the computations useful. There are two ways to address this: shallow algorithms, which have a low number of gates and operations and can tolerate noise, and error-correcting codes that use redundancy to correct errors at each operation.
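The appeal of shallow algorithms can be seen with a back-of-the-envelope model. Assuming each gate fails independently with probability eps (an idealized error model, not a claim about any specific hardware), the chance that an entire circuit runs error-free decays exponentially with the gate count:

```python
# Probability that a circuit of g gates completes without any error,
# under an idealized independent-error model with per-gate error eps.
def circuit_success(g: int, eps: float = 1e-3) -> float:
    return (1.0 - eps) ** g

shallow = circuit_success(100)      # ~0.90: a shallow circuit mostly survives
deep = circuit_success(100_000)     # essentially zero without error correction
```

With a per-gate error of 10^-3, a 100-gate circuit still succeeds about 90% of the time, while a 100,000-gate circuit essentially never does, which is why deep algorithms require error-correcting codes.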

Quantum error mitigation is another approach being explored, which uses machine learning to train the system to understand and correct errors after the entire computation is done. This method is expected to extend the capacity of noisy quantum computing systems, although the threshold for useful quantum computing for enterprise applications has not yet been reached.

The interview also touches upon the types of algorithms that can be implemented in near-term quantum systems. These include chemical simulations, optimization algorithms, and quantum machine learning. However, each of these application areas has its own set of challenges and limitations.

Ezratty emphasizes that the science of understanding quantum speed-ups is still in the making, as there is a significant gap between theoretical knowledge and practical implementation. While progress is being made, a lot of work remains to be done to create truly useful quantum computers capable of providing real-world advantages over classical systems.

The conversation then turns to the interaction between qubits and classical electronics. Qubits, the basic units of quantum computing, can be controlled by classical electronics, with photons being sent to the qubits to change their state. The discussion then moves to the need for extremely low temperatures for quantum computing. Most quantum computing technologies require cold environments, with superconducting qubits needing around 15 millikelvins. The cooling process can be complex and requires a multi-stage approach.

Silicon qubits, or silicon spin qubits, are mentioned as an alternative that can operate at slightly higher temperatures, between 100 millikelvins and one Kelvin. Another technology discussed is the control of individual photons at room temperature using waveguides. While cooling is still necessary at both ends of the system, it is not required in between.

The topic then shifts to neutral atoms, which can be cooled and positioned using lasers in a technique known as the magneto-optical trap (MOT). This process results in a temperature in the nano-Kelvin range, although cooling is still necessary for the pump that removes atoms from the chamber.

Another quantum technology discussed is NV centers, which have potential applications in computing and sensing. An Australian company, Quantum Brilliance, has developed a five-qubit system that operates at room temperature, although its scalability is uncertain.

The conversation highlights the complexity and diversity of quantum technologies, with many different types of qubits and cooling requirements. Ezratty emphasizes the importance of meeting with a diverse range of scientists, engineers, and computer scientists to gain a better understanding of the field.

Ezratty highlights the importance of reading scientific papers and seeking diverse perspectives from experts in various subfields of quantum technology. Despite the complexity and constant evolution of the field, it is essential to continually update one’s knowledge to keep pace with developments.

Ezratty shares his experiences in learning about quantum technology and meeting various scientists and experts in the field. He emphasizes the importance of serendipity in connecting with people who can provide valuable insights and information. As he navigates the quantum technology landscape, Ezratty looks for clues in scientific papers and vendor communications to understand the state of the art.

In the interview, Vermorel draws parallels between the quantum technology field and his own area of expertise, supply chain optimization. Both fields feature a vast array of niche perspectives, vendors, and competing philosophies. Vermorel highlights the importance of having an adversarial mindset when evaluating claims and looking for untold costs or hidden drawbacks.

Ezratty points out that understanding the metrics used in quantum technology is crucial in assessing the quality of qubits and the performance of quantum computers. However, finding consistent metrics can be challenging due to differing measurement techniques and benchmarks in the field. He also notes that the recent availability of quantum computers on the cloud has made it easier for researchers to benchmark and compare different systems in a consistent manner.

Despite the complexity of the field and the difficulties in understanding scientific publications, Ezratty believes that the quantum technology ecosystem is fairly open. He acknowledges that vendors may sometimes exaggerate their performances but maintains that the field is generally accessible to those willing to invest time and effort in learning about it.

Vermorel and Ezratty discuss the impact of large corporations on the field, noting that they often attract venture capital but can also be prone to corporate distortions. They also touch on the role of market analysts, who often become biased due to financial incentives from vendors, potentially distorting the industry’s development.

Ezratty explains how some quantum computing technologies might offer practical advantages within the next few years, such as analog quantum computers. However, the timeline for widespread adoption remains uncertain, with many experts estimating 10-15 years before the technology reaches its full potential.

One of the major challenges in scaling quantum computing technology is moving from hundreds of qubits to millions, which poses significant engineering and energy challenges. The field is characterized by a wide range of competing technologies, making it difficult to predict which will ultimately prove successful.

Ezratty points out that there is currently a great deal of creativity and innovation within the field, particularly in error correction techniques. Despite the skepticism surrounding the feasibility of achieving millions of entangled qubits, he believes that the ingenuity of engineers and scientists may eventually lead to breakthroughs.

The interview covers the importance of staying informed about developments in quantum technology. As the field is constantly changing, being educated on the fly is crucial for understanding the significance of new announcements and breakthroughs. Ezratty shares his personal interest in the field and his plans for future projects, highlighting the intellectual challenge and excitement surrounding quantum technology.

Olivier mentions that he is currently working on the sixth edition of his book, writing scientific papers, and engaging in activities that empower the French and European quantum ecosystem. He is also involved in teaching, training, and running two podcast series with Fanny Bouton, who has become a quantum leader at OVHcloud. Olivier’s ultimate goal is to contribute to the success of the French and European quantum ecosystem.

Both speakers emphasize the importance of writing as a way to structure and share thoughts. Joannes believes that the exercise of writing is incredibly beneficial, even if the material is never published. This belief is echoed by Olivier, who shares some of his organizational techniques, such as using a Word document with the same table of contents as his book to keep track of updates and new information.

Olivier also maintains a variety of databases, including a list of Nobel Prize winners in quantum physics, quantum companies, and qubit fidelities. He believes that staying organized and reusing content in a clever way is crucial when working independently.

As for suggestions to CEOs and CTOs of companies facing opaque fields like quantum computing, Olivier recommends reading his book to get an idea of what quantum computing could bring to their businesses. He emphasizes the importance of not solely relying on the press but seeking specialized opinions and diversifying sources of information.

Attending conferences, watching educational videos on YouTube, and participating in events that provide in-depth understanding of quantum technology are also recommended for those interested in the field. Ultimately, Olivier believes that a good grasp of the current state and potential of quantum systems can be achieved through various educational formats, such as talks or presentations lasting between one and two hours.

Full Transcript

Joannes Vermorel: Welcome to Lokad TV. I’m Joannes Vermorel, the CEO and founder of Lokad, and today I have Olivier Ezratty as a guest. Olivier has been a technologist and a futurist for more than two decades, as far as I can tell, and I say that as a high form of praise. He has a very peculiar methodology, which consists of picking a very important and broad topic and trying to make sense of it. The topic of the day for this episode will be quantum computing and enterprise software. It just so happens that Olivier, in his very peculiar style, has produced a couple of years ago an absolutely gigantic report of 1100 pages plus about all those quantum technologies.

I will actually confess right away to the audience that my own knowledge of quantum mechanics is about the first 200 pages of a book called “Introduction to Quantum Mechanics” by Griffiths, which is basically a textbook intended for students. So, I won’t claim that I am an expert, but we will work through this journey. And to get started, maybe Olivier, could you tell us a little more about how, as I understand, something like five to six years ago, you went into this journey of quantum? Did you decide one morning, “I am going to become an expert in the field” and then end up producing probably the biggest compendium that I’ve ever seen on this topic, which is a massive report, but it’s more like a massive book, actually?

Olivier Ezratty: Well, I didn’t plan what I would do in quantum. It started about eight years ago, in 2015, when I discovered the fact that Google, NASA, and D-Wave were communicating about this kind of weird computer that D-Wave was producing. They were communicating about some stuff being run 100 million times faster than a regular laptop, so it kind of puzzled me. What I found surprising back then, and I think it’s still true today, is that all the scientific papers that were describing that computer and what Google was doing with it were unbelievably complex. I was sure that all those people writing about this computer didn’t understand anything about it, so I said to myself, “Maybe someday I will understand that.”

So, I decided in 2016 to be in a position, by 2018, to deliver just one hour of a very simple conference. I teamed up with a friend named Fanny Bouton, and I will tell our story later. We did that conference in 2018, and then I wrote 18 posts on my blog. Those posts became the root of my book, which was 300-350 pages. Then I switched to English with the fourth edition, followed by the fifth edition, published in September 2022, which indeed has more than one thousand pages.

In between, I’ve done tons of things in that world. I’m working with researchers, I’m a teacher in different schools, I’m working with the government on various activities, I’m a trainer in corporations, I’m an advisor in many situations, and I’m an expert for Bpifrance, among other things. I’m even working with the government at the ministerial level to design future iterations of the French plan.

The most important thing I also launched last year is the so-called “Quantum Energy Initiative.” It’s a research initiative launched with a couple of friends in research, particularly Alexia Auffèves, who’s a dear friend based in Singapore now. We launched this initiative to make sure that people creating quantum computers and other quantum technologies care about the environmental impact of those technologies early on, in the design phase. So we want to make sure that a scalable quantum computer won’t consume more power than what comes out of a nuclear plant. And there’s some work to do that.

Joannes Vermorel: I was reading your report, of which, by the way, I read about the first 300 pages, and then I actually skimmed the rest. I jumped to the last section on quantum sensing, which is very interesting. So I apologize to the audience, I’m doing something that many people do in talk shows, which is to talk about books they haven’t read. So, I’ve read it partially. One of the things that was very interesting was, I knew intellectually, but I had never connected the dots, that transistors are actually a quantum effect. This is the field effect, and that’s what you argue in the very first section of your report.

Joannes Vermorel: Although quantum computing has become more recently a trendy buzzword, it turns out that when we think about regular computing, it is already rooted in the first quantum revolution that dates way back to the 1950s. The hard drives that we have with giant magnetoresistance are also a quantum effect. That’s in the spin-based drives, the recent ones, the ones that have terabytes and above of storage. All existing technologies are based on quantum physics.

Olivier Ezratty: Yes, I mean all of it. Even astrophysics, like the James Webb telescope, is using quantum physics. Fiber optics for telecommunication is quantum physics. Everything is quantum physics at the electron, atom, or photon scale. The phenomena are not the same, though. The phenomena from quantum physics that we are using in existing technologies are not the same that we want to use in quantum computing. That’s where there’s a small difference. In quantum physics that is used today, we mostly use the fact that we understand well the way light interacts with matter. So a photon displacing an electron and creating current, that gives you a solar panel, for example.

Olivier Ezratty: In transistor technology, there’s a very strong understanding of the energy levels in semiconducting materials like silicon. In quantum technologies of the second revolution, particularly quantum computing, we use three very specific mechanisms that we didn’t use so far. One is the superposition of quantum states, which is a real phenomenon with a mathematical and a physical interpretation that’s quite difficult to figure out, by the way. The second is entanglement, the fact that some particles can have a common past and a common future. They form like a single particle, and that is the source of a lot of power in computing, communication, and even sensing.

Olivier Ezratty: Then we have the fact that we now experimentally can control individual nanoparticles. We couldn’t do that in a transistor with billions of electrons moving in and out, or in a laser with billions of photons. Now we are able to generate, control, and measure a single electron, a single photon, and a single atom. We can even control one atom in a vacuum with a laser. That’s new, and that’s what we do now in quantum technologies.

Joannes Vermorel: Yes, although my own understanding is that even when you start to try to understand what exactly one atom is, it starts to become a bit blurry. You know, what’s one? It can be in one position, but it moves a little bit. It’s impossible to have a non-moving particle because it’s always moving a little bit. Otherwise, the Heisenberg principle wouldn’t work. I looked at quantum physics and discovered that it’s a very broad field. The most amazing thing I discovered is that vacuum doesn’t exist.

Olivier Ezratty: Yes, that’s right. It means that there is no such thing as nothingness in space anywhere in the world. For example, if you do an experiment with a closed box, you use a so-called ultra-high vacuum pump and remove all the atoms. Then you cool it to a very low temperature, let’s say a couple of nano-Kelvin, to make sure there’s nothing inside – no microwaves, no electromagnetic waves, nothing. If you measure inside that, you will see that there are some particles being created and destroyed. It’s called vacuum fluctuations. And this nothingness doesn’t exist, which is so amazing.

Joannes Vermorel: That’s fascinating. One of my own particular interests is the history of science, and the very funny thing is that this largely rehabilitates the concept of ether. At the beginning of the 20th century, people got rid of that idea to make way for the vacuum, because there was this idea that nature doesn’t like a vacuum. So people managed to get rid of this old school idea and say, “Okay, we have an actual vacuum now.” And the ether, which was the old term, got pushed as being basically old, obsolete science.

The interesting thing is that we went from “nature doesn’t like a vacuum, so we need this ether” to another generation of scientists saying “no, we have a vacuum that explains tons of things.” And it did. And now we are back to, “Well, it turns out that when you measure things even more precisely, you realize that the vacuum was actually a better understanding than what people thought was the ether before.”

Olivier Ezratty: Exactly, because the vacuum fluctuations are at a very low, quantum limit. It’s a very low phenomenon. You can also use an experiment with the Casimir effect, where two gold plates are very close together. If you put those two plates in a vacuum at a very low temperature, they will be attracted to each other, and this is due to that vacuum fluctuation. But it’s not a kind of spontaneous energy, because if they stick together, then you have to pull them apart and add some energy to separate them. So the second principle of thermodynamics is always preserved; it still works. But still, you have this kind of permanent movement, and it explains why you can’t have a particle like an atom or an electron that doesn’t move. It’s always moving a little bit.
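For readers who want to put a number on the Casimir effect Ezratty describes, the standard ideal-plate formula for the attractive pressure, F/A = π²ħc / (240 d⁴), can be evaluated directly. The Python sketch below assumes perfectly conducting parallel plates at zero temperature (the textbook idealization, not a real experimental setup):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light in vacuum, m/s

def casimir_pressure(d_m: float) -> float:
    """Attractive pressure (Pa) between ideal parallel plates separated by d_m meters."""
    return math.pi ** 2 * HBAR * C / (240 * d_m ** 4)

# At 100 nm separation the attraction is already around 13 Pa,
# and it grows as 1/d^4 as the plates get closer.
pressure_100nm = casimir_pressure(100e-9)
```

The steep 1/d⁴ dependence is why the effect is only measurable at sub-micron separations, as in the gold-plate experiments mentioned above.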

Joannes Vermorel: So, if we go back to this idea of quantum computing and enterprise software, one of the things that strikes me as an enterprise software entrepreneur is that my peers generally have a disinterest in mechanical sympathy. What I mean by mechanical sympathy, and again I’m not speaking of people in general but specifically the field of enterprise software, is that due to the fact that computing hardware has been progressing for decades so frantically, there has been a general disinterest in computing hardware. It was really, I would say, cause and effect. If you have computing hardware and you expect it to get a thousand times better within a decade, and you, as a vendor, don’t need to change anything in what you’re doing, then why should you care? You just sit down, enjoy the ride, let other people do their magic, and your software, no matter how inefficient, will solve the problem for you. That was, I believe, the mainstream attitude for a lot of people doing enterprise software and engineers.

Olivier Ezratty: It still is, for good and bad reasons. The good reasons are that the idea of a quantum computer was born about 40 years ago, and the progress made was important but not at the same speed as classical computing. If you take the first computer, created back in 1946, the ENIAC, and add 40 years, it makes 1986. In 1986, you had microcomputing and the Mac, so there was a ton of progress. We went from the mainframe to the mini computer, then to workstations and PCs all within 40 years. In the same timeframe for quantum, we still have prototypes. It’s more sluggish and slower, but if you talk to a physicist, they will tell you that there has been tremendous progress in the last 10 years, just not at a scale that makes it transformative for the industry.

The big question is when we will reach the so-called quantum advantage or threshold, which is a situation where quantum computers will be able to do things that you can’t do efficiently on a classical computer. We don’t really know. Some paradigms may bring value in fewer than five years, while others may need more time, maybe 10 to 20 years. There’s a lot of uncertainty. One of the reasons it makes sense to be interested in quantum computing is that you don’t know at which pace it will progress or transform industries. If and when it works, it may be hugely transformative and change many industries where you manage complicated optimization problems. Those problems could be solved more efficiently with quantum computers. So, you have to at least learn and understand where we are, even if you are skeptical or cautious about the pace of advancement in that industry.

You have to be able to decipher announcements from IBM, Google, and others. If you don’t have the intellectual skills to do that, you may miss something or be misled into thinking it works better or worse than it actually does. You need to be educated on any new trend, just as you need to be educated on the metaverse or cryptocurrencies, even though you may not need to get involved in them.

Joannes Vermorel: I consider myself one of those entrepreneurs with a deep mechanical sympathy. I can’t speak for every single employee at Lokad, but personally, I have a deep interest in all the physical layers that power the stuff we do. This understanding, I believe, is important, and it has tons of implications for the way we approach problems. When my gut feeling tells me that the hardware is going to make fantastic progress in some area, I say we can afford an approach that is completely different. In other areas, however, we might be stuck. For example, the speed of light is most likely not going to be improved anytime soon, maybe never. This has very real consequences for what you can do in terms of distributed computing.

When we process so much data, there are things that are most likely never going to be very viable, like spreading our computational resources over the globe. For plenty of reasons, it’s much easier to concentrate all those things in one place. There are many reasons to think that there are some hard limits where it will never become advantageous to do it any other way.

Now, what is interesting, and I was reading your report, is that my first misunderstanding was that I was thinking in terms of quantum computing, although the proper term would be quantum technologies. There are several things that were very interesting to me, such as quantum communication, telecommunications, and quantum sensing. Can you give us a little bit of a survey of what are the grand ambitions to improve those frontlines? What are the frontlines where people are using this understanding of quantum mechanics to say we are potentially going to do things that were maybe not impossible before, or maybe they were, or to do it way better?

Olivier Ezratty: The simplest way to describe quantum computing is that it’s supposed to enable us to do some computation that you can’t do classically, so maybe faster or better at some point, and maybe also with less energy being consumed. That’s one of the advantages of quantum computing.

Quantum communication, on the other hand, has two sides. It can be perceived as a way to improve the security of communication, because one of the technologies within quantum communication is so-called QKD, or Quantum Key Distribution. It’s a way to distribute encryption keys that is more secure than the classical digital keys we use with RSA-style protocols on the open internet. But beyond that, quantum communication is far more sophisticated than just security. It will help, in the future, create a so-called Quantum Internet, a quantum network that connects quantum computers together, and at some point it will enable distributed quantum computing.

It can also enable the creation of more precise quantum sensors, because if you have different quantum sensors connected continuously through a quantum network, you can improve them. Quantum sensors improve the precision of measuring whatever physical parameter you want to measure: gravity, pressure, temperature, time, frequency, magnetism. Everything can potentially be measured with better precision thanks to quantum sensing. So you have tons of applications there.

Joannes Vermorel: That’s interesting because, again, we have these bodies of technologies which are chasing very different goals. I mean, very different ambitions.

Joannes Vermorel: Yes, we have computing, which is really about a new algorithmic paradigm. We want to have the physical substrate for different kinds of problems. But we also have quantum communications, which allow entirely new classes of security measures. That’s interesting because it goes beyond security.

Olivier Ezratty: Oh yes, and it goes way beyond.

Olivier Ezratty: Security is just one aspect. There are other, more classical solutions to improve security, such as post-quantum cryptography. But quantum communication, beyond quantum security, is way more interesting. It’s far-fetched and more in the future because there are a lot of technologies that don’t exist yet, like quantum repeaters. When that works, we’ll be able to do very powerful things, like communicating between two quantum computers. It can enable everything all together.

First, you can improve the power if you have two quantum computers connected with a quantum link. It will multiply the power of those two systems more than just adding – it’s more exponential, which is completely unlike what you get with classical computers. Second, if you have two quantum computers connected to a quantum link, you improve the security of that connection. If somebody intercepts the fiber optics that connect those systems, they can’t get anything. It’s the best obfuscation system that can enable safe communication between two parties.
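As a rough illustration of that multiplicative effect, here is a toy back-of-the-envelope in plain Python. The function name is made up for illustration; this is not a real benchmark, just the arithmetic behind "more than adding":

```python
# Toy illustration: the state space of n entangled qubits holds 2**n complex
# amplitudes, so linking two machines multiplies (not adds) their spaces.

def state_space_size(n_qubits: int) -> int:
    """Number of complex amplitudes an n-qubit register can hold."""
    return 2 ** n_qubits

two_separate = state_space_size(50) + state_space_size(50)  # two isolated 50-qubit machines
one_linked = state_space_size(100)                          # one entangled 100-qubit system

# The linked system is astronomically larger than the sum of the parts:
ratio = one_linked / two_separate
print(f"{one_linked} vs {two_separate} (ratio ~ {ratio:.3g})")
```

The ratio works out to 2^49, roughly 5.6 x 10^14, which is the sense in which a quantum link "multiplies the power" rather than adding it.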

You could have a lighter quantum client connected to a large quantum system on the other end of the line, and it would enable very safe communication. By the way, there’s a protocol called Blind Quantum Computing that does exactly that. It was invented by a couple of researchers, including one who lives in France. Her name is Anne, and she was a co-inventor of that protocol more than 15 years ago.

Joannes Vermorel: And quantum sensing is also something that I didn’t even realize was a thing.

Joannes Vermorel: When you say more precise measurements, could you give us a sense of the usual things that we want to measure, like magnetism or gravity? Do you see potential in this area on a scale that is incredibly small?

Olivier Ezratty: What I know about quantum sensors is that they are bulkier than the existing IoT sensors we have right now, but they add several orders of magnitude of precision. So, in some cases, they are very useful. If you can measure gravity with much better precision, it can help you detect what’s beneath the ground, so it can be useful in many situations. A typical situation: how do you detect tunnels when you reshape your city? How do you detect water? It could also be used to detect oil, even though I’m not sure we should look for more oil. There are even potential military applications, because if you couple highly precise magnetic detection with gravity detection, maybe you could detect a nuclear submarine under the sea. That could change the nuclear deterrence strategies of many countries. There are many implications there. Magnetism could also be used at the nanoscale. There are so-called NV center sensors, which use a very small defect in a diamond structure: one missing carbon atom, a neighboring one replaced by a nitrogen atom, and a couple of free electrons moving in and out of the hole. That can be used with lasers to detect very small changes in magnetism, and it can be used for MRI, for example. It can be used to detect variations of the electromagnetic field in the brain. It could be used to do biological exams at the atomic level. So there are huge advancements both at the nanoscale, at the atom level, and at the macro scale with gravity sensing.

Joannes Vermorel: As maybe a little bit of a tangent, I was smiling internally when you said, “Oh, we have this technology, and it could detect submarines,” a use case I had never thought about. But yes, if you have a mass detector, something that would act a bit like an infrared camera but give you the mass density of the stuff around it, it makes perfect sense.

Olivier Ezratty: By the way, there are more positive usages. If you put a quantum gravity sensor in a satellite that’s moving around the Earth, you can do a lot of geodesic studies. You can understand how the Earth is moving. You can detect the impact of climate change on the Earth’s surface and water. It can have a lot of very positive use cases for understanding what’s happening on Earth.

Joannes Vermorel: Exactly. I mean, most scientific progress has been driven, to a large extent, by the availability of new classes of sensors. And that brings me to a small tangent. Olivier Ezratty has been known in France for decades now, and before doing reports on quantum tech, Olivier wrote gigantic reports about startups and about AI. The one on startups was profoundly important for me; it was a highly influential document when I was actually starting Lokad. And I think one of the things that makes your documents so unusual is this super holistic take, which completely blurs the usual lines. For example, what I knew about quantum mechanics came from a book, “Introduction to Quantum Mechanics” by Griffiths, which is beautifully written. It starts from the first page with the Schrödinger equation and just derives tons of stuff from that. It’s an incredibly beautiful approach, but also incredibly narrow. No offense to the professors who do that; they are doing a beautiful job. What is very interesting with your reports is that you bring so many different perspectives, as if you were trying to collect as many angles as you can: history, economics, incentives, regulation, sustainability, mathematics, and so on. You have this completely diverse structure, and to connect it to enterprise software, I believe that’s very interesting.

In the world of enterprise software, and most of the audience of this channel works in supply chains, supply chains are always operated through layers of enterprise software. You don’t interact physically with the supply chain; you go through tons of layers of indirection to get things done. One of the problems is that you have all sorts of layers of discourse from the vendors, who all have their stuff to say. I have been interested in quantum computing for a long time, and I see that a lot of claims are made, sometimes grandiose claims, like Google having achieved Quantum Supremacy. Just by the sound of the term, it looks impressive. Supremacy, okay.

Olivier Ezratty: They didn’t invent the word, by the way.

Joannes Vermorel: Oh, yeah?

Olivier Ezratty: I spoke to the guy who invented the word about two weeks ago. His name is John Preskill; he’s a professor and a very famous academic at Caltech in California. He coined that word, I think, back in 2011. Google used that wording, but it describes a situation where a quantum computer is able to do some calculation that you can’t do classically in a reasonable time, regardless of whether it’s useful. It happens that the quantum supremacy experiments from Google, and others from China, are not doing a calculation as we know it in enterprise software. There’s no data at the input, no data at the output; it’s just a kind of random number generator, and you have to check that the sampling of the generator is about the same in classical emulation as in the quantum system. But there’s no real calculation.

Interestingly, when Google had to use its own system to do a useful calculation, they couldn’t use the 53 qubits they used for the supremacy experiment, which, by the way, produced a good result only 0.14% of the time. They could use only up to 15 qubits out of the 53, and 15 qubits can be emulated more efficiently, meaning faster, on your own laptop. So that’s interesting. On one hand, they said they were doing stuff that would take thousands of years to execute on a classical computer, even the largest one; on the other hand, when they do useful stuff, it sucks.

Joannes Vermorel: Maybe for context for the audience: a bit is the classical unit, just a zero or a one, and it’s basically the fundamental building block of the lowest-level information you can create.

Olivier Ezratty: Exactly.

Joannes Vermorel: This is a very discrete, elegant view rooted in basic mathematics. I think the audience has a very good grasp of what a bit is, but maybe not; most programmers have no idea how a processor works.

Olivier Ezratty: Yes, but let’s assume that there is some general understanding of a bit in the audience, just for the sake of this episode.

Joannes Vermorel: Exactly, so we have the basic logic and whatnot. When we go into the realm of qubits, there is so much confusion, because I’ve read online everything and its contrary about those qubits. Maybe you could give us the gist, the salient insight into what makes a qubit a qubit, and how it completely diverges from its classical counterpart.

Olivier Ezratty: Interestingly, a qubit can be described as a mathematical object or a physical object, but they are intertwined; from the physical standpoint, it’s the same thing.

Let’s start with the physical aspect. A qubit is a so-called TLS, a two-level system. It’s a quantum object that has two levels, like an atom that has two energy levels: a ground state with no excitation and an excited state. In the real world, there are many different excited states in an atom. You can control these two levels of energy with lasers or other means. For example, you can control an electron’s spin, which is quantized, so it can only be up or down along a given direction, giving you two values. If you take a photon, it can have different polarizations.

There are also compound objects like superconducting loops. A superconducting qubit is not a single object; it’s billions of electrons circulating in a loop. In that loop, which is kept at a very low temperature, there’s a barrier called a Josephson junction. This barrier enables a tunneling effect, which results in a strange phenomenon where you can have a superposition of two different energy levels, or of the phases and amplitudes of the current circulating in the loop, creating a two-level system.

Being a quantum system, a qubit can be superposed in two different states. You can have simultaneously the ground state and the excited state of an atom, a superposition of the spin up or spin down of an electron, or a superposition of different polarizations of a photon.

Now, if you look at the mathematical part, the superposition can be expressed as a weight for the zero and for the one, which correspond to the ground state and the excited state. These coefficients, usually called alpha and beta, happen to be complex numbers and must be normalized: |alpha|² + |beta|² = 1. So you can describe the superposition of those two states in a qubit with two numbers. Frequently, the state is depicted on the so-called Bloch sphere, a sphere where a vector describes the state of your qubit. When the vector is at the North Pole, you have a zero; when it’s at the South Pole, a one. All intermediate positions, like points on the equator, correspond to superposed states with some part of zero and some part of one. If you are in the southern hemisphere, you have more one than zero; in the northern hemisphere, more zero than one. If you turn around the equator, you change the phase of the signal. Actually, I found it interesting to compare a qubit with an electronic signal. When you manage a networking signal, a sinusoidal signal, you have a phase and an amplitude, and a qubit is more or less like this: it has a phase and an amplitude, and those correspond to the two values that describe your qubit.
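To make the two-numbers picture concrete, here is a minimal sketch in plain Python. The helper `is_normalized` is hypothetical, not any quantum library’s API; it assumes the standard convention that |alpha|² and |beta|² are the probabilities of measuring 0 and 1:

```python
import cmath
import math

# A qubit state as a pair of complex amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1.

def is_normalized(alpha: complex, beta: complex, tol: float = 1e-9) -> bool:
    """Check the normalization condition |alpha|^2 + |beta|^2 = 1."""
    return abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1.0) < tol

# Equal superposition of |0> and |1>, a point on the Bloch sphere's equator:
alpha = 1 / math.sqrt(2)
beta = 1 / math.sqrt(2)
assert is_normalized(alpha, beta)

# Moving around the equator changes only the phase of beta, not the
# measurement probabilities -- the "phase and amplitude" analogy above:
beta_rotated = beta * cmath.exp(1j * math.pi / 3)
assert is_normalized(alpha, beta_rotated)
print(abs(beta) ** 2, abs(beta_rotated) ** 2)  # same probability of measuring 1
```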

So, what is this superposition? Superposition comes from the fact that all the quantum objects you deal with in quantum physics can behave as a particle or as a wave, depending on how you observe or manipulate them. The best way to understand what a qubit is, is to look at the wave behavior of those quantum objects. It’s easy to understand that if you have two waves, corresponding to a ground state and an excited state, you can add the two waves, and that makes a third wave. That is based on Schrödinger’s equation, by the way. The ground state is one solution to Schrödinger’s equation, the excited state is another solution, and since Schrödinger’s equation is linear, a linear combination of those two solutions is another solution.

So that’s the mathematical view of Schrödinger’s equation: a linear combination of two waves makes another wave, like two piano notes played together make a new sound. But that doesn’t tell you where the power comes from. The power comes from the fact that if you have several qubits next to each other and you can connect them, the data space you are handling grows exponentially with the number of qubits. So, if you add a qubit, then a third qubit, a fourth qubit, each time you multiply the data space by two. Let’s say you have 100 qubits. That compound quantum object handles an information space whose size is 2^100 complex numbers. It’s a lot of data, but it’s analog. You manage an analog space of data, but it’s a huge space that grows much faster than with classical bits.
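The doubling described above can be sketched with a back-of-the-envelope memory estimate, assuming 16 bytes per complex amplitude (two 64-bit floats); the function name is made up for illustration:

```python
# How much classical memory would it take to store an n-qubit state?
BYTES_PER_AMPLITUDE = 16  # one complex number = two 64-bit floats

def classical_memory_bytes(n_qubits: int) -> int:
    """Bytes needed to store all 2**n complex amplitudes classically."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (10, 30, 50, 100):
    print(n, classical_memory_bytes(n))

# Each extra qubit doubles the requirement:
assert classical_memory_bytes(31) == 2 * classical_memory_bytes(30)
```

At 30 qubits this is about 16 GB, already a big workstation; at 50 qubits it exceeds the memory of any existing machine, which is one way to see why emulation stops around that size.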

Joannes Vermorel: I think one thing that really differs from the classic paradigm is that when people say a bit, they think of something where adding bits is very much an additive process. When you add bits of memory, it’s linear. You have twice as much memory, that’s cool, you can have twice as many Slack tabs open on your computer, whatever. But it’s fundamentally linear. And obviously, at face value, we have computer systems where people don’t even speak in bits, because the numbers would be so gigantic. First, they speak in bytes, which are packs of eight, and then usually not even in bytes but in megabytes, gigabytes, or terabytes. The numbers we are used to are absolutely gigantic. And because it takes gigantic numbers in the classical sense to do things of real interest, you’re not impressed when someone says, “I have something that would be 53 bits.” People would say, “Well, that’s really nothing. At the time of the ENIAC, maybe, but nowadays you have far more memory.”

Olivier Ezratty: Yeah, exactly. That was already thousands of bits.

Joannes Vermorel: So, it feels underwhelming, but that’s missing the point. The point, if I understand correctly, is that when people say they have 20, 50, 60, or 100 qubits, they mean they have a system where the qubits are all completely entangled. They are part of one system, and they can do stuff together. Two systems of, say, 50 qubits each are absolutely not the same as one system of 100 qubits.

Olivier Ezratty: Exactly. But there are a lot of misconceptions there. For example, you can be misled into thinking that the speed of quantum computing comes from the size of the computing space. That’s not true. There’s a space advantage and a speed advantage, and they are connected, but they are different. Indeed, if you have n qubits, you have a computing space of 2^n complex numbers; to be precise, that makes 2^(n+1) real numbers, or floating-point numbers in computing terms. But that doesn’t explain why you can compute faster.

Computing faster means the number of operations, called quantum gates, does not grow as fast as in classical computing. The kinds of problems we are interested in solving with quantum computing are the problems that scale exponentially classically. There are many combinatorial problems that scale exponentially with the number of variables, and we want those problems to scale sub-exponentially in computing time on the quantum computer. So the number of operations has to scale, let’s say, linearly, logarithmically, or even polynomially, but not exponentially, on the quantum computer, where it scales exponentially on the classical computer. And then you have constants that may make the comparisons tricky, but still.
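The scaling argument can be sketched with illustrative numbers. The cost functions below are made up to show the shape of the comparison (exponential versus polynomial with a large constant), not real algorithm costs:

```python
# Toy comparison of operation counts: exponential classical scaling
# versus a hypothetical polynomial quantum scaling with a big constant.

def classical_ops(n: int) -> int:
    return 2 ** n  # combinatorial blow-up

def quantum_ops(n: int, constant: int = 1000) -> int:
    return constant * n ** 3  # made-up polynomial scaling, large overhead

for n in (10, 30, 60):
    print(n, classical_ops(n), quantum_ops(n))

# The constant dominates for small n (the "constants that make the
# comparisons tricky"), but the exponential loses eventually:
assert quantum_ops(10) > classical_ops(10)
assert quantum_ops(60) < classical_ops(60)
```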

The length of the algorithm is what determines the speed of the quantum computer. The algorithm uses a lot of entanglement, that is, connections between the qubits. You have to find a way to assemble an algorithm that will be efficient; that’s where the science of quantum computing lies, and it’s complementary to the size advantage.

And there’s another aspect that’s not a misconception but is not well known either. When you measure the result of your algorithm at the end, you get n bits, not n qubits. You get a 0 or a 1 for each of the 100 qubits that you have. So you get a small amount of information at the end: you manage a wealth of information, 2^100 complex numbers, during the computation, and at the end you get just 100 classical bits.
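A toy sketch of that collapse, in plain Python. This uses a hypothetical 3-qubit uniform superposition, and `measure` is an illustrative helper, not a real API: during the computation there are 2^3 amplitudes, but a measurement returns only 3 classical bits, sampled with probability |amplitude|²:

```python
import random

n = 3
# Uniform superposition over all 2**n basis states:
amplitudes = [1 / (2 ** (n / 2))] * (2 ** n)

def measure(amps, rng=random):
    """Sample one basis state with probability |amplitude|^2,
    returning the n classical bits you actually get back."""
    probs = [abs(a) ** 2 for a in amps]
    r, acc = rng.random(), 0.0
    for index, p in enumerate(probs):
        acc += p
        if r < acc:
            return format(index, f"0{n}b")
    return format(len(amps) - 1, f"0{n}b")

print(measure(amplitudes))  # e.g. '101': just 3 bits out of a 2**3 amplitude space
```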

So you ask, why all this trouble? It means that quantum computing power comes from the capacity to explore a large space of information but, in the end, to yield a small result. Let’s say you want to factorize a large number. Factorization uses a complicated algorithm that explores that space; Shor’s algorithm is one solution. In the end, it gives you a small number, a number made of bits.

That explains the shape of it. Also, in many algorithms, you run the computation several times and average the results to get a floating-point value for each of the qubits that you have. Another misconception is that quantum computing is good for Big Data.

Joannes Vermorel: Yes, that’s what I was getting at, because obviously it doesn’t work. The way I understand it, pretty much by design, unless we can somehow engineer qubit systems with something like tera-qubits, meaning billions and billions of qubits, which would be kind of insane, until we get there we have a bottleneck just to channel data into the system.

Putting data in a quantum computer is a big problem. It’s still a research field, because a quantum gate that puts some data in a qubit takes some time. It’s very slow, by the way. I was reading something like 10 kilohertz, that order of magnitude; IBM right now is between 2 kilohertz and 10 kilohertz, meaning the number of operation cycles per second. It’s not very fast.

Olivier Ezratty: Yes, and trapped ions, the alkaline-ion systems coming from IonQ or Honeywell in the US, are even slower. So it’s not very fast to put information there. Most of the time, we use so-called hybrid algorithms, where the complicated data preparation is done by classical algorithms, and then you feed the quantum algorithm with compressed data that doesn’t require a lot of gates to load. Then the computation explores that huge space of information and yields a small result.

But there is something else that’s bothersome right now. When we design a quantum algorithm, most of the time we think about a perfect mathematical object, this mathematical qubit doing linear algebra. It’s just matrix-vector multiplication: a matrix multiplying a vector and returning a vector. The problem is that the qubits we have today, and will have for some time, are noisy. They generate a significant error with each operation, so you only get the right answer on average.

In existing quantum systems, every operation generates about a 1% error. So after a few hundred operations, you have almost no chance of a good result. It’s simplistic, but it gives you an idea. Many algorithms that are supposed to bring some exponential acceleration need about 10^9 to 10^14 operations. So it won’t work if you have too much noise. We end up in a situation where we have to find a workaround for this noise.
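A back-of-the-envelope check of that point, assuming the 1% error figure means 99% fidelity per operation, so a depth-d circuit succeeds with probability roughly 0.99^d:

```python
# With fidelity f per operation, the chance that d operations
# all succeed is roughly f**d.

def survival_probability(fidelity: float, depth: int) -> float:
    return fidelity ** depth

print(survival_probability(0.99, 100))     # ~0.37 after just 100 gates
print(survival_probability(0.99, 10**4))   # effectively zero

# An algorithm needing 10**9 operations is hopeless at 99% fidelity,
# which is why shallow circuits or error correction are needed.
```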

There are two ways being explored. One is to find ways to create algorithms that can tolerate that noise; they are called shallow algorithms. These are algorithms with a low number of gates and operations, so the computation doesn’t reach the level where everything breaks down. The other way is to use so-called quantum error correction. Error correction is a way to create logical qubits, qubits that, viewed from the outside, have the good quality we need for a given algorithm. But to obtain that result, those logical qubits are made of many redundant physical qubits, and the redundancy is huge.

The current plans say that to get one very good quality logical qubit, we need about 10,000 physical qubits. And to get a quantum advantage from a pure mathematical standpoint, you should have at least 50 logical qubits, more like 100; it’s somewhere between 50 and 100. So 100 logical qubits times 10,000 physical qubits makes 1 million qubits. You need 1 million physical qubits of very good quality to create a really useful quantum computer that brings some quantum advantage. Right now, the latest record is from IBM. They announced last November, and will release online in a couple of weeks, a system that has 433 qubits. But those qubits have very low fidelities, probably less than 99% fidelity, meaning more than a 1% error for each operation, so it’s not suitable for anything very useful right now. It’s a step in a long IBM roadmap that makes a lot of sense, but it’s an intermediate step. So there’s a big difference between 433 qubits and 1 million very high-quality qubits that could implement error correction and deliver a real quantum advantage. There’s a lot of work to be done there.
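The overhead arithmetic above, spelled out (round numbers from the conversation, not a specification):

```python
# Error-correction overhead, back of the envelope.
PHYSICAL_PER_LOGICAL = 10_000   # physical qubits per good logical qubit
LOGICAL_NEEDED = 100            # logical qubits for a quantum advantage

physical_needed = LOGICAL_NEEDED * PHYSICAL_PER_LOGICAL
print(physical_needed)               # 1,000,000 physical qubits

# For scale, against IBM's 433-qubit system mentioned above:
print(physical_needed / 433)         # ~2300x more qubits still needed, quality aside
```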

There’s another solution that exists; it comes from IBM, by the way, and Google and others. They use a method created a couple of years ago named quantum error mitigation. Mitigation is different from correction. Correction is when you correct errors at each operation through redundancy. Mitigation is a bit different: it uses a lot of machine learning. You train your system to understand the error phenomena in your system, and you do a kind of post-selection correction. You run your computation many times, and after some training, you are able to correct the results, but only after the whole computation is done. It’s supposed to extend the capacity of so-called noisy quantum computing systems. The same guy who coined the “quantum supremacy” nickname coined another one, “NISQ,” which means Noisy Intermediate-Scale Quantum. He devised that name exactly five years ago, in 2018: John Preskill again. And so-called noisy systems with quantum error mitigation are supposed to enable useful quantum computing for enterprises. But we have not yet reached that threshold; it may be a couple of years away, and there are some uncertainties there.

Joannes Vermorel: It piqued my curiosity. And again, this is highly speculative. For me, shallow algorithms are probably the short-term path to just make it work, and error correction is the long-term view. But there might also be other approaches that say: I have a physical substrate whose operations are noisy, so let’s just play along with the noise, on problems where having errors is not that much of a problem. In machine learning, for example, there are papers showing that one of the bottlenecks of the modern flavor of AI, deep learning, is that you end up with matrix multiplications consuming a lot of resources. Some very interesting papers show that matrix multiplication is what we want from a mathematical perspective, but do we really need that precision operationally? It just so happens that these operations work in deep learning, but maybe the reason they work is only tangentially related to the fact that we are doing pure linear algebra. In some cases, we reduce power consumption by reducing the precision, from 16 bits to 8 bits, even down to 1 bit for limited systems. So, do you see places where people are playing with operators that are very powerful in their own ways, even if they are noisy? Do you see areas where people would solve with quantum computing problems that were not even deemed particularly interesting before? Very frequently, it’s the fact that you have the means to do something that makes it interesting.

Olivier Ezratty: I would say the answer is mostly no, and I will detail why. There are mainly three kinds of algorithms that you could implement in near-term systems. The first kind is chemical simulation, where you simulate Schrödinger’s equation, look at the orbitals of the electrons in a molecule, and try to understand the structure of the molecule. You need to find its ground state, the lowest energy level. That actually requires a lot of precision, particularly if you want to do better than classical computing, so it doesn’t tolerate noise well. There are already systems based on tensor networks and other techniques for chemical simulation on classical systems, which work well but are limited. If you want to simulate a more complicated molecule on a quantum system, you need very good precision.
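As a toy version of “finding the ground state”: for a Hamiltonian written as a Hermitian matrix, the ground-state energy is its lowest eigenvalue. Here is a made-up 2x2 example with NumPy (real molecular Hamiltonians are astronomically larger, which is the whole point of wanting a quantum computer for this):

```python
import numpy as np

# A made-up two-level Hamiltonian (Hermitian matrix):
hamiltonian = np.array([[1.0, 0.5],
                        [0.5, -1.0]])

# For Hermitian input, eigvalsh returns eigenvalues in ascending order,
# so the first one is the ground-state energy:
energies = np.linalg.eigvalsh(hamiltonian)
ground_state_energy = energies[0]
print(ground_state_energy)  # -sqrt(1.25), about -1.118
```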

The second kind is optimization algorithms, more or less binary optimization, like the SAT problem, max-cut, many different search algorithms, or the famous traveling salesman problem. Those algorithms don’t tolerate errors well either.

The last one is quantum machine learning (QML). Those are the kinds of systems where you may be tolerant of some form of noise. But as far as I know, there are some limitations in what you can do with quantum machine learning. One is that all these algorithms have a very large classical part and a very small quantum part. The second is that feeding the data into the system is very costly. So far, quantum machine learning is one of the areas where, in the near-term systems, there are not many proofs of a real speedup in computing time. It’s still an open research area.

That’s true for everything in quantum computing: understanding where the real speedups are, for each of the categories I described, and even for the algorithms designed for logical qubits, if those are ever created, is still in the making. You have a lot of theory, but the theory has to be confronted with the reality of the hardware, the overhead of quantum error correction, and all the other overheads. Even the length of the gates is a consideration, because depending on the type of qubit you use, the gate length is not the same.

For example, if you take a superconducting qubit, which is dominant today, the gate length for a single-qubit operation is about 20 nanoseconds, which is kind of short. But the gate length for a two-qubit gate is usually a couple of hundred nanoseconds. And then you have the electronics that control the gate, because the gate itself is not quantum. The gate comes from a signal emitted by a classical electronic device. The signal is a kind of microwave pulse with a given duration, generated by classical electronics, either at room temperature or sometimes at a very cold temperature. That system has lag; it takes some time to generate the pulse. And it has to be driven by a classical computer, because a quantum computer, in most cases, is a classical computer controlling classical electronics that generate some kind of photons. The photons can be in the microwave regime, say five gigahertz, or in the optical regime, in the visible or infrared spectrum, usually not UV. Those photons are sent to the qubit and change its state. Then we send other photons, at whatever frequency suits the qubit, look at what the qubit emits, convert that signal from analog to digital, and from that signal get an idea of whether it’s a zero or a one. So you have this loop between classical computing, classical electronics, and the qubit, one way and the other way around.

Joannes Vermorel: That brings me to a question, again just to test my understanding. I didn’t realize that quantum computers were so tightly controlled at the gate level by classical electronics. But to my understanding, if you want any of those beautiful emergent properties of quantum mechanics, you pretty much need to be at extremely low temperature.

Olivier Ezratty: It depends. Most of the time, it’s true, but there are a lot of differences between the kinds of qubits. The qubits that are the most stringent about temperature are superconducting qubits: you need about 15 millikelvin. That means whenever you have a classical electronic system controlling them, it’s going to add energy and may warm things up a little. That’s why you have to control the level of energy being spent at each of the layers, because you don’t get to 15 millikelvin right away. It’s a big cylinder, usually, so you start at 50 kelvin, then you go down to 4 kelvin, then 1 kelvin, then 100 millikelvin, then 50 millikelvin. There are many stages to get to that temperature, and you make sure that every time an electronic signal goes down that chain, you reduce the number of photons. You attenuate the signal to get rid of the extra photons at the right level, to make sure that what heats up the 15 millikelvin stage is reduced to the minimum. And you have amplifiers the other way: there is an amplifier at one stage that is used to amplify the signal for the qubit readout. But that’s superconducting qubits. Then you have the so-called silicon qubits, or silicon spin qubits. Those use semiconductor systems, they use the spin of an electron, and they can run at a higher temperature. But when I say higher, instead of 15 millikelvin it’s a range between 100 millikelvin and 1 kelvin. It’s still very cold. It’s way below liquid nitrogen, which is 77 kelvin. It’s even below liquid hydrogen. It’s liquid helium territory: helium becomes liquid a few kelvin above absolute zero, and there are two isotopes, helium-3 and helium-4, which liquefy at slightly different temperatures. So basically, it’s not your home freezer.
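
The staged attenuation Olivier describes reduces to simple decibel arithmetic. The stage temperatures below match the ones he lists; the per-stage attenuation values are illustrative assumptions for the sketch, not a real fridge wiring design.

```python
# Illustrative attenuation budget down a dilution-refrigerator drive line.
# Stage temperatures follow the discussion; the per-stage attenuations (dB)
# are assumptions for the sketch, not a real wiring design.
STAGES = [
    ("50 K", 20),
    ("4 K", 20),
    ("1 K", 10),
    ("100 mK", 10),
    ("15 mK", 0),
]

def total_attenuation_db(stages) -> int:
    """Attenuations in series simply add in decibels."""
    return sum(db for _, db in stages)

def surviving_fraction(stages) -> float:
    """Fraction of the input microwave power (and hence of the mean
    photon number) that reaches the qubit stage: 10**(-dB/10)."""
    return 10 ** (-total_attenuation_db(stages) / 10)

# With 60 dB in total, only about one part in a million of the room-
# temperature signal power reaches the coldest stage.
```

That millionth is the point of the design: the drive still carries enough photons to steer the qubit, while the thermal photons riding along with it are attenuated away before they can heat the 15 mK stage.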

Joannes Vermorel: Yeah, the freezer costs more than one million euros, so it’s quite expensive. But there are other technologies which are different. Let’s take photons, for example.

Olivier Ezratty: If you want to control photons in a processor, it can be at room temperature, but you still need some cryogenics, because most of the time you need to cool the source of the photons, which in most cases is based on some semiconductor effect that has to be cooled. Let me give you an example from France. We have a startup called Quandela. They have their own photon source based on a so-called III-V semiconductor system, made of gallium arsenide and aluminum, with many layers, Bragg mirrors, and so-called quantum dots inside. This tiny thing has to be cooled to about 4 kelvin to generate a stream of individual photons that are then used for computing. The individual photons then go into a circuit with waveguides that are at room temperature, and at the end you need to detect the photons one by one. So at room temperature, you can have a system where photons are controlled individually.

Joannes Vermorel: Oh, that’s interesting. I didn’t know that the waveguides can interact with each other.

Olivier Ezratty: Yes, and at the end, you need to count the number of photons you have on each waveguide. The photon detector itself has to be cooled because the most promising technology for detecting photons individually is based on a superconducting effect, and those systems also are cooled to about 4 Kelvin. So you need cooling at both ends of the system, but not in between. Now let’s take neutral atoms; it’s a very different beast.

Joannes Vermorel: On the datasheet of those vendors, they say no cooling is needed, but it’s not true.

Olivier Ezratty: What they do is control the atoms, to put them at a given place in a vacuum. To do that, they use lasers in three directions and a so-called magneto-optical trap, a technology invented by Jean Dalibard, who was one of the PhD students of Alain Aspect. This technique is used to control the position of the atoms, but not to cool the system. They use another laser and a different kind of special device to control the position of each atom individually. When the atoms are cooled and well positioned, their temperature is in the nano-kelvin range. Surprisingly, you have not used a fridge; you have just used a pump to remove stray atoms from the chamber, plus lasers. So it’s laser-based cooling.

Joannes Vermorel: But it still feels counter-intuitive because you are seemingly adding energy by throwing photons, but actually, the net effect is cooling.

Olivier Ezratty: Yes, using the Doppler effect. If you have an atom coming toward you and you send energy at it with a photon, the photon acts like a ball: it pushes the atom in the opposite direction, so the atom slows down. The atoms moving away from the beam won’t receive that energy, because the Doppler effect shifts the light out of resonance for them; it arrives at lower energy, so it doesn’t affect them. So, on average, all the atoms coming toward you are slowed down, and the others are not. If you do that from six directions, it progressively slows down the motion of all the atoms. You don’t use a mechanical effect; it’s only light that slows down the atoms. But it’s still cooling the system, because what is temperature? Temperature is a measure of the motion of the atoms in a given medium, so it’s still cooling.
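
The Doppler-cooling picture can be turned into a toy simulation: counter-propagating beams only kick atoms that move toward them, so every scattering event removes momentum. The numbers here (kick size, initial speeds, step count) are arbitrary illustration, not real atomic physics.

```python
import random

def doppler_cool(velocities, kick=1.0, steps=50):
    """Toy 1D model of laser cooling with two counter-propagating beams.
    An atom moving toward a beam is Doppler-shifted into resonance and
    absorbs photons that push against its motion; once it is slower than
    one recoil kick, it stops scattering (a crude 'Doppler limit')."""
    v = list(velocities)
    for _ in range(steps):
        v = [vi - kick * (1 if vi > 0 else -1) if abs(vi) > kick else vi
             for vi in v]
    return v

random.seed(0)
atoms = [random.uniform(-30.0, 30.0) for _ in range(100)]
before = sum(abs(x) for x in atoms) / len(atoms)
after_ = sum(abs(x) for x in doppler_cool(atoms)) / len(atoms)
# The mean speed collapses from roughly 15 units to below one recoil kick,
# even though we only ever *added* photons to the system.
```

The counter-intuitive point from the conversation shows up directly: every step adds light, yet the average speed (which is what temperature measures) only ever goes down.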

And what’s interesting is what they discovered, for example, at Pasqal, the French company, or at QuEra, the competitor based around Harvard in the U.S. They discovered that when they put more than a couple hundred atoms in that vacuum chamber, to make sure they create a very high-quality vacuum, they have to add some cooling to the pump. So now the cooling is not on the qubits themselves; it’s on the pump that removes stray atoms from the chamber. I love that kind of thing; that’s real engineering.

And so, the last one I could mention is the technology called NV centers. We mentioned it for sensors, but it can also be used for computing. There is a company named Quantum Brilliance, based in Australia; actually, it’s a German-Australian company. That company has already created a five-qubit system that works at room temperature. I’m not sure it can scale very well, but still, that technology could potentially work at room temperature.

Joannes Vermorel: What I really love about this discussion is that it shows that whenever you have vendors, you have incentives to show off. For me, as someone with some curiosity for the field but not vested in it, what I see is an endless stream of incredible claims. And interestingly, it’s not that people lie. That’s the interesting thing: a claim can mean a lot of different things, or it can come with a lot of caveats under many different conditions.

Olivier Ezratty: Exactly. And it’s fine. I mean, also, you can’t tell whenever you, as a vendor yourself, you know, you can’t tell everything every single time. You have to make choices; you have to simplify things.

Joannes Vermorel: Exactly. I mean, I can’t say all there is to know about every single thing that we do, and add: here, by the way, is a copy of the source code and a copy of all the experiments we did. In theory, you could conceive of that. In practice, it would be worse, because it would add so much more noise.

So my perception is that this field of quantum tech is quite opaque, at least to outsiders. Your report sheds tons of light, and what interests me the most, although it’s a bit of a tangent, is how you operate when dealing with stuff that is arguably very complicated. I think we can agree it’s not simple. There are a lot of claims and a lot of noise, and enterprise software is pretty much the same sort of thing: somebody claims they are doing something fantastic, and yes, in a way they are, but it comes with tweaks, dependencies, and costs. You’re operating in a field that is very complex. One can argue that in supply chain the complexity is mostly accidental: people doing things in ways that are far more complicated than they need to be. In quantum computing, by contrast, you’re dealing with the universe, which is what it is. The complexity is less accidental, but that doesn’t change the fact that things are very difficult nonetheless.

Joannes Vermorel: How do you make sense of progress in this field?

Joannes Vermorel: I mean, you’ve said that you talk to people, but one of the things that baffles most of my prospects is that everybody is kind of a vendor in these games. So how do you identify who is to be trusted? If I went into this realm of quantum, there would be so much to take in. How do you sort the fraud from the non-fraud? Yes, you told me one solution is to talk to a Nobel Prize winner in physics, but they’re not omniscient. That’s one way to solve the problem, but how do you identify the people who can act as a relay for you to gain understanding? There are so many possible frauds and blatant claims, and you have only so much time to wade through it all.

Joannes Vermorel: How do you navigate?

Olivier Ezratty: Basically, I try to meet as many scientists as possible, mostly in fundamental research. I try to improve the diversity of the people I meet, so typically it’s important to meet both physicists and engineers, as well as people who are more on the algorithms and computer science side. Even though I should meet more people in that space, I think I meet more physicists than computer science folks right now. In your past life, you were meeting more high-performance computing or supercomputing people, which is a different zone of classical computing.

I try to read as many scientific papers as possible and to understand the language; that’s the first thing to do when you read a new paper. But it’s an ongoing game, an everyday challenge. One of the reasons it’s complicated, and it’s a plus for the industry right now, is the diversity. By diversity I mean that there are at least 20 to 30 different kinds of qubits right now, whereas in classical chips there is essentially one CMOS kind of transistor. There are variations, but the difference between the transistor in your iPhone, your Mac, your PC, or your server is maybe one percent; it’s more or less the same technology, always silicon with N and P doping and gates. In quantum computing, by contrast, you have a number of very different technologies. That’s amazing. And sometimes, for one given technology, there are only, let’s say, 50 people in the world who know about it. Let me give you an example. Two weeks ago I was in Las Vegas, not for CES, I stopped that three years ago. I was in Las Vegas for the APS meeting of the American Physical Society. That’s the largest gathering of physicists in the world: 13,000 physicists in the same location.

Olivier Ezratty: And I met with a company there; they came to me. The company name is EeroQ; nobody knows them very well. They are based in the US, in the Chicago region, and they are doing a qubit that is not a silicon spin qubit: they control the spin of an electron, but that electron sits on a substrate of cold liquid helium, on top of which the electron spins float. It’s weird, I mean it’s very weird. And why do they use that? Because the helium isolates the spins from the surrounding circuits. It’s one of the many, many weird technologies you have around. And for each of these technologies, there are only a few scientists who can tell you what it is about and what the pros and cons are.

You have to live with that, so you have to live with uncertainty and partial information. You have to have some gut feeling, and you also need broad engineering knowledge across all the dimensions. For example, one of the things I discovered recently, partly through being a co-founder of the Quantum Energy Initiative with Alexia and other researchers, is that it is very important to look at the electronics. The quality of the electronics that control the lasers or the microwaves sent to the qubits is as important as the qubits themselves.

And electronics is not commonplace knowledge. I mean, when you’re a computer scientist, you don’t know much about these equations. So I had to come back to Fourier transforms and signal theory, to understand what jitter is, that is, the variations in the phase, amplitude, or frequency of a signal generated by classical electronics; to understand the power needed to generate a microwave, the attenuation, the filtering, all of that. Electronic engineering is influencing the engineering of the whole quantum computing system.
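
The jitter Olivier mentions can be illustrated numerically. Random phase errors on a control signal shrink its average useful amplitude by roughly exp(-σ²/2); this is a generic signal-processing identity, estimated here by Monte Carlo, not a statement about any particular control stack.

```python
import math
import random

def average_amplitude(sigma_rad: float, n: int = 100_000, seed: int = 42) -> float:
    """Monte Carlo estimate of <cos(phi)> when the phase error phi is
    Gaussian with standard deviation sigma_rad: random phase jitter on
    a drive signal reduces the coherent amplitude, on average, by about
    exp(-sigma**2 / 2)."""
    rng = random.Random(seed)
    return sum(math.cos(rng.gauss(0.0, sigma_rad)) for _ in range(n)) / n

sigma = 0.1                            # 100 mrad of phase jitter (illustrative)
estimate = average_amplitude(sigma)    # Monte Carlo result
theory = math.exp(-sigma ** 2 / 2)     # closed-form approximation, ~0.995
```

Even a tenth of a radian of jitter costs about half a percent of coherent amplitude, which hints at why the quality of the classical electronics matters so much for gate fidelity.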

Joannes Vermorel: And for me, the very interesting lesson is this: if I take my own field, enterprise software, and supply chain in particular, there is also a bewildering array of niche perspectives, niche vendors, and whatnot. For example, to look at the problem of supply chain, there are probably 20 competing philosophies on how to approach it: the mainstream one, Flowcasting, DDMRP, S&OP. Those are literally different viewpoints, and there are dozens of them. And then there are plenty of vendors.

What really interests me in what you describe is the possibility of forging a relatively educated, relatively accurate opinion on whether those things work without running a direct controlled experiment yourself. You did not set up a lab to ask, “Does this paper replicate?” There’s a naive sort of thinking that the only way to know whether what a vendor says is true is to do the experiment and test it. Yes, you can test software online, but with enterprise software the problem is that even if you want to run a test, you sometimes have to deploy it in many places at once. It’s super impractical. That’s why most vendors don’t even offer a free trial: it doesn’t make sense, since you would need to deploy the thing in 20 different locations just to get started.

Olivier Ezratty: The interesting thing, and I’m very much a believer in this approach, is to go to one person who makes a claim, let them defend it, and then go to another, ideally somebody with a very different, conflicting perspective; through this, you learn something else. In my case, I meet a lot of scientists on a regular basis, and there’s a lot of serendipity involved. Sometimes people say, “Oh, you should meet that guy or this lady,” and then I meet them and they teach me something. For example, I was in Grenoble back in November last year, and I met probably 15 scientists in one day. I was puzzled because some of them were working on so-called topological qubits, an area where Microsoft is famous for being the only vendor betting on that kind of qubit. I found these four people in Grenoble and asked, “OK, but who do you work with?” And they said, “Oh, we work with that guy in the U.S.” I knew the name, because he was the one who had obtained the retraction of a Nature paper from people at Microsoft. He’s based at the University of Pittsburgh. So I learned a lot by meeting those people.

Joannes Vermorel: Back to your report, I’m stealing something I found in it: in the very first section, you give hints on how to read scientific papers. That’s very interesting, because those papers come with 20 authors, and you don’t know whether all those people are relevant. And you explain: the first name is the PhD student who actually did the work, all the others supported the work in their own, often tangential, way, and the last name is the supervisor or lab director, who may or may not really understand what’s going on in the paper.

Olivier Ezratty: That touches on something very interesting, which is how to get clues and how to navigate. It’s not magic; there are very simple clues that let you navigate the field. I don’t know if I described this in that part of my book, but one is finding a hole. Let’s say a lab announces, “Ah, I discovered a new kind of qubit that’s better.” Better at what? You try to find the numbers they publish, and most of the time they won’t publish all the other numbers. Say they claim the qubit is stable with a so-called T1 of so many microseconds, but surprisingly they give no number for the fidelities; maybe they are not so good there. And sometimes you don’t even get the number of qubits in the experiment, which means they don’t have many qubits. The missing information is itself a hint.

This is typical in quantum computing, particularly in vendor communication. I know a company, I won’t name them, based in North America. They showcase the quality of their qubits, but they don’t give the qubit count. That’s telling, because if they don’t give the number, it means two things: the number is very low, and the fidelities they report are misleading. Usually, with a large number of qubits it’s much harder to reach a good fidelity, so if you advertise a good fidelity without saying it was measured on only three, four, or five qubits, you’re misleading people. That’s a very interesting example.
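
The fidelity-versus-count trade-off Olivier points out is easy to quantify with a toy model: if every gate succeeds independently with probability f, the chance that a whole circuit runs without error collapses as the gate count grows. The depth and fidelity figures below are illustrative, not measurements.

```python
def circuit_success_probability(fidelity: float, n_qubits: int, depth: int) -> float:
    """Toy model: every qubit undergoes `depth` gates, and each gate
    succeeds independently with probability `fidelity`, so the whole
    circuit succeeds with probability fidelity ** (n_qubits * depth)."""
    return fidelity ** (n_qubits * depth)

# 99.9% gate fidelity looks great on 4 qubits...
small = circuit_success_probability(0.999, 4, 10)    # about 0.96
# ...but the very same fidelity on 100 qubits at the same depth:
large = circuit_success_probability(0.999, 100, 10)  # about 0.37
```

This is exactly why a fidelity figure quoted without a qubit count is misleading: the same per-gate number implies wildly different machine-level performance depending on how many qubits (and therefore gates) are involved.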

Joannes Vermorel: Yeah, because at the root we are dealing with humans, who are intelligent and have the same tendencies as humans in other endeavors. If I look at supply chain software, which is a specific interest of mine, the clues are not the same, but they exist just as well. For example, if a vendor has no screenshots, there is a near certainty that their UI looks terrible, because if the UI looked great, they would have tons of screenshots. If their algorithm is just a glorified moving average, they don’t talk about it; they just say, “We have super advanced AI,” which is in fact a moving average doing the forecast. On the contrary, if people have something real, they have endless sections about it on their website. Conversely, if their software is super slow, they won’t talk about speed at all; they will emphasize how completely they understand the mindset of your vertical, and say nothing about their software being slow. So I really like this idea of looking at these meta aspects. Basically, you need an adversarial mindset. If somebody tells me something remarkable, the first filter is to decide whether it’s remarkable enough to chase down; but then your instinct should ask: what is the most likely price paid for this claim? What is the untold cost?

Olivier Ezratty: Well, there’s something else that makes it complicated in quantum computing. You have to learn about the kinds of metrics that exist, the way they are measured, and also the variety of benchmarking techniques; there are many of them. There are strong efforts pushed by standardization bodies like ISO and others, and we in France participate in those efforts. But you also need a lot of education to understand how those things are measured. For example, I found out that the measurement of qubit quality is not really the same for solid-state qubits, such as semiconductor-based or superconducting qubits, as for the people working on trapped ions. They use different metrics, and you have to understand why. So you have to understand the numbers; just getting a clue about which numbers are being used is very important. I recently tried to make a graph, a log-log plot of qubit quality, and it was a pain because it was difficult to get the right numbers in a consistent way. For example, if you measure qubit quality, you have to make sure it is measured with so-called randomized benchmarking, which is a more or less standardized way of computing the quality of the qubits. You have to be very careful; you can be misled by the numbers.
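
Randomized benchmarking, which Olivier cites as the more-or-less standard metric, fits a survival probability of the form P(m) = A·p^m + B against the sequence length m, then converts p into an average error per gate. A noiseless sketch of that arithmetic, with made-up constants (in a real experiment A, B, and p are all fitted from noisy data):

```python
def survival(m: int, p: float = 0.99, A: float = 0.5, B: float = 0.5) -> float:
    """Idealized randomized-benchmarking decay: P(m) = A * p**m + B,
    where m is the number of random Clifford gates in the sequence."""
    return A * p ** m + B

def estimate_p(P, m1: int, m2: int, B: float = 0.5) -> float:
    """Recover the depolarizing parameter p from two sequence lengths
    by comparing how far each survival probability sits above B."""
    return ((P(m2) - B) / (P(m1) - B)) ** (1.0 / (m2 - m1))

def error_per_gate(p: float, d: int = 2) -> float:
    """Average gate error r = (1 - p) * (d - 1) / d; d = 2 for one qubit."""
    return (1.0 - p) * (d - 1) / d

p_hat = estimate_p(survival, 10, 110)   # recovers p = 0.99 in this noiseless toy
r = error_per_gate(p_hat)               # 0.005, i.e. 99.5% average gate fidelity
```

The appeal of the protocol is visible even in the toy: because only the decay rate matters, state-preparation and measurement errors get absorbed into A and B instead of polluting the gate-error estimate.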

Joannes Vermorel: Absolutely. In supply chain, it’s all over the place, even for things that are very mundane. For example, one question people ask is: how accurate is your forecasting system? The problem is that the answer depends enormously on the accuracy of the data you have as inputs, so no standalone number makes sense; the answer is, well, it depends on your data. The only way the community has found to get a sense of who is more accurate is to run something like a Kaggle competition and let people compete. But fundamentally, we have a problem defining anything like an intrinsic measurement of forecasting capability.

Olivier Ezratty: What has changed recently, though, is that we have more quantum computers accessible in the cloud. Sometimes access is costly, but whatever. You have those systems with IBM, Amazon, Microsoft; even Google offers an IonQ system. Worldwide, I think about 60 computers are available in the cloud. It means people can benchmark them, and you start to see very interesting scientific papers comparing those different systems through benchmarking done in a consistent way. You start to have educated guesses of where they really stand. That’s interesting, and it’s positive. It’s an open ecosystem.

Joannes Vermorel: But it’s open and also complicated. You need a lot of scientific background to judge the scientific content. There are many publications, even vendors publish papers, but just reading a paper is a pain; it’s so complicated sometimes. I remember, four years ago, when I discovered the 70 pages of Google’s quantum supremacy paper, I laughed. The reason was: who can really grasp what’s in that paper, given everything it contains? Quantum physics, algorithms, comparisons with classical computing, electronics, cryogenics, so many different things in 70 pages, with graphs that are very difficult to understand.

Olivier Ezratty: Four years ago, I could probably understand about 5 to 10 percent of that paper. Now I think I’m in excess of 50 percent. Not the whole paper, and it takes a while. Every time I reread it, I get something new, because I’ve read something else elsewhere, got some training, or watched some videos. So the field is open, but it can be open and closed simultaneously, because complexity is obfuscation, and the lack of comparisons can be a form of obfuscation as well. For example, if you want to reconcile data coming from very different vendors, you need either somebody who wrote a paper consolidating that data, or you do it on your own, as I did for the chart I’ve been working on recently. There is still room for what I would call data integration: the capacity to gather data from very different sources and figure out where we really are. I’m currently writing two papers on that.

Joannes Vermorel: From my perspective, this work is absolutely necessary and incredibly useful. But it also comes with terrible long-term incentives. I’ve known you for a while, and I think what makes you unique is that you’re not easily swayed by vendors. It takes a very specific mindset. You’ve been a vendor yourself in the past: you’ve been at Microsoft, you’ve been part of the game. I think it gave you a sort of intellectual antibodies. Microsoft is what it is; I don’t hold a black-and-white opinion of it. It’s made of many people; it’s gray, like any collection of 200,000-plus humans, with very good people, very bad people, and everything in between. I think that experience gives you intellectual antibodies against the sort of corporate issues that tend to emerge from large collections of humans.

Olivier Ezratty: Yes, because they have to raise funding.

Joannes Vermorel: Exactly; being attractive to a VC in that space requires even more effort than for a large corporation. But the point I want to make is this: if you play this role of trying to be an expert, there are companies, and I’m not going to give the name of the ones that start with a G in the realm of enterprise software, that are super prominent market analysts. The way I see it, the long-term incentive, if you’re a market analyst doing a bit of what you do, is to become outsourced press management for the vendors. That’s literally what’s happening in the sphere of enterprise software, especially supply chain.

Olivier Ezratty: Right, and what I see is that people who play that role quickly earn a lot more money. As analysts, they will always claim they make most of their revenue from the clients they advise, but the reality is that vendors pay more to have a biased expert who will tell the market at large exactly what the vendor prefers. So you end up with this distortion.

Joannes Vermorel: In your case, your report is as good as it gets as a non-biased assessment of something super complicated and fast-changing. But what also interests me is that in fields which, unlike quantum computing, do have many established vendors, the people who should be playing your role have become corrupted and end up relaying whatever messaging comes from the vendors.

You are doing this sort of work with a lot of help, but pretty much on your own. What strikes me is that modern corporations tend to underestimate what a single person can do given just a couple of years. Quantum computing baffles companies, and they typically approach the problem by hiring consultants and spending a lot of money on a team of 20 people for three months. But you are proof that going all in on one intelligent, motivated person, given years, can be just as effective.

Olivier Ezratty: Yes, and I should add a couple more contact points. One is the customers themselves, since IBM and others are trying to push their new technology with large customers. Many large customers in the world have evaluated it, and some have even published papers; in France, customers like Total, EDF, and MBDA have done so. If you meet the scientists from those companies, you also get a lot of insight, because they have tested different technologies and real algorithms on real business problems.

Joannes Vermorel: I would absolutely love to see that in supply chain, clients producing papers. What we have right now are case studies that are complete advertisements. The point with case studies is that it’s just a piece of information formatted for advertising.

Olivier Ezratty: I’m involved in a project where the good mix is to have people from research, people from the vendor space, and a customer. If you can have a research team with those three moving parts, it creates a good combination. It works well if it’s local, for example, if the research teams, startups, and customers are all in the same country or location. That’s very helpful to build a new way to approach research and apply the research in a new domain.

Joannes Vermorel: So, maybe proceeding further with the timelines: quantum computing has been in the making for decades. There are very fundamental reasons to be hopeful, in the sense that it’s literally the way the universe itself works. The beauty of quantum mechanics is that it made the universe richer, in a way, compared to the classical picture: suddenly you could do things that, from the old perspective, were simply impossible. It constrains, but it also enables tons of things. We already leverage many pieces of it, such as transistors and the giant magnetoresistance used in hard disk drives. So, what sort of timeline do you see for the industrialization of this second wave of quantum technologies? Can we even bank on something?

Olivier Ezratty: Well, I can tell you what people say about it. The best answer is, “I don’t know.” Most of the time, you get a Gaussian curve centered at 15 years. That’s where people think we’ll have the big quantum computer that can do things that can’t be done on classical computers. It’s a bit naive because it’s an average view of where it could come from.

I think there are some situations with some computing paradigms, like the so-called analog quantum computers, which are different from the gate-based quantum computers. These analog computers could bring some quantum advantage in the next few years, not needing to wait for 10 to 15 years.

The NISQ (Noisy Intermediate-Scale Quantum) quantum advantage, with the noisy systems that we have today, is uncertain. I have no idea if we will reach that. It depends on the quality of the qubits that companies like IBM produce in the future. IBM is the largest company in the world investing in that space, and they have leading technology in superconducting qubits. They may be in a position to make a big change in the next 18 months, which is a very short time frame. They may lead us, as a community, to an area where we can start to do useful things with quantum computers.

But then, it has to scale, and the challenge is to move from a couple hundred qubits to millions of qubits. That’s a huge challenge, both on the physics side, engineering, and energetics. Everything is challenging there. Another thing that makes it difficult to make a prediction is the existence of so many different kinds of technologies. Let’s say, for example, Microsoft succeeds in developing topological qubits with Majorana fermions in a couple of years. Many people are skeptical, but if they succeed, they may change the landscape very quickly.
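
The jump from hundreds to millions of qubits that Olivier describes comes largely from error-correction overhead. A rough sketch, using the common surface-code scaling where the logical error falls off as (p/p_th)^((d+1)/2) and each logical qubit costs on the order of 2·d² physical qubits; the threshold, prefactor, and target error here are illustrative round numbers, not a statement about any specific machine.

```python
def required_distance(p_phys: float, p_target: float,
                      p_threshold: float = 1e-2, prefactor: float = 0.1) -> int:
    """Smallest odd surface-code distance d such that
    prefactor * (p_phys / p_threshold) ** ((d + 1) / 2) <= p_target.
    Threshold and prefactor are illustrative assumptions."""
    d = 3
    while prefactor * (p_phys / p_threshold) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

def physical_qubits(n_logical: int, d: int) -> int:
    """Rule of thumb: roughly 2 * d**2 physical qubits per logical qubit."""
    return n_logical * 2 * d * d

d = required_distance(p_phys=1e-3, p_target=2e-12)  # distance 21 under these assumptions
n = physical_qubits(1000, d)                        # 882,000 physical qubits for 1,000 logical
```

Under these toy assumptions, a thousand logical qubits already demands close to a million physical qubits, which is the scaling wall on the physics, engineering, and energetics sides that the conversation is pointing at.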

So you may have a slow, Gaussian-curve trend, but you may also have surprises. You may find new algorithm designs or new error-correction designs. I’ve never seen so much creativity as in the last two years in error correction. There’s a researcher in France named Anthony, and what they do is amazing. They invent error-correcting codes that can cope with lower-quality qubits, for example. They know that if you adjust the connectivity between the qubits, you can improve the efficiency of the error-correction code. Improving the connectivity is difficult, but not impossible.

There are so many tweaks in the technology, so many workarounds, and so many variations that there’s always some hope. However, I know some people, both in France and abroad, who are very skeptical. There’s a scientifically grounded skepticism on why it’s going to be difficult to reach the level where we have millions of qubits entangled with each other. But still, you can believe in the imagination and ingenuity of engineers and scientists. There are so many different options being looked at that we’ll see.

The real answer is, we don't know. But we have to be educated on the fly about how things are changing. We have to be educated to be able to interpret new announcements and figure out whether they're important or not. That's the beauty of this field; that's why I'm still here. It's always changing, always moving, and it's intellectually challenging.

Joannes Vermorel: Going back, and maybe moving toward a close, but coming back to the very beginning of this interview, you mentioned the things that captured your interest in a very practical way. What are you doing next? What is your own personal roadmap in this field? What are the things that capture your time and attention right now?

Olivier Ezratty: Oh, I have many things on my plate. One is, I'm the co-founder of the Quantum Energy Initiative, which is not yet a formal organization, but a community of researchers worldwide. We organized our first workshop in Singapore in November, with leading scientists from all over the world. We have to launch our website, build a community, and start a YouTube channel, making sure that the energetics of those systems is really taken care of, because we live in a world of limited resources. We can't avoid that, and we have to explain to scientists and vendors that it is no longer acceptable to put a new technology on the market that adds more resource consumption without caring about it. Bitcoin did that, yes, but we think that when you bring a new technology that is mysterious and complicated, with unknown use cases, it makes a lot of sense to push the industry and the whole ecosystem to behave as a responsible innovation ecosystem.

The second thing is, I have to start writing the sixth edition of my book, which is going to consume a lot of bandwidth. I'm also starting to write scientific papers: I wrote my first peer-reviewed paper, on superconducting qubits, for a physics journal. We'll see if it's accepted; it's not yet done. I stay engaged in empowering the local ecosystem at the French and European level, so I'm starting to have a lot of contacts across Europe now. And I continue to help startups here in France, but informally. On top of that, I run two podcast series with Fanny Bouton, who started in the quantum space five years ago, like me. Now she's the quantum lead at OVHcloud, a leading European cloud operator, and she launched that operator's quantum cloud offering, so it's a very nice story.

I probably forgot many things, but I have many customers, I do training, and I teach at Épitech. Everything I do feeds the other parts of what I do. Being a trainer and teaching quantum computing forces you to structure your thoughts. Writing the book is likewise: you structure your thoughts and you share them. Writing papers and interviewing people for podcasts encourages you to meet a diverse set of people. Working with customers, I try to have the most diverse way of working; that's my way of life. I would like to be a small contributor to the success of the French and European ecosystem. That's the end goal. I would like to contribute to the success of my friends in research, particularly through the Quantum Energy Initiative. I would also like to see a so-called sovereign quantum cloud in France through OVHcloud, which I'm helping as well. So there are various things, which are more or less about helping the ecosystem, and about me learning and sharing in an open way.

Joannes Vermorel: I believe that your approach, which consists of writing it down yourself, is incredibly virtuous, not just for propagating the knowledge, but even if you were not publishing it at all. Just going through the motions of putting this together is an incredible exercise. I think it's also one of the lessons for my enterprise clients. Many large companies that are engaged in decade-long undertakings should take the long view and do just that, even for themselves. Managers should try to build their own life's work of understanding their field so that the company gets better. That's the interesting thing: people would say, "Oh, but maybe those people are going to leave us two years from now." But when I talk to, let's say, a supply chain director, those are still the sorts of positions where people have been in the same company for 30 years. So, this is a bit of an excuse, as opposed to just recognizing the value of doing the writing exercise for yourself.

Olivier Ezratty: I've always done that since I was young, and I think it's also a way of life, a superpower. You have to be a little bit organized. I have some simple organizational tricks to reuse information in various places and take notes. For example, the way I update my book is a bit special, but not that special. I have a small Word document with the same table of contents as my book, and that's where I put all the updates I receive on a daily basis: new arXiv papers, news, or announcements. Each item goes in the right location, like a twin of my book. It's smaller, of course, containing only updates. And then, when I update my book, I've got everything already sorted by topic.

So, let's say I want to update the algorithms part; there's already an algorithms chapter in that document with all the links. And since I have some customers for whom I do technology screening, including news that is not published, I also have a lot of written explanations of the news that I can use to update my book. When you do everything on your own, you have to be organized and reuse the content in a clever way. I also maintain a lot of charts that I update continuously. I've got my own database of companies in quantum technology, an Excel sheet with lots of tables, and so on. I even have a database of all the Nobel Prize winners in quantum physics, a database of companies, a database of qubit fidelities; everything that can become a database is in my Excel spreadsheet.

Joannes Vermorel: And then, if you don't know, you ask ChatGPT, and maybe it will answer and provide you with some data. As for me, I'm not that organized, but I'm trying to cultivate a written understanding of my own field. As parting words, what would be your suggestion to CEOs or CTOs of companies facing very opaque fields? They can't go all in on quantum computing like you do. What would be your suggestion to those people with regard to quantum computing?

Olivier Ezratty: My suggestion would be to look at my book; not to read the whole thing, of course, but to look at what's inside. If you are a bank, or in the chemical industry or transportation, there's always a chapter for you in my book, because there's a long part listing all the identified use cases, even though they don't work yet. It gives you an idea of what quantum computing could bring to your business. There are chapters for 20 different industries in my book, even defense and intelligence, so you will find something relevant there. Then, you can also listen to the podcast I'm running with Fanny. We do about one to two episodes a month. But don't just read the press, whichever press it is. I'm not criticizing the press, but given the formats you have in most journals, even scientific journals, it's impossible to get a good sense of where we really are.

You have to meet the people; you have to see specialized people, whoever they are. You will also see, as a customer, that you have to diversify your sources of information. What I say is not the same as what others say, and there are different opinions. They are still opinions based on science, not conspiracy-minded opinions, but you have to get different views. I would say you need optimistic, pessimistic, and in-between views of where we really are. Because honestly, I don't know where we are.

And the shortest way is to attend a conference where I or others explain this stuff in one or two hours. I do many conferences, and many of them are on YouTube, in French or English. But when I'm asked to explain quantum computing in less than one hour, it's not that good; it's too short. If you go on YouTube, you will find some formats where I had the opportunity to present, either alone or with others. I did something with Elena, for example, in December two years ago, in Bordeaux; it was a very nice event. I did another one with Maud Vinet and Fanny Bouton at North in June 2022. Those kinds of events, between one and two hours, are, I would say, good for education.

Recently, I did another one for Limited Universal with Mark DJ; it was two hours: a one-hour-and-twenty-minute presentation, which is quite long, and then 40 minutes of Q&A. I'd say that's the right format to get a good grasp of where we are and what we could do with those systems.

Joannes Vermorel: It was really a pleasure having you. This is a very interesting field for me. To the audience, well, stay tuned. See you next time.