00:00:00 Brief summary of research paper by Harvard
00:00:32 ChatGPT-4 study with BCG consultants
00:01:31 Exploring AI’s impact on productivity
00:02:31 AI’s displacement potential and critique
00:03:47 AI’s broader effects on employment

Summary

Conor Doherty of Lokad scrutinizes a Harvard study on AI’s impact on white-collar jobs, revealing nuanced effects. The research, involving 758 consultants, assesses AI’s role in enhancing productivity, particularly in supply chain management. It finds that AI boosts performance in certain tasks, especially with training, but may falter in complex scenarios. Doherty critiques the study’s narrow view of AI as an adjunct to human labor, arguing for its potential to fully automate tasks, thus revolutionizing productivity and redefining white-collar work. He warns of overconfidence in job security, as AI’s full capabilities could dramatically alter the employment landscape.

Full Transcript

Conor Doherty: “Navigating the Jagged Technological Frontier,” a research paper released by Harvard Business School, aims to provide insight into the effects of AI on productivity and, ultimately, employment.

Today, I will look at three things: one, the paper’s hypothesis and methods; two, the paper’s main findings and conclusions; and three, Lokad’s take. Let’s get started.

The paper explores the implications of AI on complex, realistic, and knowledge-intensive tasks. The AI was ChatGPT-4, a large language model.

The implicit hypothesis was that introducing AI into the workflows of highly skilled professionals will result in productivity gains. The 758 subjects involved were consultants from Boston Consulting Group (BCG).

The researchers divided subjects into two groups, with each group receiving a unique task. One task focused on creativity, analytical thinking, persuasiveness, and writing proficiency, while the other focused on problem-solving by combining quantitative and qualitative data.

All participants initially completed a baseline task without AI. Following this, subjects were subdivided into three groups: a control group without AI, a group with AI, and a third group with both AI and training on how to best use it.

Subjects had anywhere from 30 to 90 minutes to complete their tasks, which were designed to mimic those found at high-level consulting firms.

Even though the tasks were designed to be of similar difficulty, the effects of AI were quite different, leading to what the researchers called a “jagged technological frontier.”

This refers to the AI’s ability to significantly improve human performance for some tasks, ones inside the frontier, but degrade human performance for others, ones outside the frontier.

For inside the frontier tasks, AI led to significant productivity gains. Both AI groups outperformed the control group, and the group that received additional AI training performed best overall.

The productivity boost was especially potent for people who scored in the lower half of the initial baseline task, suggesting that AI might be particularly beneficial for lower-skilled workers.

However, for outside the frontier tasks, such as the combination of quantitative and qualitative analysis, the group that received both AI and training performed worst. In fact, the control group outperformed both AI groups, with a significant difference noted between the control group and the group that received both AI and training.

The authors suggest that for tasks inside the frontier, AI can dramatically improve both productivity and quality, and possibly even displace humans. However, for tasks deemed outside the frontier, AI can be much less effective.

Though the paper is very accessibly written, it suffers from a serious methodological flaw: it did not explore the productivity gained through AI automation. Instead, it treated AI as a sort of co-pilot to be guided by humans.

This is deeply flawed because it artificially constrained AI’s capabilities, particularly when combined with other techniques like retrieval-augmented generation (RAG).

AI can automatically compose prompts with all the necessary information retrieved from a database, like those kept at high-level consulting firms. This would far surpass the output of a single person and would be far more profitable when deployed at scale.
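To make the automation claim concrete, here is a minimal sketch of retrieval-augmented prompt composition. Everything in it is illustrative: the document store is a hypothetical in-memory list, naive keyword overlap stands in for a real vector search, and the actual LLM call is omitted.

```python
# Sketch of RAG-style prompt composition: retrieve relevant records from a
# (hypothetical) firm knowledge base and assemble a prompt automatically,
# with no human in the loop.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (stand-in
    for a real embedding-based vector search)."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(query_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def compose_prompt(task: str, documents: list[str]) -> str:
    """Automatically assemble a prompt from the task plus retrieved context."""
    context = retrieve(task, documents)
    return (
        "Context:\n"
        + "\n".join(f"- {doc}" for doc in context)
        + f"\n\nTask: {task}"
    )

# Hypothetical consulting-firm knowledge base.
knowledge_base = [
    "Client A reduced inventory costs 12% after switching to probabilistic forecasts.",
    "Office holiday party scheduled for December.",
    "Supply chain lead times doubled for client B during the port strike.",
]

prompt = compose_prompt(
    "Draft a supply chain cost-reduction memo for client A", knowledge_base
)
print(prompt)
```

The point of the sketch is the pipeline shape: once retrieval and prompt assembly are scripted, the whole task runs end to end without a human guiding the model, which is exactly the use case the study left untested.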

Even if the quality of the answers were only as good as that of the control group, they would still be dramatically cheaper due to the very low cost of large language models (LLMs) compared to the six-figure salary of a single Harvard graduate.
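A back-of-the-envelope calculation shows the scale of the cost gap. All figures below are assumptions for illustration only (neither the salary nor the per-token price comes from the paper):

```python
# Illustrative cost-per-report comparison: salaried consultant vs. LLM.
# Every number here is an assumption, not data from the study.

annual_salary = 150_000           # assumed six-figure salary, USD
reports_per_year = 500            # assumed consultant output
cost_per_report_human = annual_salary / reports_per_year

tokens_per_report = 20_000        # assumed prompt + completion tokens
price_per_1k_tokens = 0.03        # assumed LLM price, USD per 1,000 tokens
cost_per_report_llm = tokens_per_report / 1_000 * price_per_1k_tokens

print(f"Human: ${cost_per_report_human:,.2f} per report")
print(f"LLM:   ${cost_per_report_llm:,.2f} per report")
print(f"Ratio: {cost_per_report_human / cost_per_report_llm:,.0f}x")
```

Under these assumptions the human costs hundreds of dollars per report versus well under a dollar for the LLM, a gap of two to three orders of magnitude; even large errors in the assumed figures leave the conclusion intact.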

This is the real effect of AI on both supply chain and employment: unprecedented productivity gains through the automation of both quantitative and qualitative tasks, deployed at scale. This would very likely surpass humans in terms of quality, and certainly in terms of return on investment.

By ignoring this obvious use case, the paper provides a false and arguably dangerous sense of security to people already concerned about AI’s potential effect on supply chain and other analytical jobs.