Combining Human Expertise with Artificial Intelligence: Experimental Evidence from Radiology


Paper by Nikhil Agarwal, Alex Moehring, Pranav Rajpurkar & Tobias Salz: “While Artificial Intelligence (AI) algorithms have achieved performance levels comparable to human experts on various predictive tasks, human experts can still access valuable contextual information not yet incorporated into AI predictions. Humans assisted by AI predictions could outperform both human-alone and AI-alone. We conduct an experiment with professional radiologists that varies the availability of AI assistance and contextual information to study the effectiveness of human-AI collaboration and to investigate how to optimize it. Our findings reveal that (i) providing AI predictions does not uniformly increase diagnostic quality, and (ii) providing contextual information does increase quality. Radiologists do not fully capitalize on the potential gains from AI assistance because of large deviations from the benchmark Bayesian model with correct belief updating. The observed errors in belief updating can be explained by radiologists’ partially underweighting the AI’s information relative to their own and not accounting for the correlation between their own information and AI predictions. In light of these biases, we design a collaborative system between radiologists and AI. Our results demonstrate that, unless the documented mistakes can be corrected, the optimal solution involves assigning cases either to humans or to AI, but rarely to a human assisted by AI…(More)”.

AI and the automation of work


Essay by Benedict Evans: “…We should start by remembering that we’ve been automating work for 200 years. Every time we go through a wave of automation, whole classes of jobs go away, but new classes of jobs get created. There is frictional pain and dislocation in that process, and sometimes the new jobs go to different people in different places, but over time the total number of jobs doesn’t go down, and we have all become more prosperous.

When this is happening to your own generation, it seems natural and intuitive to worry that this time, there aren’t going to be those new jobs. We can see the jobs that are going away, but we can’t predict what the new jobs will be, and often they don’t exist yet. We know (or should know), empirically, that there always have been those new jobs in the past, and that they weren’t predictable either: no-one in 1800 would have predicted that in 1900 a million Americans would work on ‘railways’ and no-one in 1900 would have predicted ‘video post-production’ or ‘software engineer’ as employment categories. But it seems insufficient to take it on faith that this will happen now just because it always has in the past. How do you know it will happen this time? Is this different?

At this point, any first-year economics student will tell us that this is answered by, amongst other things, the ‘Lump of Labour’ fallacy.

The Lump of Labour fallacy is the misconception that there is a fixed amount of work to be done, and that if some work is taken by a machine then there will be less work for people. But if it becomes cheaper to use a machine to make, say, a pair of shoes, then the shoes are cheaper, more people can buy shoes and they have more money to spend on other things besides, and we discover new things we need or want, and new jobs. The efficiency gain isn’t confined to the shoe: generally, it ripples outward through the economy and creates new prosperity and new jobs. So, we don’t know what the new jobs will be, but we have a model that says, not just that there always have been new jobs, but why that is inherent in the process. Don’t worry about AI!

The most fundamental challenge to this model today, I think, is to say that no, what’s really been happening for the last 200 years of automation is that we’ve been moving up the scale of human capability…(More)”.

Artificial Intelligence, Big Data, Algorithmic Management, and Labor Law


Chapter by Pauline Kim: “Employers are increasingly relying on algorithms and AI to manage their workforces, using automated systems to recruit, screen, select, supervise, discipline, and even terminate employees. This chapter explores the effects of these systems on the rights of workers in standard work relationships, who are presumptively protected by labor laws. It examines how these new technological tools affect fundamental worker interests and how existing law applies, focusing primarily as examples on two particular concerns—nondiscrimination and privacy. Although current law provides some protections, legal doctrine has largely developed with human managers in mind, and as a result, fails to fully apprehend the risks posed by algorithmic tools. Thus, while anti-discrimination law prohibits discrimination by workplace algorithms, the existing framework has a number of gaps and uncertainties when applied to these systems. Similarly, traditional protections for employee privacy are ill-equipped to address the sheer volume and granularity of worker data that can now be collected, and the ability of computational techniques to extract new insights and infer sensitive information from that data. More generally, the expansion of algorithmic management affects other fundamental worker interests because it tends to increase employer power vis-à-vis labor. This chapter concludes by briefly considering the role that data protection laws might play in addressing the risks of algorithmic management…(More)”.

The A.I. Revolution Will Change Work. Nobody Agrees How.


Sarah Kessler in The New York Times: “In 2013, researchers at Oxford University published a startling number about the future of work: 47 percent of all United States jobs, they estimated, were “at risk” of automation “over some unspecified number of years, perhaps a decade or two.”

But a decade later, unemployment in the country is at record low levels. The tsunami of grim headlines back then — like “The Rich and Their Robots Are About to Make Half the World’s Jobs Disappear” — looks wildly off the mark.

But the study’s authors say they didn’t actually mean to suggest doomsday was near. Instead, they were trying to describe what technology was capable of.

It was the first stab at what has become a long-running thought experiment, with think tanks, corporate research groups and economists publishing paper after paper to pinpoint how much work is “affected by” or “exposed to” technology.

In other words: If the cost of the tools weren’t a factor, and the only goal was to automate as much human labor as possible, how much work could technology take over?

When the Oxford researchers, Carl Benedikt Frey and Michael A. Osborne, were conducting their study, IBM Watson, a question-answering system powered by artificial intelligence, had just shocked the world by winning “Jeopardy!” Test versions of autonomous vehicles were circling roads for the first time. Now, a new wave of studies follows the rise of tools that use generative A.I.

In March, Goldman Sachs estimated that the technology behind popular A.I. tools such as DALL-E and ChatGPT could automate the equivalent of 300 million full-time jobs. Researchers at OpenAI, the maker of those tools, and the University of Pennsylvania found that 80 percent of the U.S. work force could see an effect on at least 10 percent of their tasks.

“There’s tremendous uncertainty,” said David Autor, a professor of economics at the Massachusetts Institute of Technology, who has been studying technological change and the labor market for more than 20 years. “And people want to provide those answers.”

But what exactly does it mean to say that, for instance, the equivalent of 300 million full-time jobs could be affected by A.I.?

It depends, Mr. Autor said. “Affected could mean made better, made worse, disappeared, doubled.”…(More)”.

From the Economic Graph to Economic Insights: Building the Infrastructure for Delivering Labor Market Insights from LinkedIn Data


Blog by Patrick Driscoll and Akash Kaura: “LinkedIn’s vision is to create economic opportunity for every member of the global workforce. Since its inception in 2015, the Economic Graph Research and Insights (EGRI) team has worked to make this vision a reality by generating labor market insights such as:

In this post, we’ll describe how the EGRI Data Foundations team (Team Asimov) leverages LinkedIn’s cutting-edge data infrastructure tools such as Unified Metrics Platform, Pinot, and DataHub to ensure we can deliver data and insights robustly, securely, and at scale to a myriad of partners. We will illustrate this through a case study of how we built the pipeline for our most well-known and oft-cited flagship metric: the LinkedIn Hiring Rate…(More)”.

Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity


Book by Daron Acemoglu and Simon Johnson: “A thousand years of history and contemporary evidence make one thing clear. Progress depends on the choices we make about technology. New ways of organizing production and communication can either serve the narrow interests of an elite or become the foundation for widespread prosperity.

The wealth generated by technological improvements in agriculture during the European Middle Ages was captured by the nobility and used to build grand cathedrals while peasants remained on the edge of starvation. The first hundred years of industrialization in England delivered stagnant incomes for working people. And throughout the world today, digital technologies and artificial intelligence undermine jobs and democracy through excessive automation, massive data collection, and intrusive surveillance.

It doesn’t have to be this way. Power and Progress demonstrates that the path of technology was once—and may again be—brought under control. The tremendous computing advances of the last half century can become empowering and democratizing tools, but not if all major decisions remain in the hands of a few hubristic tech leaders. With their breakthrough economic theory and manifesto for a better society, Acemoglu and Johnson provide the vision needed to reshape how we innovate and who really gains from technological advances…(More)”.

Machines of mind: The case for an AI-powered productivity boom


Report by Martin Neil Baily, Erik Brynjolfsson, Anton Korinek: “Large language models such as ChatGPT are emerging as powerful tools that not only make workers more productive but also increase the rate of innovation, laying the foundation for a significant acceleration in economic growth. As a general purpose technology, AI will impact a wide array of industries, prompting investments in new skills, transforming business processes, and altering the nature of work. However, official statistics will only partially capture the boost in productivity because the output of knowledge workers is difficult to measure. The rapid advances can have great benefits but may also lead to significant risks, so it is crucial to ensure that we steer progress in a direction that benefits all of society…(More)”.

AI in Hiring and Evaluating Workers: What Americans Think


Pew Research Center survey: “… finds crosscurrents in the public’s opinions as they look at the possible uses of AI in workplaces. Americans are wary and sometimes worried. For instance, they oppose AI use in making final hiring decisions by a 71%-7% margin, and a majority also opposes AI analysis being used in making firing decisions. Pluralities oppose AI use in reviewing job applications and in determining whether a worker should be promoted. Beyond that, majorities do not support the idea of AI systems being used to track workers’ movements while they are at work or keeping track of when office workers are at their desks.

Yet there are instances where people think AI in workplaces would do better than humans. For example, 47% think AI would do better than humans at evaluating all job applicants in the same way, while a much smaller share – 15% – believe AI would be worse than humans in doing that. And among those who believe that bias along racial and ethnic lines is a problem in performance evaluations generally, more believe that greater use of AI by employers would make things better rather than worse in the hiring and worker-evaluation process. 

Overall, larger shares of Americans than not believe AI use in workplaces will significantly affect workers in general, but far fewer believe the use of AI in those places will have a major impact on them personally. Some 62% think the use of AI in the workplace will have a major impact on workers generally over the next 20 years. On the other hand, just 28% believe the use of AI will have a major impact on them personally, while roughly half believe there will be no impact on them or that the impact will be minor…(More)”.

Workforce ecosystems and AI


Report by David Kiron, Elizabeth J. Altman, and Christoph Riedl: “Companies increasingly rely on an extended workforce (e.g., contractors, gig workers, professional service firms, complementor organizations, and technologies such as algorithmic management and artificial intelligence) to achieve strategic goals and objectives. When we ask leaders to describe how they define their workforce today, they mention a diverse array of participants, beyond just full- and part-time employees, all contributing in various ways. Many of these leaders observe that their extended workforce now comprises 30-50% of their entire workforce. For example, Novartis has approximately 100,000 employees and counts more than 50,000 other workers as external contributors. Businesses are also increasingly using crowdsourcing platforms to engage external participants in the development of products and services. Managers are thinking about their workforce in terms of who contributes to outcomes, not just by workers’ employment arrangements.

Our ongoing research on workforce ecosystems demonstrates that managing work across organizational boundaries with groups of interdependent actors in a variety of employment relationships creates new opportunities and risks for both workers and businesses. These are not subtle shifts. We define a workforce ecosystem as:

A structure that encompasses actors, from within the organization and beyond, working to create value for an organization. Within the ecosystem, actors work toward individual and collective goals with interdependencies and complementarities among the participants.

The emergence of workforce ecosystems has implications for management theory, organizational behavior, social welfare, and policymakers. In particular, issues surrounding work and worker flexibility, equity, and data governance and transparency pose substantial opportunities for policymaking.

At the same time, artificial intelligence (AI)—which we define broadly to include machine learning and algorithmic management—is playing an increasingly large role within the corporate context. The widespread use of AI is already displacing workers through automation, augmenting human performance at work, and creating new job categories…(More)”.

The Technology/Jobs Puzzle: A European Perspective


Blog by Pierre-Alexandre Balland, Lucía Bosoer and Andrea Renda as part of the work of the Markle Technology Policy and Research Consortium: “In recent years, the creation of “good jobs” – defined as occupations that provide a middle-class living standard, adequate benefits, sufficient economic security, personal autonomy, and career prospects (Rodrik and Sabel 2019; Rodrik and Stantcheva 2021) – has become imperative for many governments. At the same time, developments in industrial value chains and in digital technologies such as Artificial Intelligence (AI) create important challenges for the creation of good jobs. On the one hand, future good jobs may not be found only in manufacturing, and this requires that industrial policy increasingly looks at services. On the other hand, AI has shown the potential to automate both routine and non-routine tasks (TTC 2022), and this poses new, important questions on what role humans will play in the industrial value chains of the future. In the report drafted for the Markle Technology Policy and Research Consortium on The Technology/Jobs Puzzle: A European Perspective, we analyze Europe’s approach to the creation of “good jobs”. By mapping Europe’s technological specialization, we estimate in which sectors good jobs are most likely to emerge, and assess the main opportunities and challenges Europe faces on the road to a resilient, sustainable and competitive future economy.

The report features an important reflection on how to define job quality and, relatedly, “good jobs”. From the perspective of the European Union, job quality can be defined along two distinct dimensions. First, while the internationally agreed definition is rather static (e.g. related to the current conditions of the worker), the emerging interpretation at the EU level incorporates the extent to which a given job leads to nurturing human capital, and thereby empowering workers with more skills and well-being over time.
Second, job quality can be seen from a “micro” perspective, which only accounts for the condition of the individual worker; or from a more “macro” perspective, which considers whether the sector in which the job emerges is compatible with the EU’s agenda, and in particular with the twin (green and digital) transition. As a result, we argue that ideally, Europe should avoid creating “good” jobs in “bad” sectors, as well as “bad” jobs in “good” sectors. The ultimate goal is to create “good” jobs in “good” sectors…(More)”.