The A.I. Revolution Will Change Work. Nobody Agrees How.


Sarah Kessler in The New York Times: “In 2013, researchers at Oxford University published a startling number about the future of work: 47 percent of all United States jobs, they estimated, were ‘at risk’ of automation ‘over some unspecified number of years, perhaps a decade or two.’

But a decade later, unemployment in the country is at record low levels. The tsunami of grim headlines back then — like ‘The Rich and Their Robots Are About to Make Half the World’s Jobs Disappear’ — looks wildly off the mark.

But the study’s authors say they didn’t actually mean to suggest doomsday was near. Instead, they were trying to describe what technology was capable of.

It was the first stab at what has become a long-running thought experiment, with think tanks, corporate research groups and economists publishing paper after paper to pinpoint how much work is “affected by” or “exposed to” technology.

In other words: If the cost of the tools weren’t a factor, and the only goal was to automate as much human labor as possible, how much work could technology take over?

When the Oxford researchers, Carl Benedikt Frey and Michael A. Osborne, were conducting their study, IBM Watson, a question-answering system powered by artificial intelligence, had just shocked the world by winning “Jeopardy!” Test versions of autonomous vehicles were circling roads for the first time. Now, a new wave of studies follows the rise of tools that use generative A.I.

In March, Goldman Sachs estimated that the technology behind popular A.I. tools such as DALL-E and ChatGPT could automate the equivalent of 300 million full-time jobs. Researchers at OpenAI, the maker of those tools, and the University of Pennsylvania found that 80 percent of the U.S. work force could see an effect on at least 10 percent of their tasks.

“There’s tremendous uncertainty,” said David Autor, a professor of economics at the Massachusetts Institute of Technology, who has been studying technological change and the labor market for more than 20 years. “And people want to provide those answers.”

But what exactly does it mean to say that, for instance, the equivalent of 300 million full-time jobs could be affected by A.I.?

It depends, Mr. Autor said. “Affected could mean made better, made worse, disappeared, doubled.”…(More)”.

Politicians love to appeal to common sense – but does it trump expertise?


Essay by Magda Osman: “Politicians love to talk about the benefits of “common sense” – often by pitting it against the words of “experts and elites”. But what is common sense? Why do politicians love it so much? And is there any evidence that it ever trumps expertise? Psychology provides a clue.

We often view common sense as an authority of collective knowledge that is universal and constant, unlike expertise. By appealing to the common sense of your listeners, you therefore end up on their side, and squarely against the side of the “experts”. But this argument, like an old sock, is full of holes.

Experts have gained knowledge and experience in a given speciality, in which case politicians are experts as well. This means a false dichotomy is created between “them” (let’s say scientific experts) and “us” (non-expert mouthpieces of the people).

Common sense is broadly defined in research as a shared set of beliefs and approaches to thinking about the world. For example, common sense is often invoked to justify that what we believe is right or wrong, without offering evidence.

But common sense isn’t independent of scientific and technological discoveries. Common sense versus scientific beliefs is therefore also a false dichotomy. Our “common” beliefs are informed by, and inform, scientific and technological discoveries…

The idea that common sense is universal and self-evident because it reflects the collective wisdom of experience – and so can be contrasted with scientific discoveries that are constantly changing and updated – is also false. And the same goes for the argument that non-experts tend to view the world the same way through shared beliefs, while scientists never seem to agree on anything.

Just as scientific discoveries change, common sense beliefs change over time and across cultures. They can also be contradictory: we are told “quit while you are ahead” but also “winners never quit”, and “better safe than sorry” but “nothing ventured nothing gained”…(More)”

Detecting Human Rights Violations on Social Media during Russia-Ukraine War


Paper by Poli Nemkova et al.: “The present-day Russia-Ukraine military conflict has exposed the pivotal role of social media in enabling the transparent and unbridled sharing of information directly from the frontlines. In conflict zones where freedom of expression is constrained and information warfare is pervasive, social media has emerged as an indispensable lifeline. Anonymous social media platforms, as publicly available sources for disseminating war-related information, have the potential to serve as effective instruments for monitoring and documenting Human Rights Violations (HRV). Our research focuses on the analysis of data from Telegram, the leading social media platform for reading independent news in post-Soviet regions. We gathered a dataset of posts sampled from 95 public Telegram channels that cover politics and war news, which we have utilized to identify potential occurrences of HRV. Employing an mBERT-based text classifier, we have conducted an analysis to detect any mentions of HRV in the Telegram data. Our final approach yielded an F2 score of 0.71 for HRV detection, an improvement of 0.38 over the multilingual BERT base model. We release two datasets containing Telegram posts from the context of the Russia-Ukraine war: (1) a large corpus with over 2.3 million posts, and (2) a dataset annotated at the sentence level to indicate HRVs. We posit that our findings hold significant implications for NGOs, governments, and researchers by providing a means to detect and document possible human rights violations…(More)” See also Data for Peace and Humanitarian Response? The Case of the Ukraine-Russia War
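
For readers who want a concrete picture of the method, here is a minimal sketch (not the authors’ released code) of sentence-level HRV classification with a multilingual BERT model, plus the F2 metric the paper reports. The checkpoint, label set, and example sentences are illustrative assumptions; a real classifier would first be fine-tuned on the annotated dataset.

```python
# A minimal sketch of sentence-level HRV detection with multilingual BERT.
# NOT the authors' released pipeline: the checkpoint, labels, and example
# sentences are assumptions, and the classification head is untrained
# until fine-tuned on the annotated dataset.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.metrics import fbeta_score

checkpoint = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=2)  # 0 = no HRV mention, 1 = HRV mention
model.eval()

sentences = [
    "Shelling damaged a hospital in the city center.",  # hypothetical
    "The weather in Kyiv is sunny today.",              # hypothetical
]
batch = tokenizer(sentences, padding=True, truncation=True,
                  max_length=128, return_tensors="pt")
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1).tolist()

# F2 weights recall twice as heavily as precision -- a sensible choice when
# missing a potential violation is costlier than a false alarm.
gold = [1, 0]  # hypothetical sentence-level labels
print("F2 =", fbeta_score(gold, preds, beta=2, zero_division=0))
```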

Opportunities and Challenges in Reusing Public Genomics Data


Introduction to Special Issue by Mahmoud Ahmed and Deok Ryong Kim: “Genomics data is accumulating in public repositories at an ever-increasing rate. Large consortia and individual labs continue to probe animal and plant tissue and cell cultures, generating vast amounts of data using established and novel technologies. The Human Genome Project kickstarted the era of systems biology (1, 2). Ambitious projects followed to characterize non-coding regions and variation across species and between populations (3, 4, 5). The cost reduction allowed individual labs to generate numerous smaller high-throughput datasets (6, 7, 8, 9). As a result, the scientific community should consider strategies to overcome the challenges and maximize the opportunities to use these resources for research and the public good. In this collection, we elicit opinions and perspectives from researchers in the field on the opportunities and challenges of reusing public genomics data. The articles in this research topic converge on the need for data sharing while acknowledging the challenges that come with it. Two articles define and highlight the distinction between data and metadata; the characteristics of each should be considered when designing optimal sharing strategies. One article focuses on the specific issues surrounding the sharing of genomics interval data, and another on balancing the protection of pediatric rights against the benefits of sharing.

The definition of what counts as data is itself a moving target. As technology advances, data can be produced in more ways and from novel sources. Events of recent years have highlighted this fact. “The pandemic has underscored the urgent need to recognize health data as a global public good with mechanisms to facilitate rapid data sharing and governance,” write Schwalbe and colleagues (2020). The challenges facing these mechanisms could be technical, economic, legal, or political. Defining what counts as data, and of what type, is therefore necessary to overcome these barriers because “the mechanisms to facilitate data sharing are often specific to data types.” Unlike genomics data, which has established platforms, sharing clinical data “remains in a nascent phase.” The article by Patrinos and colleagues (2022) considers the strong ethical imperative for protecting pediatric data while acknowledging the need to avoid overprotection. The authors discuss a model of consent for pediatric research that can balance protecting participants with generating health benefits.

Xue et al. (2023) focus on reusing genomic interval data. Identifying and retrieving the relevant data can be difficult, given the state of the repositories and the size of these datasets. Similarly, integrating interval data with reference genomes can be hard. The authors call for standardized formats for the data and the metadata to facilitate reuse.
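
To make “interval data” concrete: genomic intervals are commonly exchanged as tab-separated BED-style records, each naming a chromosome and a 0-based, half-open coordinate range. The short sketch below parses such records; the file name is hypothetical.

```python
# Parse minimal BED-style interval records: tab-separated columns
# chrom, start, end, where start is 0-based and end is exclusive.
from dataclasses import dataclass

@dataclass
class Interval:
    chrom: str
    start: int  # 0-based, inclusive
    end: int    # exclusive

def read_bed(path: str) -> list[Interval]:
    intervals = []
    with open(path) as fh:
        for line in fh:
            # Skip track/browser headers, comments, and blank lines.
            if not line.strip() or line.startswith(("#", "track", "browser")):
                continue
            chrom, start, end, *_ = line.rstrip("\n").split("\t")
            intervals.append(Interval(chrom, int(start), int(end)))
    return intervals

# Usage (hypothetical file): intervals = read_bed("peaks.bed")
```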

Sheffield and colleagues (2023) highlight the distinction between data and metadata. Metadata describes the characteristics of the sample, experiment, and analysis. The nature of this information differs from that of the primary data in size, source, and ways of use. Therefore, an optimal strategy should consider these specific attributes for sharing metadata. Challenges specifics to sharing metadata include the need for standardized terms and formats, making it portable and easier to find.

We go beyond the reuse issue to highlight two other aspects that might increase the utility of available public data in Ahmed et al. (2023). These are curation and integration…(More)”.

There’s a model for governing AI. Here it is.


Article by Jacinda Ardern: “…On March 15, 2019, a terrorist took the lives of 51 members of New Zealand’s Muslim community in Christchurch. The attacker livestreamed his actions for 17 minutes, and the images found their way onto social media feeds all around the planet. Facebook alone blocked or removed 1.5 million copies of the video in the first 24 hours; in that timeframe, YouTube measured one upload per second.

Afterward, New Zealand was faced with a choice: accept that such exploitation of technology was inevitable or resolve to stop it. We chose to take a stand.

We had to move quickly. The world was watching our response and that of social media platforms. Would we regulate in haste? Would the platforms recognize their responsibility to prevent this from happening again?

New Zealand wasn’t the only nation grappling with the connection between violent extremism and technology. We wanted to create a coalition and knew that France had started to work in this space — so I reached out, leader to leader. In my first conversation with President Emmanuel Macron, he agreed there was work to do and said he was keen to join us in crafting a call to action.

We asked industry, civil society and other governments to join us at the table to agree on a set of actions we could all commit to. We could not use existing structures and bureaucracies because they weren’t equipped to deal with this problem.

Within two months of the attack, we launched the Christchurch Call to Action, and today it has more than 120 members, including governments, online service providers and civil society organizations — united by our shared objective to eliminate terrorist and other violent extremist content online and uphold the principle of a free, open and secure internet.

The Christchurch Call is a large-scale collaboration, vastly different from most top-down approaches. Leaders meet annually to confirm priorities and identify areas of focus, allowing the project to act dynamically. And the Call Secretariat — made up of officials from France and New Zealand — convenes working groups and undertakes diplomatic efforts throughout the year. All members are invited to bring their expertise to solve urgent online problems.

While this multi-stakeholder approach isn’t always easy, it has created change. We have bolstered the power of governments and communities to respond to attacks like the one New Zealand experienced. We have created new crisis-response protocols — which enabled companies to stop the 2022 Buffalo attack livestream within two minutes and quickly remove footage from many platforms. Companies and countries have enacted new trust and safety measures to prevent livestreaming of terrorist and other violent extremist content. And we have strengthened the industry-founded Global Internet Forum to Counter Terrorism with dedicated funding, staff and a multi-stakeholder mission.

We’re also taking on some of the more intransigent problems. The Christchurch Call Initiative on Algorithmic Outcomes, a partnership with companies and researchers, was intended to provide better access to the kind of data needed to design online safety measures to prevent radicalization to violence. In practice, it has much wider ramifications, enabling us to reveal more about the ways in which AI and humans interact.

From its start, the Christchurch Call anticipated the emerging challenges of AI and carved out space to address emerging technologies that threaten to foment violent extremism online. The Christchurch Call is actively tackling these AI issues.

Perhaps the most useful thing the Christchurch Call can add to the AI governance debate is the model itself. It is possible to bring companies, government officials, academics and civil society together not only to build consensus but also to make progress. It’s possible to create tools that address the here and now and also position ourselves to face an unknown future. We need this to deal with AI…(More)”.

OECD Recommendation on Digital Identity


OECD Recommendation: “…Recommends that Adherents prioritise inclusion and minimise barriers to access to and the use of digital identity. To this effect, Adherents should: 

 1. Promote accessibility, affordability, usability, and equity across the digital identity lifecycle in order to increase access to a secure and trusted digital identity solution, including by vulnerable groups and minorities in accordance with their needs; 

2. Take steps to ensure that access to essential services, including those in the public and private sectors, is not restricted or denied to natural persons who do not want to, or cannot, access or use a digital identity solution; 

3. Facilitate inclusive and collaborative stakeholder engagement throughout the design, development, and implementation of digital identity systems, to promote transparency, accountability, and alignment with user needs and expectations; 

4. Raise awareness of the benefits and secure uses of digital identity and the way in which the digital identity system protects users while acknowledging risks and demonstrating the mitigation of potential harms; 

5. Take steps to ensure that support is provided through appropriate channel(s), for those who face challenges in accessing and using digital identity solutions, and identify opportunities to build the skills and capabilities of users; 

6. Monitor, evaluate and publicly report on the effectiveness of the digital identity system, with a focus on inclusiveness and minimising the barriers to the access and use of digital identity…

Recommends that Adherents take a strategic approach to digital identity and define roles and responsibilities across the digital identity ecosystem…(More)”.

Digital Freedoms in French-Speaking African Countries


Report by AFD: “As digital penetration increases in countries across the African continent, its citizens face growing risks and challenges. Indeed, beyond facilitating access to knowledge (such as the online encyclopedia Wikipedia), to leisure tools (such as YouTube), and to sociability (such as social networks), digital technology offers an unprecedented space for democratic expression. 

However, these online civic spaces are under threat. Several governments have enacted vaguely defined laws that allow for arbitrary arrests.

Several countries have implemented repressive practices restricting freedom of expression and access to information. This is what is known as “digital authoritarianism”, which is on the rise in many countries.

This report takes stock of digital freedoms in 26 French-speaking African countries, and proposes concrete actions to improve citizen participation and democracy…(More)”

From the Economic Graph to Economic Insights: Building the Infrastructure for Delivering Labor Market Insights from LinkedIn Data


Blog by Patrick Driscoll and Akash Kaura: “LinkedIn’s vision is to create economic opportunity for every member of the global workforce. Since its inception in 2015, the Economic Graph Research and Insights (EGRI) team has worked to make this vision a reality by generating a range of labor market insights.

In this post, we’ll describe how the EGRI Data Foundations team (Team Asimov) leverages LinkedIn’s cutting-edge data infrastructure tools such as Unified Metrics Platform, Pinot, and DataHub to ensure we can deliver data and insights robustly, securely, and at scale to a myriad of partners. We will illustrate this through a case study of how we built the pipeline for our most well-known and oft-cited flagship metric: the LinkedIn Hiring Rate…(More)”.
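
As a rough intuition for what such a metric pipeline computes, here is a toy sketch of a monthly hiring-rate aggregation: distinct members starting a new position in a month, divided by that month’s member count. The field names, sample numbers, and the exact metric definition are illustrative assumptions, not LinkedIn’s production logic.

```python
# Toy monthly hiring-rate aggregation (illustrative, not LinkedIn's pipeline):
# hiring_rate(month) = members starting a new position / active members.
import pandas as pd

position_starts = pd.DataFrame({  # hypothetical event data
    "member_id": [1, 2, 3, 1, 4],
    "start_date": pd.to_datetime(
        ["2023-01-15", "2023-01-20", "2023-02-03", "2023-02-10", "2023-02-11"]),
})
active_members = pd.Series({"2023-01": 1000, "2023-02": 1100})  # hypothetical

starters = (position_starts
            .assign(month=position_starts["start_date"].dt.strftime("%Y-%m"))
            .groupby("month")["member_id"]
            .nunique())  # count each member at most once per month
hiring_rate = (starters / active_members).rename("hiring_rate")
print(hiring_rate)
```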

AI and Global Governance: Modalities, Rationales, Tensions


Paper by Michael Veale, Kira Matus and Robert Gorwa: “Artificial intelligence (AI) is a salient but polarizing issue of recent times. Actors around the world are engaged in building a governance regime around it. What exactly the “it” is that is being governed, how, by whom, and why—these are all less clear. In this review, we attempt to shine some light on those questions, considering literature on AI, the governance of computing, and regulation and governance more broadly. We take critical stock of the different modalities of the global governance of AI that have been emerging, such as ethical councils, industry governance, contracts and licensing, standards, international agreements, and domestic legislation with extraterritorial impact. Considering these, we examine selected rationales and tensions that underpin them, drawing attention to the interests and ideas driving these different modalities. As these regimes become clearer and more stable, we urge those engaging with or studying the global governance of AI to constantly ask the important question of all global governance regimes: Who benefits?…(More)”.

Artificial Intelligence in the COVID-19 Response


Report by Sean Mann, Carl Berdahl, Lawrence Baker, and Federico Girosi: “We conducted a scoping review to identify AI applications used in the clinical and public health response to COVID-19. Interviews with stakeholders early in the research process helped inform our research questions and guide our study design. We conducted a systematic search, screening, and full text review of both academic and gray literature…

  • AI is still an emerging technology in health care, with growing but modest rates of adoption in real-world clinical and public health practice. The COVID-19 pandemic showcased the wide range of clinical and public health functions performed by AI as well as the limited evidence available on most AI products that have entered use.
  • We identified 66 AI applications (full list in Appendix A) used to perform a wide range of diagnostic, prognostic, and treatment functions in the clinical response to COVID-19. This included applications used to analyze lung images, evaluate user-reported symptoms, monitor vital signs, predict infections, and aid in breathing tube placement. Some applications were used by health systems to help allocate scarce resources to patients.
  • Many clinical applications were deployed early in the pandemic, and most were used in the United States, other high-income countries, or China. A few applications were used to care for hundreds of thousands or even millions of patients, although most were used to an unknown or limited extent.
  • We identified 54 AI-based public health applications used in the pandemic response. These included AI-enabled cameras used to monitor health-related behavior and health messaging chatbots used to answer questions about COVID-19. Other applications were used to curate public health information, produce epidemiologic forecasts, or help prioritize communities for vaccine allocation and outreach efforts.
  • We found studies supporting the use of 39 clinical applications and 8 public health applications, although few of these were independent evaluations, and we found no clinical trials evaluating any application’s impact on patient health. We found little evidence available on entire classes of applications, including some used to inform care decisions such as patient deterioration monitors.
  • Further research is needed, particularly independent evaluations of application performance and health impacts in real-world care settings. New guidance may be needed to overcome the unique challenges of evaluating AI application impacts on patient- and population-level health outcomes…(More)” – See also: The #Data4Covid19 Review