WikiProject AI Cleanup


Article by Emanuel Maiberg: “A group of Wikipedia editors have formed WikiProject AI Cleanup, ‘a collaboration to combat the increasing problem of unsourced, poorly-written AI-generated content on Wikipedia.’

The group’s goal is to protect one of the world’s largest repositories of information from the same kind of misleading AI-generated information that has plagued Google search results, books sold on Amazon, and academic journals.

“A few of us had noticed the prevalence of unnatural writing that showed clear signs of being AI-generated, and we managed to replicate similar ‘styles’ using ChatGPT,” Ilyas Lebleu, a founding member of WikiProject AI Cleanup, told me in an email. “Discovering some common AI catchphrases allowed us to quickly spot some of the most egregious examples of generated articles, which we quickly wanted to formalize into an organized project to compile our findings and techniques.”…(More)”.

Machines of Loving Grace


Essay by Dario Amodei: “I think and talk a lot about the risks of powerful AI. The company I’m the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I’m a pessimist or “doomer” who thinks AI will be mostly bad or dangerous. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.

In this essay I try to sketch out what that upside might look like—what a world with powerful AI might look like if everything goes right. Of course no one can know the future with any certainty or precision, and the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses. But I am aiming for at least educated and useful guesses, which capture the flavor of what will happen even if most details end up being wrong. I’m including lots of details mainly because I think a concrete vision does more to advance discussion than a highly hedged and abstract one…(More)”.

G7 Toolkit for Artificial Intelligence in the Public Sector


OECD Toolkit: “…a comprehensive guide designed to help policymakers and public sector leaders translate principles for safe, secure, and trustworthy Artificial Intelligence (AI) into actionable policies. AI can help improve the efficiency of internal operations, the effectiveness of policymaking, the responsiveness of public services, and overall transparency and accountability. Recognising both the opportunities and risks posed by AI, this toolkit provides practical insights, shares good practices for the use of AI in and by the public sector, integrates ethical considerations, and provides an overview of G7 trends. It further showcases public sector AI use cases, detailing their benefits, as well as the implementation challenges faced by G7 members, together with the emerging policy responses to guide and coordinate the development, deployment, and use of AI in the public sector. The toolkit finally highlights key stages and factors characterising the journey of public sector AI solutions…(More)”.

What AI Can Do for Your Country


Article by Jylana L. Sheats: “…Although most discussions of artificial intelligence focus on its impacts on business and research, AI is also poised to transform government in the United States and beyond. AI-guided disaster response is just one piece of the picture. The U.S. Department of Health and Human Services has an experimental AI program to diagnose COVID-19 and flu cases by analyzing the sound of patients coughing into their smartphones. The Department of Justice uses AI algorithms to help prioritize which tips in the FBI’s Threat Intake Processing System to act on first. Other proposals, still at the concept stage, aim to extend the applications of AI to improve the efficiency and effectiveness of nearly every aspect of public services.

The early applications illustrate the potential for AI to make government operations more effective and responsive. They also illustrate the looming challenges. The federal government will have to recruit, train, and retain skilled workers capable of managing the new technology, competing with the private sector for top talent. The government also faces the daunting task of ensuring the ethical and equitable use of AI. Relying on algorithms to direct disaster relief or to flag high-priority crimes raises immediate concerns: What if biases built into the AI overlook some of the groups that most need assistance, or unfairly target certain populations? As AI becomes embedded in more government operations, the opportunities for misuse and unintended consequences will only expand…(More)”.

Digital Distractions with Peer Influence: The Impact of Mobile App Usage on Academic and Labor Market Outcomes


Paper by Panle Jia Barwick, Siyu Chen, Chao Fu & Teng Li: “Concerns over the excessive use of mobile phones, especially among youths and young adults, are growing. Leveraging administrative student data from a Chinese university merged with mobile phone records, random roommate assignments, and a policy shock that affects peers’ peers, we present, to our knowledge, the first estimates of both behavioral spillover and contextual peer effects, and the first estimates of medium-term impacts of mobile app usage on academic achievement, physical health, and labor market outcomes. App usage is contagious: a one standard deviation (s.d.) increase in roommates’ in-college app usage raises own app usage by 4.4% on average, with substantial heterogeneity across students. App usage is detrimental to both academic performance and labor market outcomes. A one s.d. increase in own app usage reduces GPAs by 36.2% of a within-cohort-major s.d. and lowers wages by 2.3%. Roommates’ app usage exerts both direct effects (e.g., noise and disruptions) and indirect effects (via behavioral spillovers) on GPA and wages, resulting in a total negative impact more than half the size of the own-usage effect. Extending China’s minors’ game restriction policy of 3 hours per week to college students would boost their initial wages by 0.7%. Using high-frequency GPS data, we identify one underlying mechanism: high app usage crowds out time in study halls and increases absences from and late arrivals at lectures…(More)”.
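The abstract reports spillover and outcome estimates without spelling out the estimating equations. As a rough illustration only, the sketch below shows the kind of linear-in-means peer-effects regressions that recover such quantities; the variable names, controls, and data file are hypothetical assumptions, not the authors’ actual specification.

```python
# Illustrative linear-in-means peer-effects regressions (not the paper's model).
# Assumes a hypothetical students.csv with standardized (z-scored) measures.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("students.csv")  # one row per student

# Behavioral spillover: own app usage on randomly assigned roommates' usage.
spillover = smf.ols(
    "own_usage_z ~ roommate_usage_z + female + C(cohort_major)", data=df
).fit()

# Outcome equation: GPA on own and roommates' app usage.
outcome = smf.ols(
    "gpa_z ~ own_usage_z + roommate_usage_z + female + C(cohort_major)", data=df
).fit()

# Coefficients analogous to the reported spillover and GPA effects.
print(spillover.params["roommate_usage_z"])
print(outcome.params["own_usage_z"])
```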

Use of large language models as a scalable approach to understanding public health discourse


Paper by Laura Espinosa and Marcel Salathé: “Online public health discourse is becoming increasingly important in shaping public health dynamics. Large Language Models (LLMs) offer a scalable solution for analysing the vast amounts of unstructured text found on online platforms. Here, we explore the effectiveness of LLMs, including GPT models and open-source alternatives, for extracting public stances towards vaccination from social media posts. Using an expert-annotated dataset of social media posts related to vaccination, we applied various LLMs and a rule-based sentiment analysis tool to classify the stance towards vaccination. We assessed the accuracy of these methods through comparisons with expert annotations and with annotations obtained through crowdsourcing. Our results demonstrate that few-shot prompting of best-in-class LLMs is the best-performing method, and that all alternatives carry significant risks of substantial misclassification. The study highlights the potential of LLMs as a scalable tool for public health professionals to quickly gauge public opinion on health policies and interventions, offering an efficient alternative to traditional data analysis methods. With continued advances in LLM development, the integration of these models into public health surveillance systems could substantially improve our ability to monitor and respond to changing public health attitudes…(More)”.
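To illustrate the few-shot prompting approach the paper evaluates, here is a minimal sketch of stance classification with a handful of labelled examples. It assumes the OpenAI Python client; the model name, label set, and example posts are placeholders, not the study’s actual prompts or data.

```python
# Minimal few-shot stance classification sketch (illustrative, not the paper's setup).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT_EXAMPLES = [
    ("Just booked my booster, feeling relieved.", "in favour"),
    ("They still have not published the long-term trial data.", "against"),
    ("The clinic opens at 9am on Saturdays.", "neutral"),
]

def classify_stance(post: str) -> str:
    """Return 'in favour', 'against', or 'neutral' for a social media post."""
    messages = [{
        "role": "system",
        "content": ("Classify the stance of each post towards vaccination. "
                    "Answer with exactly one label: in favour, against, or neutral."),
    }]
    # Few-shot block: labelled examples shown to the model before the query.
    for text, label in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": post})

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
        temperature=0,        # deterministic labels for comparison with annotations
    )
    return response.choices[0].message.content.strip().lower()

print(classify_stance("Vaccines saved my grandmother's life this winter."))
```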

Deliberative Technology: Designing AI and Computational Democracy for Peacebuilding in Highly-Polarized Contexts


Report by Lisa Schirch: “This is a report on an international workshop for 45 peacebuilders, co-hosted by Toda Peace Institute and the University of Notre Dame’s Kroc Institute for International Peace Studies in June 2024.  Emphasizing citizen participation and collective intelligence, the workshop explored the intersection of digital democracy and algorithmic technologies designed to enhance democratic processes. Central to the discussions were deliberative technologies, a new class of tools that facilitate collective discussion and decision-making by incorporating both qualitative and quantitative inputs, supported by bridging algorithms and AI. The workshop provided a comprehensive overview of how these innovative approaches and technologies can contribute to more inclusive and effective democratic processes, particularly in contexts marked by polarization and conflict…(More)”

Asserting the public interest in health data: On the ethics of data governance for biobanks and insurers


Paper by Kathryne Metcalf and Jathan Sadowski: “Recent reporting has revealed that the UK Biobank (UKB)—a large, publicly-funded research database containing highly-sensitive health records of over half a million participants—has shared its data with private insurance companies seeking to develop actuarial AI systems for analyzing risk and predicting health. While news reports have characterized this as a significant breach of public trust, the UKB contends that insurance research is “in the public interest,” and that all research participants are adequately protected from the possibility of insurance discrimination via data de-identification. Here, we contest both of these claims. Insurers use population data to identify novel categories of risk, which become fodder in the production of black-boxed actuarial algorithms. The deployment of these algorithms, as we argue, has the potential to increase inequality in health and decrease access to insurance. Importantly, these types of harms are not limited just to UKB participants: instead, they are likely to proliferate unevenly across various populations within global insurance markets via practices of profiling and sorting based on the synthesis of multiple data sources, alongside advances in data analysis capabilities, over space/time. This necessitates a significantly expanded understanding of the publics who must be involved in biobank governance and data-sharing decisions involving insurers…(More)”.

Data’s Role in Unlocking Scientific Potential


Report by the Special Competitive Studies Project: “…we outline two actionable steps the U.S. government can take immediately to address the data sharing challenges hindering scientific research.

1. Create Comprehensive Data Inventories Across Scientific Domains

We recommend that the Secretary of Commerce, acting through the Department of Commerce’s Chief Data Officer and the Director of the National Institute of Standards and Technology (NIST), and with the Federal Chief Data Officer Council (CDO Council), create a government-led inventory where organizations – universities, industries, and research institutes – can catalog their datasets with key details such as purpose, description, and accreditation. Similar to platforms like data.gov, this centralized repository would make high-quality data more visible and accessible, promoting scientific collaboration. To boost participation, the government could offer incentives, such as grants or citation credits for researchers whose data is used. Contributing organizations would also be responsible for regularly updating their entries, ensuring the data stays relevant and searchable.

2. Create Scientific Data Sharing Public-Private Partnerships

A critical recommendation of the National Data Action Plan was for the United States to facilitate the creation of data sharing public-private partnerships (PPPs) for specific sectors. The U.S. Government should coordinate data sharing partnerships with its departments and agencies, industry, academia, and civil society. Data collected by one entity can be tremendously valuable to others. But incentivizing data sharing is challenging, as privacy, security, legal (e.g., liability), and intellectual property (IP) concerns can limit willingness to share. However, narrowly scoped PPPs can help overcome these barriers, allowing for greater data sharing and mutually beneficial data use…(More)”
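To make the first recommendation’s catalog entry concrete, the sketch below shows the kind of metadata record a government-led data inventory might hold. The field names and example values are assumptions for illustration, not a schema proposed in the report.

```python
# Hypothetical dataset-catalog entry for a government-led data inventory.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetEntry:
    title: str
    purpose: str           # why the data was collected
    description: str       # what the dataset contains
    steward: str           # contributing organization responsible for updates
    accreditation: str     # e.g., review-board approval or standard met
    access_url: str        # where researchers can request or download the data
    last_updated: date = field(default_factory=date.today)

entry = DatasetEntry(
    title="Regional air-quality sensor readings",
    purpose="Support environmental health research",
    description="Hourly PM2.5 and ozone readings, 2018-2024",
    steward="Example University",
    accreditation="FAIR self-assessment",
    access_url="https://example.org/data/air-quality",
)
print(entry)
```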

Can LLMs advance democratic values?


Paper by Seth Lazar and Lorenzo Manuali: “LLMs are among the most advanced tools ever devised for analysing and generating linguistic content. Democratic deliberation and decision-making involve, at several distinct stages, the production and analysis of language. So it is natural to ask whether our best tools for manipulating language might prove instrumental to one of our most important linguistic tasks. Researchers and practitioners have recently asked whether LLMs can support democratic deliberation by leveraging abilities to summarise content, as well as to aggregate opinion over summarised content, and indeed to represent voters by predicting their preferences over unseen choices. In this paper, we assess whether using LLMs to perform these and related functions really advances the democratic values that inspire these experiments. We suggest that the record is decidedly mixed. In the presence of background inequality of power and resources, as well as deep moral and political disagreement, we should be careful not to use LLMs in ways that automate non-instrumentally valuable components of the democratic process, or else threaten to supplant fair and transparent decision-making procedures that are necessary to reconcile competing interests and values. However, while we argue that LLMs should be kept well clear of formal democratic decision-making processes, we think that they can be put to good use in strengthening the informal public sphere: the arena that mediates between democratic governments and the polities that they serve, in which political communities seek information, form civic publics, and hold their leaders to account…(More)”.