The Formalization of Social Precarities


Anthology edited by Murali Shanmugavelan and Aiha Nguyen: “…explores platformization from the point of view of precarious gig workers in the Majority World. In countries like Bangladesh, Brazil, and India — which reinforce social hierarchies via gender, race, and caste — precarious workers are often the most marginalized members of society. Labor platforms made familiar promises to workers in these countries: work would be democratized, and people would have the opportunity to be their own boss. Yet even as platforms have upended the legal relationship between worker and employer, they have leaned into social structures to keep workers precarious — and in fact formalized those social precarities through surveillance and data collection…(More)”.

Global Contract-level Public Procurement Dataset


Paper by Mihály Fazekas et al: “One-third of total government spending across the globe goes to public procurement, amounting to about 10 trillion dollars a year. Despite its vast size and crucial importance for economic and political developments, there is a lack of globally comparable data on contract awards and tenders run. To fill this gap, this article introduces the Global Public Procurement Dataset (GPPD). Using web scraping methods, we collected official public procurement data on over 72 million contracts from 42 countries between 2006 and 2021 (the time period covered varies by country due to data availability constraints). To overcome the inconsistency of data publishing formats in each country, we standardized the published information to fit a common data standard. For each country, key information is collected on the buyer(s) and supplier(s), geolocation information, product classification, price information, and details of the contracting process such as the contract award date or the procedure type followed. GPPD is a contract-level dataset with precomputed filters that allow it to be narrowed to successfully awarded contracts if needed. We also add several corruption risk indicators and a composite corruption risk index for each contract, which allows for an objective assessment of risks and comparison across time, organizations, or countries. The data can be reused to answer research questions such as the efficiency of public procurement spending, among others. Using unique organizational identification numbers or organization names allows connecting the data to company registries to study broader topics such as ownership networks…(More)”.
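
To make the contract-level structure concrete, here is a minimal sketch, assuming a hypothetical CSV export and illustrative column names (not the actual GPPD schema), of how such a dataset might be filtered to awarded contracts and given a simple composite corruption risk score:

```python
# Illustrative sketch only: the file name, column names, and the averaging rule
# are assumptions, not the GPPD specification.
import pandas as pd

contracts = pd.read_csv("gppd_contracts.csv")  # hypothetical contract-level export

# Keep only successfully awarded contracts, as the dataset's filters are meant to allow.
awarded = contracts[contracts["award_status"] == "awarded"].copy()

# Combine individual risk indicators (e.g. single bidding, short advertisement
# period) into a simple composite index by averaging the available flags.
risk_flags = ["single_bidder", "short_ad_period", "procedure_risk"]
awarded["corruption_risk_index"] = awarded[risk_flags].mean(axis=1)

# Compare average risk across countries and years.
summary = awarded.groupby(["country", "award_year"])["corruption_risk_index"].mean()
print(summary.head())
```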

The Ethics of Advanced AI Assistants


Paper by Iason Gabriel et al: “This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants. We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user – across one or more domains – in line with the user’s expectations. The paper starts by considering the technology itself, providing an overview of AI assistants, their technical foundations and potential range of applications. It then explores questions around AI value alignment, well-being, safety and malicious uses. Extending the circle of inquiry further, we next consider the relationship between advanced AI assistants and individual users in more detail, exploring topics such as manipulation and persuasion, anthropomorphism, appropriate relationships, trust and privacy. With this analysis in place, we consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants. Finally, we conclude by providing a range of recommendations for researchers, developers, policymakers and public stakeholders…(More)”.

The End of the Policy Analyst? Testing the Capability of Artificial Intelligence to Generate Plausible, Persuasive, and Useful Policy Analysis


Article by Mehrdad Safaei and Justin Longo: “Policy advising in government centers on the analysis of public problems and the development of recommendations for dealing with them. In carrying out this work, policy analysts consult a variety of sources and work to synthesize that body of evidence into useful decision support documents commonly called briefing notes. Advances in natural language processing (NLP) have led to the continuing development of tools that can undertake a similar task. Given a brief prompt, a large language model (LLM) can synthesize information in content databases. This article documents the findings from an experiment that tested whether contemporary NLP technology is capable of producing policy-relevant briefing notes that expert evaluators judge to be useful. The research involved two stages. First, briefing notes were created using three models: NLP generated; human generated; and NLP generated/human edited. Next, two panels of retired senior public servants (with only one panel informed of the use of NLP in the experiment) were asked to judge the briefing notes using a heuristic evaluation rubric. The findings indicate that contemporary NLP tools were not able to, on their own, generate useful policy briefings. However, the feedback from the expert evaluators indicates that automatically generated briefing notes might serve as a useful supplement to the work of human policy analysts. And the speed with which the capabilities of NLP tools are developing, supplemented with access to a larger corpus of previously prepared policy briefings and other policy-relevant material, suggests that the quality of automatically generated briefings may improve significantly in the coming years. The article concludes with reflections on what such improvements might mean for the future practice of policy analysis…(More)”.
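
As a rough illustration of the first stage of such an experiment, the sketch below shows how a briefing note might be drafted from a prompt with an LLM API. The prompt structure, section headings, model name, and example inputs are assumptions for illustration; this is not the authors' pipeline.

```python
# Minimal sketch, assuming the OpenAI Python client and an API key in the environment.
from openai import OpenAI

client = OpenAI()

def draft_briefing_note(topic: str, evidence: list[str]) -> str:
    """Ask the model to synthesize supplied evidence into a briefing-note format."""
    evidence_block = "\n".join(f"- {item}" for item in evidence)
    prompt = (
        f"Draft a one-page policy briefing note on: {topic}\n"
        "Use the sections: Issue, Background, Considerations, Recommendation.\n"
        f"Base the note only on the following evidence:\n{evidence_block}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(draft_briefing_note(
    "municipal e-scooter regulation",
    ["Injury rates rose after deployment (hypothetical figure).",
     "A neighbouring city capped fleet sizes in 2022 (hypothetical)."]))
```

An evaluation stage like the one described would then have expert panels score such outputs against a rubric, alongside human-written and human-edited notes.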

Unleashing collective intelligence for public decision-making: the Data for Policy community


Paper by Zeynep Engin, Emily Gardner, Andrew Hyde, Stefaan Verhulst and Jon Crowcroft: “Since its establishment in 2014, Data for Policy (https://dataforpolicy.org) has emerged as a prominent global community promoting interdisciplinary research and cross-sector collaborations in the realm of data-driven innovation for governance and policymaking. This report presents an overview of the community’s evolution from 2014 to 2023 and introduces its six-area framework, which provides a comprehensive mapping of the data for policy research landscape. The framework is based on extensive consultations with key stakeholders involved in the international committees of the annual Data for Policy conference series and the open-access journal Data & Policy published by Cambridge University Press. By presenting this inclusive framework, along with the guiding principles and future outlook for the community, this report serves as a vital foundation for continued research and innovation in the field of data for policy...(More)”.

The AI That Could Heal a Divided Internet


Article by Billy Perrigo: “In the 1990s and early 2000s, technologists made the world a grand promise: new communications technologies would strengthen democracy, undermine authoritarianism, and lead to a new era of human flourishing. But today, few people would agree that the internet has lived up to that lofty goal. 

Today, on social media platforms, content tends to be ranked by how much engagement it receives. Over the last two decades, politics, the media, and culture have all been reshaped to meet a single, overriding incentive: posts that provoke an emotional response often rise to the top.

Efforts to improve the health of online spaces have long focused on content moderation, the practice of detecting and removing bad content. Tech companies hired workers and built AI to identify hate speech, incitement to violence, and harassment. That worked imperfectly, but it stopped the worst toxicity from flooding our feeds. 

There was one problem: while these AIs helped remove the bad, they didn’t elevate the good. “Do you see an internet that is working, where we are having conversations that are healthy or productive?” asks Yasmin Green, the CEO of Google’s Jigsaw unit, which was founded in 2010 with a remit to address threats to open societies. “No. You see an internet that is driving us further and further apart.”

What if there were another way? 

Jigsaw believes it has found one. On Monday, the Google subsidiary revealed a new set of AI tools, or classifiers, that can score posts based on the likelihood that they contain good content: Is a post nuanced? Does it contain evidence-based reasoning? Does it share a personal story, or foster human compassion? By returning a numerical score (from 0 to 1) representing the likelihood of a post containing each of those virtues and others, these new AI tools could allow the designers of online spaces to rank posts in a new way. Instead of posts that receive the most likes or comments rising to the top, platforms could—in an effort to foster a better community—choose to put the most nuanced comments, or the most compassionate ones, first…(More)”.
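
The re-ranking idea is straightforward to express in code. The sketch below assumes per-post attribute scores in [0, 1] have already been produced by classifiers of the kind described; the attribute names, weights, and data structures are illustrative assumptions, not Jigsaw's API.

```python
# Minimal sketch of ranking posts by classifier-scored qualities instead of engagement.
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    likes: int
    scores: dict = field(default_factory=dict)  # e.g. {"nuance": 0.8, "compassion": 0.6}

def quality_score(post: Post, weights: dict) -> float:
    """Weighted combination of attribute scores; higher means more 'good content'."""
    return sum(w * post.scores.get(attr, 0.0) for attr, w in weights.items())

def rank_feed(posts: list[Post], weights: dict) -> list[Post]:
    """Order the feed by quality score rather than by likes."""
    return sorted(posts, key=lambda p: quality_score(p, weights), reverse=True)

weights = {"nuance": 0.5, "compassion": 0.3, "personal_story": 0.2}
feed = rank_feed([
    Post("Hot take!!!", likes=900, scores={"nuance": 0.1, "compassion": 0.2}),
    Post("Here is my experience, and why I see both sides...", likes=40,
         scores={"nuance": 0.8, "compassion": 0.7, "personal_story": 0.9}),
], weights)
print([p.text for p in feed])
```

A platform could tune the weights to its community's goals, or blend the quality score with engagement signals rather than replacing them outright.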

Millions of gamers advance biomedical research


Article by McGill: “…4.5 million gamers around the world have advanced medical science by helping to reconstruct microbial evolutionary histories using a minigame included inside the critically and commercially successful video game, Borderlands 3. Their playing has led to a significantly refined estimate of the relationships of microbes in the human gut. The results of this collaboration will both substantially advance our knowledge of the microbiome and improve on the AI programs that will be used to carry out this work in the future.

By playing Borderlands Science, a mini-game within the looter-shooter video game Borderlands 3, these players have helped trace the evolutionary relationships of more than a million different kinds of bacteria that live in the human gut, some of which play a crucial role in our health. This information represents an exponential increase in what we have discovered about the microbiome until now. By aligning rows of tiles which represent the genetic building blocks of different microbes, humans have been able to take on tasks that even the best existing computer algorithms have so far been unable to solve…(More) (and More)”.
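
The tile-matching task players perform corresponds to sequence alignment. As a rough sketch of the underlying computation, the following global alignment scorer (Needleman-Wunsch style) lines up two short sequences of building blocks; the scoring values are arbitrary illustrations, and the actual project aggregates millions of puzzle solutions into a much larger multiple-alignment pipeline.

```python
# Minimal pairwise global alignment score, for illustration only.
def align_score(a: str, b: str, match: int = 1, mismatch: int = -1, gap: int = -1) -> int:
    """Return the best global alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap          # a prefix of a aligned against gaps
    for j in range(1, cols):
        score[0][j] = j * gap          # a prefix of b aligned against gaps
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[-1][-1]

print(align_score("ACGTAC", "ACTAC"))  # one gap in the shorter sequence, rest matching
```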

United against algorithms: a primer on disability-led struggles against algorithmic injustice


Report by Georgia van Toorn: “Algorithmic decision-making (ADM) poses urgent concerns regarding the rights and entitlements of people with disability from all walks of life. As ADM systems become increasingly embedded in government decision-making processes, there is a heightened risk of harm, such as unjust denial of benefits or inadequate support, accentuated by the expanding reach of state surveillance.

ADM systems have far-reaching impacts on disabled lives and life chances. Despite this, they are often designed without the input of people with lived experience of disability, for purposes that do not align with the goals of full rights, participation, and justice for disabled people.

This primer explores how people with disability are collectively responding to the threats posed by algorithmic, data-driven systems – specifically their public sector applications. It provides an introductory overview of the topic, exploring the approaches, obstacles, and actions taken by people with disability in their ‘algoactivist’ struggles…(More)”.

Measuring the mobile body


Article by Laura Jung: “…While nation states have been collecting data on citizens for the purposes of taxation and military recruitment for centuries, the indexing of such data, its organization in databases and its classification for particular governmental purposes – such as controlling the mobility of ‘undesirable’ populations – is a nineteenth-century invention. The French historian and philosopher Michel Foucault describes how, in the context of growing urbanization and industrialization, states became increasingly preoccupied with the question of ‘circulation’. Persons and goods, as well as pathogens, circulated further than they had in the early modern period. While states didn’t seek to suppress or control these movements entirely, they sought means to increase what was seen as ‘positive’ circulation and minimize ‘negative’ circulation. They deployed the novel tools of a positivist social science for this purpose: statistical approaches were used in the field of demography to track and regulate phenomena such as births, accidents, illness and deaths. The emerging managerial nation state addressed the problem of circulation by developing a very particular toolkit: amassing detailed information about the population and developing standardized methods of storage and analysis.

One particularly vexing problem was the circulation of known criminals. In the nineteenth century, it was widely believed that if a person offended once, they would offend again. However, the systems available for criminal identification were woefully inadequate to the task.

As criminologist Simon Cole explains, identifying an unknown person requires a ‘truly unique body mark’. Yet before the advent of modern systems of identification, there were only two ways to do this: branding or personal recognition. While branding had been widely used in Europe and North America on convicts, prisoners and enslaved people, evolving ideas around criminality and punishment largely led to the abolition of physical marking in the early nineteenth century. The criminal record was established in its place: a written document cataloguing the convict’s name and a written description of their person, including identifying marks and scars…(More)”.

The False Choice Between Digital Regulation and Innovation


Paper by Anu Bradford: “This Article challenges the common view that more stringent regulation of the digital economy inevitably compromises innovation and undermines technological progress. This view, vigorously advocated by the tech industry, has shaped the public discourse in the United States, where the country’s thriving tech economy is often associated with a staunch commitment to free markets. US lawmakers have also traditionally embraced this perspective, which explains their hesitancy to regulate the tech industry to date. The European Union has chosen another path, regulating the digital economy with stringent data privacy, antitrust, content moderation, and other digital regulations designed to shape the evolution of the tech economy towards European values around digital rights and fairness. According to the EU’s critics, this far-reaching tech regulation has come at the cost of innovation, explaining the EU’s inability to nurture tech companies and compete with the US and China in the tech race. However, this Article argues that the association between digital regulation and technological progress is considerably more complex than what the public conversation, US lawmakers, tech companies, and several scholars have suggested to date. For this reason, the existing technological gap between the US and the EU should not be attributed to the laxity of American laws and the stringency of European digital regulation. Instead, this Article shows there are more foundational features of the American legal and technological ecosystem that have paved the way for US tech companies’ rise to global prominence—features that the EU has not been able to replicate to date. By severing tech regulation from its allegedly adverse effect on innovation, this Article seeks to advance a more productive scholarly conversation on the costs and benefits of digital regulation. It also directs governments deliberating tech policy away from a false choice between regulation and innovation while drawing their attention to a broader set of legal and institutional reforms that are necessary for tech companies to innovate and for digital economies and societies to thrive…(More)”.