Article by Mehrdad Safaei and Justin Longo: “Policy advising in government centers on the analysis of public problems and the development of recommendations for dealing with them. In carrying out this work, policy analysts consult a variety of sources and work to synthesize that body of evidence into useful decision support documents commonly called briefing notes. Advances in natural language processing (NLP) have led to the continuing development of tools that can undertake a similar task. Given a brief prompt, a large language model (LLM) can synthesize information from content databases. This article documents the findings from an experiment that tested whether contemporary NLP technology is capable of producing public policy relevant briefing notes that expert evaluators judge to be useful. The research involved two stages. First, briefing notes were created using three methods: NLP-generated; human-generated; and NLP-generated/human-edited. Next, two panels of retired senior public servants (with only one panel informed of the use of NLP in the experiment) were asked to judge the briefing notes using a heuristic evaluation rubric. The findings indicate that contemporary NLP tools were not able to, on their own, generate useful policy briefings. However, the feedback from the expert evaluators indicates that automatically generated briefing notes might serve as a useful supplement to the work of human policy analysts. And the speed with which the capabilities of NLP tools are developing, supplemented with access to a larger corpus of previously prepared policy briefings and other policy-relevant material, suggests that the quality of automatically generated briefings may improve significantly in the coming years. The article concludes with reflections on what such improvements might mean for the future practice of policy analysis…(More)”.
Unleashing collective intelligence for public decision-making: the Data for Policy community
Paper by Zeynep Engin, Emily Gardner, Andrew Hyde, Stefaan Verhulst and Jon Crowcroft: “Since its establishment in 2014, Data for Policy (https://dataforpolicy.org) has emerged as a prominent global community promoting interdisciplinary research and cross-sector collaborations in the realm of data-driven innovation for governance and policymaking. This report presents an overview of the community’s evolution from 2014 to 2023 and introduces its six-area framework, which provides a comprehensive mapping of the data for policy research landscape. The framework is based on extensive consultations with key stakeholders involved in the international committees of the annual Data for Policy conference series and the open-access journal Data & Policy published by Cambridge University Press. By presenting this inclusive framework, along with the guiding principles and future outlook for the community, this report serves as a vital foundation for continued research and innovation in the field of data for policy...(More)”.
The AI That Could Heal a Divided Internet
Article by Billy Perrigo: “In the 1990s and early 2000s, technologists made the world a grand promise: new communications technologies would strengthen democracy, undermine authoritarianism, and lead to a new era of human flourishing. But today, few people would agree that the internet has lived up to that lofty goal.
Today, on social media platforms, content tends to be ranked by how much engagement it receives. Over the last two decades, politics, the media, and culture have all been reshaped to meet a single, overriding incentive: posts that provoke an emotional response often rise to the top.
Efforts to improve the health of online spaces have long focused on content moderation, the practice of detecting and removing bad content. Tech companies hired workers and built AI to identify hate speech, incitement to violence, and harassment. That worked imperfectly, but it stopped the worst toxicity from flooding our feeds.
There was one problem: while these AIs helped remove the bad, they didn’t elevate the good. “Do you see an internet that is working, where we are having conversations that are healthy or productive?” asks Yasmin Green, the CEO of Google’s Jigsaw unit, which was founded in 2010 with a remit to address threats to open societies. “No. You see an internet that is driving us further and further apart.”
What if there were another way?
Jigsaw believes it has found one. On Monday, the Google subsidiary revealed a new set of AI tools, or classifiers, that can score posts based on the likelihood that they contain good content: Is a post nuanced? Does it contain evidence-based reasoning? Does it share a personal story, or foster human compassion? By returning a numerical score (from 0 to 1) representing the likelihood of a post containing each of those virtues and others, these new AI tools could allow the designers of online spaces to rank posts in a new way. Instead of posts that receive the most likes or comments rising to the top, platforms could—in an effort to foster a better community—choose to put the most nuanced comments, or the most compassionate ones, first…(More)”.
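The ranking approach described above can be sketched in a few lines. This is a hypothetical illustration, not Jigsaw's actual implementation: the attribute names, scores, and weights are all invented for the example. Each post carries classifier scores in [0, 1] for qualities like nuance or compassion, and the feed is ordered by a blend of those scores rather than by raw engagement.

```python
# Hypothetical sketch of attribute-based ranking: each post has
# classifier scores in [0, 1] (as the article describes), and the
# feed is sorted by a weighted blend of those scores instead of
# likes. All post data and weights here are illustrative.

posts = [
    {"text": "post A", "likes": 900, "nuance": 0.2, "compassion": 0.1},
    {"text": "post B", "likes": 40,  "nuance": 0.9, "compassion": 0.7},
    {"text": "post C", "likes": 300, "nuance": 0.6, "compassion": 0.8},
]

# Platform designers could tune these weights to favor different virtues.
WEIGHTS = {"nuance": 0.5, "compassion": 0.5}

def quality_score(post):
    """Blend the per-attribute classifier scores into one ranking key."""
    return sum(weight * post[attr] for attr, weight in WEIGHTS.items())

# Engagement ranking puts the most-liked post first; attribute
# ranking surfaces the more nuanced and compassionate posts instead.
by_engagement = sorted(posts, key=lambda p: p["likes"], reverse=True)
by_quality = sorted(posts, key=quality_score, reverse=True)

print([p["text"] for p in by_engagement])  # ['post A', 'post C', 'post B']
print([p["text"] for p in by_quality])     # ['post B', 'post C', 'post A']
```

The point of the sketch is that re-ranking requires no removal of content at all: the same posts remain available, but the ordering incentive changes.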
Mass Data Sharing in Smart Cities
Report by Berenika Drazewska and Mark Findlay: “There are at least two ways of understanding the importance of this Report and its implications. The essential research purpose was to examine the nature of mass data sharing between private and public agencies in the commerce and administration of certain smart cities. With this knowledge the research speculated on and selectively exposed the governance challenges posed by this sharing for stakeholders, citizen/residents in particular, in various data relationships and arrangements. Predicting that good data governance policy and practices can address these challenges, the Report proposes a model strategy that grows from commitments where stakeholders will employ trusted data spaces to create respectful and responsible data relationships, where the benefits of data sharing can also be achieved without compromising any stakeholder interests…(More)”.
United against algorithms: a primer on disability-led struggles against algorithmic injustice
Report by Georgia van Toorn: “Algorithmic decision-making (ADM) poses urgent concerns regarding the rights and entitlements of people with disability from all walks of life. As ADM systems become increasingly embedded in government decision-making processes, there is a heightened risk of harm, such as unjust denial of benefits or inadequate support, accentuated by the expanding reach of state surveillance.
ADM systems have far reaching impacts on disabled lives and life chances. Despite this, they are often designed without the input of people with lived experience of disability, for purposes that do not align with the goals of full rights, participation, and justice for disabled people.
This primer explores how people with disability are collectively responding to the threats posed by algorithmic, data-driven systems – specifically their public sector applications. It provides an introductory overview of the topic, exploring the approaches, obstacles, and actions taken by people with disability in their ‘algoactivist’ struggles…(More)”.
The CFPB wants to rein in data brokers
Article by Gaby Del Valle: “The Consumer Financial Protection Bureau wants to propose new regulations that would require data brokers to comply with the Fair Credit Reporting Act. In a speech at the White House earlier this month, CFPB Director Rohit Chopra said the agency is looking into policies to “ensure greater accountability” for companies that buy and sell consumer data, in keeping with an executive order President Joe Biden issued in late February.
Chopra said the agency is considering proposals that would define data brokers that sell certain types of data as “consumer reporting agencies,” thereby requiring those companies to comply with the Fair Credit Reporting Act (FCRA). The statute bans sharing certain kinds of data (e.g., your credit report) with entities unless they serve a specific purpose outlined in the law (e.g., if the report is used for employment purposes or to extend a line of credit to someone).
The CFPB views the buying and selling of consumer data as a national security issue, not just a matter of privacy. Chopra mentioned three massive data breaches — the 2015 Anthem leak, the 2017 Equifax hack, and the 2018 Marriott breach — as examples of foreign adversaries illicitly obtaining Americans’ personal data. “When Americans’ health information, financial information, and even their travel whereabouts can be assembled into detailed dossiers, it’s no surprise that this raises risks when it comes to safety and security,” Chopra said. But the focus on high-profile hacks obscures a more pervasive, totally legal phenomenon: data brokers’ ability to sell detailed personal information to anyone who’s willing to pay for it…(More)”.
Measuring the mobile body
Article by Laura Jung: “…While nation states have been collecting data on citizens for the purposes of taxation and military recruitment for centuries, its indexing, organization in databases and classification for particular governmental purposes – such as controlling the mobility of ‘undesirable’ populations – is a nineteenth-century invention. The French historian and philosopher Michel Foucault describes how, in the context of growing urbanization and industrialization, states became increasingly preoccupied with the question of ‘circulation’. Persons and goods, as well as pathogens, circulated further than they had in the early modern period. While states didn’t seek to suppress or control these movements entirely, they sought means to increase what was seen as ‘positive’ circulation and minimize ‘negative’ circulation. They deployed the novel tools of a positivist social science for this purpose: statistical approaches were used in the field of demography to track and regulate phenomena such as births, accidents, illness and deaths. The emerging managerial nation state addressed the problem of circulation by developing a very particular toolkit amassing detailed information about the population and developing standardized methods of storage and analysis.
One particularly vexing problem was the circulation of known criminals. In the nineteenth century, it was widely believed that if a person offended once, they would offend again. However, the systems available for criminal identification were woefully inadequate to the task.
As criminologist Simon Cole explains, identifying an unknown person requires a ‘truly unique body mark’. Yet before the advent of modern systems of identification, there were only two ways to do this: branding or personal recognition. While branding had been widely used in Europe and North America on convicts, prisoners and enslaved people, evolving ideas around criminality and punishment largely led to the abolition of physical marking in the early nineteenth century. The criminal record was established in its place: a written document cataloguing the convict’s name and a written description of their person, including identifying marks and scars…(More)”.
AI-driven public services and the privacy paradox: do citizens really care about their privacy?
Paper: “Based on privacy calculus theory, we derive hypotheses on the role of perceived usefulness and privacy risks of artificial intelligence (AI) in public services. In a representative vignette experiment (n = 1,048), we asked citizens whether they would download a mobile app to interact with an AI-driven public service. Despite general concerns about privacy, we find that citizens are not susceptible to the amount of personal information they must share, nor to a more anthropomorphic interface. Our results confirm the privacy paradox, which we frame in the literature on the government’s role to safeguard ethical principles, including citizens’ privacy…(More)”.
The impact of generative artificial intelligence on socioeconomic inequalities and policy making
Paper by Valerio Capraro et al: “Generative artificial intelligence, including chatbots like ChatGPT, has the potential to both exacerbate and ameliorate existing socioeconomic inequalities. In this article, we provide a state-of-the-art interdisciplinary overview of the probable impacts of generative AI on four critical domains: work, education, health, and information. Our goal is to warn about how generative AI could worsen existing inequalities while illuminating directions for using AI to resolve pervasive social problems. Generative AI in the workplace can boost productivity and create new jobs, but the benefits will likely be distributed unevenly. In education, it offers personalized learning but may widen the digital divide. In healthcare, it improves diagnostics and accessibility but could deepen pre-existing inequalities. For information, it democratizes content creation and access but also dramatically expands the production and proliferation of misinformation. Each section covers a specific topic, evaluates existing research, identifies critical gaps, and recommends research directions. We conclude with a section highlighting the role of policymaking to maximize generative AI’s potential to reduce inequalities while mitigating its harmful effects. We discuss strengths and weaknesses of existing policy frameworks in the European Union, the United States, and the United Kingdom, observing that each fails to fully confront the socioeconomic challenges we have identified. We contend that these policies should promote shared prosperity through the advancement of generative AI. We suggest several concrete policies to encourage further research and debate. This article emphasizes the need for interdisciplinary collaborations to understand and address the complex challenges of generative AI…(More)”.
Data-Driven Innovation in the Creative Industries
Open Access Book edited by Melissa Terras, Vikki Jones, Nicola Osborne, and Chris Speed: “The creative industries – the place where art, business, and technology meet in economic activity – have been hugely affected by the relatively recent digitalisation (and often monetisation) of work, home, relationships, and leisure. Such trends were accelerated by the global COVID-19 pandemic. This edited collection examines how the creative industries can be supported to make best use of opportunities in digital technology and data-driven innovation.
Since digital markets and platforms are now essential for revenue generation and audience engagement, there is a vital need for improved data and digital skills in the creative and cultural sectors. Taking a necessarily global perspective, this book explores the challenges and opportunities of data-driven approaches to creativity in different contexts across the arts, cultural, and heritage sectors. Chapters reach beyond the platforms and approaches provided by the technology sector to delve into the collaborative work that supports innovation around the interdisciplinary and cross-sectoral issues that emerge where data infrastructures and approaches meet creativity.
A novel intervention that uniquely centres the role of data in the theory and practice of creative industries’ innovation, this book is valuable reading for those researching and studying the creative economy as well for those who drive investment for the creative industries in a digitalised society…(More)”.