Invisible Rulers: The People Who Turn Lies into Reality


Book by Renée DiResta: “…investigation into the way power and influence have been profoundly transformed reveals how a virtual rumor mill of niche propagandists increasingly shapes public opinion. While propagandists position themselves as trustworthy Davids, their reach, influence, and economics make them classic Goliaths—invisible rulers who create bespoke realities to revolutionize politics, culture, and society. Their work is driven by a simple maxim: if you make it trend, you make it true.
 
By revealing the machinery and dynamics of the interplay between influencers, algorithms, and online crowds, DiResta vividly illustrates the way propagandists deliberately undermine belief in the fundamental legitimacy of institutions that make society work. This alternate system for shaping public opinion, unexamined until now, is rewriting the relationship between the people and their government in profound ways. It has become a force so shockingly effective that its destructive power seems limitless. Scientific proof is powerless against it. Democratic validity is bulldozed by it. Leaders are humiliated by it. But they need not be.
 
With its deep insight into the power of propagandists to drive online crowds into battle—while bearing no responsibility for the consequences—Invisible Rulers not only predicts those consequences but offers ways for leaders to rapidly adapt and fight back…(More)”.

Handbook on Public Policy and Artificial Intelligence


Book edited by Regine Paul, Emma Carmel and Jennifer Cobbe: “…explores the relationship between public policy and artificial intelligence (AI) technologies across a broad range of geographical, technical, political and policy contexts. It contributes to critical AI studies, focusing on the intersection of the norms, discourses, policies, practices and regulation that shape AI in the public sector.

Expert authors in the field discuss the creation and use of AI technologies, and how public authorities respond to their development, by bringing together emerging scholarly debates about AI technologies with longer-standing insights on public administration, policy, regulation and governance. Contributions in the Handbook mobilize diverse perspectives to critically examine techno-solutionist approaches to public policy and AI, dissect the politico-economic interests underlying AI promotion and analyse implications for sustainable development, fairness and equality. Ultimately, this Handbook questions whether regulatory concepts such as ethical, trustworthy or accountable AI safeguard a democratic future or contribute to a problematic de-politicization of the public sector…(More)”.

 

We need a social science of data


Article by Cristina Alaimo and Jannis Kallinikos: “The practical and technical knowledge of data science must be complemented by a scientific field that can respond to these challenges and trace their implications for social practice and institutions.

Determining how such a field will look is not the job of two people but, rather, that of a whole scientific and social discourse that we as a society have the obligation to develop and maintain. Students and data users must know the power and subtlety of the artefacts they study and employ.

Such a scientific field should also provide the basis for analysing the social relations and economic dynamics of data generation and use, which are closely associated with several social groups, professions, communities and firms….(More)”.

Data Statements: From Technical Concept to Community Practice


Paper by Angelina McMillan-Major, Emily M. Bender, and Batya Friedman: “Responsible computing ultimately requires that technical communities develop and adopt tools, processes, and practices that mitigate harms and support human flourishing. Prior efforts toward the responsible development and use of datasets, machine learning models, and other technical systems have led to the creation of documentation toolkits to facilitate transparency, diagnosis, and inclusion. This work takes the next step: to catalyze community uptake, alongside toolkit improvement. Specifically, starting from one such proposed toolkit specialized for language datasets, data statements for natural language processing, we explore how to improve the toolkit in three senses: (1) the content of the toolkit itself, (2) engagement with professional practice, and (3) moving from a conceptual proposal to a tested schema that the intended community of use may readily adopt. To achieve these goals, we first conducted a workshop with natural language processing practitioners to identify gaps and limitations of the toolkit as well as to develop best practices for writing data statements, yielding an interim improved toolkit. Then we conducted an analytic comparison between the interim toolkit and another documentation toolkit, datasheets for datasets. Based on these two integrated processes, we present our revised Version 2 schema and best practices in a guide for writing data statements. Our findings more generally provide integrated processes for co-evolving both technology and practice to address ethical concerns within situated technical communities…(More)”
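A data statement is, at bottom, a structured documentation form attached to a language dataset. As a rough illustration (the field names below paraphrase the original 2018 data statements proposal; the paper’s revised Version 2 schema differs in detail, so consult the published guide before adopting this structure):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: these fields paraphrase the original (2018) data
# statements proposal, not the revised Version 2 schema presented in the
# paper. Consult the published guide for the authoritative element list.
@dataclass
class DataStatement:
    curation_rationale: str      # why these texts were selected
    language_varieties: str      # e.g. BCP-47 tags plus a prose description
    speaker_demographic: str     # who produced the language
    annotator_demographic: str   # who labeled or transcribed it
    speech_situation: str        # time, place, modality, intended audience
    text_characteristics: str    # genre, topic, structure
    recording_quality: Optional[str] = None
    other: Optional[str] = None

def render(ds: DataStatement) -> str:
    """Render a data statement as plain text for a paper appendix or README."""
    lines = []
    for name, value in vars(ds).items():
        if value:
            lines.append(f"{name.replace('_', ' ').title()}: {value}")
    return "\n".join(lines)
```

Keeping the statement as a simple record like this makes it easy to version alongside the dataset and to render wherever the dataset is documented.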

Green Light


Google Research: “Road transportation is responsible for a significant amount of global and urban greenhouse gas emissions. It is especially problematic at city intersections, where pollution can be 29 times higher than on open roads. At intersections, half of these emissions come from traffic accelerating after stopping. While some amount of stop-and-go traffic is unavoidable, part of it is preventable through the optimization of traffic light timing configurations. To improve traffic light timing, cities need to either install costly hardware or run manual vehicle counts; both of these solutions are expensive and don’t provide all the necessary information.

Green Light uses AI and Google Maps driving trends, with one of the strongest understandings of global road networks, to model traffic patterns and build intelligent recommendations for city traffic engineers to optimize traffic flow. Early numbers indicate a potential for up to 30% reduction in stops and 10% reduction in greenhouse gas emissions (1). By optimizing each intersection, and coordinating between adjacent intersections, we can create waves of green lights and help cities further reduce stop-and-go traffic. Green Light is now live at 70 intersections in 12 cities on four continents – from Haifa, Israel, to Bangalore, India, to Hamburg, Germany – and at these intersections we are able to save fuel and lower emissions for up to 30M car rides monthly. Green Light reflects Google Research’s commitment to use AI to address climate change and improve millions of lives in cities around the world…(More)”
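Google has not published Green Light’s underlying models, but the coordination idea the passage describes – “waves of green lights” – is the classic green-wave scheme: start each signal’s green phase later by the travel time from the head of the corridor, so a platoon moving at the design speed meets a green at every intersection. A minimal sketch of that textbook technique, with invented distances, speed, and cycle length (not Green Light parameters):

```python
# Classic green-wave signal coordination: offset each intersection's green
# phase by the travel time from the corridor start, modulo the shared cycle.
# All numbers below are made-up examples, not Green Light parameters.

CYCLE_S = 90.0     # shared signal cycle length, seconds
SPEED_MPS = 13.9   # assumed progression speed, ~50 km/h in m/s

# Cumulative distance of each intersection from the first one, in meters.
DISTANCES_M = [0.0, 350.0, 720.0, 1200.0]

def green_wave_offsets(distances, speed, cycle):
    """Return, for each intersection, how many seconds into the shared
    cycle its green phase should begin so that a platoon traveling at
    `speed` arrives on green."""
    return [(d / speed) % cycle for d in distances]

if __name__ == "__main__":
    for i, offset in enumerate(green_wave_offsets(DISTANCES_M, SPEED_MPS, CYCLE_S)):
        print(f"intersection {i}: green starts {offset:5.1f}s into the cycle")
```

The hard part in practice – and presumably part of what Green Light’s models address – is that real corridors have varying speeds, turning traffic, and competing cross-streets, so fixed offsets like these quickly go stale.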

Effects of Open Access. Literature study on empirical research 2010–2021


Paper by David Hopf, Sarah Dellmann, Christian Hauschke, and Marco Tullney: “Open access — the free availability of scholarly publications — intuitively offers many benefits. At the same time, some academics, university administrators, publishers, and political decision-makers express reservations. Many empirical studies on the effects of open access have been published in the last decade. This report provides an overview of the state of research from 2010 to 2021. The empirical results on the effects of open access help to determine the advantages and disadvantages of open access and serve as a knowledge base for academics, publishers, research-funding and research-performing institutions, and policy makers. This overview of current findings can inform decisions about open access and publishing strategies. In addition, this report identifies aspects of the impact of open access that are potentially highly relevant but have not yet been sufficiently studied…(More)”.

Superconvergence


Book by Jamie Metzl: “…explores how artificial intelligence, genome sequencing, gene editing, and other revolutionary technologies are transforming our lives, world, and future. These accelerating and increasingly interconnected technologies have the potential to improve our health, feed billions of people, supercharge our economies, store essential information for millions of years, and save our planet, but they can also―if we are not careful―do immeasurable harm.

The challenge we face is that while our ability to engineer the world around us is advancing exponentially, our processes for understanding the scope, scale, and implications of these changes, and for managing our godlike powers wisely, are only inching forward glacially…(More)”.

Artificial Intelligence Applications for Social Science Research


Report by Megan Stubbs-Richardson et al: “Our team developed a database of 250 Artificial Intelligence (AI) applications useful for social science research. To be included in our database, the AI tool had to be useful for: 1) literature reviews, summaries, or writing, 2) data collection, analysis, or visualizations, or 3) research dissemination. In the database, we provide a name, description, and links to each of the AI tools that were current at the time of publication on September 29, 2023. Supporting links were provided when an AI tool was found using other databases. To help users evaluate the potential usefulness of each tool, we documented information about costs, log-in requirements, and whether plug-ins or browser extensions are available for each tool. Finally, as we are a team of scientists who are also interested in studying social media data to understand social problems, we also documented when the AI tools were useful for text-based data, such as social media. This database includes 132 AI tools that may be useful for literature reviews or writing; 146 tools that may be useful for data collection, analyses, or visualizations; and 108 that may be used for dissemination efforts. While 170 of the AI tools within this database can be used for general research purposes, 18 are specific to social media data analyses, and 62 can be applied to both. Our database thus offers some of the recently published tools for exploring the application of AI to social science research…(More)”
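One arithmetic note: the purpose counts (132 + 146 + 108 = 386) sum past 250 because a tool may serve several purposes, while the data-type counts (170 + 18 + 62 = 250) partition the database exactly. A toy sketch of how such tagged records aggregate – the entries are invented, not the database’s actual rows:

```python
from collections import Counter

# Invented example records, not entries from the published database.
# "purposes" is a multi-valued tag set; "data_type" is a single exclusive
# category, which is why the two kinds of counts behave differently.
tools = [
    {"name": "ToolA", "purposes": {"literature", "data"}, "data_type": "general"},
    {"name": "ToolB", "purposes": {"data"}, "data_type": "social_media"},
    {"name": "ToolC", "purposes": {"literature", "data", "dissemination"}, "data_type": "both"},
]

purpose_counts = Counter(p for t in tools for p in t["purposes"])
data_type_counts = Counter(t["data_type"] for t in tools)

print(purpose_counts)    # multi-tag: totals may exceed the number of tools
print(data_type_counts)  # exclusive: totals always equal the number of tools
assert sum(data_type_counts.values()) == len(tools)
```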

The Deliberative Turn in Democratic Theory


Book by Antonino Palumbo: “Thirty years of developments in deliberative democracy (DD) have consolidated this subfield of democratic theory. The acquired disciplinary prestige has made theorists and practitioners very confident about the ability of DD to address the legitimacy crisis currently experienced by liberal democracies, at both theoretical and practical levels. The book advances a critical analysis of these developments that casts doubt on those certainties: current theoretical debates are reproposing old methodological divisions and are afraid to move beyond the minimalist model of democracy advocated by liberal thinkers, while democratic experimentation at the micro level seems to have no impact at the macro level and remains a set of isolated experiences. The book indicates that those defects are mainly due to the liberal minimalist frame of reference within which reflection in democratic theory and practice takes place. Consequently, it suggests moving beyond liberal understandings of democracy as a game in need of external rules, and adopting instead a vision of democracy as a self-correcting metagame…(More)”.

Using Artificial Intelligence to Accelerate Collective Intelligence


Paper by Róbert Bjarnason, Dane Gambrell and Joshua Lanthier-Welch: “In an era characterized by rapid societal changes and complex challenges, institutions’ traditional methods of problem-solving in the public sector are increasingly proving inadequate. In this study, we present an innovative and effective model for how institutions can use artificial intelligence to enable groups of people to generate effective solutions to urgent problems more efficiently. We describe a proven collective intelligence method, called Smarter Crowdsourcing, which is designed to channel the collective intelligence of those with expertise about a problem into actionable solutions through crowdsourcing. Then we introduce Policy Synth, an innovative toolkit that leverages AI to make the Smarter Crowdsourcing problem-solving approach more scalable, more effective, and more efficient. Policy Synth is crafted using a human-centric approach, recognizing that AI is a tool to enhance human intelligence and creativity, not replace it. Based on a real-world case study comparing the results of expert crowdsourcing alone with expert crowdsourcing supported by Policy Synth AI agents, we conclude that Smarter Crowdsourcing with Policy Synth presents an effective model for integrating the collective wisdom of human experts and the computational power of AI to enhance and scale up public problem-solving processes.

The potential for artificial intelligence to enhance the performance of groups of people has been a topic of great interest among scholars of collective intelligence. Though many AI toolkits exist, they too often are not fitted to the needs of institutions and policymakers. While many existing approaches view AI as a tool to make crowdsourcing and deliberative processes better and more efficient, Policy Synth goes a step further, recognizing that AI can also be used to synthesize the findings from engagements together with research to develop evidence-based solutions and policies. This study contributes significantly to the fields of collective intelligence, public problem-solving, and AI. The study offers practical tools and insights for institutions looking to engage communities effectively in addressing urgent societal challenges…(More)”
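The paper’s companion software is the open-source Policy Synth toolkit; the sketch below is not its actual API but a hypothetical illustration, in Python, of the pattern described above – an agent that synthesizes crowdsourced expert ideas together with background research, then critiques and revises its own draft. `call_llm` is a stand-in for whatever LLM client one uses:

```python
from typing import List

def call_llm(prompt: str) -> str:
    """Placeholder for a real large-language-model client call."""
    raise NotImplementedError("wire up an actual LLM client here")

def synthesize_policy(problem: str,
                      crowd_ideas: List[str],
                      research_notes: List[str]) -> str:
    """Hypothetical synthesize-critique-revise loop: merge crowdsourced
    expert input with desk research into a draft policy recommendation,
    then run one self-critique pass before returning the revision."""
    prompt = "\n".join([
        f"Problem: {problem}",
        "Expert ideas gathered by crowdsourcing:",
        *[f"- {idea}" for idea in crowd_ideas],
        "Background research:",
        *[f"- {note}" for note in research_notes],
        "Synthesize these into a ranked list of evidence-based policy options.",
    ])
    draft = call_llm(prompt)
    critique = call_llm(f"Critique this draft for gaps and feasibility:\n{draft}")
    return call_llm(
        f"Revise the draft to address the critique.\nDraft:\n{draft}\nCritique:\n{critique}"
    )
```

The critique step is the design point: rather than treating the model as an oracle, the agent is asked to audit its own synthesis before the human experts see it.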