Small-Scale Deliberation and Mass Democracy: A Systematic Review of the Spillover Effects of Deliberative Minipublics


Paper by Ramon van der Does and Vincent Jacquet: “Deliberative minipublics are popular tools to address the current crisis in democracy. However, it remains ambiguous to what degree these small-scale forums matter for mass democracy. In this study, we ask the question to what extent minipublics have “spillover effects” on lay citizens—that is, long-term effects on participating citizens and effects on non-participating citizens. We answer this question by means of a systematic review of the empirical research on minipublics’ spillover effects published before 2019. We identify 60 eligible studies published between 1999 and 2018 and provide a synthesis of the empirical results. We show that the evidence for most spillover effects remains tentative because the relevant body of empirical evidence is still small. Based on the review, we discuss the implications for democratic theory and outline several trajectories for future research…(More)”.

Pooling society’s collective intelligence helped fight COVID – it must help fight future crises too


Aleks Berditchevskaia and Kathy Peach at The Conversation: “A Global Pandemic Radar is to be created to detect new COVID variants and other emerging diseases. Led by the WHO, the project aims to build an international network of surveillance hubs, set up to share data that’ll help us monitor vaccine resistance, track diseases and identify new ones as they emerge.

This is undeniably a good thing. Perhaps more than any event in recent memory, the COVID pandemic has brought home the importance of pooling society’s collective intelligence and finding new ways to share that combined knowledge as quickly as possible.

At its simplest, collective intelligence is the enhanced capacity that’s created when diverse groups of people work together, often with the help of technology, to mobilise more information, ideas and knowledge to solve a problem. Digital technologies have transformed what can be achieved through collective intelligence in recent years – connecting more of us, augmenting human intelligence with machine intelligence, and helping us to generate new insights from novel sources of data.

So what have we learned over the last 18 months of collective intelligence pooling that can inform the Global Pandemic Radar? Building from the COVID crisis, what lessons will help us perfect disease surveillance and respond better to future crises?…(More)”

Civic Space Scan of Finland


OECD Report: “At the global level, civic space is narrowing and thus efforts to protect and promote it are more important than ever. The OECD defines Civic Space as the set of legal, policy, institutional, and practical conditions necessary for non-governmental actors to access information, express themselves, associate, organise, and participate in public life. This document presents the Civic Space Scan of Finland, which was undertaken at the request of the Finnish government and is the first OECD report of its kind. OECD Civic Space Scans in particular assess how governments protect and promote civic space in each national context and propose ways to strengthen existing frameworks and practices. The Scan assesses four key dimensions of civic space: civic freedoms and rights, media freedoms and digital rights, the enabling environment for civil society organisations, and civic participation in policy and decision making. Each respective chapter of the report contains actionable recommendations for the Government of Finland. As part of the scan process, a citizens’ panel – also overseen by the OECD – was held in February 2021 and generated a wide range of recommendations for the government from a representative microcosm of Finnish society….(More)”.

The City as a Commons Reloaded: from the Urban Commons to Co-Cities Empirical Evidence on the Bologna Regulation


Chapter by Elena de Nictolis and Christian Iaione: “The City of Bologna is widely recognized for an innovative regulatory framework to enable urban commons. The “Regulation on public collaboration for the Urban Commons” produced more than 400 pacts of collaboration and was adopted by more than 180 Italian cities so far.

The chapter presents an empirical assessment of 280 pacts (2014-2016). The analytical approach is rooted in political economy (Polany 1944; Ahn & Ostrom 2003) and quality of democracy analysis (Diamond & Morlino, 2005). It investigates whether a model of co-governance applied to urban assets as commons impacts on the democratic qualities of equality and rule of law at the urban level. The findings suggest suggests that legal recognition of the urban commons is not sufficient if not coupled with an experimentalist policymaking approach to institutionally redesign the City as a platform enabling collective action of multi-stakeholder partnerships that should be entrusted with the task to trigger neighborhood-based sustainable development. Neighborhood scale investments that aim to seed community economic ventures emerge as a possible way to overcome the shortcomings of the first policy experiments. They also suggest the need for more investigation by scholars on the inclusiveness and diversity facets related to the implementation of urban commons policies….(More)”

Public Administration and Democracy: The Virtue and Limit of Participatory Democracy as a Democratic Innovation


Paper by Sirvan Karimi: “The expansion of public bureaucracy has been one of the most significant developments that has marked societies, particularly Western liberal democratic societies. Growing political apathy, citizen disgruntlement and the ensuing decline in electoral participation reflect the political nature of governance failures. Public bureaucracy, which has historically been saddled with derogatory and pejorative connotations, has encountered fierce assaults from multiple fronts. Out of these sharp criticisms of public bureaucracy, which have emanated from both sides of the ideological spectrum, attempts have been made to popularize and advance citizen participation in both policy formulation and policy implementation as innovations to democratize public administration. Despite their virtue, empowering connotations and spirit-uplifting messages to the public, these proposed democratic innovations not only have their own shortcomings and risk exacerbating the very conditions they are meant to ameliorate, but they also have the potential to undermine traditional administrative and political accountability mechanisms….(More)”.

Spies Like Us: The Promise and Peril of Crowdsourced Intelligence


Book Review by Amy Zegart of “We Are Bellingcat: Global Crime, Online Sleuths, and the Bold Future of News” by Eliot Higgins: “On January 6, throngs of supporters of U.S. President Donald Trump rampaged through the U.S. Capitol in an attempt to derail Congress’s certification of the 2020 presidential election results. The mob threatened lawmakers, destroyed property, and injured more than 100 police officers; five people, including one officer, died in circumstances surrounding the assault. It was the first attack on the Capitol since the War of 1812 and the first violent transfer of presidential power in American history.

Only a handful of the rioters were arrested immediately. Most simply left the Capitol complex and disappeared into the streets of Washington. But they did not get away for long. It turns out that the insurrectionists were fond of taking selfies. Many of them posted photos and videos documenting their role in the assault on Facebook, Instagram, Parler, and other social media platforms. Some even earned money live-streaming the event and chatting with extremist fans on a site called DLive. 

Amateur sleuths immediately took to Twitter, self-organizing to help law enforcement agencies identify and charge the rioters. Their investigation was impromptu, not orchestrated, and open to anyone, not just experts. Participants didn’t need a badge or a security clearance—just an Internet connection….(More)”.

Engaging with the public about algorithmic transparency in the public sector


Blog by the Centre for Data Ethics and Innovation (UK): “To take forward the recommendation we made in our review into bias in algorithmic decision-making, we have been working with the Central Digital and Data Office (CDDO) and BritainThinks to scope what a transparency obligation could look like in practice, and in particular, which transparency measures would be most effective at increasing public understanding about the use of algorithms in the public sector. 

Due to the low levels of awareness about the use of algorithms in the public sector (CDEI polling in July 2020 found that 38% of the public were not aware that algorithmic systems were used to support decisions using personal data), we opted for a deliberative public engagement approach. This involved spending time gradually building up participants’ understanding and knowledge about algorithm use in the public sector and discussing their expectations for transparency, and co-designing solutions together. 

For this project, we worked with a diverse group of 36 members of the UK public, spending over five hours engaging with them over a three-week period. We focused on three use cases chosen to test a range of emotive responses – policing, parking and recruitment.  

The final stage was an in-depth co-design session, where participants worked collaboratively to review and iterate prototypes in order to develop a practical approach to transparency that reflected their expectations and needs for greater openness in the public sector use of algorithms. 

What did we find? 

Our research confirmed that awareness and understanding of the use of algorithms in the public sector were fairly low. Algorithmic transparency in the public sector was not a front-of-mind topic for most participants.

However, once participants were introduced to specific examples of potential public sector algorithms, they felt strongly that transparency information should be made available to the public, both citizens and experts. This included desires for: a description of the algorithm, why an algorithm was being used, contact details for more information, the data used, human oversight, potential risks, and technicalities of the algorithm…(More)”.

Cultivating an Inclusive Culture Through Personal Networks


Essay by Rob Cross, Kevin Oakes, and Connor Cross: “Many organizations have ramped up their investments in diversity, equity, and inclusion — largely in the form of anti-bias training, employee resource groups, mentoring programs, and added DEI functions and roles. But gauging the effectiveness of these measures has been a challenge….

We’re finding that organizations can get a clearer picture of employee experience by analyzing people’s network connections. They can begin to see whether DEI programs are producing the collaboration and interactions needed to help people from various demographic groups gain their footing quickly and become truly integrated.

In particular, network analysis reveals when and why people seek out individuals for information, ideas, career advice, personal support, or mentorship. In the Connected Commons, a research consortium, we have mapped organizational networks for over 20 years and have frequently been able to overlay gender data on network diagrams to identify drivers of inclusion. Extensive quantitative and qualitative research on this front has helped us understand behaviors that promote more rapid and effective integration of women after they are hired. For example, research reveals the importance of fostering collaboration across functional and geographic divides (while avoiding collaborative burnout) and cultivating energy through network connections….(More)”

Mass, Computer-Generated, and Fraudulent Comments


Report by Steven J. Balla et al.: “This report explores three forms of commenting in federal rulemaking that have been enabled by technological advances: mass, fraudulent, and computer-generated comments. Mass comments arise when an agency receives a much larger number of comments in a rulemaking than it typically would (e.g., thousands when the agency typically receives a few dozen). The report focuses on a particular type of mass comment response, which it terms a “mass comment campaign,” in which organizations orchestrate the submission of large numbers of identical or nearly identical comments. Fraudulent comments, referred to below as “malattributed comments,” are comments falsely attributed to persons who did not, in fact, submit them. Computer-generated comments are generated not by humans but by software algorithms. Although software is the product of human actions, algorithms obviate the need for humans to generate the content of comments and submit them to agencies.

This report examines the legal, practical, and technical issues associated with processing and responding to mass, fraudulent, and computer-generated comments. There are cross-cutting issues that apply to each of these three types of comments. First, the nature of such comments may make it difficult for agencies to extract useful information. Second, there are a suite of risks related to harming public perceptions about the legitimacy of particular rules and the rulemaking process overall. Third, technology-enabled comments present agencies with resource challenges.

The report also considers issues that are unique to each type of comment. With respect to mass comments, it addresses the challenges associated with receiving large numbers of comments and, in particular, batches of comments that are identical or nearly identical. It looks at how agencies can use technologies to help process comments received and at how agencies can most effectively communicate with public commenters to ensure that they understand the purpose of the notice-and-comment process and the particular considerations unique to processing mass comment responses. Fraudulent, or malattributed, comments raise legal issues both in criminal and Administrative Procedure Act (APA) domains. They also have the potential to mislead an agency and pose harms to individuals. Computer-generated comments may raise legal issues in light of the APA’s stipulation that “interested persons” are granted the opportunity to comment on proposed rules. Practically, it can be difficult for agencies to distinguish computer-generated comments from traditional comments (i.e., those submitted by humans without the use of software algorithms).
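The batch-processing challenge the report describes – spotting groups of identical or nearly identical comments in a docket – can be sketched with ordinary text-similarity tools. The snippet below is purely illustrative and not the report's own method: the function name, the 0.9 threshold, and the naive pairwise comparison are our assumptions, using Python's standard-library difflib.

```python
from difflib import SequenceMatcher

def flag_near_duplicates(comments, threshold=0.9):
    """Group comments whose pairwise text similarity meets a threshold.

    Returns a list of clusters, each a list of indices into `comments`.
    A naive O(n^2) comparison -- fine for illustration, though a real
    docket with hundreds of thousands of comments would need hashing
    or locality-sensitive techniques instead.
    """
    clusters = []
    assigned = set()
    for i, text in enumerate(comments):
        if i in assigned:
            continue
        cluster = [i]
        assigned.add(i)
        for j in range(i + 1, len(comments)):
            if j in assigned:
                continue
            if SequenceMatcher(None, text, comments[j]).ratio() >= threshold:
                cluster.append(j)
                assigned.add(j)
        clusters.append(cluster)
    return clusters

comments = [
    "Please adopt the proposed rule as written.",
    "Please adopt the proposed rule as written!",   # near-identical variant
    "I strongly oppose this rule for cost reasons.",
]
print(flag_near_duplicates(comments))  # → [[0, 1], [2]]
```

Clustering of this kind addresses only the mass-comment case; distinguishing computer-generated from human-written comments, as the report notes, is a harder problem.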

While technology creates challenges, it also offers opportunities to help regulatory officials gather public input and draw greater insights from that input. The report summarizes several innovative forms of public participation that leverage technology to supplement the notice and comment rulemaking process.

The report closes with a set of recommendations for agencies to address the challenges and opportunities associated with new technologies that bear on the rulemaking process. These recommendations cover steps that agencies can take with respect to technology, coordination, and docket management….(More)”.

How volunteer observers can help protect biodiversity


The Economist: “Ecology lends itself to being helped along by the keen layperson perhaps more than any other science. For decades, birdwatchers have recorded their sightings and sent them to organisations like Britain’s Royal Society for the Protection of Birds, or the Audubon Society in America, contributing precious data about population size, trends, behaviour and migration. These days, any smartphone connected to the internet can be pointed at a plant to identify a species and add a record to a regional data set.

Social-media platforms have further transformed things, adding big data to weekend ecology. In 2002, the Cornell Lab of Ornithology in New York created eBird, a free app available in more than 30 languages that lets twitchers upload and share pictures and recordings of birds, labelled by time, location and other criteria. More than 100m sightings are now uploaded annually, and the number is growing by 20% each year. In May the group marked its billionth observation. The Cornell group also runs an audio library with 1m bird calls, and the Merlin app, which uses eBird data to identify species from pictures and descriptions….(More)”.