‘For good measure’: data gaps in a big data world


Paper by Sarah Giest & Annemarie Samuels: “Policy and data scientists have paid ample attention to the amount of data being collected and the challenge for policymakers to use it. However, far less attention has been paid to the quality and coverage of these data, specifically as they pertain to minority groups. The paper makes the argument that while there is seemingly more data for policymakers to draw on, the quality of the data, combined with known or unknown data gaps, limits government’s ability to create inclusive policies. In this context, the paper defines primary, secondary, and unknown data gaps, covering scenarios in which data are knowingly or unknowingly missing and how that is potentially compensated for through alternative measures.

Based on the review of the literature from various fields and a variety of examples highlighted throughout the paper, we conclude that the big data movement combined with more sophisticated methods in recent years has opened up new opportunities for government to use existing data in different ways as well as fill data gaps through innovative techniques. Focusing specifically on the representativeness of such data, however, shows that data gaps affect the economic opportunities, social mobility, and democratic participation of marginalized groups. The big data movement in policy may thus create new forms of inequality that are harder to detect and whose impact is more difficult to predict….(More)”.

Misinformation During a Pandemic


Paper by Leonardo Bursztyn et al: “We study the effects of news coverage of the novel coronavirus by the two most widely viewed cable news shows in the United States – Hannity and Tucker Carlson Tonight, both on Fox News – on viewers’ behavior and downstream health outcomes. Carlson warned viewers about the threat posed by the coronavirus from early February, while Hannity originally dismissed the risks associated with the virus before gradually adjusting his position starting in late February. We first validate these differences in content with independent coding of show transcripts. In line with the differences in content, we present novel survey evidence that Hannity’s viewers changed behavior in response to the virus later than other Fox News viewers, while Carlson’s viewers changed behavior earlier. We then turn to the effects on the pandemic itself, examining health outcomes across counties.

First, we document that greater viewership of Hannity relative to Tucker Carlson Tonight is strongly associated with a greater number of COVID-19 cases and deaths in the early stages of the pandemic. The relationship is stable across an expansive set of robustness tests. To better identify the effect of differential viewership of the two shows, we employ a novel instrumental variable strategy exploiting variation in when shows are broadcast in relation to local sunset times. These estimates also show that greater exposure to Hannity relative to Tucker Carlson Tonight is associated with a greater number of county-level cases and deaths. Furthermore, the results suggest that in mid-March, after Hannity’s shift in tone, the diverging trajectories on COVID-19 cases begin to revert. We provide additional evidence consistent with misinformation being an important mechanism driving the effects in the data. While our findings cannot yet speak to long-term effects, they indicate that provision of misinformation in the early stages of a pandemic can have important consequences for how a disease ultimately affects the population….(More)”.
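The instrumental-variable logic described in the excerpt can be sketched on synthetic data. This is purely our illustration: the variable names, coefficients, and sample are invented, not the authors' specification. With a single instrument, the 2SLS estimate reduces to the Wald ratio of the reduced-form slope to the first-stage slope, which strips out bias from an unobserved confounder:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # hypothetical counties

# Instrument: how local sunset time shifts exposure to one show vs. the other.
sunset_offset = rng.normal(0.0, 1.0, n)

# Unobserved confounder (e.g., local attitudes) that taints naive OLS.
confounder = rng.normal(0.0, 1.0, n)

# Endogenous regressor: relative Hannity viewership.
viewership_gap = 0.8 * sunset_offset + 0.5 * confounder + rng.normal(0.0, 1.0, n)

# Outcome: early case counts, with a true causal effect of 2.0.
cases = 2.0 * viewership_gap + 1.5 * confounder + rng.normal(0.0, 1.0, n)

def slope(x, y):
    """OLS slope of y on x, with an intercept."""
    xc = x - x.mean()
    return xc @ (y - y.mean()) / (xc @ xc)

beta_ols = slope(viewership_gap, cases)  # biased upward by the confounder
beta_iv = slope(sunset_offset, cases) / slope(sunset_offset, viewership_gap)  # Wald / 2SLS

print(f"OLS: {beta_ols:.2f}, IV: {beta_iv:.2f} (true effect: 2.0)")
```

The IV estimate recovers the true effect because the instrument moves viewership without moving the confounder, which is the role the sunset-time variation plays in the paper.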

To recover faster from Covid-19, open up: Managerial implications from an open innovation perspective


Paper by Henry Chesbrough: “Covid-19 has severely tested our public health systems. Recovering from Covid-19 will soon test our economic systems. Innovation will have an important role to play in recovering from the aftermath of the coronavirus. This article discusses how to manage innovation as part of that recovery, and derives lessons from how we have responded to the virus so far and what those lessons imply for managing innovation during the recovery.

Covid-19’s assault has prompted a number of encouraging developments. One development has been the rapid mobilization of scientists, pharmaceutical companies and government officials to launch a variety of scientific initiatives to find an effective response to the virus. As of the time of this writing, there are tests underway of more than 50 different compounds as possible vaccines against the virus. Most of these will ultimately fail, but the severity of the crisis demands that we investigate every plausible candidate. We need rapid, parallel experimentation, and it must be the test data that select our vaccine, not internal political or bureaucratic processes.

A second development has been the release of copious amounts of information about the virus, its spread, and human responses to various public health measures. The Gates Foundation, the Chan-Zuckerberg Foundation, and the White House Office of Science and Technology Policy have joined forces to publish all of the known medical literature on the coronavirus in machine-readable form. This was done with the intent to accelerate the analysis of the existing research to identify possible new avenues of attack against Covid-19. The coronavirus itself was sequenced early in the outbreak by scientists in China, providing the genetic sequence of the virus and showing where it differed from earlier viruses such as SARS and MERS. These data were immediately shared widely with scientists and researchers around the world. At the same time, GitHub and the Humanitarian Data Exchange each host an accumulating series of datasets on the geography of the spread of the disease (including positive test cases, hospitalizations, and deaths).

What these developments have in common is openness. In fighting a pandemic, speed is crucial, and the sooner we know more and are able to take action, the better for all of us. Opening up mobilizes knowledge from many different places, causing our learning to advance and our progress against the disease to accelerate. Openness unleashes a volunteer army of researchers, working in their own facilities, across different time zones, and different countries. Openness leverages the human capital available in the world to tackle the disease, and also accesses the physical capital (such as plant and equipment) already in place to launch rapid testing of possible solutions. This openness corresponds well to an academic body of work called open innovation (Chesbrough, 2003; Chesbrough, 2019).

Innovation is often analyzed in terms of costs, and the question of whether to “make or buy” often rests on which approach costs less. But in a pandemic, time is so valuable that the question of costs is far less important than the ability to get to a solution sooner. Covid-19 case counts appear to be doubling every 3–5 days, so a delay of just a few weeks in the search for a new vaccine (they normally take 1–2 years to develop, or more) might mean multiple doublings of the size of the infected population. It is for this reason that Bill Gates is providing funds to construct, in advance, facilities for producing the leading vaccine candidates. Though the facilities for the losing candidates will not be used, building ahead will save precious time in making the winning vaccine at high volume once it is found.

Open innovation can help speed things up….(More)”.
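The cost of delay in Chesbrough's argument is easy to quantify with back-of-the-envelope arithmetic. This sketch is our illustration: it uses the 3–5 day doubling range cited in the excerpt and a hypothetical three-week delay, not figures from the paper:

```python
def growth_factor(delay_days: float, doubling_days: float) -> float:
    """How many times case counts multiply if they double every `doubling_days`."""
    return 2 ** (delay_days / doubling_days)

# A hypothetical three-week delay in the search for a vaccine:
for doubling_days in (3, 5):  # the 3-5 day doubling range cited in the excerpt
    factor = growth_factor(21, doubling_days)
    print(f"doubling every {doubling_days} days -> ~{factor:.0f}x more cases after 21 days")
```

Even at the slow end of the range, three weeks of delay multiplies the infected population many times over, which is why parallel experimentation and pre-built capacity buy so much.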

Who Do You Trust? The Consequences of Partisanship and Trust in Government for Public Responsiveness to COVID-19


Paper by Daniel Goldstein and Johannes Wiedemann: “To combat the novel coronavirus, there must be relatively uniform implementation of preventative measures, e.g., social distancing and stay-at-home orders, in order to minimize continued spread. We analyze cellphone mobility data to measure county-level compliance with these critical public health policies. Leveraging staggered roll-out, we estimate the causal effect of stay-at-home orders on mobility using a difference-in-differences strategy, which we find to have significantly curtailed movement.

However, examination of descriptive heterogeneous effects suggests the critical role that several sociopolitical attributes hold for producing asymmetrical compliance across society. We examine measures of partisanship, partisan identity being shared with government leaders, and trust in government (measured by the proxies of voter turnout and social capital). We find that Republican counties comply less, but comply relatively more when directives are given by co-partisan leaders, suggesting citizens are more trusting in the authority of co-partisans. Furthermore, our proxy measures suggest that trust in government increases overall compliance. However, when trust (as measured by social capital) is interacted with county-level partisanship, which we interpret as community-level trust, we find that trust amplifies compliance or noncompliance, depending upon the prevailing community sentiment.

We argue that these results align with a theory of public policy compliance in which individual behavior is informed by one’s level of trust in the experts who craft policy and one’s trust in those who implement it, i.e., politicians and bureaucrats. Moreover, this evaluation is amplified by local community sentiments. Our results are supportive of this theory and provide a measure of the real-world importance of trust in government to citizen welfare. Moreover, our results illustrate the role that political polarization plays in creating asymmetrical compliance with mitigation policies, an outcome that may prove severely detrimental to successful containment of the COVID-19 pandemic….(More)”.
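The difference-in-differences strategy the authors apply to stay-at-home orders can be sketched on synthetic data. The numbers below are our toy values, not the paper's: subtracting the control group's before-after change from the treated group's nets out shocks common to all counties, isolating the order's effect on mobility.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000  # hypothetical county-day observations per cell

# Synthetic mobility index. Treated counties adopt a stay-at-home order
# between period 0 and period 1; control counties never do.
baseline = {"treated": 50.0, "control": 55.0}
common_trend = -3.0   # shock hitting all counties in period 1 (e.g., fear of the virus)
true_effect = -10.0   # causal effect of the order on mobility

def cell_mean(group: str, period: int) -> float:
    mean = baseline[group] + common_trend * period
    if group == "treated" and period == 1:
        mean += true_effect
    return rng.normal(mean, 2.0, n).mean()

# DiD: (treated after - before) minus (control after - before)
did = ((cell_mean("treated", 1) - cell_mean("treated", 0))
       - (cell_mean("control", 1) - cell_mean("control", 0)))
print(f"estimated effect: {did:.1f} (true: {true_effect})")
```

Note how the common trend and the level difference between groups both cancel; only the treatment effect survives the double difference. Staggered roll-out, as in the paper, generalizes this to many adoption dates.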

Transparency Deserts


Paper by Christina Koningisor: “Few contest the importance of a robust transparency regime in a democratic system of government. In the United States, the “crown jewel” of this regime is the Freedom of Information Act (FOIA). Yet despite widespread agreement about the importance of transparency in government, few are satisfied with FOIA. Since its enactment, the statute has engendered criticism from transparency advocates and critics alike for insufficiently serving the needs of both the public and the government. Legal scholars have widely documented these flaws in the federal public records law.

In contrast, scholars have paid comparatively little attention to transparency laws at the state and local level. This is surprising. The role of state and local government in the everyday lives of citizens has increased in recent decades, and many critical government functions are fulfilled by state and local entities today. Moreover, crucial sectors of the public—namely, media and advocacy organizations—rely as heavily on state public records laws as they do on FOIA to hold the government to account. Yet these state laws and their effects remain largely overlooked, creating gaps in both local government law and transparency law scholarship.

This Article attempts to fill these gaps by surveying the state and local transparency regime, focusing on public records laws in particular. Drawing on hundreds of public records datasets, along with qualitative interviews, the Article demonstrates that in contrast with federal law, state transparency law introduces comparatively greater barriers to disclosure and comparatively higher burdens upon government. Further, the Article highlights the existence of “transparency deserts,” or localities in which a combination of poorly drafted transparency laws, hostile government actors, and weak local media and civil society impedes effective public oversight of government.

The Article serves as a corrective to the scholarship’s current, myopic focus on federal transparency law…(More)”.

Personalized nudging


Stuart Mills at Behavioural Public Policy: “A criticism of behavioural nudges is that they lack precision, sometimes nudging people who – had their personal circumstances been known – would have benefitted from being nudged differently. This problem may be solved through a programme of personalized nudging. This paper proposes a two-component framework for personalization that suggests choice architects can personalize both the choices being nudged towards (choice personalization) and the method of nudging itself (delivery personalization). To do so, choice architects will require access to heterogeneous data.

This paper argues that such data need not take the form of big data, but agrees with previous authors that the opportunities to personalize nudges increase as data become more accessible. Finally, this paper considers two challenges that a personalized nudging programme must consider, namely the risk personalization poses to the universality of laws, regulation and social experiences, and the data access challenges policy-makers may encounter….(More)”.
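Mills's two components can be made concrete with a toy sketch. This is entirely our illustration, with hypothetical user attributes and rules that are not from the paper: choice personalization selects what the nudge steers toward, while delivery personalization selects how the nudge is delivered.

```python
# Hypothetical user attributes and decision rules, purely for illustration.
def personalize_nudge(user: dict) -> tuple:
    # Choice personalization: which option the nudge steers this user toward.
    choice = ("low_risk_pension_plan" if user["risk_tolerance"] == "low"
              else "growth_pension_plan")
    # Delivery personalization: which nudge mechanism this user responds to.
    delivery = ("default_enrolment" if user["engagement"] == "low"
                else "information_prompt")
    return choice, delivery

print(personalize_nudge({"risk_tolerance": "low", "engagement": "high"}))
```

Both rules consume heterogeneous data about the individual, which is the data-access requirement the paper flags: richer data permit finer personalization along either axis.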

‘Accuracy nudge’ could curtail COVID-19 misinformation online


MIT Sloan: “On February 19 in the Ukrainian town of Novi Sanzhary, alarm spread about the new coronavirus and COVID-19, the disease it causes. “50 infected people from China are being brought to our sanitarium,” began a widely read post on the messaging app Viber. “We can’t afford to let them destroy our population, we must prevent countless deaths. People, rise up. We all have children!!!”

Soon after came another message: “if we sleep this night, then we will wake up dead.”

Citizens mobilized. Roads were barricaded. Tensions escalated. Riots broke out, ultimately injuring nine police officers and leading to the arrests of 24 people. Later, word emerged that the news was false.

As the director-general of the World Health Organization recently put it, “we’re not just fighting an epidemic; we’re fighting an infodemic.”

Now a new study suggests that an “accuracy nudge” from social media networks could curtail the spread of misinformation about COVID-19. The working paper, from researchers at MIT Sloan and the University of Regina, examines how and why misinformation about COVID-19 spreads on social media. The researchers also examine a simple intervention that could slow this spread. (The paper builds on prior work about how misinformation diffuses online.)…(More)”.

From insight network to open policy practice: practical experiences


Paper by Jouni T. Tuomisto, Mikko V. Pohjola & Teemu J. Rintala: “Evidence-informed decision-making and better use of scientific information in societal decisions has been an area of development for decades but is still topical. Decision support work can be viewed from the perspective of information collection, synthesis and flow between decision-makers, experts and stakeholders. Open policy practice is a coherent set of methods for such work. It has been developed and utilised mostly in Finnish and European contexts.

The evaluation revealed that methods and online tools work as expected, as demonstrated by the assessments and policy support processes conducted. The approach improves the availability of information and especially of relevant details. Experts are ambivalent about the acceptability of openness – it is an important scientific principle, but it goes against many current research and decision-making practices. However, co-creation and openness are megatrends that are changing science, decision-making and society at large. Contrary to many experts’ fears, open participation has not caused problems in performing high-quality assessments. On the contrary, a key challenge is to motivate and help more experts, decision-makers and citizens to participate and share their views. Many methods within open policy practice have also been widely used in other contexts.

Open policy practice proved to be a useful and coherent set of methods. It guided policy processes toward a more collaborative approach, whose purpose was wider understanding rather than winning a debate. There is potential for merging open policy practice with other open science and open decision process tools. Active facilitation, community building and improving the user-friendliness of the tools were identified as key solutions for improving the usability of the method in the future….(More)”.

The significance of algorithmic selection for everyday life: The Case of Switzerland


University of Zurich: “This project empirically investigates the significance of automated algorithmic selection (AS) applications on the Internet for everyday life in Switzerland. It is the first countrywide, representative empirical study in the emerging interdisciplinary field of critical algorithm studies which assesses growing social, economic and political implications of algorithms in various life domains. The project is based on an innovative mix of methods comprising qualitative interviews and a representative Swiss online survey, combined with a passive metering (tracking) of Internet use.

  • Latzer, Michael / Festic, Noemi / Kappeler, Kiran (2020): Use and Assigned Relevance of Algorithmic-Selection Applications in Switzerland. Report 1 from the Project: The Significance of Algorithmic Selection for Everyday Life: The Case of Switzerland. Zurich: University of Zurich. http://mediachange.ch/research/algosig [forthcoming]
  • Latzer, Michael / Festic, Noemi / Kappeler, Kiran (2020): Awareness of Algorithmic Selection and Attitudes in Switzerland. Report 2 from the Project: The Significance of Algorithmic Selection for Everyday Life: The Case of Switzerland. Zurich: University of Zurich. http://mediachange.ch/research/algosig [forthcoming]
  • Latzer, Michael / Festic, Noemi / Kappeler, Kiran (2020): Awareness of Risks Related to Algorithmic Selection in Switzerland. Report 3 from the Project: The Significance of Algorithmic Selection for Everyday Life: The Case of Switzerland. Zurich: University of Zurich. http://mediachange.ch/research/algosig [forthcoming]
  • Latzer, Michael / Festic, Noemi / Kappeler, Kiran (2020): Coping Practices Related to Algorithmic Selection in Switzerland. Report 4 from the Project: The Significance of Algorithmic Selection for Everyday Life: The Case of Switzerland. Zurich: University of Zurich. http://mediachange.ch/research/algosig [forthcoming]…(More)”.

Developing better Civic Services through Crowdsourcing: The Twitter Case Study


Paper by Srushti Wadekar, Kunal Thapar, Komal Barge, Rahul Singh, Devanshu Mishra and Sabah Mohammed: “Civic technology is a fast-developing segment that holds huge potential for a new generation of startups. A recent survey report on civic technology noted that the sector saw $430 million in investment in just the last two years. It’s not just a new market ripe with opportunity; it’s crucial to our democracy. Crowdsourcing has proven to be an effective supplementary mechanism for public engagement in city government, drawing on the shared knowledge of online communities to engage people in urban design. Government needs new alternatives — modern, superior tools and services offered at reasonable rates.

An effective and easy-to-use civic technology platform enables wide participation. Responding to users, and having a ‘conversation’ with them, is crucial for engagement, as is a feeling of being part of a community. These findings can contribute to the future design of civic technology platforms. In this research, we introduce a crowdsourcing platform intended to help people who face problems in their everyday lives because of government services. The platform gathers trending Twitter posts from roughly the past month and tries to identify which challenges the public is confronting. Twitter suits crowdsourcing because it is a simple social platform where a question can be posed and anyone who sees the tweet can answer instantly. The identified problems are then ranked by significance and opened to the public for solutions. The findings demonstrate how crowdsourcing tends to boost community engagement, enhances citizens’ views of their town, and helps identify ways to improve the competitiveness of a city facing serious problems. Topic modeling with the Latent Dirichlet Allocation (LDA) algorithm produced categorized civic-technology topics, which were then validated with a simple classification algorithm. While working on this research, we encountered some issues regarding the available tools, which we discuss in the ‘Counter arguments’ section….(More)”.