Why crowdsourcing fails


Paper by Linus Dahlander and Henning Piezunka: “Crowdsourcing—asking an undefined group of external contributors to work on tasks—allows organizations to tap into the expertise of people around the world. Crowdsourcing is known to increase innovation and loyalty to brands, but many organizations struggle to leverage its potential, as our research shows. Most often this is because organizations fail to properly plan for all the different stages of crowd engagement. In this paper, we use several examples to explain these challenges and offer advice for how organizations can overcome them….(More)”.

Cultivating Trustworthy Artificial Intelligence in Digital Government


Paper by Teresa M. Harrison and Luis Felipe Luna-Reyes: “While there is growing consensus that the analytical and cognitive tools of artificial intelligence (AI) have the potential to transform government in positive ways, it is also clear that AI challenges traditional government decision-making processes and threatens the democratic values within which they are framed. These conditions argue for conservative approaches to AI that focus on cultivating and sustaining public trust. We use the extended Brunswik lens model as a framework to illustrate the distinctions between policy analysis and decision making as we have traditionally understood and practiced them and how they are evolving in the current AI context along with the challenges this poses for the use of trustworthy AI. We offer a set of recommendations for practices, processes, and governance structures in government to provide for trust in AI and suggest lines of research that support them….(More)”.

Beyond ‘Direct Democracy’: Popular Vote Processes in Democratic Systems


Introduction to Special Issue of the Journal of Representative Democracy by Alice el-Wakil & Spencer McKay: “Despite controversy over recent referendums and initiatives, populists and social movements continue to call for the use of these popular vote processes. Most political and academic debates about whether these calls should be answered have adopted a dominant framework that focuses on whether we should favour ‘direct’ or ‘representative’ democracy. However, this framework obscures more urgent questions about whether, when, and how popular vote processes should be implemented in democratic systems. How do popular vote processes interact with representative institutions? And how could these interactions be democratized? The contributions in this special issue address these and related questions by replacing the framework of ‘direct democracy’ with systemic approaches. The normative contributions illustrate how these approaches enable the development of counternarratives about the value of popular vote processes and clarify the nature of the underlying ideals they should realize. The empirical contributions examine recent cases with a variety of methodological tools, demonstrating that systemic approaches attentive to context can generate new insights about the use of popular vote processes. This introduction puts these contributions into conversation to illustrate how a shift in approach establishes a basis for (re-)evaluating existing practices and guiding reforms so that referendums and initiatives foster democracy….(More)”.

Data Combination for Problem-solving: A Case of an Open Data Exchange Platform


Paper by Teruaki Hayashi et al.: “In recent years, rather than enclosing data within a single organization, exchanging and combining data from different domains has become an emerging practice. Many studies have discussed the economic and utility value of data and data exchange, but the characteristics of data that contribute to problem solving through data combination have not been fully understood. In big data and interdisciplinary data combinations, large-scale datasets with many variables are expected to be used, and value is expected to be created by combining as much data as possible. In this study, we conduct three experiments to investigate the characteristics of data, focusing on the relationships between data combinations and the variables in each dataset, using empirical data shared by the local government. The results indicate that even datasets with only a few variables are frequently used to propose solutions for problem solving. Moreover, we found that even if the datasets in a solution share no common variables, there are some well-established solutions to the problems. The findings of this study shed light on the mechanisms behind data combination for problem solving involving multiple datasets and variables…(More)”.
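
As a rough illustration of the kind of analysis the paper describes (the dataset names and variable lists below are hypothetical, not from the study), one can check which variables, if any, a proposed combination of datasets shares:

```python
# Toy sketch of checking variable overlap in a dataset combination.
# Dataset names and variable lists are hypothetical illustrations.
datasets = {
    "population_stats": {"district", "year", "age_group", "count"},
    "traffic_volume":   {"district", "hour", "vehicle_count"},
    "air_quality":      {"station_id", "hour", "pm25"},
}

def common_variables(names):
    """Return the variables shared by every dataset in the combination."""
    return set.intersection(*(datasets[n] for n in names))

print(common_variables(["population_stats", "traffic_volume"]))  # {'district'}
# No variable is common to all three -- yet, per the paper's finding,
# such combinations can still underpin well-established solutions.
print(common_variables(["population_stats", "traffic_volume", "air_quality"]))
```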

Co-creation applied to public policy: a case study on collaborative policies for the platform economy in the city of Barcelona


Paper by Mayo Fuster Morell & Enric Senabre Hidalgo: “This paper addresses how far co-creation methodologies can be applied to policy-making innovation in the platform economy. The driving question is how co-creation, collaboration-based policy-making can increase diversity and strengthen the participation of actors. The analysis is based on a three-year case study on the platform economy in Barcelona, describing how co-creation dynamics contributed to the participatory definition of local public policies and agenda. The methodology is based on participatory design techniques, involving participant observation and content analysis. Results indicate that co-creation can increase participation diversity, aligning academic, economic, and social viewpoints in policy innovation from a quadruple helix perspective. In addition, collaboration schemes assist in engaging a wide diversity of participants in the policy ideation process, which, in this case, resulted in 87 new policy measures, with contributions from more than 300 people of different backgrounds and perspectives. The case study demonstrates the value of a cycle of collaboration going beyond mere symbolic engagement or citizen support to public policy-making. It further shows the importance of combining co-creation with methods of action research, strategic planning and knowledge management, as well as with face-to-face interactions and online channels….(More)”.

Policy priority inference: A computational framework to analyze the allocation of resources for the sustainable development goals


Paper by Omar A. Guerrero and Gonzalo Castañeda: “We build a computational framework to support the planning of development and the evaluation of budgetary strategies toward the 2030 Agenda. The methodology takes into account some of the complexities of the political economy underpinning the policymaking process: the multidimensionality of development, the interlinkages between these dimensions, and the inefficiencies of policy interventions, as well as institutional factors that promote or discourage these inefficiencies. The framework is scalable and usable even with limited publicly available information: development-indicator data. However, it can be further refined as more data becomes available, for example, on public expenditure. We demonstrate its usage through an application for the Mexican federal government. For this, we infer historical policy priorities, that is, the non-observable allocations of transformative resources that generated past changes in development indicators. We also show how to use the tool to assess the feasibility of development goals, to measure policy coherence, and to identify accelerators. Overall, the framework and its computational tools allow policymakers and other stakeholders to embrace a complexity (and quantitative) view to tackle the challenges of the Sustainable Development Goals….(More)”.
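
To make the notion of inferring non-observable allocations concrete, here is a deliberately simplified toy, a linear stand-in rather than the authors' actual framework: given observed indicator changes and an assumed response matrix, recover the implied budget shares by least squares. All quantities are synthetic.

```python
# Deliberately simplified stand-in for the inference idea: indicators
# respond linearly to (hidden) budget shares; we invert that relation.
# This is NOT the authors' model, just an illustration of the concept.
import numpy as np

rng = np.random.default_rng(0)
n = 5                                              # number of development indicators
response = np.diag(rng.uniform(0.5, 1.5, n))       # assumed responsiveness per indicator
true_shares = rng.dirichlet(np.ones(n))            # hidden policy priorities (sum to 1)
observed = response @ true_shares + rng.normal(0, 0.01, n)  # indicator changes

# Least-squares inversion: the allocation implied by the observed changes
inferred = np.linalg.lstsq(response, observed, rcond=None)[0]
inferred = np.clip(inferred, 0, None)
inferred /= inferred.sum()                         # renormalize to budget shares

print("inferred:", np.round(inferred, 3))
print("true:    ", np.round(true_shares, 3))
```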

Connected Devices – an Unfair Competition Law Approach to Data Access Rights of Users


Paper by Josef Drexl: “On the European level, promoting the free flow of data and access to data has moved to the forefront of the policy goals concerning the digital economy. A particular aspect of this economy is the advent of connected devices that are increasingly deployed and used in the context of the Internet of Things (IoT). As regards these devices, the Commission has identified the particular problem that the manufacturers may try to remain in control of the data and refuse data access to third parties, thereby impeding the development of innovative business models in secondary data-related markets. To address this issue, this paper discusses potential legislation on data access rights of the users of connected devices. The paper conceives refusals of the device manufacturers to grant access to data vis-à-vis users as a form of unfair trading practice and therefore recommends embedding data access rights of users in the context of the European law against unfair competition. Such access rights would be complementary to other access regimes, including sector-specific data access rights of competitors in secondary markets as well as access rights available under contract and competition law. Against the backdrop of ongoing debates to reform contract and competition law for the purpose of enhancing data access, the paper seeks to draw attention to a so far not explored unfair competition law approach….(More)”.

Mapping urban temperature using crowd-sensing data and machine learning


Paper by Marius Zumwald, Benedikt Knüsel, David N. Bresch and Reto Knutti: “Understanding the patterns of urban temperature at high spatial and temporal resolution is of great importance for urban heat adaptation and mitigation. Machine learning offers promising tools for high-resolution modeling of urban heat, but it requires large amounts of data. Measurements from official weather stations are too sparse but could be complemented by crowd-sensed measurements from citizen weather stations (CWS). Here we present an approach to model urban temperature using the quantile regression forest algorithm and CWS, open government and remote sensing data. The analysis is based on data from 691 sensors in the city of Zurich (Switzerland) during a heat wave, 25–30 June 2019. We trained the model using hourly data from 25–29 June (n = 71,837) and evaluated the model using data from 30 June (n = 14,105). Based on the model, spatiotemporal temperature maps of 10 × 10 m resolution were produced. We demonstrate that our approach can accurately map urban heat at high spatial and temporal resolution without additional measurement infrastructure. We furthermore critically discuss and spatially map estimated prediction and extrapolation uncertainty. Our approach is able to inform highly localized urban policy and decision-making….(More)”.
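
A minimal sketch of the modeling step, assuming the third-party quantile-forest package for the quantile regression forest; the features and data below are synthetic placeholders, not the study's actual predictors:

```python
# Minimal quantile-regression-forest sketch on synthetic data.
# Assumes the third-party `quantile-forest` package (pip install quantile-forest);
# features stand in for predictors such as land cover or hour of day.
import numpy as np
from quantile_forest import RandomForestQuantileRegressor

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.uniform(0, 1, n),    # e.g., normalized building density (hypothetical)
    rng.uniform(0, 1, n),    # e.g., vegetation index (hypothetical)
    rng.uniform(0, 24, n),   # hour of day
])
y = (20 + 8 * X[:, 0] - 5 * X[:, 1]
     + 2 * np.sin(2 * np.pi * X[:, 2] / 24)
     + rng.normal(0, 1, n))  # synthetic temperature signal in degrees C

qrf = RandomForestQuantileRegressor(n_estimators=200, random_state=0)
qrf.fit(X[:1600], y[:1600])

# Median prediction plus a 90% interval; the interval width can be
# mapped per location as a prediction-uncertainty estimate.
pred = qrf.predict(X[1600:], quantiles=[0.05, 0.5, 0.95])
print(pred[:3])
```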

Crowdsourcing Crime Control


Paper by Wayne A. Logan: “Crowdsourcing, which leverages the collective expertise and resources of (mainly online) communities to achieve specified objectives, today figures prominently in a broad array of realms, including business, human rights, and medical and scientific research. It also now plays a significant role in governmental crime control efforts. Web and forensic–genetic sleuths, armchair detectives, and the like are collecting and analyzing evidence and identifying criminal suspects, at the behest of and with varying degrees of assistance from police officials.

Unfortunately, as with so many other aspects of modern society, current criminal procedure doctrine is ill-equipped to address this development. In particular, for decades it has been accepted that the Fourth Amendment only limits searches and seizures undertaken by public law enforcement, not private actors. Crowdsourcing, however, presents considerable taxonomic difficulty for existing doctrine, making the already often permeable line between public and private behavior considerably more so. Moreover, although crowdsourcing promises considerable benefit as an investigative force multiplier for police, it poses risks, including misidentification of suspects, violation of privacy, a diminution of governmental transparency and democratic accountability, and the fostering of a mutual social suspicion that is inimical to civil society.

Despite its importance, government use of crowdsourcing to achieve crime control goals has not yet been examined by legal scholars. Like the internet on which it predominantly relies, crowdsourcing is not going away; if anything, it will proliferate in coming years. The challenge lies in harnessing its potential, while protecting against the significant harms that will accrue should it go unregulated. This Essay describes the phenomenon and provides a framework for its regulation, in the hope of ensuring that the wisdom of the crowd does not become the tyranny of the crowd….(More)”.

John Snow, Cholera, and South London Reconsidered


Paper by Thomas Coleman: “John Snow, the London doctor often considered the father of modern epidemiology, analyzed 1849 and 1854 cholera mortality for a population of nearly half a million in South London. His aim was to convince skeptics and “prove the overwhelming influence which the nature of the water supply exerted over the mortality.” Snow’s analysis was innovative – he is commonly credited with the first application of both randomization as an instrumental variable (IV) and difference-in-differences (DiD).

This paper provides an historical review of Snow’s three approaches to analyzing the data: a direct comparison of mixed (quasi-randomized) populations; a primitive form of difference-in-differences; and a comparison of actual versus predicted mortality. Snow’s analysis did not convince his skeptics, and we highlight problems with it. We then turn to a re-analysis of Snow’s evidence, casting his analysis in the modern forms of randomization as IV and DiD. The re-examination supports Snow’s claim that the data demonstrated the influence of the water supply. As a matter of historical interest, this strengthens the argument against Snow’s skeptics. For modern practitioners, the data and analysis provide an example of modern statistical tools (randomization and DiD) and the complementary use of observational and (quasi-)experimental data….(More)”.
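
For readers unfamiliar with the estimator, the DiD logic Snow anticipated reduces to a simple double subtraction: compare the change over time in the group whose water supply changed against the change in the group whose supply stayed the same. The numbers below are hypothetical mortality rates, not Snow's figures:

```python
# Difference-in-differences with hypothetical rates (deaths per 10,000
# households) -- illustrative only, not Snow's actual data.
rates = {
    ("changed_supply", 1849):   130.0,  # supplier that later moved its intake
    ("changed_supply", 1854):    40.0,
    ("unchanged_supply", 1849): 125.0,  # supplier whose source stayed the same
    ("unchanged_supply", 1854): 120.0,
}

did = ((rates[("changed_supply", 1854)] - rates[("changed_supply", 1849)])
       - (rates[("unchanged_supply", 1854)] - rates[("unchanged_supply", 1849)]))
print(did)  # -85.0: mortality fell far more where the water source changed
```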