Paper by Rianne Dekker et al: “There is increasing criticism of the use of big data and algorithms in public governance. Studies have revealed that algorithms may reinforce existing biases and defy scrutiny by the public officials who use them and the citizens subject to algorithmic decisions and services. In response, scholars have called for more algorithmic transparency and regulation. These are useful but ex post solutions, in which the development of algorithms remains a rather autonomous process. This paper argues that co-design of algorithms with relevant stakeholders from government and society is another means of achieving responsible and accountable algorithms, one largely overlooked in the literature. We present a case study of the development of an algorithmic tool to estimate the populations of refugee camps in order to manage the delivery of emergency supplies. The case study demonstrates how, at different stages of the tool’s development (data selection and pre-processing, training of the algorithm, and post-processing and adoption), the inclusion of knowledge from the field led to changes to the algorithm. Co-design supported the responsibility of the algorithm in the selection of big data sources and in preventing the reinforcement of biases. It contributed to the accountability of the algorithm by making its estimations transparent and explicable to its users, who were able to use the tool for fitting purposes and exercised discretion in interpreting its results. It is as yet unclear whether this ultimately led to better servicing of refugee camps…(More)”.
Intermediaries do matter: voluntary standards and the Right to Data Portability
Paper by Matteo Nebbiai: “This paper illuminates an understudied aspect of the application of the General Data Protection Regulation (GDPR) Right to Data Portability (RtDP), introducing a framework for empirically analysing the voluntary data portability standards adopted by various data controllers. The first section explains how the wording of the RtDP creates “grey areas” that allow data controllers a broad interpretation of the right. Secondly, the paper shows why the regulatory initiatives affecting the interpretation of these “grey areas” can be framed as “regulatory standard-setting (RSS) schemes”: voluntary standards of behaviour set by private, public, or non-governmental actors. The empirical section reveals that in the EU, between 2000 and 2020, the number of such schemes increased every year, and that most were governed by private actors. Finally, the historical analysis highlights that the RtDP was introduced when many privately run RSS schemes were already operating, and no evidence suggests that the GDPR significantly affected their spread…(More)”.
Data-driven orientation and open innovation: the role of resilience in the (co-)development of social changes
Introduction to Special Issue by Orlando Troisi and Mara Grimaldi: “…Contemporary organizations that aim to exploit the opportunities offered by big data should reframe their processes through new technologies and analytics, not only to gain competitive advantage but also to implement flexible governance and foster diffused decision-making (Visvizi et al., 2018; Polese et al., 2021).
In recent developments in management research, new collaborative and open models are understood strategically through a network view that treats relationships with a broad set of stakeholders (from for-profit companies to users, non-profits and public institutions) as critical factors enabling well-being and innovation (Visvizi and Lytras, 2019a).
For this reason, open innovation (OI) (Chesbrough, 2003) is conceptualized to describe how emergent models of innovation can generate innovative insights through knowledge exchanged across a complex set of relationships enhanced by smart technologies.
Smart organizations based on OI models can be reread as smart communities: technology-mediated networks that, through collaboration between people (Abbate et al., 2019) and the sharing of a set of norms, rules and values (Barile et al., 2017; Vargo et al., 2020), can improve well-being in different areas, from the economy to the environment and social inclusion (Appio et al., 2019; Kashef et al., 2021). The ability of communities to meet environmental complexity through constant evolution invites a rereading of resilience as the complex result of a system’s adaptation, maintenance, change and disruption (Vargo et al., 2015). Investigating the main resilient features of smart communities (restructuring, adaptation, transformation) can help detect the transition from the emergence of innovation to the development of social changes.
Therefore, the goal of the current Special Issue is to advance new theoretical and empirical contributions that analyze how contemporary resilient data-driven organizations and communities can integrate technologies with the human component (Bang et al., 2021) to reframe the emergence of innovation and foster societal transformation. In this way, by using a collaborative approach, research can explore how organizations develop innovative solutions to relevant social issues through the constant reshaping of culture and knowledge and through co-learning processes that address evolving community needs.
Exploring the different ways of reframing organizational processes and policies through technology-mediated human interaction can help identify how social, economic and health challenges (in the COVID era, but also in future crises) can be met through continuous transformation…(More)”
Can the use of minipublics backfire? Examining how policy adoption shapes the effect of minipublics on political support among the general public
Paper by Lisa van Dijk and Jonas Lefevere: “Academics and practitioners are increasingly interested in deliberative minipublics and whether these can address widespread dissatisfaction with contemporary politics. While optimism seems to prevail, there is also talk that the use of minipublics may backfire. When the government disregards a minipublic’s recommendations, this could lead to more dissatisfaction than not asking for its advice in the first place. Using an online survey experiment in Belgium (n = 3,102), we find that, compared to a representative decision-making process, a minipublic tends to bring about higher political support when its recommendations are fully adopted by the government, whereas it generates lower political support when its recommendations are not adopted. This study presents novel insights into whether and when the use of minipublics may alleviate or aggravate political dissatisfaction among the public at large….(More)”
City museums in the age of datafication: could museums be meaningful sites of data practice in smart cities?
Paper by Natalia Grincheva: “The article documents connections and synergies between city museums’ visions and programming and emerging smart city issues and dilemmas in fast-paced urban environments marked by increasing digitalization and datafication. The research employs policy and document analysis and semi-structured interviews with smart city government representatives and museum professionals to investigate both smart city policy frameworks and city museums’ data-driven installations and activities in New York, London and Singapore. A comparative program analysis of the Singapore City Gallery, the Museum of the City of New York and the Museum of London identifies such sites of data practice as data storytelling, interpretation and eco-curation. Discussing these sites as dedicated spaces of smart citizen engagement, the article reveals that city museums can either empower their visitors to consider their roles as active city co-makers or treat them as passive recipients of smart city transformations…(More)”.
Trust the Science But Do Your Research: A Comment on the Unfortunate Revival of the Progressive Case for the Administrative State
Essay by Mark Tushnet: “…offers a critique of one Progressive argument for the administrative state: that it would base policies on what disinterested scientific inquiries showed would best advance the public good and flexibly respond to rapidly changing technological, economic, and social conditions. The critique draws on recent scholarship in the field of Science and Technology Studies, which argues that what counts as a scientific fact is the product of complex social, political, and other processes. The critique is deployed in an analysis of the responses of the U.S. Centers for Disease Control and Food and Drug Administration to some important aspects of the COVID crisis in 2020.
A summary of the overall argument is this: The COVID virus had characteristics that made it exceptionally difficult to develop policies that would significantly limit its spread until a vaccine was available, and some of those characteristics went directly to the claim that the administrative state could respond flexibly to rapidly changing conditions. But, and here is where the developing critique of claims about scientific expertise enters, the relevant administrative agencies were bureaucracies with scientific staff members, and what those bureaucracies regarded as “the science” was shaped in part by bureaucratic and political considerations, and the parts that were so shaped were important components of the overall policy response.
Part II describes policy-relevant characteristics of knowledge about the COVID virus and explains why those characteristics made it quite difficult for more than a handful of democratic nations to adopt policies that would effectively limit its penetration of their populations. Part III begins with a short presentation of the aspects of the STS critique of claims about disinterested science that have some bearing on policy responses to the pandemic. It then provides an examination shaped by that critique of the structures of the Food and Drug Administration and the Centers for Disease Control, showing how those structural features contributed to policy failures. Part IV concludes by sketching how the STS critique might inform efforts to reconstruct rather than deconstruct the administrative state, proposing the creation of Citizen Advisory Panels in science-based agencies…(More)”.
Machine learning and phone data can improve targeting of humanitarian aid
Paper by Emily Aiken, Suzanne Bellue, Dean Karlan, Chris Udry & Joshua E. Blumenstock: “The COVID-19 pandemic has devastated many low- and middle-income countries, causing widespread food insecurity and a sharp decline in living standards. In response to this crisis, governments and humanitarian organizations worldwide have distributed social assistance to more than 1.5 billion people. Targeting is a central challenge in administering these programs: it remains a difficult task to rapidly identify those with the greatest need given available data. Here we show that data from mobile phone networks can improve the targeting of humanitarian assistance. Our approach uses traditional survey data to train machine-learning algorithms to recognize patterns of poverty in mobile phone data; the trained algorithms can then prioritize aid to the poorest mobile subscribers. We evaluate this approach by studying a flagship emergency cash transfer program in Togo, which used these algorithms to disburse millions of US dollars’ worth of COVID-19 relief aid. Our analysis compares outcomes—including exclusion errors, total social welfare and measures of fairness—under different targeting regimes. Relative to the geographic targeting options considered by the Government of Togo, the machine-learning approach reduces errors of exclusion by 4–21%. Relative to methods requiring a comprehensive social registry (a hypothetical exercise; no such registry exists in Togo), the machine-learning approach increases exclusion errors by 9–35%. These results highlight the potential for new data sources to complement traditional methods for targeting humanitarian assistance, particularly in crisis settings in which traditional data are missing or out of date…(More)”.
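The targeting pipeline the abstract describes—train a model on surveyed households, predict welfare for all subscribers from phone features, target the lowest-ranked, and score exclusion errors—can be illustrated with a minimal sketch. This is not the authors’ code: the phone-usage features, the simulated data, and the simple least-squares model are all hypothetical stand-ins (the paper uses richer features and machine-learning models).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical phone-metadata features per subscriber:
# [calls per day, mean top-up amount, share of nighttime activity]
n_survey, n_all = 200, 1000
X_all = rng.uniform(0, 1, size=(n_all, 3))
true_w = np.array([1.5, 2.0, -0.5])
consumption = X_all @ true_w + rng.normal(0, 0.1, n_all)  # latent welfare

# Step 1: survey a subsample to obtain ground-truth consumption labels.
survey_idx = rng.choice(n_all, n_survey, replace=False)
X_train, y_train = X_all[survey_idx], consumption[survey_idx]

# Step 2: fit a model linking phone features to surveyed consumption
# (plain least squares here, purely for illustration).
X1 = np.c_[X_train, np.ones(n_survey)]           # add intercept column
w, *_ = np.linalg.lstsq(X1, y_train, rcond=None)

# Step 3: predict welfare for every subscriber and target the poorest k.
pred = np.c_[X_all, np.ones(n_all)] @ w
k = 100
targeted = np.argsort(pred)[:k]                  # lowest predicted consumption

# Evaluation: exclusion error = share of the truly poorest k not targeted.
truly_poor = set(np.argsort(consumption)[:k])
exclusion_error = 1 - len(truly_poor & set(targeted)) / k
print(f"exclusion error: {exclusion_error:.2f}")
```

The key design point the paper evaluates is exactly this trade-off: ranking on predicted rather than observed welfare introduces exclusion errors, which are then compared against those of geographic targeting and of a (hypothetical) social registry.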
Hiding Behind Machines: Artificial Agents May Help to Evade Punishment
Paper by Till Feier, Jan Gogoll & Matthias Uhl: “The transfer of tasks with sometimes far-reaching implications to autonomous systems raises a number of ethical questions. In addition to fundamental questions about the moral agency of these systems, behavioral issues arise. We investigate the empirically accessible question of whether the imposition of harm by an agent is systematically judged differently when the agent is artificial and not human. The results of a laboratory experiment suggest that decision-makers can actually avoid punishment more easily by delegating to machines than by delegating to other people. Our results imply that the availability of artificial agents could provide stronger incentives for decision-makers to delegate sensitive decisions…(More)”.
Transparency in a “Post-Fact” World
Paper by Sabina Schnell: “What role can government transparency play in a democratic polity in a post-fact and a post-truth world? If the problem is not that citizens lack information about what the government does, but that they filter existing information through pre-existing ideological biases and world views, can government transparency still contribute to better informed citizens and more accountable government? To answer these questions, the article first reviews the critiques of transparency that are particularly salient in a post-fact world: that it reduces trust in government and increases polarization in deliberation. It then discusses three possible solutions: less transparency, tailored transparency, and reasoned transparency. Drawing on deliberative democracy theory, the article concludes that to reclaim the value of transparency, public administration scholars and practitioners need to move from a narrow interpretation of transparency as access to information to a broader, more holistic one, that considers more explicitly the communicative aspects of transparency and its normative foundations….(More)”.
Humans in the Loop
Paper by Rebecca Crootof, Margot E. Kaminski and W. Nicholson Price II: “From lethal drones to cancer diagnostics, complex and artificially intelligent algorithms are increasingly integrated into decisionmaking that affects human lives, raising challenging questions about the proper allocation of decisional authority between humans and machines. Regulators commonly respond to these concerns by putting a “human in the loop”: using law to require or encourage including an individual within an algorithmic decisionmaking process.
Drawing on our distinctive areas of expertise with algorithmic systems, we take a bird’s eye view to make three generalizable contributions to the discourse. First, contrary to the popular narrative, the law is already profoundly (and problematically) involved in governing algorithmic systems. Law may explicitly require or prohibit human involvement and law may indirectly encourage or discourage human involvement, all without regard to what we know about the strengths and weaknesses of human and algorithmic decisionmakers and the particular quirks of hybrid human-machine systems. Second, we identify “the MABA-MABA trap,” wherein regulators are tempted to address a panoply of concerns by “slapping a human in it” based on presumptions about what humans and algorithms are respectively better at doing, often without realizing that the new hybrid system needs its own distinct regulatory interventions. Instead, we suggest that regulators should focus on what they want the human to do—what role the human is meant to play—and design regulations to allow humans to play these roles successfully. Third, borrowing concepts from systems engineering and existing law regulating railroads, nuclear reactors, and medical devices, we highlight lessons for regulating humans in the loop as well as alternative means of regulating human-machine systems going forward….(More)”.