Paper by Gerhard Hammerschmid, Enora Palaric, Maike Rackwitz, and Kai Wegrich: “Despite claims of a paradigmatic shift toward the increased role of networks and partnerships as a form of governance—driven and enabled by digital technologies—the relationship of “Networked Governance” to the pre-existing paradigms of “Traditional Weberian Public Administration” and “New Public Management” remains relatively unexplored. This research aims to collect systematic evidence on the dominant paradigms in digitalization reforms in Europe by comparing the doctrines employed in the initial and most recent digitalization strategies across eight European countries: Estonia, France, Germany, Italy, the Netherlands, Norway, Spain, and the United Kingdom. We challenge the claim that Networked Governance is emerging as the dominant paradigm in the context of the digitalization of the public sector. The findings confirm earlier studies indicating that information and communication technologies tend to reinforce some traditional features of administration and the recentralization of power. Furthermore, we find evidence of the continued importance of key features of “New Public Management” in the digital era…(More)”.
To harness telecom data for good, there are six challenges to overcome
Blog by Anat Lewin and Sveta Milusheva: “The global use of mobile phones generates a vast amount of data. What good can be done with these data? During the COVID-19 pandemic, we saw that aggregated data from mobile phones can tell us where groups of humans are going, how many of them there are, and how they are behaving as a cluster. When used effectively and responsibly, mobile phone data can be immensely helpful for development work and emergency response — particularly in resource-constrained countries. For example, an African country that had, in recent years, experienced a cholera outbreak was ahead of the game. Since the legal and practical agreements were already in place to safely share aggregated mobile data, accessing newer information to support epidemiological modeling for COVID-19 was a straightforward exercise. The resulting datasets were used to produce insightful analyses that could better inform health, lockdown, and preventive policy measures in the country.
To better understand such challenges and opportunities, we led an effort to access and use anonymized, aggregated mobile phone data across 41 countries. During this process, we identified several recurring roadblocks and replicable successes, which we summarized in a paper along with our lessons learned. …(More)”.
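The kind of aggregation the blog describes, turning individual mobile phone traces into counts of people moving between areas, can be sketched in a few lines. The snippet below is an illustrative outline only, not the authors' pipeline; the column names, regions, and the small-cell suppression threshold are assumptions chosen for the example.

```python
# Illustrative sketch, not the authors' pipeline: aggregate pseudonymous call-detail
# records into daily region-to-region flows, then suppress small cells so that no
# small group of subscribers can be singled out. Column names and the threshold
# are assumptions for the example.
import pandas as pd

cdr = pd.DataFrame({
    "subscriber_id": ["a1", "a2", "a3", "a4", "a5", "a6"],
    "date": ["2020-04-01"] * 6,
    "origin_region": ["R1", "R1", "R1", "R1", "R1", "R2"],
    "dest_region":   ["R2", "R2", "R2", "R2", "R2", "R1"],
})

MIN_COUNT = 5  # suppression threshold: drop any flow describing fewer subscribers

flows = (
    cdr.groupby(["date", "origin_region", "dest_region"])["subscriber_id"]
       .nunique()                       # count distinct subscribers per flow
       .reset_index(name="subscribers")
)
flows = flows[flows["subscribers"] >= MIN_COUNT]  # R2 -> R1 (1 subscriber) is dropped
print(flows)
```

Only the aggregated, thresholded table would ever leave the operator's premises; the row-level records themselves never need to be shared.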
Professional expertise in Policy Advisory Systems: How administrators and consultants built Behavioral Insights in Danish public agencies
Paper by Jakob Laage-Thomsen: “Recent work on consultants and academics in public policy has highlighted their transformational role. The paper traces how, in the absence of an explicit government strategy, external advisors establish different organizational arrangements to build Behavioral Insights in public agencies as a new form of administrative expertise. This variation shows the importance of the politico-administrative context within which external advisors exert influence. The focus on professional expertise adds to existing understandings of ideational compatibility in contemporary Policy Advisory Systems. Inspired by the Sociology of Professions, expertise is conceptualized as professionally constructed sets of diagnosis, inference, and treatment. The paper compares four Danish governmental agencies since 2010, revealing the central roles external advisors play in facilitating new policy ideas and diffusing new forms of expertise. This has implications for how we think of administrative expertise in contemporary bureaucracies, and the role of external advisors in fostering new forms of expertise….(More)”.
The Law of AI for Good
Paper by Orly Lobel: “Legal policy and scholarship are increasingly focused on regulating technology to safeguard against risks and harms, neglecting the ways in which the law should direct the use of new technology, and in particular artificial intelligence (AI), for positive purposes. This article pivots the debates about automation, finding that the focus on AI wrongs is descriptively inaccurate, undermining a balanced analysis of the benefits, potential, and risks involved in digital technology. Further, the focus on AI wrongs is normatively and prescriptively flawed, narrowing and distorting the law reforms currently dominating tech policy debates. The law-of-AI-wrongs focuses on reactive and defensive solutions to potential problems while obscuring the need to proactively direct and govern increasingly automated and datafied markets and societies. Analyzing a new Federal Trade Commission (FTC) report, the Biden administration’s 2022 AI Bill of Rights and American and European legislative reform efforts, including the Algorithmic Accountability Act of 2022, the Data Privacy and Protection Act of 2022, the European General Data Protection Regulation (GDPR) and the new draft EU AI Act, the article finds that governments are developing regulatory strategies that almost exclusively address the risks of AI while giving short shrift to its benefits. The policy focus on risks of digital technology is pervaded by logical fallacies and faulty assumptions, failing to evaluate AI in comparison to human decision-making and the status quo. The article presents a shift from the prevailing absolutist approach to one of comparative cost-benefit. The role of public policy should be to oversee digital advancements, verify capabilities, and scale and build public trust in the most promising technologies.
A more balanced regulatory approach to AI also illuminates tensions between current AI policies. Because AI requires better, more representative data, the right to privacy can conflict with the right to fair, unbiased, and accurate algorithmic decision-making. This article argues that the dominant policy frameworks regulating AI risks—emphasizing the right to human decision-making (human-in-the-loop) and the right to privacy (data minimization)—must be complemented with new corollary rights and duties: a right to automated decision-making (human-out-of-the-loop) and a right to complete and connected datasets (data maximization). Moreover, a shift to proactive governance of AI reveals the necessity for behavioral research on how to establish not only trustworthy AI, but also human rationality and trust in AI. Ironically, many of the legal protections currently proposed conflict with existing behavioral insights on human-machine trust. The article presents a blueprint for policymakers to engage in the deliberate study of how irrational aversion to automation can be mitigated through education, private-public governance, and smart policy design…(More)”
Contextualizing Datafication in Peru: Insights from a Citizen Data Literacy Project
Paper by Katherine Reilly and Marieliv Flores: “The pilot data literacy project Son Mis Datos showed volunteers how to leverage Peru’s national data protection law to request access to personal data held by Peruvian companies, and then it showed them how to audit corporate data use based on the results. While this intervention had a positive impact on data literacy, by basing it on a universalist conception of datafication, our work inadvertently reproduced the dominant data paradigm we hoped to challenge. This paper offers a retrospective analysis of Son Mis Datos, and explores the gap between van Dijck’s widely cited theory of datafication, and the reality of our participants’ experiences with datafication and digital transformation on the ground in Peru. On this basis, we suggest an alternative definition of datafication more appropriate to critical scholarship as the transformation of social relations around the uptake of personal data in the coordination of transactions, and propose an alternative approach to data literacy interventions that begins with the experiences of data subjects…(More)”.
How Data Happened: A History from the Age of Reason to the Age of Algorithms
Book by Chris Wiggins and Matthew L Jones: “From facial recognition—capable of checking people into flights or identifying undocumented residents—to automated decision systems that inform who gets loans and who receives bail, each of us moves through a world determined by data-empowered algorithms. But these technologies didn’t just appear: they are part of a history that goes back centuries, from the census enshrined in the US Constitution to the birth of eugenics in Victorian Britain to the development of Google search.
Expanding on the popular course they created at Columbia University, Chris Wiggins and Matthew L. Jones illuminate the ways in which data has long been used as a tool and a weapon in arguing for what is true, as well as a means of rearranging or defending power. They explore how data was created and curated, as well as how new mathematical and computational techniques developed to contend with that data serve to shape people, ideas, society, military operations, and economies. Although technology and mathematics are at its heart, the story of data ultimately concerns an unstable game among states, corporations, and people. How were new technical and scientific capabilities developed; who supported, advanced, or funded these capabilities or transitions; and how did they change who could do what, from what, and to whom?
Wiggins and Jones focus on these questions as they trace data’s historical arc, and look to the future. By understanding the trajectory of data—where it has been and where it might yet go—Wiggins and Jones argue that we can understand how to bend it to ends that we collectively choose, with intentionality and purpose…(More)”.
Exploring data journalism practices in Africa: data politics, media ecosystems and newsroom infrastructures
Paper by Sarah Chiumbu and Allen Munoriyarwa: “Extant research on data journalism in Africa has focused on newsroom factors and the predilections of individual journalists as determinants of the uptake of data journalism on the continent. This article departs from this literature by examining the slow uptake of data journalism in sub-Saharan Africa through the prisms of non-newsroom factors. Drawing on in-depth interviews with prominent investigative journalists sampled from several African countries, we argue that to understand the slow uptake of data journalism on the continent, there is a need to critique the role of data politics, which encompasses state, market and existing media ecosystems across the continent. Therefore, it is necessary to move beyond newsroom-centric factors that have dominated the contemporary understanding of data journalism practices. A broader, non-newsroom conceptualisation beyond individual journalistic predilections and newsroom resources provides productive clarity on data journalism’s slow uptake on the continent. These arguments are made through the conceptual prisms of materiality, performativity and reflexivity…(More)”.
Ten (not so) simple rules for clinical trial data-sharing
Paper by Claude Pellen et al.: “Clinical trial data-sharing is seen as an imperative for research integrity and is becoming increasingly encouraged or even required by funders, journals, and other stakeholders. However, early experiences with data-sharing have been disappointing because it is not always conducted properly. Health data is indeed sensitive and not always easy to share in a responsible way. We propose 10 rules for researchers wishing to share their data. These rules cover the majority of elements to be considered in order to start the commendable process of clinical trial data-sharing:
- Rule 1: Abide by local legal and regulatory data protection requirements
- Rule 2: Anticipate the possibility of clinical trial data-sharing before obtaining funding
- Rule 3: Declare your intent to share data in the registration step
- Rule 4: Involve research participants
- Rule 5: Determine the method of data access
- Rule 6: Remember there are several other elements to share
- Rule 7: Do not proceed alone
- Rule 8: Deploy optimal data management to ensure that the data shared is useful
- Rule 9: Minimize risks
- Rule 10: Strive for excellence…(More)”
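The rules themselves are organisational rather than technical, but one recurring technical step behind Rules 8 and 9 is preparing a de-identified version of the dataset before it is shared. The sketch below is a generic illustration under invented variable names, not guidance from the paper: it drops direct identifiers, coarsens quasi-identifiers, and issues fresh participant IDs.

```python
# Generic de-identification sketch (not from the paper): remove direct identifiers,
# coarsen quasi-identifiers, and assign new study-specific IDs before sharing.
# All variable names and values are invented for the example.
import pandas as pd

trial = pd.DataFrame({
    "patient_name": ["A. Smith", "B. Jones"],
    "birth_date": ["1951-06-02", "1948-11-23"],
    "enrolment_date": ["2021-03-15", "2021-04-02"],
    "arm": ["treatment", "placebo"],
    "outcome": [1, 0],
})

shared = trial.drop(columns=["patient_name"]).copy()  # drop direct identifiers
shared["birth_year"] = pd.to_datetime(shared["birth_date"]).dt.year // 5 * 5   # 5-year bands
shared["enrolment_month"] = pd.to_datetime(shared["enrolment_date"]).dt.to_period("M")
shared = shared.drop(columns=["birth_date", "enrolment_date"])
shared.insert(0, "participant_id", range(1, len(shared) + 1))  # new study-specific IDs
print(shared)
```

A step like this is only one piece of risk minimization; access controls, data use agreements, and documentation (Rules 5 to 8) still determine whether the shared data are actually useful and safe.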
Machine Learning as a Tool for Hypothesis Generation
Paper by Jens Ludwig & Sendhil Mullainathan: “While hypothesis testing is a highly formalized activity, hypothesis generation remains largely informal. We propose a systematic procedure to generate novel hypotheses about human behavior, which uses the capacity of machine learning algorithms to notice patterns people might not. We illustrate the procedure with a concrete application: judge decisions about who to jail. We begin with a striking fact: The defendant’s face alone matters greatly for the judge’s jailing decision. In fact, an algorithm given only the pixels in the defendant’s mugshot accounts for up to half of the predictable variation. We develop a procedure that allows human subjects to interact with this black-box algorithm to produce hypotheses about what in the face influences judge decisions. The procedure generates hypotheses that are both interpretable and novel: They are not explained by demographics (e.g. race) or existing psychology research; nor are they already known (even if tacitly) to people or even experts. Though these results are specific, our procedure is general. It provides a way to produce novel, interpretable hypotheses from any high-dimensional dataset (e.g. cell phones, satellites, online behavior, news headlines, corporate filings, and high-frequency time series). A central tenet of our paper is that hypothesis generation is in and of itself a valuable activity, and we hope this encourages future work in this largely “pre-scientific” stage of science…(More)”.
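To make the first step of this procedure concrete, the toy sketch below fits a black-box classifier on high-dimensional pixel-like features and scores it out of sample; the data are synthetic and the model choice is an assumption of the example, not the authors' method. The paper's distinctive step comes afterwards, when human subjects interact with the fitted model (for example through generated or morphed faces) to articulate interpretable hypotheses about what it has latched onto.

```python
# Toy sketch (not the authors' pipeline): fit a black-box predictor of a binary
# judge decision from flattened image pixels and check its out-of-sample signal.
# All data here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, pixels = 2000, 32 * 32
X = rng.normal(size=(n, pixels))               # stand-in for mugshot pixels
w = rng.normal(size=pixels) / np.sqrt(pixels)  # unknown "facial" signal
p = 1 / (1 + np.exp(-(X @ w)))                 # probability of detention
y = rng.binomial(1, p)                         # observed jail / release decision

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print("held-out AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
# The hypothesis-generation step then asks people to name what the black box is
# responding to, turning its opaque signal into interpretable, testable hypotheses.
```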
An iterative regulatory process for robot governance
Paper by Hadassah Drukarch, Carlos Calleja and Eduard Fosch-Villaronga: “There is an increasing gap between the policy cycle’s speed and that of technological and social change. This gap is becoming broader and more prominent in robotics, that is, movable machines that perform tasks either automatically or with a degree of autonomy. This is because current legislation was unprepared for machine learning and autonomous agents. As a result, the law often lags behind and does not adequately frame robot technologies. This state of affairs inevitably increases legal uncertainty. It is unclear what regulatory frameworks developers have to follow to comply, often resulting in technology that does not perform well in the wild, is unsafe, and can exacerbate biases and lead to discrimination. This paper explores these issues and considers the background, key findings, and lessons learned of the LIAISON project, which stands for “Liaising robot development and policymaking,” and aims to ideate an alignment model for robots’ legal appraisal, channeling robot policy development from a hybrid top-down/bottom-up perspective, to solve this mismatch. As such, LIAISON seeks to uncover to what extent compliance tools could be used as data generators for robot policy purposes to unravel an optimal regulatory framing for existing and emerging robot technologies…(More)”.