Identifying and addressing data asymmetries so as to enable (better) science


Paper by Stefaan Verhulst and Andrew Young: “As a society, we need to become more sophisticated in assessing and addressing data asymmetries—and their resulting political and economic power inequalities—particularly in the realm of open science, research, and development. This article seeks to start filling the analytical gap regarding data asymmetries globally, with a specific focus on the asymmetrical availability of privately held data for open science, and a look at current efforts to address these data asymmetries. It provides a taxonomy of asymmetries, as well as both their societal and institutional impacts. Moreover, this contribution outlines a set of solutions that could provide a toolbox for open science practitioners and data demand-side actors that stand to benefit from increased access to data. The concept of data liquidity (and portability) is explored at length in connection with efforts to generate an ecosystem of responsible data exchanges. We also examine how data holders and demand-side actors are experimenting with new and emerging operational models and governance frameworks for purpose-driven, cross-sector data collaboratives that connect previously siloed datasets. Key solutions discussed include professionalizing and re-imagining data steward roles and functions (i.e., individuals or groups who are tasked with managing data and their ethical and responsible reuse within organizations). We present these solutions through case studies on notable efforts to address science data asymmetries. We examine these cases using a repurposable analytical framework that could inform future research. We conclude with recommended actions that could support the creation of an evidence base on work to address data asymmetries and unlock the public value of greater science data liquidity and responsible reuse…(More)”.

The UK Algorithmic Transparency Standard: A Qualitative Analysis of Police Perspectives


Paper by Marion Oswald, Luke Chambers, Ellen P. Goodman, Pam Ugwudike, and Miri Zilka: “1. The UK Government’s draft ‘Algorithmic Transparency Standard’ is intended to provide a standardised way for public bodies and government departments to provide information about how algorithmic tools are being used to support decisions. The research discussed in this report was conducted in parallel to the piloting of the Standard by the Cabinet Office and the Centre for Data Ethics and Innovation.
2. We conducted semi-structured interviews with respondents from across UK policing and commercial bodies involved in policing technologies. Our aim was to explore the implications of participation in the Standard for police forces; to identify rewards, risks, and challenges for the police, as well as areas where the Standard could be improved; and thereby to contribute to exploring policy options for expanding participation in the Standard.
3. Algorithmic transparency is both achievable for policing and could bring significant rewards. A key reward of police participation in the Standard is that it provides the opportunity to demonstrate proficient implementation of technology-driven policing, thus enhancing earned trust. Research participants highlighted the public good that could result from the considered use of algorithms.
4. Participants noted, however, a risk of misperception of the dangers of policing technology, especially if use of algorithmic tools was not appropriately compared to the status quo and current methods…(More)”.

Is GDP Becoming Obsolete? The “Beyond GDP” Debate


Paper by Charles R. Hulten & Leonard I. Nakamura: “GDP is a closely watched indicator of the current health of the economy and an important tool of economic policy. It has been called one of the great inventions of the 20th Century. It is not, however, a persuasive indicator of individual wellbeing or economic progress. There have been calls to refocus or replace GDP with a metric that better reflects the welfare dimension. In response, the U.S. agency responsible for the GDP accounts recently launched a “GDP and Beyond” program. This is by no means an easy undertaking, given the subjective and idiosyncratic nature of much of individual wellbeing. This paper joins the Beyond GDP effort by extending the standard utility maximization model of economic theory, using an expenditure function approach to include those non-GDP sources of wellbeing for which a monetary value can be established. We term our new measure expanded GDP (EGDP). A welfare-adjusted stock of wealth is also derived using the same general approach used to obtain EGDP. This stock is useful for issues involving the sustainability of wellbeing over time. One of the implications of this dichotomy is that conventional cost-based wealth may increase over a period of time while welfare-corrected wealth may show a decrease (due, for example, to strongly negative environmental externalities)…(More)”
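
In stylized form (our notation, not the paper’s), the expenditure-function extension amounts to adding imputed monetary values of non-market wellbeing sources to conventional output:

\[
\mathrm{EGDP}_t \;=\; \mathrm{GDP}_t \;+\; \sum_{j} p_{j,t}\, q_{j,t},
\]

where \(q_{j,t}\) is the quantity of non-GDP wellbeing source \(j\) (leisure, home production, environmental quality) and \(p_{j,t}\) is the shadow price imputed for it via the expenditure function. The welfare-adjusted wealth stock follows the same logic, \(W^{\mathrm{welfare}}_t = W^{\mathrm{cost}}_t + V_t\), with \(V_t\) capitalizing the stream of non-market benefits and costs; a sufficiently negative \(V_t\), as with strong environmental externalities, is exactly how conventional cost-based wealth can rise over a period in which welfare-corrected wealth falls.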

Participatory Systems Mapping for Municipal Prioritization and Planning


Paper by Amanda Pomeroy-Stevens, Bailey Goldman & Karen Grattan: “Rapidly growing cities face new and compounding health challenges, leading governments and donors to seek innovative ways to support healthier, more resilient urban growth. One such approach is the systems mapping process developed by Engaging Inquiry (EI) for the USAID-funded Building Healthy Cities project (BHC) in four cities in Asia. This paper provides details on the theory and methods of the process. While systems mapping is not new, the approach detailed in this paper has been uniquely adapted to the purpose of municipal planning. Strategic stakeholder engagement, including participatory workshops with a diverse group of stakeholders, is at the core of this approach and led to deeper insights, greater buy-in, and shared understanding of the city’s unique opportunities and challenges. This innovative mapping process is a powerful tool for defining municipal priorities within growing cities across the globe, where the situation is rapidly evolving. It can be used to provide evidence-based information on where to invest to gain the biggest impact on specific goals. This paper is part of a collection in this issue providing a detailed accounting of BHC’s systems mapping approach across four project cities…(More)”.

Crowdsourcing Initiatives in City Management: The Perspective of Polish Local Governments


Paper by Ewa Glińska, Halina Kiryluk and Karolina Ilczuk: “The past decade has seen a rise in the significance of the Internet in facilitating communication between local governments and local stakeholders, with crowdsourcing playing a growing role in this dialog. The paper aims to identify the areas, forms, and tools for implementing crowdsourcing in the management of Polish cities, and to assess the benefits that representatives of municipal governments attribute to crowdsourcing initiatives. The article draws on a quantitative survey of a sample of 176 city governments in Poland. The study shows that cities’ crowdsourcing initiatives concern areas such as culture, city image, spatial management, environmental protection, security, recreation and tourism, as well as relations between entrepreneurs and city hall, transport, and innovation. Forms of stakeholder engagement via crowdsourcing include civic budgets, “voting/polls/surveys and interviews,” and “debate/discussion/meeting, workshop, postulates and comments.” The larger the city, the more often its representatives employ these forms of crowdsourcing. Local governments most frequently carry out crowdsourcing initiatives through cities’ official web pages, social media, and platforms dedicated to public consultations. The larger the city, the greater the value placed on the utility of crowdsourcing…(More)”.

Human-centred mechanism design with Democratic AI


Paper by Raphael Koster et al.: “Building artificial intelligence (AI) that aligns with human values is an unsolved problem. Here we developed a human-in-the-loop research pipeline called Democratic AI, in which reinforcement learning is used to design a social mechanism that humans prefer by majority. A large group of humans played an online investment game that involved deciding whether to keep a monetary endowment or to share it with others for collective benefit. Shared revenue was returned to players under two different redistribution mechanisms, one designed by the AI and the other by humans. The AI discovered a mechanism that redressed initial wealth imbalance, sanctioned free riders and successfully won the majority vote. By optimizing for human preferences, Democratic AI offers a proof of concept for value-aligned policy innovation…(More)”.
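
To convey the shape of such a human-in-the-loop pipeline, here is a toy sketch (our own construction, not the paper’s code: the endowments, the single “equalization” parameter, the noisy simulated voters, and the hill-climbing search standing in for reinforcement learning are all invented for illustration):

```python
import random

random.seed(0)

def play_round(endowments, contrib_rate, equalize, multiplier=1.6):
    """One round of a toy public-investment game. The multiplied pot is paid
    back as a mix of proportional-to-contribution and strictly equal shares,
    controlled by the single mechanism parameter `equalize` in [0, 1]."""
    contribs = [e * contrib_rate for e in endowments]
    pot = sum(contribs) * multiplier
    total_c = sum(contribs) or 1.0
    payoffs = []
    for e, c in zip(endowments, contribs):
        prop_share = pot * c / total_c       # rewards big contributors
        equal_share = pot / len(endowments)  # redresses endowment imbalance
        share = (1 - equalize) * prop_share + equalize * equal_share
        payoffs.append(e - c + share)
    return payoffs

def approval(equalize, trials=300):
    """Fraction of simulated players voting for the candidate mechanism over
    a purely proportional baseline; votes carry idiosyncratic taste noise."""
    wins = total = 0
    for _ in range(trials):
        # one wealthy player, three poorer ones: the majority starts behind
        endowments = [10.0] + [random.uniform(2, 4) for _ in range(3)]
        mech = play_round(endowments, 0.5, equalize)
        base = play_round(endowments, 0.5, 0.0)
        for m, b in zip(mech, base):
            total += 1
            wins += (m - b) + random.gauss(0, 1.0) > 0
    return wins / total

# Hill-climb the mechanism parameter on simulated majority approval (the
# paper uses reinforcement learning; this stand-in keeps the sketch small).
best, best_score = 0.0, approval(0.0)
for _ in range(40):
    candidate = min(1.0, max(0.0, best + random.gauss(0, 0.2)))
    score = approval(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(f"equalization weight ~ {best:.2f}, simulated approval ~ {best_score:.1%}")
```

Because a majority of the simulated players start with small endowments, the search drifts toward a strongly equalizing mechanism, loosely echoing the paper’s finding that the winning AI-designed mechanism redressed initial wealth imbalance.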

The People Versus The Algorithm: Stakeholders and AI Accountability


Paper by Jbid Arsenyan and Julia Roloff: “As artificial intelligence (AI) applications are used for a wide range of tasks, the question of who is responsible for detecting and remediating problems caused by AI applications remains disputed. We argue that the responsibility attributions proposed by management scholars fail to enable a practical solution because two aspects are overlooked: the difficulty of designing a complex algorithm that does not produce adverse outcomes, and the conflict of interest inherent by design in some AI applications, as proprietors and users employ the application for different purposes. In this conceptual paper, we argue that effective accountability can only be delivered through solutions that enable stakeholders to employ their collective intelligence effectively to compile problem reports and analyze problem patterns. This allows stakeholders, including governments, to hold providers of AI applications accountable and ensure that appropriate corrections are carried out in a timely manner…(More)”.

What Might Hannah Arendt Make of Big Data?: On Thinking, Natality, and Narrative with Big Data


Paper by Daniel Brennan: “…considers the phenomenon of Big Data through the work of Hannah Arendt on technology and on thinking. By exploring the nuance of Arendt’s critique of technology, and its relation to the social and political spheres of human activity, the paper presents a case for considering the richness of Arendt’s thought for approaching moral questions of Big Data. The paper argues that the nuances of Arendt’s writing contribute a sceptical, yet also hopeful lens on the moral potential of Big Data. The scepticism is due to the potential of Big Data to reduce humans to a calculable, and thus manipulatable, entity. Such warnings are rife throughout Arendt’s oeuvre. The hope is found in the unique way that Arendt conceives of thinking, as having a conversation with oneself, unencumbered by ideological or fixed accounts of how things are, in a manner which challenges preconceived notions of the self and world. If thinking can be aided by Big Data, then there is hope for Big Data to contribute to the project of natality that characterises Arendt’s understanding of social progress. Ultimately, the paper contends that Arendt’s definition of what constitutes thinking is the mediator to make sense of the moral ambivalence surrounding Big Data. By focussing on Arendt’s account of the moral value of thinking, the paper provides an evaluative framework for interrogating uses of Big Data…(More)”.

Legislating Data Loyalty


Paper by Woodrow Hartzog and Neil M. Richards: “Lawmakers looking to embolden privacy law have begun to consider imposing duties of loyalty on organizations trusted with people’s data and online experiences. The idea behind loyalty is simple: organizations should not process data or design technologies that conflict with the best interests of trusting parties. But the logistics and implementation of data loyalty need to be developed if the concept is going to be capable of moving privacy law beyond its “notice and consent” roots to confront people’s vulnerabilities in their relationship with powerful data collectors.

In this short Essay, we propose a model for legislating data loyalty. Our model takes advantage of loyalty’s strengths—it is well-established in our law, it is flexible, and it can accommodate conflicting values. Our Essay also explains how data loyalty can embolden our existing data privacy rules, address emergent dangers, solve privacy’s problems around consent and harm, and establish an antibetrayal ethos as America’s privacy identity.

We propose that lawmakers use a two-step process to (1) articulate a primary, general duty of loyalty, then (2) articulate “subsidiary” duties that are more specific and sensitive to context. Subsidiary duties regarding collection, personalization, gatekeeping, persuasion, and mediation would target the most opportunistic contexts for self-dealing and result in flexible open-ended duties combined with highly specific rules. In this way, a duty of data loyalty is not just appealing in theory—it can be effectively implemented in practice just like the other duties of loyalty our law has recognized for hundreds of years. Loyalty is thus not only flexible, but it is capable of breathing life into America’s historically tepid privacy frameworks…(More)”.

Efficient and stable data-sharing in a public transit oligopoly as a coopetitive game


Paper by Qi Liu and Joseph Y.J. Chow: “In this study, various forms of data sharing are axiomatized. A new way of studying coopetition, especially data-sharing coopetition, is proposed. The problem of the Bayesian game with signal dependence on actions is observed, and a method to handle such dependence is proposed. We focus on fixed-route transit service markets. A discrete model is first presented to analyze the data-sharing coopetition of an oligopolistic transit market when an externality effect exists. Given a fixed data-sharing structure, a Bayesian game is used to capture the competition under uncertainty, while a coalition formation model is used to determine the stable data-sharing decisions. A new method of composite coalitions is proposed to study efficient markets. An alternative continuous model is proposed to handle large networks using simulation. We apply these models to various types of networks. Test results show that perfect information may lead to perfect selfishness. Sharing more data does not necessarily improve transit service for all groups, at least if transit operators remain non-cooperative. Service complementarity does not necessarily guarantee a grand data-sharing coalition. These results can provide insights for policy-making, such as whether city authorities should enforce compulsory data-sharing along with cooperation between operators or set up a voluntary data-sharing platform…(More)”.
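
As a minimal illustration of the coalition-stability logic (a toy characteristic function with invented data endowments, synergy, and leakage costs; nothing here reproduces the paper’s transit model):

```python
from itertools import combinations

OPERATORS = ("A", "B", "C")
DATA = {"A": 3.0, "B": 2.0, "C": 1.0}  # hypothetical data endowments

def coalition_value(coalition):
    """Toy characteristic function: pooled data gains synergy when combined,
    but each member pays a competitive-leakage cost (the externality)."""
    pooled = sum(DATA[i] for i in coalition)
    synergy = 0.4 * pooled * (len(coalition) - 1)
    leakage = 0.5 * (len(coalition) - 1) * len(coalition)
    return pooled + synergy - leakage

def equal_split(coalition):
    """Naive equal division of coalition value among its members."""
    v = coalition_value(coalition)
    return {i: v / len(coalition) for i in coalition}

# Is the grand data-sharing coalition blocked by any sub-coalition?
grand = equal_split(OPERATORS)
print("grand coalition payoffs:", grand)
for size in (1, 2):
    for sub in combinations(OPERATORS, size):
        alt = equal_split(sub)
        if all(alt[i] > grand[i] for i in sub):
            print(f"blocked by {sub}: payoffs {alt}")
```

With these invented numbers the grand coalition is blocked both by operator A alone and by the pair (A, B): the data-rich operator subsidizes the others under an equal split, which illustrates the paper’s observation that service complementarity alone does not guarantee a grand data-sharing coalition.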