Paper by Albert Meijer & Marcel Thaens: “The positive features of innovation are well known but the dark side of public innovation has received less attention. To fill this gap, this article develops a theoretical understanding of the dark side of public innovation. We explore a diversity of perverse effects on the basis of a literature review and an expert consultation. We indicate that these perverse effects can be categorized on two dimensions: low public value and low public control. We confront this exploratory analysis with the literature and conclude that the perverse effects are not coincidental but emerge from key properties of innovation processes such as creating niches for innovation and accepting uncertainty about public value outcomes. To limit perverse effects, we call for the dynamic assessment of public innovation. The challenge for innovators is to acknowledge the dark side and take measures to prevent perverse effects without killing the innovativeness of organizations…(More)”.
Paper by Kaustav Bhattacharjee, Min Chen, and Aritra Dasgupta: “Preservation of data privacy and protection of sensitive information from potential adversaries constitute a key socio‐technical challenge in the modern era of ubiquitous digital transformation. Addressing this challenge requires analysis of multiple factors: algorithmic choices for balancing privacy and loss of utility, potential attack scenarios that can be undertaken by adversaries, implications for data owners, data subjects, and data sharing policies, and access control mechanisms that need to be built into interactive data interfaces.
Visualization has a key role to play as part of the solution space, both as a medium of privacy‐aware information communication and also as a tool for understanding the link between privacy parameters and data sharing policies. The field of privacy‐preserving data visualization has witnessed progress along many of these dimensions. In this state‐of‐the‐art report, our goal is to provide a systematic analysis of the approaches, methods, and techniques used for handling data privacy in visualization. We also reflect on the road‐map ahead by analyzing the gaps and research opportunities for solving some of the pressing socio‐technical challenges involving data privacy with the help of visualization….(More)”.
Paper by Christopher Loynes, Jamal Ouenniche & Johannes De Smedt: “This paper provides the humanitarian community with an automated tool that can detect a disaster using tweets posted on Twitter, alongside a portal to identify local and regional Non-Governmental Organisations (NGOs) that are best-positioned to provide support to people adversely affected by a disaster. The proposed disaster detection tool uses a linear Support Vector Classifier (SVC) to detect man-made and natural disasters, and a density-based spatial clustering of applications with noise (DBSCAN) algorithm to accurately estimate a disaster’s geographic location. This paper provides two original contributions. The first is combining the automated disaster detection tool with the prototype portal for NGO identification. This unique combination could help reduce the time taken to raise awareness of the disaster detected, improve the coordination of aid, increase the amount of aid delivered as a percentage of initial donations and improve aid effectiveness. The second contribution is a general framework that categorises the different approaches that can be adopted for disaster detection. Furthermore, this paper uses responses obtained from an on-the-ground survey with NGOs in the disaster-hit region of Uttar Pradesh, India, to provide actionable insights into how the portal can be developed further…(More)”.
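The geolocation step in the disaster detection tool above relies on DBSCAN clustering of tweet coordinates. As a minimal pure-Python sketch of how that clustering works (the eps/min_pts values and distance metric here are illustrative assumptions, not the paper's tuned parameters):

```python
import math

def dbscan(points, eps, min_pts):
    """Density-based clustering; returns one label per point (-1 = noise).

    Note: plain Euclidean distance is used for brevity; clustering real
    latitude/longitude pairs would normally use haversine distance.
    """
    labels = [None] * len(points)

    def region(i):
        # Indices of all points within eps of point i (including i itself).
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = region(i)
        if len(seeds) < min_pts:
            labels[i] = -1           # provisionally noise
            continue
        cluster += 1                 # i is a core point: start a new cluster
        labels[i] = cluster
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_region = region(j)
            if len(j_region) >= min_pts:
                seeds.extend(j_region)  # j is also core: keep expanding
    return labels
```

Applied to coordinate pairs extracted from geotagged tweets, the centroid of the densest cluster would serve as the estimated disaster location; the paper's actual feature pipeline for the SVC classifier is not reproduced here.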
Paper by Sarah Giest: “Nudging is seen to complement or replace existing policy tools by altering people’s choice architectures towards behaviors that align with government aims, but it has fallen short of meeting those targets. Crucially, governments do not nudge citizens directly, but need private agents to nudge their consumers. Based on this notion, the paper takes an institutional approach to nudging. Rather than looking at the relationship between nudger and nudgee, the research analyses the regulatory and market structures that affect nudge implementation by private actors, captured by the ‘budge’ idea. Focusing on the European energy policy domain, the paper analyses the contextual factors of green nudges that are initiated by Member States and implemented by energy companies. The findings show that, in the smart meter context, there are regulatory measures that affect the implementation of smart meters and that government has a central role to ‘budge’, owing to its dependence on private agents….(More)”.
James Brian Byrd, Anna C. Greene, Deepashree Venkatesh Prasad, Xiaoqian Jiang & Casey S. Greene in Nature: “Data sharing anchors reproducible science, but expectations and best practices are often nebulous. Communities of funders, researchers and publishers continue to grapple with what should be required or encouraged. To illuminate the rationales for sharing data, the technical challenges and the social and cultural challenges, we consider the stakeholders in the scientific enterprise. In biomedical research, participants are key among those stakeholders. Ethical sharing requires considering both the value of research efforts and the privacy costs for participants. We discuss current best practices for various types of genomic data, as well as opportunities to promote ethical data sharing that accelerates science by aligning incentives….(More)”.
Paper by Stefan Sauermann et al: “This paper provides insight into how restricted data can be incorporated in an open-by-default-by-design digital infrastructure for scientific data. We focus, in particular, on the ethical component of FAIRER (Findable, Accessible, Interoperable, Ethical, and Reproducible) data, and the pseudo-anonymization and anonymization of COVID-19 datasets to protect personally identifiable information (PII). First, we consider the need to customise existing privacy preservation techniques in the context of the rapid production, integration, sharing and analysis of COVID-19 data. Second, methods for the pseudo-anonymization of direct identification variables are discussed, including the use of different pseudo-IDs for the same person across multiple domains and organizations. Essentially, pseudo-anonymization and its encrypted, domain-specific IDs make it possible to match data later, if required and permitted, and to restore the true ID (and authenticity) in individual cases requiring clarification with a patient. Third, we discuss the application of statistical disclosure control (SDC) techniques to COVID-19 disease data. To limit the risk of re-identification of individuals in COVID-19 datasets (which are often enriched with covariates such as age, gender and nationality) to acceptable levels, the risk of successful re-identification through a combination of attribute values must be assessed and controlled; this is done using statistical disclosure control for the anonymization of data. Lastly, we discuss the limitations of the proposed techniques and provide general guidelines on using disclosure risks to decide on appropriate modes of data sharing that preserve the privacy of the individuals in the datasets….(More)”.
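The multi-domain pseudo-ID idea described above can be illustrated with a small keyed-hash sketch (an assumption for illustration; the paper's actual encrypted-ID construction is not reproduced here). A trusted party holding the secret can re-derive and match pseudonyms across domains, while the published datasets themselves cannot be joined without it:

```python
import hmac
import hashlib

def pseudo_id(true_id: str, domain: str, secret: bytes) -> str:
    """Derive a domain-specific pseudonym for one person.

    Deterministic within a domain, so the key holder can later re-derive
    it to match records; different across domains, so two datasets cannot
    be joined on the pseudonym without the secret. (Illustrative scheme,
    not the paper's exact construction.)
    """
    # Per-domain subkey, so pseudonyms from different domains are unlinkable.
    domain_key = hmac.new(secret, domain.encode(), hashlib.sha256).digest()
    return hmac.new(domain_key, true_id.encode(), hashlib.sha256).hexdigest()[:16]
```

Restoring the true ID in an individual case would additionally require the key holder to keep (or re-derive) a pseudonym-to-ID mapping; a plain keyed hash alone is one-way.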
Paper by Alasdair S. Roberts: “The first two decades of this century have shown there is no simple formula for governing well. Leaders must make difficult choices about national priorities and the broad lines of policy – that is, about the substance of their strategy for governing. These strategic choices have important implications for public administration. Scholars in this field should study the processes by which strategy is formulated and executed more closely than they have over the last thirty years. A new agenda for public administration should emphasize processes of top-level decision-making, mechanisms to improve foresight and the management of societal risks, and problems of large-scale reorganization and inter-governmental coordination, among other topics. Many of these themes have been examined more closely by researchers in Canada than by those abroad. This difference should be recognized as an advantage rather than a liability….(More)”.
Paper by Tamar Sharon: “Since the outbreak of COVID-19, governments have turned their attention to digital contact tracing. In many countries, public debate has focused on the risks this technology poses to privacy, with advocates and experts sounding alarm bells about surveillance and mission creep reminiscent of the post-9/11 era. Yet, when Apple and Google launched their contact tracing API in April 2020, some of the world’s leading privacy experts applauded this initiative for its privacy-preserving technical specifications. In an interesting twist, the tech giants came to be portrayed as greater champions of privacy than some democratic governments.
This article proposes to view the Apple/Google API in terms of a broader phenomenon whereby tech corporations are encroaching into ever new spheres of social life. From this perspective, the (legitimate) advantage these actors have accrued in the sphere of the production of digital goods provides them with (illegitimate) access to the spheres of health and medicine and, more worryingly, to the sphere of politics. These sphere transgressions raise numerous risks that are not captured by the focus on privacy harms: a crowding out of essential spherical expertise, new dependencies on corporate actors for the delivery of essential public goods, the shaping of (global) public policy by non-representative, private actors and, ultimately, the accumulation of decision-making power across multiple spheres. While privacy is certainly an important value, its centrality in the debate on digital contact tracing may blind us to these broader societal harms and unwittingly pave the way for ever more sphere transgressions….(More)”.
Paper by Nardine Alnemr: “Challenges in attaining deliberative democratic ideals – such as inclusion, authenticity and consequentiality – in wider political systems have driven the development of artificially-designed citizen deliberation. These designed deliberations, however, are expert-driven. Whereas they may achieve ‘deliberativeness’, their design and implementation are undemocratic and limit deliberative democracy’s emancipatory goals. This is particularly relevant to the role of facilitation. In online deliberation, algorithms and artificial actors replace the central role of human facilitators. The detachment of such designed settings from wider contexts is particularly troubling from a democratic perspective. Digital technologies in online deliberation are not developed in a manner consistent with democratic ideals and are not amenable to scrutiny by citizens. I discuss the theoretical and the practical blind spots of algorithmic facilitation. Based on these, I present recommendations to democratise the design and implementation of online deliberation, with a focus on chatbots as facilitators….(More)”.
The University of Warwick: “Researchers from the University of Warwick, Imperial College London, EPFL (Lausanne) and Sciteb Ltd have found a mathematical means of helping regulators and businesses manage and police Artificial Intelligence systems’ biases towards unethical, and potentially very costly and damaging, commercial choices: an ethical eye on AI.
Artificial intelligence (AI) is increasingly deployed in commercial situations. Consider for example using AI to set prices of insurance products to be sold to a particular customer. There are legitimate reasons for setting different prices for different people, but it may also be profitable to ‘game’ their psychology or willingness to shop around.
The AI has a vast number of potential strategies to choose from, but some are unethical and will incur not just a moral cost but a significant potential economic penalty: if stakeholders discover that such a strategy has been used, regulators may levy fines of billions of dollars, pounds or euros, customers may boycott the company, or both.
In an environment in which decisions are increasingly made without human intervention, there is therefore a very strong incentive to know under what circumstances AI systems might adopt an unethical strategy, and to reduce that risk or eliminate it entirely if possible.
Mathematicians and statisticians from the University of Warwick, Imperial College London, EPFL and Sciteb Ltd have come together to help business and regulators by creating a new “Unethical Optimization Principle” and providing a simple formula to estimate its impact. They have laid out the full details in a paper entitled “An unethical optimization principle”, published in Royal Society Open Science on Wednesday 1st July 2020….(More)”.
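The paper's formula is not reproduced here, but the selection effect behind the Unethical Optimization Principle can be sketched with a toy Monte Carlo: when an optimiser always picks the best-performing of many strategies, a small unethical minority with heavier-tailed returns gets selected far more often than its share of the strategy space. All distributions and parameters below are illustrative assumptions, not values from the paper:

```python
import random

def selected_unethical_rate(n_strategies=1000, frac_unethical=0.02,
                            trials=500, seed=0):
    """Toy Monte Carlo: the optimiser always picks the strategy with the
    highest simulated return. Unethical strategies (a small fraction) are
    given heavier-tailed returns (sd 2 vs 1, an illustrative assumption),
    and end up selected far more often than their share suggests."""
    rng = random.Random(seed)
    n_bad = int(n_strategies * frac_unethical)
    wins = 0
    for _ in range(trials):
        best_return, best_is_bad = float("-inf"), False
        for k in range(n_strategies):
            is_bad = k < n_bad          # first n_bad strategies are unethical
            ret = rng.gauss(0.0, 2.0 if is_bad else 1.0)
            if ret > best_return:
                best_return, best_is_bad = ret, is_bad
        wins += best_is_bad
    return wins / trials
```

With these assumed parameters only 2% of strategies are unethical, yet the return-maximising choice is unethical in well over 2% of runs, which is the kind of disproportionate risk the principle warns regulators and businesses about.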