Paper by Nardine Alnemr: “Challenges in attaining deliberative democratic ideals – such as inclusion, authenticity and consequentiality – in wider political systems have driven the development of artificially-designed citizen deliberation. These designed deliberations, however, are expert-driven. Whereas they may achieve ‘deliberativeness’, their design and implementation are undemocratic and limit deliberative democracy’s emancipatory goals. This is especially relevant with respect to the role of facilitation. In online deliberation, algorithms and artificial actors replace the central role of human facilitators. The detachment of such designed settings from wider contexts is particularly troubling from a democratic perspective. Digital technologies in online deliberation are not developed in a manner consistent with democratic ideals and are not amenable to scrutiny by citizens. I discuss the theoretical and the practical blind spots of algorithmic facilitation. Based on these, I present recommendations to democratise the design and implementation of online deliberation with a focus on chatbots as facilitators….(More)”.
The University of Warwick: “Researchers from the University of Warwick, Imperial College London, EPFL (Lausanne) and Sciteb Ltd have found a mathematical means of helping regulators and business manage and police Artificial Intelligence systems’ biases towards making unethical, and potentially very costly and damaging commercial choices—an ethical eye on AI.
Artificial intelligence (AI) is increasingly deployed in commercial situations. Consider for example using AI to set prices of insurance products to be sold to a particular customer. There are legitimate reasons for setting different prices for different people, but it may also be profitable to ‘game’ their psychology or willingness to shop around.
The AI has a vast number of potential strategies to choose from, but some are unethical and will incur not just a moral cost but a significant potential economic penalty if stakeholders find that such a strategy has been used: regulators may levy fines of billions of dollars, pounds or euros, customers may boycott the firm, or both.
In an environment in which decisions are increasingly made without human intervention, there is therefore a very strong incentive to know under what circumstances AI systems might adopt an unethical strategy, and to reduce that risk or eliminate it entirely if possible.
Mathematicians and statisticians from the University of Warwick, Imperial, EPFL and Sciteb Ltd have come together to help business and regulators by creating a new “Unethical Optimization Principle” and providing a simple formula to estimate its impact. They have laid out the full details in a paper titled “An unethical optimization principle”, published in Royal Society Open Science on Wednesday 1st July 2020….(More)”.
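The intuition behind the principle can be illustrated with a toy simulation (this is a sketch of the idea, not the paper's formula, and every number in it is hypothetical): if a small unethical subset of strategies carries an extra expected return, a naive return-maximizing optimizer will pick from that subset far more often than its share of the strategy space would suggest.

```python
import random

random.seed(0)

def pick_strategy(n_strategies=1000, unethical_frac=0.02, unethical_edge=1.0):
    """Return True if a naive return-maximizer picks an unethical strategy.

    Hypothetical setup: each strategy has a random base return; the small
    unethical subset carries an extra expected edge (e.g. from 'gaming'
    customers' psychology), since unethical strategies are often more
    profitable before penalties are accounted for.
    """
    best_return, best_is_unethical = float("-inf"), False
    for _ in range(n_strategies):
        unethical = random.random() < unethical_frac
        ret = random.gauss(0.0, 1.0) + (unethical_edge if unethical else 0.0)
        if ret > best_return:
            best_return, best_is_unethical = ret, unethical
    return best_is_unethical

trials = 2000
picked = sum(pick_strategy() for _ in range(trials)) / trials
print(f"Unethical strategies: 2% of the space; "
      f"chosen by the optimizer in {picked:.0%} of runs")
```

Under these invented parameters, the optimizer lands on an unethical strategy in well over 2% of runs, which is the disproportionality the principle describes; the paper's actual contribution is an estimate of this effect using extreme value theory.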
Paper by Tina Eliassi-Rad et al: “Political scientists have conventionally assumed that achieving democracy is a one-way ratchet. Only very recently has the question of “democratic backsliding” attracted any research attention. We argue that democratic instability is best understood with tools from complexity science. The explanatory power of complexity science arises from several features of complex systems. Their relevance in the context of democracy is discussed. Several policy recommendations are offered to help (re)stabilize current systems of representative democracy…(More)”.
Paper by Cass Sunstein: “Do people benefit from food labels? When? By how much? Public officials face persistent challenges in answering these questions. In various nations, they use four different approaches: they refuse to do so on the ground that quantification is not feasible; they engage in breakeven analysis; they project end-states, such as economic savings or health outcomes; and they estimate willingness-to-pay for the relevant information. Each of these approaches runs into strong objections. In principle, the willingness-to-pay question has important advantages. But for those who ask that question, there is a serious problem. In practice, people often lack enough information to give a sensible answer to the question of how much they would be willing to pay for (more) information. People might also suffer from behavioral biases (including present bias and optimistic bias). And when preferences are labile or endogenous, even an informed and unbiased answer to the willingness-to-pay question may fail to capture the welfare consequences, because people may develop new tastes and values as a result of information….(More)”.
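Breakeven analysis, the second of the four approaches, can be made concrete with a back-of-the-envelope calculation: rather than estimating a labeling rule's benefits directly, one asks how large the per-person benefit would have to be for benefits to equal costs. The cost and population figures below are invented purely for illustration.

```python
def breakeven_benefit_per_person(total_cost: float, population: int) -> float:
    """Breakeven analysis: the per-person annual benefit a labeling rule
    would need to deliver for benefits to equal costs."""
    return total_cost / population

# Hypothetical rule: $100M annual compliance cost, 250M affected consumers.
needed = breakeven_benefit_per_person(100e6, 250_000_000)
print(f"Each consumer must value the label at >= ${needed:.2f}/year")
# -> Each consumer must value the label at >= $0.40/year
```

The official's question then shifts from "what are the benefits?" to the often easier "is a benefit of at least this size plausible?", which is why breakeven analysis is attractive when quantification is hard.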
Paper by Federica Lucivero et al : “Data-driven digital technologies are often presented in policy agendas as contributing to the goal of sustainable development by providing information to reduce energy consumption and offering a green alternative to industries and behaviour with a higher environmental footprint. However, it is widely acknowledged in the context of environmental research that Information and Communication Technologies (ICT) in general, and data centres and cloud computing in particular, have a heavy footprint featuring a high consumption of non-renewable energy, waste production and carbon dioxide emissions. In spite of this, environmental issues have so far figured only sparsely in both policy initiatives supporting data-driven digital initiatives, as well as in recent ethics and governance scholarly literature discussing the data-driven revolution. We convened an interdisciplinary workshop to map out the current conceptual landscape on the environmental impacts of data-driven technologies, and to explore how ethical thinking can contribute to it. In this commentary, we discuss the main themes that emerged and our call for action….(More)”.
Paper by Rainer Diaz-Bone et al: “The phenomenon of big data not only deeply affects current societies but also poses crucial challenges to social research. This article argues for moving towards a sociology of social research in order to characterize the new qualities of big data and its deficiencies. We draw on the neopragmatist approach of economics of convention (EC) as a conceptual basis for such a sociological perspective.
This framework suggests investigating processes of quantification in their interplay with orders of justifications and logics of evaluation. Methodological issues such as the question of the “quality of big data” must accordingly be discussed in their deep entanglement with epistemic values, institutional forms, and historical contexts and as necessarily implying political issues such as who controls and has access to data infrastructures. On this conceptual basis, the article uses the example of health to discuss the challenges of big data analysis for social research.
Phenomena such as the rise of new and massive privately owned data infrastructures, the economic valuation of huge amounts of connected data, or the movement of “quantified self” are presented as indications of a profound transformation compared to established forms of doing social research. Methodological and epistemological, but also institutional and political, strategies are presented to face the risk of being “outperformed” and “replaced” by big data analysis as it is already practiced in big US and Chinese Internet enterprises. In conclusion, we argue that the developments sketched here have important implications both for research practices and for methods teaching in the era of big data…(More)”.
Paper by Steven Gray et al: “Incorporating relevant stakeholder input into conservation decision making is fundamentally challenging yet critical for understanding both the status of, and human pressures on, natural resources. Collective intelligence (CI), defined as the ability of a group to accomplish difficult tasks more effectively than individuals, is a growing area of investigation, with implications for improving ecological decision making. However, many questions remain about the ways in which emerging internet technologies can be used to apply CI to natural resource management. We examined how synchronous social‐swarming technologies and asynchronous “wisdom of crowds” techniques can be used as potential conservation tools for estimating the status of natural resources exploited by humans.
Using an example from a recreational fishery, we show that the CI of a group of anglers can be harnessed through cyber‐enabled technologies. We demonstrate how such approaches – as compared against empirical data – could provide surprisingly accurate estimates that align with formal scientific estimates. Finally, we offer a practical approach for using resource stakeholders to assist in managing ecosystems, especially in data‐poor situations….(More)”.
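The aggregation step behind such “wisdom of crowds” techniques can be sketched as follows: individually noisy estimates, when pooled with a robust statistic such as the median, land far closer to the true value than a typical individual does. The stock size and error model below are hypothetical stand-ins, not the paper's fishery data.

```python
import random
import statistics

random.seed(42)

# Hypothetical sketch: each angler's estimate of a fish stock is noisy
# (multiplicative lognormal error), but the group median is close to the
# true value -- the basic aggregation idea behind "wisdom of crowds".
true_stock = 5000
anglers = [true_stock * random.lognormvariate(0.0, 0.4) for _ in range(200)]

crowd_estimate = statistics.median(anglers)
mean_individual_error = statistics.mean(abs(a - true_stock) for a in anglers)
crowd_error = abs(crowd_estimate - true_stock)

print(f"crowd (median) error:     {crowd_error:.0f} fish")
print(f"average individual error: {mean_individual_error:.0f} fish")
```

The median is used here rather than the mean because a few wildly high guesses would otherwise dominate; in data-poor fisheries, that robustness is what makes crowd estimates usable alongside formal stock assessments.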
Paper by Susan Ariel Aaronson: “Data and national security have a complex relationship. Data is essential to national defense — to understanding and countering adversaries. Data underpins many modern military tools from drones to artificial intelligence. Moreover, to protect their citizens, governments collect lots of data about their constituents. Those same datasets are vulnerable to theft, hacking, and misuse. In 2013, the Department of Defense’s research arm (DARPA) funded a study examining whether “the availability of data provide a determined adversary with the tools necessary to inflict nation-state level damage.” The results were not made public. Given the risks to the data of their citizens, defense officials should be vociferous advocates for interoperable data protection rules.
This policy brief uses case studies to show that inadequate governance of personal data can also undermine national security. The case studies represent diverse internet sectors affecting netizens differently. I do not address malware or disinformation, which are also issues of data governance but have already been widely researched by other scholars. I illuminate how policymakers, technologists, and the public were unprepared for how spillovers from inadequate governance affected national security. I then make some specific recommendations about what we can do about this problem….(More)”.
Paper by Tammy M. Frisby and Jorge L. Contreras: “Since 2013, federal research-funding agencies have been required to develop and implement broad data sharing policies. Yet agencies today continue to grapple with the mechanisms necessary to enable the sharing of a wide range of data types, from genomic and other -omics data to clinical and pharmacological data to survey and qualitative data. In 2016, the National Cancer Institute (NCI) launched the ambitious $1.8 billion Cancer Moonshot Program, which included a new Public Access and Data Sharing (PADS) Policy applicable to funding applications submitted on or after October 1, 2017. The PADS Policy encourages the immediate public release of published research results and data and requires all Cancer Moonshot grant applicants to submit a PADS plan describing how they will meet these goals. We reviewed the PADS plans submitted with approximately half of all funded Cancer Moonshot grant applications in fiscal year 2018, and found that a majority did not address one or more elements required by the PADS Policy. Many such plans made no reference to the PADS Policy at all, and several referenced obsolete or outdated National Institutes of Health (NIH) policies instead. We believe that these omissions arose from a combination of insufficient education and outreach by NCI concerning its PADS Policy, both to potential grant applicants and among NCI’s program staff and external grant reviewers. We recommend that other research funding agencies heed these findings as they develop and roll out new data sharing policies….(More)”.
Paper by Dilek Fraisl: “The UN Sustainable Development Goals (SDGs) are a vision for achieving a sustainable future. Reliable, timely, comprehensive, and consistent data are critical for measuring progress towards, and ultimately achieving, the SDGs. Data from citizen science represent one new source of data that could be used for SDG reporting and monitoring. However, information is still lacking regarding the current and potential contributions of citizen science to the SDG indicator framework. Through a systematic review of the metadata and work plans of the 244 SDG indicators, as well as the identification of past and ongoing citizen science initiatives that could directly or indirectly provide data for these indicators, this paper presents an overview of where citizen science is already contributing and could contribute data to the SDG indicator framework.
The results demonstrate that citizen science is “already contributing” to the monitoring of 5 SDG indicators, and that citizen science “could contribute” to 76 indicators, which, together, equates to around 33%. Our analysis also shows that the greatest inputs from citizen science to the SDG framework relate to SDG 15 Life on Land, SDG 11 Sustainable Cities and Communities, SDG 3 Good Health and Wellbeing, and SDG 6 Clean Water and Sanitation. Realizing the full potential of citizen science requires demonstrating its value in the global data ecosystem, building partnerships around citizen science data to accelerate SDG progress, and leveraging investments to enhance its use and impact….(More)”.
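The headline share follows directly from the counts in the abstract: 5 indicators with current citizen-science input plus 76 with potential input, out of the 244 indicators in the framework, is roughly a third.

```python
already_contributing = 5   # indicators citizen science already informs
could_contribute = 76      # indicators it could inform
total_indicators = 244     # indicators in the SDG framework

share = (already_contributing + could_contribute) / total_indicators
print(f"{already_contributing + could_contribute} of {total_indicators} "
      f"indicators: {share:.0%}")  # -> 81 of 244 indicators: 33%
```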