Paper by Cass R. Sunstein: “Behavioral science is playing an increasing role in public policy, and it is raising new questions about fundamental issues – the role of government, freedom of choice, paternalism, and human welfare. In diverse nations, public officials are using behavioral findings to combat serious problems – poverty, air pollution, highway safety, COVID-19, discrimination, employment, climate change, and occupational health. Exploring theory and practice, this Element attempts to provide one-stop shopping for those who are new to the area and for those who are familiar with it. With reference to nudges, taxes, mandates, and bans, it offers concrete examples of behaviorally informed policies. It also engages the fundamental questions, including the proper analysis of human welfare in light of behavioral findings. It offers a plea for respecting freedom of choice – so long as people’s choices are adequately informed and free from behavioral biases….(More)”.
Federated Learning for Privacy-Preserving Data Access
Paper by Małgorzata Śmietanka, Hirsh Pithadia and Philip Treleaven: “Federated learning is a pioneering privacy-preserving data technology and also a new machine learning model trained on distributed data sets.
Companies collect huge amounts of historic and real-time data to drive their business and collaborate with other organisations. However, data privacy is becoming increasingly important because of regulations (e.g. EU GDPR) and the need to protect their sensitive and personal data. Companies need to manage data access: firstly within their organisations (so they can control staff access), and secondly protecting raw data when collaborating with third parties. What is more, companies are increasingly looking to ‘monetize’ the data they’ve collected. However, under new legislation, utilising data across different organisations is becoming increasingly difficult (Yu, 2016).
Federated learning, pioneered by Google, is the emerging privacy-preserving data technology and also a new class of distributed machine learning models. This paper discusses federated learning as a solution for privacy-preserving data access and distributed machine learning applied to distributed data sets. It also presents a privacy-preserving federated learning infrastructure….(More)”.
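To make the idea concrete, here is a minimal sketch of federated averaging, the canonical federated-learning scheme: clients train locally on private data and share only model weights, which a server averages. The least-squares objective, learning rate, and round counts are illustrative assumptions, not details from the paper.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training step: plain gradient descent on a
    least-squares objective (a stand-in for any local model)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_averaging(clients, w0, rounds=20):
    """Each round, clients train on their own data; only the resulting
    weights (never raw data) reach the server, which averages them
    weighted by local dataset size."""
    w = w0
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:
            updates.append(local_update(w, X, y))
            sizes.append(len(y))
        w = np.average(updates, axis=0, weights=np.array(sizes, dtype=float))
    return w

# Two clients holding private samples from the same underlying model
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

w = federated_averaging(clients, w0=np.zeros(2))
print(np.round(w, 2))  # recovers weights close to [2, -1] without pooling raw data
```

The privacy-preserving property the abstract describes lies in what crosses the network: weight vectors, not the clients’ raw datasets.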
Transparency in Local Governments: Patterns and Practices of Twenty-first Century
Paper by Redeemer Dornudo Yao Krah and Gerard Mertens: “The study is a systematic literature review that assembles scientific knowledge on local government transparency in the twenty-first century. The study finds a remarkable growth in research on local government transparency in the first nineteen years of the century, particularly in Europe and North America. Social, economic, political, and institutional factors are found to account for this trend. In vogue among local governments is the use of information technology to enhance transparency. The pressure to become transparent largely comes from the passage of Freedom of Information Laws and open data initiatives of governments….(More)”.
A Legal Framework for Access to Data – A Competition Policy Perspective
Paper by Heike Schweitzer and Robert Welker: “The paper strives to systematise the debate on access to data from a competition policy angle. At the outset, two general policy approaches to access to data are distinguished: a “private control of data” approach versus an “open access” approach. We argue that, when it comes to private sector data, the “private control of data” approach is preferable. According to this approach, the “whether” and “how” of data access should generally be left to the market. However, public intervention can be justified by significant market failures. We discuss the presence of such market failures and the policy responses, including, in particular, competition policy responses, with a view to three different data access scenarios: access to data by co-generators of usage data (Scenario 1); requests for access to bundled or aggregated usage data by third parties vis-à-vis a service or product provider who controls such datasets, with the goal to enter complementary markets (Scenario 2); requests by firms to access the large usage data troves of the Big Tech online platforms for innovative purposes (Scenario 3). On this basis we develop recommendations for data access policies….(More)”.
Not fit for Purpose: A critical analysis of the ‘Five Safes’
Paper by Chris Culnane, Benjamin I. P. Rubinstein, and David Watts: “Adopted by government agencies in Australia, New Zealand, and the UK as a policy instrument or as embodied in legislation, the ‘Five Safes’ framework aims to manage risks of releasing data derived from personal information. Despite its popularity, the Five Safes has undergone little legal or technical critical analysis. We argue that the Five Safes is fundamentally flawed: from being disconnected from existing legal protections and appropriating notions of safety without providing any means to prefer strong technical measures, to viewing disclosure risk as static through time and not requiring repeat assessment. The Five Safes provides little confidence that resulting data sharing is performed using ‘safety’ best practice or for purposes in service of public interest….(More)”.
The CARE Principles for Indigenous Data Governance
Paper by Stephanie Russo Carroll et al: “Concerns about secondary use of data and limited opportunities for benefit-sharing have focused attention on the tension that Indigenous communities feel between (1) protecting Indigenous rights and interests in Indigenous data (including traditional knowledges) and (2) supporting open data, machine learning, broad data sharing, and big data initiatives. The International Indigenous Data Sovereignty Interest Group (within the Research Data Alliance) is a network of nation-state based Indigenous data sovereignty networks and individuals that developed the ‘CARE Principles for Indigenous Data Governance’ (Collective Benefit, Authority to Control, Responsibility, and Ethics) in consultation with Indigenous Peoples, scholars, non-profit organizations, and governments. The CARE Principles are people- and purpose-oriented, reflecting the crucial role of data in advancing innovation, governance, and self-determination among Indigenous Peoples. The Principles complement the existing data-centric approach represented in the ‘FAIR Guiding Principles for scientific data management and stewardship’ (Findable, Accessible, Interoperable, Reusable). The CARE Principles build upon earlier work by the Te Mana Raraunga Maori Data Sovereignty Network, US Indigenous Data Sovereignty Network, Maiam nayri Wingara Aboriginal and Torres Strait Islander Data Sovereignty Collective, and numerous Indigenous Peoples, nations, and communities. The goal is that stewards and other users of Indigenous data will ‘Be FAIR and CARE.’ In this first formal publication of the CARE Principles, we articulate their rationale, describe their relation to the FAIR Principles, and present examples of their application….(More)” See also Selected Readings on Indigenous Data Sovereignty.
AI’s Wide Open: A.I. Technology and Public Policy
Paper by Lauren Rhue and Anne L. Washington: “Artificial intelligence promises predictions and data analysis to support efficient solutions for emerging problems. Yet, quickly deploying AI comes with a set of risks. Premature artificial intelligence may pass internal tests but has little resilience under normal operating conditions. This Article will argue that regulation of early and emerging artificial intelligence systems must address the management choices that lead to releasing the system into production. First, we present examples of premature systems in the Boeing 737 Max, the 2020 coronavirus pandemic public health response, and autonomous vehicle technology. Second, the analysis highlights relevant management practices found in our examples of premature AI. Our analysis suggests that redundancy is critical to protecting the public interest. Third, we offer three points of context for premature AI to better assess the role of management practices.
AI in the public interest should: 1) include many sensors and signals; 2) emerge from a broad range of sources; and 3) be legible to the last person in the chain. Finally, this Article will close with a series of policy suggestions based on this analysis. As we develop regulation for artificial intelligence, we need to cast a wide net to identify how problems develop within the technologies and through organizational structures….(More)”.
Harnessing the wisdom of crowds can improve guideline compliance of antibiotic prescribers and support antimicrobial stewardship
Paper by Eva M. Krockow et al: “Antibiotic overprescribing is a global challenge contributing to rising levels of antibiotic resistance and mortality. We test a novel approach to antibiotic stewardship. Capitalising on the concept of “wisdom of crowds”, which states that a group’s collective judgement often outperforms the average individual, we test whether pooling treatment durations recommended by different prescribers can improve antibiotic prescribing. Using international survey data from 787 expert antibiotic prescribers, we run computer simulations to test the performance of the wisdom of crowds by comparing three data aggregation rules across different clinical cases and group sizes. We also identify patterns of prescribing bias in recommendations about antibiotic treatment durations to quantify current levels of overprescribing. Our results suggest that pooling the treatment recommendations (using the median) could improve guideline compliance in groups of three or more prescribers. Implications for antibiotic stewardship and the general improvement of medical decision making are discussed. Clinical applicability is likely to be greatest in the context of hospital ward rounds and larger, multidisciplinary team meetings, where complex patient cases are discussed and existing guidelines provide limited guidance….(More)”.
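The median-pooling rule the abstract describes is simple to demonstrate. The sketch below uses invented recommendation numbers (not the paper’s survey data) for a hypothetical case where the guideline duration is 7 days; the median resists the upward pull of a few overprescribers in a way the mean does not.

```python
import statistics

# Hypothetical treatment-duration recommendations (days) from five
# prescribers for one clinical case; the guideline recommends 7 days.
# These numbers are illustrative, not taken from the study.
GUIDELINE = 7
recommendations = [7, 10, 8, 14, 7]

# Average deviation from the guideline if we followed one prescriber at random
individual_avg_error = statistics.mean(abs(r - GUIDELINE) for r in recommendations)

# Pooled recommendation: the median aggregation rule highlighted in the paper
pooled = statistics.median(recommendations)
pooled_error = abs(pooled - GUIDELINE)

print(f"pooled recommendation: {pooled} days (error {pooled_error})")
print(f"average individual error: {individual_avg_error} days")
# The pooled error (1 day) beats the average individual error (2.2 days):
# the two large overestimates shift the mean but not the median.
```

Because the median ignores the magnitude of outlying recommendations, a crowd of three or more prescribers can comply with guidelines better than its average member, which is the paper’s central simulation result.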
Open data in public libraries: Gauging activities and supporting ambitions
Paper by Kaitlin Fender Throgmorton, Bree Norlander and Carole L. Palmer: “As the open data movement grows, public libraries must assess if and how to invest resources in this new service area. This paper reports on a recent survey on open data in public libraries across Washington state, conducted by the Open Data Literacy project (ODL) in collaboration with the Washington State Library. Results document interests and activity in open data across small, medium, and large libraries in relation to traditional library services and priorities. Libraries are particularly active in open data through reference services and are beginning to release their own library data to the public. While capacity and resource challenges hinder progress for some, many libraries, large and small, are making progress on new initiatives, including strategic collaborations with local government agencies. Overall, the level and range of activity suggest that Washington state public libraries of all sizes recognize the value of open data for their communities, with a groundswell of libraries moving beyond ambition to action as they develop new services through evolution and innovation….(More)”.
Artificial intelligence, transparency, and public decision-making
Paper by Karl de Fine Licht & Jenny de Fine Licht: “The increasing use of Artificial Intelligence (AI) for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from hopes of fully informed and objectively taken decisions to fear of the destruction of mankind. To prevent negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on how transparency can secure high-quality, fair, and reliable decisions, far less attention has been devoted to the role of transparency in how the general public comes to perceive AI decision-making as legitimate and worthy of acceptance. Since relying on coercion is not only normatively problematic but also costly and highly inefficient, perceived legitimacy is fundamental to the democratic system. This paper discusses how transparency in and about AI decision-making can affect the public’s perception of the legitimacy of decisions and decision-makers, and develops a framework for analyzing these questions. We argue that a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient ground for perceived legitimacy without producing the harms full transparency would bring….(More)”.