Paper by David Hopf, Sarah Dellmann, Christian Hauschke, and Marco Tullney: “Open access — the free availability of scholarly publications — intuitively offers many benefits. At the same time, some academics, university administrators, publishers, and political decision-makers express reservations. Many empirical studies on the effects of open access have been published in the last decade. This report provides an overview of the state of research from 2010 to 2021. The empirical results on the effects of open access help to determine the advantages and disadvantages of open access and serve as a knowledge base for academics, publishers, research funding and research performing institutions, and policy makers. This overview of current findings can inform decisions about open access and publishing strategies. In addition, this report identifies aspects of the impact of open access that are potentially highly relevant but have not yet been sufficiently studied…(More)”.
Artificial intelligence and the local government: A five-decade scientometric analysis on the evolution, state-of-the-art, and emerging trends
Paper by Tan Yigitcanlar et al: “In recent years, the rapid advancement of artificial intelligence (AI) technologies has significantly impacted various sectors, including public governance at the local level. However, there exists a limited understanding of the overarching narrative surrounding the adoption of AI in local governments and its future. Therefore, this study aims to provide a comprehensive overview of the evolution, current state-of-the-art, and emerging trends in the adoption of AI in local government. A comprehensive scientometric analysis was conducted on a dataset comprising 7112 relevant literature records retrieved from the Scopus database in October 2023, spanning over the last five decades. The study findings revealed the following key insights: (a) exponential technological advancements over the last decades ushered in an era of AI adoption by local governments; (b) the primary purposes of AI adoption in local governments include decision support, automation, prediction, and service delivery; (c) the main areas of AI adoption in local governments encompass planning, analytics, security, surveillance, energy, and modelling; and (d) under-researched but critical research areas include ethics of and public participation in AI adoption in local governments. This study informs research, policy, and practice by offering a comprehensive understanding of the literature on AI applications in local governments, providing valuable insights for stakeholders and decision-makers…(More)”.
Brazil hires OpenAI to cut costs of court battles
Article by Marcela Ayres and Bernardo Caram: “Brazil’s government is hiring OpenAI to expedite the screening and analysis of thousands of lawsuits using artificial intelligence (AI), trying to avoid costly court losses that have weighed on the federal budget.
The AI service will flag to the government the need to act on lawsuits before final decisions, mapping trends and potential action areas for the solicitor general’s office (AGU).
AGU told Reuters that Microsoft would provide the artificial intelligence services from ChatGPT creator OpenAI through its Azure cloud-computing platform. It did not say how much Brazil will pay for the services.
Court-ordered debt payments have consumed a growing share of Brazil’s federal budget. The government estimated it would spend 70.7 billion reais ($13.2 billion) next year on judicial decisions where it can no longer appeal. The figure does not include small-value claims, which historically amount to around 30 billion reais annually.
The combined amount of over 100 billion reais represents a sharp increase from 37.3 billion reais in 2015. It is equivalent to about 1% of gross domestic product, or 15% more than the government expects to spend on unemployment insurance and wage bonuses for low-income workers next year.
AGU did not provide a reason for Brazil’s rising court costs…(More)”.
Using ChatGPT to Facilitate Truly Informed Medical Consent
Paper by Fatima N. Mirza: “Informed consent is integral to the practice of medicine. Most informed consent documents are written at a reading level that surpasses the reading comprehension level of the average American. Large language models, a type of artificial intelligence (AI) with the ability to summarize and revise content, present a novel opportunity to make the language used in consent forms more accessible to the average American and thus, improve the quality of informed consent. In this study, we present the experience of the largest health care system in the state of Rhode Island in implementing AI to improve the readability of informed consent documents, highlighting one tangible application for emerging AI in the clinical setting…(More)”.
Superconvergence
Book by Jamie Metzl: “…explores how artificial intelligence, genome sequencing, gene editing, and other revolutionary technologies are transforming our lives, world, and future. These accelerating and increasingly interconnected technologies have the potential to improve our health, feed billions of people, supercharge our economies, store essential information for millions of years, and save our planet, but they can also―if we are not careful―do immeasurable harm.
The challenge we face is that while our ability to engineer the world around us is advancing exponentially, our processes for understanding the scope, scale, and implications of these changes, and for managing our godlike powers wisely, are only inching forward glacially…(More)”.
Artificial Intelligence Applications for Social Science Research
Report by Megan Stubbs-Richardson et al: “Our team developed a database of 250 Artificial Intelligence (AI) applications useful for social science research. To be included in our database, the AI tool had to be useful for: 1) literature reviews, summaries, or writing, 2) data collection, analysis, or visualizations, or 3) research dissemination. In the database, we provide a name, description, and links to each of the AI tools that were current at the time of publication on September 29, 2023. Supporting links were provided when an AI tool was found using other databases. To help users evaluate the potential usefulness of each tool, we documented information about costs, log-in requirements, and whether plug-ins or browser extensions are available for each tool. Finally, as we are a team of scientists who are also interested in studying social media data to understand social problems, we also documented when the AI tools were useful for text-based data, such as social media. This database includes 132 AI tools that may have use for literature reviews or writing; 146 tools that may have use for data collection, analyses, or visualizations; and 108 that may be used for dissemination efforts. While 170 of the AI tools within this database can be used for general research purposes, 18 are specific to social media data analyses, and 62 can be applied to both. Our database thus offers some of the recently published tools for exploring the application of AI to social science research…(More)”
The Deliberative Turn in Democratic Theory
Book by Antonino Palumbo: “Thirty years of developments in deliberative democracy (DD) have consolidated this subfield of democratic theory. The acquired disciplinary prestige has made theorists and practitioners very confident about the ability of DD to address the legitimacy crisis currently experienced by liberal democracies, at both theoretical and practical levels. The book advances a critical analysis of these developments that casts doubt on those certainties: current theoretical debates are reproposing old methodological divisions and are afraid to move beyond the minimalist model of democracy advocated by liberal thinkers; democratic experimentation at the micro level seems to have no impact at the macro level, and remains a set of isolated experiences. The book indicates that those defects are mainly due to the liberal minimalist frame of reference within which reflection in democratic theory and practice takes place. Consequently, it suggests moving beyond liberal understandings of democracy as a game in need of external rules, and adopting instead a vision of democracy as a self-correcting metagame…(More)”.
Designing for AI Transparency in Public Services: A User-Centred Study of Citizens’ Preferences
Paper by Stefan Schmager, Samrat Gupta, Ilias Pappas & Polyxeni Vassilakopoulou: “Enhancing transparency in AI-enabled public services has the potential to improve their adoption and service delivery. Hence, it is important to identify effective design strategies for AI transparency in public services. To this end, we conduct this empirical qualitative study providing insights for responsible deployment of AI in practice by public organizations. We design an interactive prototype for a Norwegian public welfare service organization which aims to use AI to support sick-leave-related services. Qualitative analysis of citizens’ data collected through a survey, think-aloud interactions with the prototype, and open-ended questions revealed three key themes: articulating information in written form, representing information in graphical form, and establishing the appropriate level of information detail for improving AI transparency in public service delivery. This study advances research pertaining to the design of public service portals and has implications for AI implementation in the public sector…(More)”.
Using Artificial Intelligence to Accelerate Collective Intelligence
Paper by Róbert Bjarnason, Dane Gambrell and Joshua Lanthier-Welch: “In an era characterized by rapid societal changes and complex challenges, institutions’ traditional methods of problem-solving in the public sector are increasingly proving inadequate. In this study, we present an innovative and effective model for how institutions can use artificial intelligence to enable groups of people to generate effective solutions to urgent problems more efficiently. We describe a proven collective intelligence method, called Smarter Crowdsourcing, which is designed to channel the collective intelligence of those with expertise about a problem into actionable solutions through crowdsourcing. Then we introduce Policy Synth, an innovative toolkit which leverages AI to make the Smarter Crowdsourcing problem-solving approach more scalable, more effective, and more efficient. Policy Synth is crafted using a human-centric approach, recognizing that AI is a tool to enhance human intelligence and creativity, not replace it. Based on a real-world case study comparing the results of expert crowdsourcing alone with expert crowdsourcing supported by Policy Synth AI agents, we conclude that Smarter Crowdsourcing with Policy Synth presents an effective model for integrating the collective wisdom of human experts and the computational power of AI to enhance and scale up public problem-solving processes.
The potential for artificial intelligence to enhance the performance of groups of people has been a topic of great interest among scholars of collective intelligence. Though many AI toolkits exist, they too often are not fitted to the needs of institutions and policymakers. While many existing approaches view AI as a tool to make crowdsourcing and deliberative processes better and more efficient, Policy Synth goes a step further, recognizing that AI can also be used to synthesize the findings from engagements together with research to develop evidence-based solutions and policies. This study contributes significantly to the fields of collective intelligence, public problem-solving, and AI. The study offers practical tools and insights for institutions looking to engage communities effectively in addressing urgent societal challenges…(More)”
The tensions of data sharing for human rights: A modern slavery case study
Paper by Jamie Hancock et al: “There are calls for greater data sharing to address human rights issues. Advocates claim this will provide an evidence-base to increase transparency, improve accountability, enhance decision-making, identify abuses, and offer remedies for rights violations. However, these well-intentioned efforts have been found to sometimes enable harms against the people they seek to protect. This paper shows issues relating to fairness, accountability, or transparency (FAccT) in and around data sharing can produce such ‘ironic’ consequences. It does so using an empirical case study: efforts to tackle modern slavery and human trafficking in the UK. We draw on a qualitative analysis of expert interviews, workshops, ecosystem mapping exercises, and a desk-based review. The findings show how, in the UK, a large ecosystem of data providers, hubs, and users emerged to process and exchange data from across the country. We identify how issues including legal uncertainties, non-transparent sharing procedures, and limited accountability regarding downstream uses of data may undermine efforts to tackle modern slavery and place victims of abuses at risk of further harms. Our findings help explain why data sharing activities can have negative consequences for human rights, even within human rights initiatives. Moreover, our analysis offers a window into how FAccT principles for technology relate to the human rights implications of data sharing. Finally, we discuss why these tensions may be echoed in other areas where data sharing is pursued for human rights concerns, identifying common features which may lead to similar results, especially where sensitive data is shared to achieve social goods or policy objectives…(More)”.