Handbook of Public Participation in Impact Assessment


Book edited by Tanya Burdett and A. John Sinclair: “… provides a clear overview of how to achieve meaningful public participation in impact assessment (IA). It explores conceptual elements, including the democratic core of public participation in IA, as well as practical challenges, such as data sharing, with diverse perspectives from 39 leading academics and practitioners.

Critically examining how different engagement frameworks have evolved over time, this Handbook underlines the ways in which tokenistic approaches and wider planning and approvals structures challenge the implementation of meaningful public participation. Contributing authors discuss the impact of international agreements, legislation and regulatory regimes, and review commonly used professional association frameworks such as the International Association for Public Participation core values for practice. They demonstrate through case studies what meaningful public participation looks like in diverse regional contexts, addressing the intentions of being purposeful, inclusive, transformative and proactive. By emphasising the strength of community engagement, the Handbook argues that public participation in IA can contribute to enhanced democracy and sustainability for all…(More)”.

Mapping Behavioral Public Policy


Book by Paolo Belardinelli: “This book provides a new perspective on behavioral public policy. The field of behavioral public policy has been dominated by the concept of ‘nudging’ over the last decade. As this book demonstrates, however, ‘nudging’ is one of many behavioral techniques that practitioners and policymakers can utilize in order to achieve their goals. The book discusses the advantages and disadvantages of these alternative techniques, and demonstrates empirically how the impacts of ‘nudging’ and ‘non-nudging’ interventions often depend on varying political contexts and the degree of trust that citizens have toward policymakers. In doing so, it addresses the important question of how citizens understand and approve of the use of behavioral techniques by governments. The book will appeal to all those interested in public management, public policy, behavioral psychology, and ‘nudging’…(More)”.

Invisible Rulers: The People Who Turn Lies into Reality


Book by Renée DiResta: “…investigation into the way power and influence have been profoundly transformed reveals how a virtual rumor mill of niche propagandists increasingly shapes public opinion. While propagandists position themselves as trustworthy Davids, their reach, influence, and economics make them classic Goliaths—invisible rulers who create bespoke realities to revolutionize politics, culture, and society. Their work is driven by a simple maxim: if you make it trend, you make it true.
 
By revealing the machinery and dynamics of the interplay between influencers, algorithms, and online crowds, DiResta vividly illustrates the way propagandists deliberately undermine belief in the fundamental legitimacy of institutions that make society work. This alternate system for shaping public opinion, unexamined until now, is rewriting the relationship between the people and their government in profound ways. It has become a force so shockingly effective that its destructive power seems limitless. Scientific proof is powerless in front of it. Democratic validity is bulldozed by it. Leaders are humiliated by it. But they need not be.
 
With its deep insight into the power of propagandists to drive online crowds into battle—while bearing no responsibility for the consequences—Invisible Rulers not only predicts those consequences but offers ways for leaders to rapidly adapt and fight back…(More)”.

Handbook on Public Policy and Artificial Intelligence


Book edited by Regine Paul, Emma Carmel and Jennifer Cobbe: “…explores the relationship between public policy and artificial intelligence (AI) technologies across a broad range of geographical, technical, political and policy contexts. It contributes to critical AI studies, focusing on the intersection of the norms, discourses, policies, practices and regulation that shape AI in the public sector.

Expert authors in the field discuss the creation and use of AI technologies, and how public authorities respond to their development, by bringing together emerging scholarly debates about AI technologies with longer-standing insights on public administration, policy, regulation and governance. Contributions in the Handbook mobilize diverse perspectives to critically examine techno-solutionist approaches to public policy and AI, dissect the politico-economic interests underlying AI promotion and analyse implications for sustainable development, fairness and equality. Ultimately, this Handbook questions whether regulatory concepts such as ethical, trustworthy or accountable AI safeguard a democratic future or contribute to a problematic de-politicization of the public sector…(More)”.


We need a social science of data


Article by Cristina Alaimo and Jannis Kallinikos: “The practical and technical knowledge of data science must be complemented by a scientific field that can respond to these challenges and trace their implications for social practice and institutions.

Determining how such a field will look is not the job of two people but, rather, that of a whole scientific and social discourse that we as a society have the obligation to develop and maintain. Students and data users must know the power and subtlety of the artefacts they study and employ.

Such a scientific field should also provide the basis for analysing the social relations and economic dynamics of data generation and use, which are closely associated with several social groups, professions, communities and firms….(More)”.

How to optimize the systematic review process using AI tools


Paper by Nicholas Fabiano et al: “Systematic reviews are a cornerstone for synthesizing the available evidence on a given topic. They simultaneously allow for gaps in the literature to be identified and provide direction for future research. However, due to the ever-increasing volume and complexity of the available literature, traditional methods for conducting systematic reviews are less efficient and more time-consuming. Numerous artificial intelligence (AI) tools are being released with the potential to optimize efficiency in academic writing and assist with various stages of the systematic review process, including developing and refining search strategies, screening titles and abstracts against inclusion and exclusion criteria, extracting essential data from studies, and summarizing findings. Therefore, in this article we provide an overview of the currently available tools and how they can be incorporated into the systematic review process to improve the efficiency and quality of research synthesis. We emphasize that authors must report all AI tools that have been used at each stage to ensure replicability as part of reporting in methods…(More)”.

ChatGPT in Teaching and Learning: A Systematic Review


Paper by Duha Ali: “The increasing use of artificial intelligence (AI) in education has raised questions about the implications of ChatGPT for teaching and learning. A systematic literature review was conducted to answer these questions, analyzing 112 scholarly articles to identify the potential benefits and challenges related to ChatGPT use in educational settings. The selection process was thorough to ensure a comprehensive analysis of the current academic discourse on AI tools in education. Our research sheds light on the significant impact of ChatGPT on improving student engagement and accessibility, as well as the critical issues that need to be considered, including concerns about the quality and bias of generated responses, the risk of plagiarism, and the authenticity of educational content. The study aims to summarize the uses of ChatGPT in teaching and learning by addressing the identified benefits and challenges through targeted strategies. The authors outline recommendations to ensure that the integration of ChatGPT into educational frameworks enhances learning outcomes while safeguarding academic standards…(More)”.

Misuse versus Missed use — the Urgent Need for Chief Data Stewards in the Age of AI


Article by Stefaan Verhulst and Richard Benjamins: “In the rapidly evolving landscape of artificial intelligence (AI), the need for and importance of Chief AI Officers (CAIO) are receiving increasing attention. One prominent example came in a recent memo on AI policy, issued by Shalanda Young, Director of the United States Office of Management and Budget. Among the most important and prominently featured recommendations was a call, “as required by Executive Order 14110,” for all government agencies to appoint a CAIO within 60 days of the release of the memo.

In many ways, this call is an important development; not even the EU AI Act requires this of public agencies. CAIOs have an important role to play in pursuing the responsible use of AI for public services, including establishing guardrails and helping to protect the public good. Yet while acknowledging the need for CAIOs to safeguard the responsible use of AI, we argue that the duty of administrations is not only to avoid negative impact, but also to create positive impact. In this sense, much work remains to be done in defining the CAIO role and considering their specific functions. In pursuit of these tasks, we further argue, policymakers and other stakeholders might benefit from looking at the role of another emerging profession in the digital ecology: that of Chief Data Stewards (CDS), a role focused on creating such positive impact, for instance by helping to achieve the UN’s SDGs. Although the CDS position is itself somewhat in flux, we suggest that CDS can nonetheless provide a useful template for the functions and roles of CAIOs.


We start by explaining why CDS are relevant to the conversation over CAIOs; this is because data and data governance are foundational to AI governance. We then discuss some particular functions and competencies of CDS, showing how these can be equally applied to the governance of AI. Among the most important (if high-level) of these competencies is an ability to proactively identify opportunities in data sharing, and to balance the risks and opportunities of our data age. We conclude by exploring why this competency, an ethos of positive data responsibility that avoids overly cautious risk aversion, is so important in the AI and data era…(More)”.

The Social Value of Hurricane Forecasts


Paper by Renato Molina & Ivan Rudik: “What is the impact and value of hurricane forecasts? We study this question using newly-collected forecast data for major US hurricanes since 2005. We find higher wind speed forecasts increase pre-landfall protective spending, but erroneous under-forecasts increase post-landfall damage and rebuilding expenditures. Our main contribution is a new theoretically-grounded approach for estimating the marginal value of forecast improvements. We find that the average annual improvement reduced total per-hurricane costs, inclusive of unobserved protective spending, by $700,000 per county. Improvements since 2007 reduced costs by 19%, averaging $5 billion per hurricane. This exceeds the annual budget for all federal weather forecasting…(More)”.

Data Statements: From Technical Concept to Community Practice


Paper by Angelina McMillan-Major, Emily M. Bender, and Batya Friedman: “Responsible computing ultimately requires that technical communities develop and adopt tools, processes, and practices that mitigate harms and support human flourishing. Prior efforts toward the responsible development and use of datasets, machine learning models, and other technical systems have led to the creation of documentation toolkits to facilitate transparency, diagnosis, and inclusion. This work takes the next step: to catalyze community uptake, alongside toolkit improvement. Specifically, starting from one such proposed toolkit specialized for language datasets, data statements for natural language processing, we explore how to improve the toolkit in three senses: (1) the content of the toolkit itself, (2) engagement with professional practice, and (3) moving from a conceptual proposal to a tested schema that the intended community of use may readily adopt. To achieve these goals, we first conducted a workshop with natural language processing practitioners to identify gaps and limitations of the toolkit as well as to develop best practices for writing data statements, yielding an interim improved toolkit. Then we conducted an analytic comparison between the interim toolkit and another documentation toolkit, datasheets for datasets. Based on these two integrated processes, we present our revised Version 2 schema and best practices in a guide for writing data statements. Our findings more generally provide integrated processes for co-evolving both technology and practice to address ethical concerns within situated technical communities…(More)”