Artificial intelligence, the common good, and the democratic deficit in AI governance


Paper by Mark Coeckelbergh: “There is a broad consensus that artificial intelligence should contribute to the common good, but it is not clear what is meant by that. This paper discusses this issue and uses it as a lens for analysing what it calls the “democracy deficit” in current AI governance, which includes a tendency to deny the inherently political character of the issue and to take a technocratic shortcut. It indicates what we may agree on and what is and should be up to (further) deliberation when it comes to AI ethics and AI governance. Inspired by the republican tradition in political theory, it also argues for a more active role of citizens and (end-)users: not only as participants in deliberation but also in ensuring, creatively and communicatively, that AI contributes to the common good…(More)”.

Dynamic Collective Action and the Power of Large Numbers


Paper by Marco Battaglini & Thomas R. Palfrey: “Collective action is a dynamic process where individuals in a group assess over time the benefits and costs of participating toward the success of a collective goal. Early participation improves the expectation of success and thus stimulates the subsequent participation of other individuals who might otherwise be unwilling to engage. On the other hand, a slow start can depress expectations and lead to failure for the group. Individuals have an incentive to procrastinate, not only in the hope of free riding, but also in order to observe the flow of participation by others, which allows them to better gauge whether their own participation will be useful or simply wasted. How do these phenomena affect the probability of success for a group? As the size of the group increases, will a “power of large numbers” prevail producing successful outcomes, or will a “curse of large numbers” lead to failure? In this paper, we address these questions by studying a dynamic collective action problem in which n individuals can achieve a collective goal if a share of them takes a costly action (e.g., participate in a protest, join a picket line, or sign an environmental agreement). Individuals have privately known participation costs and decide over time if and when to participate. We characterize the equilibria of this game and show that under general conditions the eventual success of collective action is necessarily probabilistic. The process starts for sure, and hence there is always a positive probability of success; however, the process “gets stuck” with positive probability, in the sense that participation stops short of the goal. Equilibrium outcomes have a simple characterization in large populations: welfare converges to either full efficiency or zero as n→∞ depending on a precise condition on the rate at which the share required for success converges to zero. Whether success is achievable or not, delays are always irrelevant: in the limit, success is achieved either instantly or never…(More)”
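As a rough illustration of the kind of dynamic the abstract describes, the sketch below simulates a naive version of the participation game in Python: n agents draw private costs, and in each round a non-participant joins if a simple momentum-based estimate of the chance of success covers their cost. The uniform cost distribution, the belief heuristic, the optimism prior, and the 1/√n rate for the required share are illustrative assumptions of our own, not the authors' equilibrium characterization; the point is only to show how a run can cascade to success or "get stuck" short of the goal, and how the success rate changes with n.

```python
import random

def simulate(n, share_required, benefit=1.0, prior=0.2, max_rounds=100, seed=None):
    """Toy threshold-participation dynamic (illustrative sketch only).

    Each of n agents draws a private cost uniformly from [0, 1]. In every
    round, a non-participant joins if benefit * (perceived chance of success)
    covers their cost, where the perceived chance is the larger of a small
    optimism prior and the share of the goal already reached. A run succeeds
    when participation reaches share_required * n, and fails ("gets stuck")
    when a round passes with no new joiners.
    """
    rng = random.Random(seed)
    costs = [rng.random() for _ in range(n)]
    goal = share_required * n
    joined = [False] * n

    for _ in range(max_rounds):
        current = sum(joined)
        if current >= goal:
            return True
        p_hat = max(prior, current / goal)          # naive momentum-based belief
        newly = [i for i in range(n)
                 if not joined[i] and benefit * p_hat >= costs[i]]
        if not newly:                               # no new joiners: process is stuck
            return False
        for i in newly:
            joined[i] = True
    return sum(joined) >= goal


def success_rate(n, share_required, trials=500):
    wins = sum(simulate(n, share_required, seed=t) for t in range(trials))
    return wins / trials


if __name__ == "__main__":
    for n in (10, 100, 1000):
        s = n ** -0.5          # required share shrinking with n (assumed rate)
        print(f"n={n:>5}  required share={s:.3f}  success rate≈{success_rate(n, s):.2f}")
```

In this toy version, success becomes more likely as n grows because early low-cost joiners almost surely appear and trigger a cascade; the paper's actual result is subtler, tying the large-n limit to the precise rate at which the required share converges to zero.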

Multiple Streams and Policy Ambiguity


Book by Rob A. DeLeo, Reimut Zohlnhöfer and Nikolaos Zahariadis: “The last decade has seen a proliferation of research bolstering the theoretical and methodological rigor of the Multiple Streams Framework (MSF), one of the most prolific theories of agenda-setting and policy change. This Element sets out to address some of the most prominent criticisms of the theory, including the lack of empirical research and the inconsistent operationalization of key concepts, by developing the first comprehensive guide for conducting MSF research. It begins by introducing the MSF, including key theoretical constructs and hypotheses. It then presents the most important theoretical extensions of the framework and articulates a series of best practices for operationalizing, measuring, and analyzing MSF concepts. It closes by exploring existing gaps in MSF research and articulating fruitful areas of future research…(More)”.

Applying Social and Behavioral Science to Federal Policies and Programs to Deliver Better Outcomes


The White House: “Human behavior is a key component of every major national and global challenge. Social and behavioral science examines if, when, and how people’s actions and interactions influence decisions and outcomes. Understanding human behavior through social and behavioral science is vitally important for creating federal policies and programs that open opportunities for everyone.

Today, the Biden-Harris Administration shares the Blueprint for the Use of Social and Behavioral Science to Advance Evidence-Based Policymaking. This blueprint recommends actions for agencies across the federal government to effectively leverage social and behavioral science in improving policymaking to deliver better outcomes and opportunities for people all across America. These recommendations include specific actions for agencies, such as considering social and behavioral insights early in policy or program development. The blueprint also lays out broader opportunities for agencies, such as ensuring agencies have a sufficient number of staff with social and behavioral science expertise.

The blueprint includes nearly a hundred examples of how social and behavioral science is already used to make real progress on our highest priorities, including promoting safe, equitable, and engaged communities; protecting the environment and promoting climate innovation; advancing economic prosperity and the future of the workforce; enhancing the health outcomes of all Americans; rebuilding our infrastructure and building for tomorrow; and promoting national defense and international security. Social and behavioral science informs the conceptualization, development, implementation, dissemination, and evaluation of interventions, programs, and policies. Policymakers and social scientists can examine data about how government services reach people or measure the effectiveness of a program in assisting a particular community. Using this information, we can understand why programs sometimes fall short in delivering their intended benefits or why other programs are highly successful in delivering benefits. These approaches also help us design better policies and scale proven successful interventions to benefit the entire country…(More)”.

May Contain Lies: How Stories, Statistics, and Studies Exploit Our Biases


Book by Alex Edmans: “Our lives are minefields of misinformation. It ripples through our social media feeds, our daily headlines, and the pronouncements of politicians, executives, and authors. Stories, statistics, and studies are everywhere, allowing people to find evidence to support whatever position they want. Many of these sources are flawed, yet by playing on our emotions and preying on our biases, they can gain widespread acceptance, warp our views, and distort our decisions.

In this eye-opening book, renowned economist Alex Edmans teaches us how to separate fact from fiction. Using colorful examples—from a wellness guru’s tragic but fabricated backstory to the blunders that led to the Deepwater Horizon disaster to the diet that ensnared millions yet hastened its founder’s death—Edmans highlights the biases that cause us to mistake statements for facts, facts for data, data for evidence, and evidence for proof.

Armed with the knowledge of what to guard against, he then provides a practical guide to combat this tide of misinformation. Going beyond simply checking the facts and explaining individual statistics, Edmans explores the relationships between statistics—the science of cause and effect—ultimately training us to think smarter, sharper, and more critically. May Contain Lies is an essential read for anyone who wants to make better sense of the world and better decisions…(More)”.

Empowered Mini-Publics: A Shortcut or Democratically Legitimate?


Paper by Shao Ming Lee: “Contemporary mini-publics involve randomly selected citizens deliberating and eventually tackling thorny issues. Yet, the usage of mini-publics in creating public policy has come under criticism, of which a more persuasive strand is elucidated by eminent philosopher Cristina Lafont, who argues that mini-publics with binding decision-making powers (or ‘empowered mini-publics’) are an undemocratic ‘shortcut’ and deliberative democrats thus cannot use empowered mini-publics for shaping public policies. This paper aims to serve as a nuanced defense of empowered mini-publics against Lafont’s claims. I argue against her claims by explicating how participants of an empowered mini-public remain ordinary, accountable, and therefore connected to the broader public in a democratically legitimate manner. I further critique Lafont’s own proposals for non-empowered mini-publics and judicial review as failing to satisfy her own criteria for democratic legitimacy in a self-defeating manner and relying on a double standard. In doing so, I show how empowered mini-publics are not only democratic but can thus serve to expand democratic deliberation—a goal Lafont shares but relegates to non-empowered mini-publics…(More)”.

Data Stewardship: The Way Forward in the New Digital Data Landscape


Essay by Courtney Cameron: “…It is absolutely critical that Statistics Canada, as a national statistical office (NSO) and public service organization, along with other government agencies and services, adapt to the new data ecosystem and digital landscape. Canada is falling behind in adjusting to rapid digitalization, exploding data volumes, the ever-increasing digital market monopolization by private companies, foreign data harvesting, and in managing the risks associated with data sharing or reuse. If Statistics Canada and the federal public service are to keep up with private companies or foreign powers in this digital data context, and to continue to provide useful insights and services for Canadians, concerns of data digitalization, data interoperability and data security must be addressed through effective data stewardship.

However, it is not sufficient to have data stewards responsible for data: as data governance expert David Plotkin argues in Data Stewardship: An Actionable Guide to Effective Data Management and Data Governance, government departments must also consult these stewards on decisions about the data that they steward, if they are to ensure that decisions are made in the best interests of those who get value from the information. Frameworks, policies and procedures are needed to ensure this, as is having a steward involved in the processes as they occur. Plotkin also writes that data stewardship involvement needs to be integrated into enterprise processes, such as in project management and systems development methodologies. Data stewardship and data governance principles must be accepted as a part of the corporate culture, and stewardship leaders need to advise, drive and support this shift.

Finally, stewardship goes beyond sound data management and standards: it is important to be mindful of the role of an NSO. Public acceptability and trust are of vital importance. Social licence, or acceptability, and public engagement are necessary for NSOs to be able to perform their duties. These are achieved through practising data stewardship and adhering to the principles of open data, as well as by ensuring transparent processes, confidentiality and security, and by communicating the value of citizens’ sharing their data…With the rapidly accelerating proliferation of data and the increasing demand for, and potential of, data sharing and collaboration, NSOs and public governance organizations alike need to reimagine data stewardship as a function and role encompassing a wider range of purposes and responsibilities…(More)”. See also: Data Stewards — Drafting the Job Specs for A Re-imagined Data Stewardship Role

AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge?


Paper by John Wihbey: “As advanced artificial intelligence (AI) technologies are developed and deployed, core zones of information and knowledge that support democratic life will be mediated more comprehensively by machines. Chatbots and AI agents may structure most internet, media, and public informational domains. What humans believe to be true and worthy of attention – what becomes public knowledge – may increasingly be influenced by the judgments of advanced AI systems. This pattern will present profound challenges to democracy. A pattern of what we might consider “epistemic risk” will threaten the possibility of AI ethical alignment with human values. AI technologies are trained on data from the human past, but democratic life often depends on the surfacing of human tacit knowledge and previously unrevealed preferences. Accordingly, as AI technologies structure the creation of public knowledge, the substance may be increasingly a recursive byproduct of AI itself – built on what we might call “epistemic anachronism.” This paper argues that epistemic capture or lock-in and a corresponding loss of autonomy are pronounced risks, and it analyzes three example domains – journalism, content moderation, and polling – to explore these dynamics. The pathway forward for achieving any vision of ethical and responsible AI in the context of democracy means an insistence on epistemic modesty within AI models, as well as norms that emphasize the incompleteness of AI’s judgments with respect to human knowledge and values…(More)” – See also: Steering Responsible AI: A Case for Algorithmic Pluralism

Technological Citizenship in Times of Digitization: An Integrative Framework


Article by Anne Marte Gardenier, Rinie van Est & Lambèr Royakkers: “This article introduces an integrative framework for technological citizenship, examining the impact of digitization and the active roles of citizens in shaping this impact across the private, social, and public sphere. It outlines the dual nature of digitization, offering opportunities for enhanced connectivity and efficiency while posing challenges to privacy, security, and democratic integrity. Technological citizenship is explored through the lenses of liberal, communitarian, and republican theories, highlighting the active roles of citizens in navigating the opportunities and risks presented by digital technologies across all life spheres. By operationalizing technological citizenship, the article aims to address the gap in existing literature on the active roles of citizens in the governance of digitization. The framework emphasizes empowerment and resilience as crucial capacities for citizens to actively engage with and govern digital technologies. It illuminates citizens’ active participation in shaping the digital landscape, advocating for policies that support their engagement in safeguarding private, social, and public values in the digital age. The study calls for further research into technological citizenship, emphasizing its significance in fostering a more inclusive and equitable digital society…(More)”.

The Poisoning of the American Mind


Book by Lawrence M. Eppard: “Humans are hard-wired to look for information that they agree with (regardless of the information’s veracity), avoid information that makes them uncomfortable (even if that information is true), and interpret information in a manner that is most favorable to their sense of self. The damage these cognitive tendencies cause to one’s perception of reality depends in part upon the information that a person surrounds himself/herself with. Unfortunately, in the U.S. today, both liberals and conservatives are regularly bombarded with misleading information as well as lies from people they believe to be trustworthy and authoritative sources. While there are several factors one could plausibly blame for this predicament, the decline in the quality of the sources of information that the right and left rely on over the last few decades plays a primary role. As a result of this decline, we are faced with an epistemic crisis that is poisoning the American mind and threatening our democracy. In his forthcoming book with Jacob L. Mackey, The Poisoning of the American Mind, Lawrence M. Eppard explores epistemic problems in both the right-wing and left-wing ideological silos in the U.S., including ideology presented as fact, misinformation, disinformation, and malinformation…(More)”.