Empowered Mini-Publics: A Shortcut or Democratically Legitimate?


Paper by Shao Ming Lee: “Contemporary mini-publics involve randomly selected citizens deliberating and eventually tackling thorny issues. Yet, the usage of mini-publics in creating public policy has come under criticism, of which a more persuasive strand is elucidated by eminent philosopher Cristina Lafont, who argues that mini-publics with binding decision-making powers (or ‘empowered mini-publics’) are an undemocratic ‘shortcut’ and deliberative democrats thus cannot use empowered mini-publics for shaping public policies. This paper aims to serve as a nuanced defense of empowered mini-publics against Lafont’s claims. I argue against her claims by explicating how participants of an empowered mini-public remain ordinary, accountable, and therefore connected to the broader public in a democratically legitimate manner. I further critique Lafont’s own proposals for non-empowered mini-publics and judicial review as failing to satisfy her own criteria for democratic legitimacy in a self-defeating manner and relying on a double standard. In doing so, I show how empowered mini-publics are not only democratic but can thus serve to expand democratic deliberation—a goal Lafont shares but relegates to non-empowered mini-publics…(More)”.

Data Stewardship: The Way Forward in the New Digital Data Landscape


Essay by Courtney Cameron: “…It is absolutely critical that Statistics Canada, as a national statistical office (NSO) and public service organization, along with other government agencies and services, adapt to the new data ecosystem and digital landscape. Canada is falling behind in adjusting to rapid digitalization, exploding data volumes, the ever-increasing monopolization of the digital market by private companies, and foreign data harvesting, and in managing the risks associated with data sharing or reuse. If Statistics Canada and the federal public service are to keep up with private companies or foreign powers in this digital data context, and to continue to provide useful insights and services for Canadians, concerns about data digitalization, data interoperability and data security must be addressed through effective data stewardship.

However, it is not sufficient to have data stewards responsible for data: as data governance expert David Plotkin argues in Data Stewardship: An Actionable Guide to Effective Data Management and Data Governance, government departments must also consult these stewards on decisions about the data that they steward, if they are to ensure that decisions are made in the best interests of those who get value from the information. Frameworks, policies and procedures are needed to ensure this, as is having a steward involved in the processes as they occur. Plotkin also writes that data stewardship involvement needs to be integrated into enterprise processes, such as in project management and systems development methodologies. Data stewardship and data governance principles must be accepted as a part of the corporate culture, and stewardship leaders need to advise, drive and support this shift.

Finally, stewardship goes beyond sound data management and standards: it is important to be mindful of the role of an NSO. Public acceptability and trust are of vital importance. Social licence, or acceptability, and public engagement are necessary for NSOs to be able to perform their duties. These are achieved through practising data stewardship and adhering to the principles of open data, as well as by ensuring transparent processes, confidentiality and security, and by communicating the value of citizens’ sharing their data…With the rapidly accelerating proliferation of data and the increasing demand for, and potential of, data sharing and collaboration, NSOs and public governance organizations alike need to reimagine data stewardship as a function and role encompassing a wider range of purposes and responsibilities…(More)”. See also: Data Stewards — Drafting the Job Specs for A Re-imagined Data Stewardship Role

AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge?


Paper by John Wihbey: “As advanced artificial intelligence (AI) technologies are developed and deployed, core zones of information and knowledge that support democratic life will be mediated more comprehensively by machines. Chatbots and AI agents may structure most internet, media, and public informational domains. What humans believe to be true and worthy of attention – what becomes public knowledge – may increasingly be influenced by the judgments of advanced AI systems. This pattern will present profound challenges to democracy. A pattern of what we might consider “epistemic risk” will threaten the possibility of AI ethical alignment with human values. AI technologies are trained on data from the human past, but democratic life often depends on the surfacing of human tacit knowledge and previously unrevealed preferences. Accordingly, as AI technologies structure the creation of public knowledge, the substance may be increasingly a recursive byproduct of AI itself – built on what we might call “epistemic anachronism.” This paper argues that epistemic capture or lock-in and a corresponding loss of autonomy are pronounced risks, and it analyzes three example domains – journalism, content moderation, and polling – to explore these dynamics. The pathway forward for achieving any vision of ethical and responsible AI in the context of democracy means an insistence on epistemic modesty within AI models, as well as norms that emphasize the incompleteness of AI’s judgments with respect to human knowledge and values…(More)” – See also: Steering Responsible AI: A Case for Algorithmic Pluralism

Technological Citizenship in Times of Digitization: An Integrative Framework


Article by Anne Marte Gardenier, Rinie van Est & Lambèr Royakkers: “This article introduces an integrative framework for technological citizenship, examining the impact of digitization and the active roles of citizens in shaping this impact across the private, social, and public spheres. It outlines the dual nature of digitization, offering opportunities for enhanced connectivity and efficiency while posing challenges to privacy, security, and democratic integrity. Technological citizenship is explored through the lenses of liberal, communitarian, and republican theories, highlighting the active roles of citizens in navigating the opportunities and risks presented by digital technologies across all life spheres. By operationalizing technological citizenship, the article aims to address the gap in existing literature on the active roles of citizens in the governance of digitization. The framework emphasizes empowerment and resilience as crucial capacities for citizens to actively engage with and govern digital technologies. It illuminates citizens’ active participation in shaping the digital landscape, advocating for policies that support their engagement in safeguarding private, social, and public values in the digital age. The study calls for further research into technological citizenship, emphasizing its significance in fostering a more inclusive and equitable digital society…(More)”.

The Poisoning of the American Mind


Book by Lawrence M. Eppard: “Humans are hard-wired to look for information that they agree with (regardless of the information’s veracity), avoid information that makes them uncomfortable (even if that information is true), and interpret information in a manner that is most favorable to their sense of self. The damage these cognitive tendencies cause to one’s perception of reality depends in part upon the information that a person surrounds himself/herself with. Unfortunately, in the U.S. today, both liberals and conservatives are regularly bombarded with misleading information as well as lies from people they believe to be trustworthy and authoritative sources. While there are several factors one could plausibly blame for this predicament, the decline in the quality of the sources of information that the right and left rely on over the last few decades plays a primary role. As a result of this decline, we are faced with an epistemic crisis that is poisoning the American mind and threatening our democracy. In his forthcoming book with Jacob L. Mackey, The Poisoning of the American Mind, Lawrence M. Eppard explores epistemic problems in both the right-wing and left-wing ideological silos in the U.S., including ideology presented as fact, misinformation, disinformation, and malinformation…(More)”.

Anti-Corruption and Integrity Outlook 2024


OECD Report: “This first edition of the OECD Anti-Corruption and Integrity Outlook analyses Member countries’ efforts to uphold integrity and fight corruption. Based on data from the Public Integrity Indicators, it analyses the performance of countries’ integrity frameworks, and explores how some of the main challenges to governments today (including the green transition, artificial intelligence, and foreign interference) are increasing corruption and integrity risks for countries. It also addresses how the shortcomings in integrity systems can impede countries’ responses to these major challenges. In providing a snapshot of how countries are performing today, the Outlook supports strategic planning and policy work to strengthen public integrity for the future…(More)”.

“Data Commons”: Under Threat by or The Solution for a Generative AI Era? Rethinking Data Access and Re-use


Article by Stefaan G. Verhulst, Hannah Chafetz and Andrew Zahuranec: “One of the great paradoxes of our datafied era is that we live amid both unprecedented abundance and scarcity. Even as data grows more central to our ability to promote the public good, so too does it remain deeply — and perhaps increasingly — inaccessible and privately controlled. In response, there have been growing calls for “data commons” — pools of data that would be (self-)managed by distinctive communities or entities operating in the public’s interest. These pools could then be made accessible and reused for the common good.

Data commons are typically the results of collaborative and participatory approaches to data governance [1]. They offer an alternative to the growing tendency toward privatized data silos or extractive re-use of open data sets, instead emphasizing the communal and shared value of data — for example, by making data resources accessible in an ethical and sustainable way for purposes in alignment with community values or interests such as scientific research, social good initiatives, environmental monitoring, public health, and other domains.

Data commons can today be considered (the missing) critical infrastructure for leveraging data to advance societal wellbeing. When designed responsibly, they offer potential solutions for a variety of wicked problems, from climate change to pandemics and economic and social inequities. However, the rapid ascent of generative artificial intelligence (AI) technologies is changing the rules of the game, leading both to new opportunities as well as significant challenges for these communal data repositories.

On the one hand, generative AI has the potential to unlock new insights from data for a broader audience (through conversational interfaces such as chats), foster innovation, and streamline decision-making to serve the public interest. Generative AI also stands out in the realm of data governance due to its ability to reuse data at a massive scale, which has been a persistent challenge in many open data initiatives. On the other hand, generative AI raises uncomfortable questions related to equitable access, sustainability, and the ethical re-use of shared data resources. Further, without the right guardrails, funding models, and enabling governance frameworks, data commons risk becoming data graveyards — vast repositories of unused, and largely unusable, data.

A Ten-Part Framework to Rethink Data Commons

In what follows, we lay out some of the challenges and opportunities posed by generative AI for data commons. We then turn to a ten-part framework to set the stage for a broader exploration of how to reimagine and reinvigorate data commons for the generative AI era. This framework establishes a landscape for further investigation; our goal is not so much to define what an updated data commons would look like but to lay out pathways that would lead to a more meaningful assessment of the design requirements for resilient data commons in the age of generative AI…(More)”.

5 Ways AI Could Shake Up Democracy


Article by Shane Snider: “Tech luminary, author and Harvard Kennedy School lecturer Bruce Schneier on Tuesday offered his take on the promises and perils of artificial intelligence in key aspects of democracy.

In just two years, generative artificial intelligence (GenAI) has sparked a race to adopt (and defend against) the technology in government and the enterprise. It seems every aspect of life will soon be impacted — if not already feeling AI’s influence. A global race to put regulatory guardrails in place is taking shape, even as companies and governments spend billions of dollars implementing new AI technologies.

Schneier contends that five major areas of our democracy will likely see profound changes: politics, lawmaking, administration, the legal system, and citizens themselves.

“I don’t think it’s an exaggeration to predict that artificial intelligence will affect every aspect of our society, not necessarily by doing new things, but mostly by doing things that already or could be done by humans, are now replacing humans … There are potential changes in four dimensions: speed, scale, scope, and sophistication.”…(More)”.

The Age of AI Nationalism and its Effects


Paper by Susan Ariel Aaronson: “This paper aims to illuminate how AI nationalistic policies may backfire. Over time, such actions and policies could alienate allies and prod other countries to adopt “beggar-thy-neighbor” approaches to AI (The Economist: 2023; Kim: 2023; Shivakumar et al. 2024). Moreover, AI nationalism could have additional negative spillovers over time. Many AI experts are optimistic about the benefits of AI, even as they are aware of its many risks to democracy, equity, and society. They understand that AI can be a public good when it is used to mitigate complex problems affecting society (Gopinath: 2023; Okolo: 2023). However, when policymakers take steps to advance AI within their borders, they may — perhaps without intending to do so — make it harder for other countries with less capital, expertise, infrastructure, and data prowess to develop AI systems that could meet the needs of their constituents. In so doing, these officials could undermine the potential of AI to enhance human welfare and impede the development of more trustworthy AI around the world (Slavkovik: 2024; Aaronson: 2023; Brynjolfsson and Unger: 2023; Agrawal et al. 2017).

Governments have many means of nurturing AI within their borders that do not necessarily discriminate between foreign and domestic producers of AI. Nevertheless, officials may be under pressure from local firms to limit the market power of foreign competitors. Officials may also want to use trade (for example, export controls) as a lever to prod other governments to change their behavior (Buchanan: 2020). Additionally, these officials may be acting in what they believe to be their nation’s security interest, which may necessitate relying solely on local suppliers and local control (GAO: 2021).

Herein the author attempts to illuminate AI nationalism and its consequences by answering three questions:
• What are nations doing to nurture AI capacity within their borders?
• Are some of these actions trade distorting?
• What are the implications of such trade-distorting actions?…(More)”.

What Mission-Driven Government Means


Article by Mariana Mazzucato & Rainer Kattel: “The COVID-19 pandemic, inflation, and wars have alerted governments to the realities of what it takes to tackle massive crises. In extraordinary times, policymakers often rediscover their capacity for bold decision-making. The rapid speed of COVID-19 vaccine development and deployment was a case in point.

But preparing for other challenges requires more sustained efforts in “mission-driven government.” Recalling the successful language and strategies of the Cold War-era moonshot, governments around the world are experimenting with ambitious policy programs and public-private partnerships in pursuit of specific social, economic, and environmental goals. For example, in the United Kingdom, the Labour Party’s five-mission campaign platform has kicked off a vibrant debate about whether and how to create a “mission economy.”

Mission-driven government is not about achieving doctrinal adherence to some original set of ideas; it is about identifying the essential components of missions and accepting that different countries might need different approaches. As matters stand, the emerging landscape of public missions is characterized by a re-labeling or repurposing of existing institutions and policies, with more stuttering starts than rapid takeoffs. But that is okay. We should not expect a radical change in policymaking strategies to happen overnight, or even over one electoral cycle.

Particularly in liberal democracies, ambitious change requires engagement across a wide range of constituencies to secure public buy-in, and to ensure that the benefits will be widely shared. The paradox at the heart of mission-driven government is that it pursues ambitious, clearly articulated policy goals through myriad policies and programs based on experimentation.

This embrace of experimentation is what separates today’s missions from the missions of the moonshot era (though it does echo the Roosevelt administration’s experimental approach during the 1930s New Deal). Major societal challenges, such as the urgent need to create more equitable and sustainable food systems, cannot be tackled the same way as a moon landing. Such systems consist of multiple technological dimensions (in the case of food, these include everything from energy to waste management), and involve widespread and often disconnected agents and an array of cultural norms, values, and habits…(More)”.