Empowered Mini-Publics: A Shortcut or Democratically Legitimate?


Paper by Shao Ming Lee: “Contemporary mini-publics involve randomly selected citizens deliberating and eventually tackling thorny issues. Yet, the usage of mini-publics in creating public policy has come under criticism, of which a more persuasive strand is elucidated by eminent philosopher Cristina Lafont, who argues that mini-publics with binding decision-making powers (or ‘empowered mini-publics’) are an undemocratic ‘shortcut’ and deliberative democrats thus cannot use empowered mini-publics for shaping public policies. This paper aims to serve as a nuanced defense of empowered mini-publics against Lafont’s claims. I argue against her claims by explicating how participants of an empowered mini-public remain ordinary, accountable, and therefore connected to the broader public in a democratically legitimate manner. I further critique Lafont’s own proposals for non-empowered mini-publics and judicial review as failing to satisfy her own criteria for democratic legitimacy in a self-defeating manner and relying on a double standard. In doing so, I show how empowered mini-publics are not only democratic but can thus serve to expand democratic deliberation—a goal Lafont shares but relegates to non-empowered mini-publics…(More)”.

Artificial Intelligence and the Skill Premium


Paper by David E. Bloom et al: “How will the emergence of ChatGPT and other forms of artificial intelligence (AI) affect the skill premium? To address this question, we propose a nested constant elasticity of substitution production function that distinguishes among three types of capital: traditional physical capital (machines, assembly lines), industrial robots, and AI. Following the literature, we assume that industrial robots predominantly substitute for low-skill workers, whereas AI mainly helps to perform the tasks of high-skill workers. We show that AI reduces the skill premium as long as it is more substitutable for high-skill workers than low-skill workers are for high-skill workers…(More)”
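A nested CES structure along the lines the authors describe might look like the following. This is an illustrative sketch only; the symbols, share parameters, and nesting order are assumptions for exposition, not the authors' exact specification:

```latex
% Robots R are nested with low-skill labor L; AI A is nested with high-skill labor H.
\tilde{L} = \left[\mu L^{\sigma} + (1-\mu) R^{\sigma}\right]^{1/\sigma}, \qquad
\tilde{H} = \left[\nu H^{\theta} + (1-\nu) A^{\theta}\right]^{1/\theta}
% An outer nest combines traditional capital K with the two labor aggregates.
Y = \left[\alpha K^{\rho} + \beta \tilde{L}^{\rho}
    + (1-\alpha-\beta)\tilde{H}^{\rho}\right]^{1/\rho}
% The skill premium is the ratio of marginal products of the two labor types:
\frac{w_H}{w_L} = \frac{\partial Y / \partial H}{\partial Y / \partial L}
```

In a setup of this kind, whether growth in the AI stock A raises or lowers the skill premium turns on the relative elasticities of substitution in the inner and outer nests (governed here by θ and ρ), which is the substitutability condition the abstract highlights.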

AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge?


Paper by John Wihbey: “As advanced artificial intelligence (AI) technologies are developed and deployed, core zones of information and knowledge that support democratic life will be mediated more comprehensively by machines. Chatbots and AI agents may structure most internet, media, and public informational domains. What humans believe to be true and worthy of attention – what becomes public knowledge – may increasingly be influenced by the judgments of advanced AI systems. This pattern will present profound challenges to democracy. A pattern of what we might consider “epistemic risk” will threaten the possibility of AI ethical alignment with human values. AI technologies are trained on data from the human past, but democratic life often depends on the surfacing of human tacit knowledge and previously unrevealed preferences. Accordingly, as AI technologies structure the creation of public knowledge, the substance may be increasingly a recursive byproduct of AI itself – built on what we might call “epistemic anachronism.” This paper argues that epistemic capture or lock-in and a corresponding loss of autonomy are pronounced risks, and it analyzes three example domains – journalism, content moderation, and polling – to explore these dynamics. The pathway forward for achieving any vision of ethical and responsible AI in the context of democracy means an insistence on epistemic modesty within AI models, as well as norms that emphasize the incompleteness of AI’s judgments with respect to human knowledge and values…(More)” – See also: Steering Responsible AI: A Case for Algorithmic Pluralism

Internet use statistically associated with higher wellbeing


Article by Oxford University: “Links between internet adoption and wellbeing are likely to be positive, despite popular concerns to the contrary, according to a major new international study from researchers at the Oxford Internet Institute, part of the University of Oxford.

The study examined the psychological wellbeing of more than two million participants across 168 countries between 2006 and 2021 in relation to internet use. Across 33,792 different statistical models and subsets of data, 84.9% of associations between internet connectivity and wellbeing were positive and statistically significant. 

The study analysed data from two million individuals aged 15 to 99 in 168 countries, including countries in Latin America, Asia, and Africa, and found that internet access and use were consistently associated with positive wellbeing.   

Matti Vuorre, Assistant Professor at Tilburg University and Research Associate at the Oxford Internet Institute, and Andrew Przybylski, Professor at the Oxford Internet Institute, carried out the study to assess how technology relates to wellbeing in parts of the world that are rarely studied.

Professor Przybylski said: ‘Whilst internet technologies and platforms and their potential psychological consequences remain debated, research to date has been inconclusive and of limited geographic and demographic scope. The overwhelming majority of studies have focused on the Global North and younger people thereby ignoring the fact that the penetration of the internet has been, and continues to be, a global phenomenon’. 

‘We set out to address this gap by analysing how internet access, mobile internet access and active internet use might predict psychological wellbeing on a global level across the life stages. To our knowledge, no other research has directly grappled with these issues and addressed the worldwide scope of the debate.’ 

The researchers studied eight indicators of wellbeing: life satisfaction, daily negative and positive experiences, two indices of social wellbeing, physical wellbeing, community wellbeing and experiences of purpose.   

Commenting on the findings, Professor Vuorre said, “We were surprised to find a positive correlation between well-being and internet use across the majority of the thousands of models we used for our analysis.”

Whilst the associations between internet access and use were very consistently positive for the average country, the researchers did find some variation by gender and wellbeing indicator: 4.9% of associations linking internet use and community wellbeing were negative, with most of those observed among young women aged 15-24.

Whilst the researchers did not identify this as a causal relation, the paper notes that this specific finding is consistent with previous reports of increased cyberbullying and more negative associations between social media use and depressive symptoms among young women. 

Przybylski adds, ‘Overall we found that average associations were consistent across internet adoption predictors and wellbeing outcomes, with those who had access to or actively used the internet reporting meaningfully greater wellbeing than those who did not’…(More)” See also: A multiverse analysis of the associations between internet use and well-being
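The multiverse approach behind the 33,792 models (estimating the same association under many combinations of analytic choices and data subsets, then summarizing the distribution of estimates) can be sketched as follows. The data here are synthetic, and the variable names, effect sizes, and specification choices are illustrative, not the study's:

```python
# Minimal sketch of a multiverse ("specification curve") analysis:
# fit the same internet-wellbeing association under several analytic
# choices and count the share of positive, significant estimates.
# Synthetic data; variable names and choices are illustrative only.
import itertools

import numpy as np

rng = np.random.default_rng(0)
n = 5000
internet = rng.integers(0, 2, n).astype(float)   # 0/1 internet access
age = rng.uniform(15, 99, n)
wellbeing = 0.5 * internet + 0.01 * age + rng.normal(0.0, 1.0, n)

def ols_tstat(y, X):
    """OLS via least squares; return coefficient and t-stat of the first regressor."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[0], beta[0] / np.sqrt(cov[0, 0])

results = []
# Two analytic choices are varied: the age subgroup and whether to adjust for age.
for (lo, hi), adjust in itertools.product([(15, 24), (25, 64), (65, 99)], [False, True]):
    mask = (age >= lo) & (age <= hi)
    cols = [internet[mask], np.ones(mask.sum())]
    if adjust:
        cols.insert(1, age[mask])
    coef, t = ols_tstat(wellbeing[mask], np.column_stack(cols))
    results.append((coef, t))

share_positive = np.mean([c > 0 and abs(t) > 1.96 for c, t in results])
print(f"{len(results)} specifications; share positive & significant: {share_positive:.2f}")
```

The published analysis does the same thing at a far larger scale: thousands of defensible specifications rather than six, with the headline figure (84.9% positive) being exactly this kind of share across the full specification space.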

Technological Citizenship in Times of Digitization: An Integrative Framework


Article by Anne Marte Gardenier, Rinie van Est & Lambèr Royakkers: “This article introduces an integrative framework for technological citizenship, examining the impact of digitization and the active roles of citizens in shaping this impact across the private, social, and public spheres. It outlines the dual nature of digitization, offering opportunities for enhanced connectivity and efficiency while posing challenges to privacy, security, and democratic integrity. Technological citizenship is explored through the lenses of liberal, communitarian, and republican theories, highlighting the active roles of citizens in navigating the opportunities and risks presented by digital technologies across all life spheres. By operationalizing technological citizenship, the article aims to address the gap in existing literature on the active roles of citizens in the governance of digitization. The framework emphasizes empowerment and resilience as crucial capacities for citizens to actively engage with and govern digital technologies. It illuminates citizens’ active participation in shaping the digital landscape, advocating for policies that support their engagement in safeguarding private, social, and public values in the digital age. The study calls for further research into technological citizenship, emphasizing its significance in fostering a more inclusive and equitable digital society…(More)”.

Artificial intelligence and complex sustainability policy problems: translating promise into practice


Paper by Ruby O’Connor et al: “Addressing sustainability policy challenges requires tools that can navigate complexity for better policy processes and outcomes. Attention on Artificial Intelligence (AI) tools and expectations for their use by governments have dramatically increased over the past decade. We conducted a narrative review of academic and grey literature to investigate how AI tools are being used and adapted for policy and public sector decision-making. We found that academics, governments, and consultants expressed positive expectations about AI, arguing that AI could or should be used to address a wide range of policy challenges. However, there is much less evidence of how public decision makers are actually using AI tools or detailed insight into the outcomes of use. From our findings we draw four lessons for translating the promise of AI into practice: 1) Document and evaluate AI’s application to sustainability policy problems in the real-world; 2) Focus on existing and mature AI technologies, not speculative promises or external pressures; 3) Start with the problem to be solved, not the technology to be applied; and 4) Anticipate and adapt to the complexity of sustainability policy problems…(More)”.

Automatic Generation of Model and Data Cards: A Step Towards Responsible AI


Paper by Jiarui Liu, Wenkai Li, Zhijing Jin, Mona Diab: “In an era of model and data proliferation in machine learning/AI, especially marked by the rapid advancement of open-sourced technologies, there arises a critical need for standardized, consistent documentation. Our work addresses the information incompleteness in current human-generated model and data cards. We propose an automated generation approach using Large Language Models (LLMs). Our key contributions include the establishment of CardBench, a comprehensive dataset aggregated from over 4.8k model cards and 1.4k data cards, coupled with the development of the CardGen pipeline comprising a two-step retrieval process. Our approach exhibits enhanced completeness, objectivity, and faithfulness in generated model and data cards, a significant step in responsible AI documentation practices, ensuring better accountability and traceability…(More)”.
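The retrieve-then-generate pattern the abstract describes can be sketched in miniature as follows. This is a hedged illustration of the general idea, not CardGen's actual pipeline: the field names, corpus, and functions here are hypothetical, and a real system would use an LLM to summarize the retrieved passages rather than passing them through verbatim:

```python
# Toy sketch of a two-step retrieve-then-generate flow for model cards.
# Step 1: retrieve the source passage relevant to each card field.
# Step 2: generate the field text from the retrieved passage.
# All names (CARD_FIELDS, retrieve, generate_card) are illustrative.

CARD_FIELDS = ["intended_use", "training_data", "limitations"]

# Stand-in "repository documentation" corpus (README/paper snippets).
corpus = {
    "intended_use": "The model is intended for English sentiment classification.",
    "training_data": "Trained on 100k product reviews collected in 2021.",
    "limitations": "Performance degrades on non-English and sarcastic text.",
}

def retrieve(field: str, docs: dict) -> str:
    """Step 1: pick the passage most relevant to a field (keyword overlap here)."""
    keywords = set(field.split("_"))
    best_key = max(docs, key=lambda k: len(keywords & set(k.split("_"))))
    return docs[best_key]

def generate_card(docs: dict) -> dict:
    """Step 2: a real pipeline would have an LLM summarize each retrieved passage;
    this sketch returns the passage itself."""
    return {field: retrieve(field, docs) for field in CARD_FIELDS}

card = generate_card(corpus)
print(card["limitations"])
```

The value of structuring generation this way is that every card field is grounded in a specific retrieved source passage, which is what supports the accountability and traceability claims in the abstract.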

Digital Sovereignty: A Descriptive Analysis and a Critical Evaluation of Existing Models


Paper by Samuele Fratini et al: “Digital sovereignty is a popular yet still emerging concept. It is claimed by and related to various global actors, whose narratives are often competing and mutually inconsistent. Various scholars have proposed different descriptive approaches to make sense of the matter. We argue that existing works help advance our analytical understanding and that a critical assessment of existing forms of digital sovereignty is needed. Thus, the article offers an updated mapping of forms of digital sovereignty, while testing their effectiveness in response to radical changes and challenges. To do this, the article undertakes a systematic literature review, collecting 271 peer-reviewed articles from Google Scholar. They are used to identify descriptive features (how digital sovereignty is pursued) and value features (why digital sovereignty is pursued), which are then combined to produce four models: the rights-based model, market-oriented model, centralisation model, and state-based model. We evaluate their effectiveness within a framework of robust governance that accounts for the models’ ability to absorb the disruptions caused by technological advancements, geopolitical changes, and evolving societal norms. We find that none of the available models fully combines comprehensive regulations of digital technologies with a sufficient degree of responsiveness to fast-paced technological innovation and social and economic shifts. However, each offers valuable lessons to policymakers who wish to implement an effective and robust form of digital sovereignty…(More)”.

The Age of AI Nationalism and its Effects


Paper by Susan Ariel Aaronson: “This paper aims to illuminate how AI nationalistic policies may backfire. Over time, such actions and policies could alienate allies and prod other countries to adopt “beggar-thy-neighbor” approaches to AI (The Economist: 2023; Kim: 2023; Shivakumar et al. 2024). Moreover, AI nationalism could have additional negative spillovers over time. Many AI experts are optimistic about the benefits of AI, even while they are aware of its many risks to democracy, equity, and society. They understand that AI can be a public good when it is used to mitigate complex problems affecting society (Gopinath: 2023; Okolo: 2023). However, when policymakers take steps to advance AI within their borders, they may — perhaps without intending to do so – make it harder for other countries with less capital, expertise, infrastructure, and data prowess to develop AI systems that could meet the needs of their constituents. In so doing, these officials could undermine the potential of AI to enhance human welfare and impede the development of more trustworthy AI around the world. (Slavkovik: 2024; Aaronson: 2023; Brynjolfsson and Unger: 2023; Agrawal et al. 2017).

Governments have many means of nurturing AI within their borders that do not necessarily discriminate between foreign and domestic producers of AI. Nevertheless, officials may be under pressure from local firms to limit the market power of foreign competitors. Officials may also want to use trade (for example, export controls) as a lever to prod other governments to change their behavior (Buchanan: 2020). Additionally, these officials may be acting in what they believe is the nation’s national security interest, which may necessitate that officials rely solely on local suppliers and local control. (GAO: 2021)

Herein the author attempts to illuminate AI nationalism and its consequences by answering three questions:
• What are nations doing to nurture AI capacity within their borders?
• Are some of these actions trade distorting?
• What are the implications of such trade-distorting actions?…(More)”

Learning from Ricardo and Thompson: Machinery and Labor in the Early Industrial Revolution, and in the Age of AI


Paper by Daron Acemoglu & Simon Johnson: “David Ricardo initially believed machinery would help workers but revised his opinion, likely based on the impact of automation in the textile industry. Despite cotton textiles becoming one of the largest sectors in the British economy, real wages for cotton weavers did not rise for decades. As E.P. Thompson emphasized, automation forced workers into unhealthy factories with close surveillance and little autonomy. Automation can increase wages, but only when accompanied by new tasks that raise the marginal productivity of labor and/or when there is sufficient additional hiring in complementary sectors. Wages are unlikely to rise when workers cannot push for their share of productivity growth. Today, artificial intelligence may boost average productivity, but it also may replace many workers while degrading job quality for those who remain employed. As in Ricardo’s time, the impact of automation on workers today is more complex than an automatic linkage from higher productivity to better wages…(More)”.