Empowered Mini-Publics: A Shortcut or Democratically Legitimate?


Paper by Shao Ming Lee: “Contemporary mini-publics involve randomly selected citizens deliberating and eventually tackling thorny issues. Yet, the usage of mini-publics in creating public policy has come under criticism, of which a more persuasive strand is elucidated by eminent philosopher Cristina Lafont, who argues that mini-publics with binding decision-making powers (or ‘empowered mini-publics’) are an undemocratic ‘shortcut’ and deliberative democrats thus cannot use empowered mini-publics for shaping public policies. This paper aims to serve as a nuanced defense of empowered mini-publics against Lafont’s claims. I argue against her claims by explicating how participants of an empowered mini-public remain ordinary, accountable, and therefore connected to the broader public in a democratically legitimate manner. I further critique Lafont’s own proposals for non-empowered mini-publics and judicial review as failing to satisfy her own criteria for democratic legitimacy in a self-defeating manner and relying on a double standard. In doing so, I show how empowered mini-publics are not only democratic but can thus serve to expand democratic deliberation—a goal Lafont shares but relegates to non-empowered mini-publics…(More)”.

AI for social good: Improving lives and protecting the planet


McKinsey Report: “…Challenges in scaling AI for social-good initiatives are persistent and tough. Seventy-two percent of the respondents to our expert survey observed that most efforts to deploy AI for social good to date have focused on research and innovation rather than adoption and scaling. Fifty-five percent of grants for AI research and deployment across the SDGs are $250,000 or smaller, which is consistent with a focus on targeted research or smaller-scale deployment, rather than large-scale expansion. Aside from funding, the biggest barriers to scaling AI continue to be data availability, accessibility, and quality; AI talent availability and accessibility; organizational receptiveness; and change management. More on these topics can be found in the full report.

While overcoming these challenges, organizations should also be aware of strategies to address the range of risks, including inaccurate outputs, biases embedded in the underlying training data, the potential for large-scale misinformation, and malicious influence on politics and personal well-being. As we have noted in multiple recent articles, AI tools and techniques can be misused, even if the tools were originally designed for social good. Experts identified the top risks as impaired fairness, malicious use, and privacy and security concerns, followed by explainability (Exhibit 2). Respondents from not-for-profits expressed relatively more concern about misinformation, talent issues such as job displacement, and effects of AI on economic stability compared with their counterparts at for-profits, who were more often concerned with IP infringement…(More)”

QuantGov


About: “QuantGov is an open-source policy analytics platform designed to help create greater understanding and analysis of the breadth of government actions through quantifying policy text. By using the platform, researchers can quickly and effectively retrieve unique data that lies embedded in large bodies of text – data on text complexity, part-of-speech metrics, topic modeling, etc. …

QuantGov is a tool designed to make policy text more accessible. Think about it in terms of a hyper-powerful Google search that not only finds (1) specified content within massive quantities of text, but (2) also finds patterns and groupings and can even make predictions about what is in a document. Some recent use cases include the following:

  • Analyzing state regulatory codes and predicting which parts of those codes are related to occupational licensing….And predicting which occupation the regulation is talking about….And determining the cost to receive the license.
  • Analyzing Canadian province regulatory code while grouping individual regulations by industry-topic….And determining which Ministers are responsible for those regulations….And determining the complexity of the text for those regulations.
  • Quantifying the number of tariff exclusions that exist due to the Trade Expansion Act of 1962 and recent tariff policies….And determining which products those exclusions target.
  • Comparing the regulatory codes and content of 46 US states, 11 Canadian provinces, and 7 Australian states….While using consistent metrics that can lead to insights that provide legitimate policy improvements…(More)”.
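The classify-and-predict workflow behind use cases like these can be sketched with standard text-analytics tooling. The following is a hypothetical illustration in the spirit of quantifying policy text, not QuantGov's actual pipeline; the sample documents and labels are invented:

```python
# Hypothetical sketch: predicting whether a regulation relates to occupational
# licensing, using TF-IDF features and a linear classifier. Invented examples;
# not QuantGov's code or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = related to occupational licensing, 0 = not.
docs = [
    "An applicant for a barber license must complete 1500 hours of training.",
    "The board may suspend a license for failure to pay the renewal fee.",
    "Vehicles must not exceed the posted speed limit on state highways.",
    "Food establishments shall maintain refrigeration at or below 41 degrees.",
]
labels = [1, 1, 0, 0]

# Fit a pipeline that vectorizes the text and learns a classifier over it.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(docs, labels)

# Predict the label of an unseen regulation.
new_reg = "A license to practice cosmetology requires passing a board examination."
print(model.predict([new_reg])[0])
```

A production system would train on thousands of labeled regulatory provisions rather than four sentences, but the structure — vectorize policy text, fit a model, predict over new code sections — is the same.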

Big Bet Bummer


Article by Kevin Starr: “I just got back from Skoll World Forum, the Cannes Festival for those trying to make the world a better place…Amidst the flow of people and ideas, there was one persistent source of turbulence. Literally, within five minutes of my arrival, I was hearing tales of anxiety and exasperation about “Big Bet Philanthropy.” The more people I talked to, the more it felt like the hungover aftermath of a great party: Those who weren’t invited feel left out, while many of those who went are wondering how they’ll get through the day ahead.

When you write startlingly big checks in an atmosphere of chronic scarcity, there are bound to be unintended consequences. Those consequences should guide some iterative party planning on the part of both doers and funders. …big bets bring a whole new level of risk, one borne mostly by the organization. Big bets drive organizations to dramatically accelerate their plans in order to justify a huge (double-your-budget and beyond) infusion of dough. In a funding world that has a tiny number of big bet funders and generally sucks at channeling money to those best able to create change, that puts you at real risk of a momentum- and reputation-damaging stall when that big grant runs out…(More)”.

Internet use statistically associated with higher wellbeing


Article by Oxford University: “Links between internet adoption and wellbeing are likely to be positive, despite popular concerns to the contrary, according to a major new international study from researchers at the Oxford Internet Institute, part of the University of Oxford.

The study encompassed the psychological wellbeing of more than two million participants from 2006 to 2021 across 168 countries, examined in relation to internet use. Across 33,792 different statistical models and subsets of data, 84.9% of associations between internet connectivity and wellbeing were positive and statistically significant. 

The study analysed data from two million individuals aged 15 to 99 in 168 countries, including in Latin America, Asia, and Africa, and found that internet access and use were consistently associated with positive wellbeing.   

Assistant Professor Matti Vuorre, Tilburg University and Research Associate, Oxford Internet Institute and Professor Andrew Przybylski, Oxford Internet Institute carried out the study to assess how technology relates to wellbeing in parts of the world that are rarely studied.

Professor Przybylski said: ‘Whilst internet technologies and platforms and their potential psychological consequences remain debated, research to date has been inconclusive and of limited geographic and demographic scope. The overwhelming majority of studies have focused on the Global North and younger people thereby ignoring the fact that the penetration of the internet has been, and continues to be, a global phenomenon’. 

‘We set out to address this gap by analysing how internet access, mobile internet access and active internet use might predict psychological wellbeing on a global level across the life stages. To our knowledge, no other research has directly grappled with these issues and addressed the worldwide scope of the debate.’ 

The researchers studied eight indicators of well-being: life satisfaction, daily negative and positive experiences, two indices of social well-being, physical wellbeing, community wellbeing and experiences of purpose.   

Commenting on the findings, Professor Vuorre said, “We were surprised to find a positive correlation between well-being and internet use across the majority of the thousands of models we used for our analysis.”

Whilst the associations between internet access and use for the average country were very consistently positive, the researchers did find some variation by gender and wellbeing indicator: 4.9% of associations linking internet use and community well-being were negative, with most of those observed among young women aged 15-24.
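The multiverse logic behind figures like "84.9% of associations positive" can be sketched as follows: fit many plausible model specifications to the same outcome and report the share that show a positive association. This is a toy illustration with synthetic data, not the study's actual models, variables, or dataset:

```python
# Toy multiverse analysis: regress a wellbeing outcome on internet use under
# several covariate specifications and count positive associations.
# Synthetic data only; not the Oxford study's models.
import numpy as np

rng = np.random.default_rng(0)
n = 500
internet_use = rng.normal(size=n)
age = rng.uniform(15, 99, size=n)
gender = rng.integers(0, 2, size=n)
# Synthetic wellbeing with a small built-in positive link to internet use.
wellbeing = 0.2 * internet_use + 0.01 * age + rng.normal(size=n)

covars = {"age": age, "gender": gender}
covariate_sets = [[], ["age"], ["gender"], ["age", "gender"]]

positive = 0
specs = 0
for subset in covariate_sets:
    # Design matrix: intercept, internet use, then the chosen covariates.
    X = np.column_stack([np.ones(n), internet_use] + [covars[c] for c in subset])
    beta, *_ = np.linalg.lstsq(X, wellbeing, rcond=None)
    specs += 1
    positive += int(beta[1] > 0)  # coefficient on internet use

print(f"{positive}/{specs} specifications show a positive association")
```

The study's 33,792 models arise the same way, by crossing many outcome measures, predictors, covariate sets, and data subsets, which is why the headline statistic is a percentage of specifications rather than a single coefficient.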

Whilst not identified by the researchers as a causal relation, the paper notes that this specific finding is consistent with previous reports of increased cyberbullying and more negative associations between social media use and depressive symptoms among young women. 

Adds Przybylski, ‘Overall we found that average associations were consistent across internet adoption predictors and wellbeing outcomes, with those who had access to or actively used the internet reporting meaningfully greater wellbeing than those who did not’…(More)” See also: A multiverse analysis of the associations between internet use and well-being

Technological Citizenship in Times of Digitization: An Integrative Framework


Article by Anne Marte Gardenier, Rinie van Est & Lambèr Royakkers: “This article introduces an integrative framework for technological citizenship, examining the impact of digitization and the active roles of citizens in shaping this impact across the private, social, and public sphere. It outlines the dual nature of digitization, offering opportunities for enhanced connectivity and efficiency while posing challenges to privacy, security, and democratic integrity. Technological citizenship is explored through the lenses of liberal, communitarian, and republican theories, highlighting the active roles of citizens in navigating the opportunities and risks presented by digital technologies across all life spheres. By operationalizing technological citizenship, the article aims to address the gap in existing literature on the active roles of citizens in the governance of digitization. The framework emphasizes empowerment and resilience as crucial capacities for citizens to actively engage with and govern digital technologies. It illuminates citizens’ active participation in shaping the digital landscape, advocating for policies that support their engagement in safeguarding private, social, and public values in the digital age. The study calls for further research into technological citizenship, emphasizing its significance in fostering a more inclusive and equitable digital society…(More)”.

Anti-Corruption and Integrity Outlook 2024


OECD Report: “This first edition of the OECD Anti-Corruption and Integrity Outlook analyses Member countries’ efforts to uphold integrity and fight corruption. Based on data from the Public Integrity Indicators, it analyses the performance of countries’ integrity frameworks, and explores how some of the main challenges to governments today (including the green transition, artificial intelligence, and foreign interference) are increasing corruption and integrity risks for countries. It also addresses how the shortcomings in integrity systems can impede countries’ responses to these major challenges. In providing a snapshot of how countries are performing today, the Outlook supports strategic planning and policy work to strengthen public integrity for the future…(More)”.

Big data for everyone


Article by Henrietta Howells: “Raw neuroimaging data require further processing before they can be used for scientific or clinical research. Traditionally, this could be accomplished with a single powerful computer. However, much greater computing power is required to analyze the large open-access cohorts that are increasingly being released to the community. And processing pipelines are inconsistently scripted, which can hinder reproducibility efforts. This creates a barrier for labs lacking access to sufficient resources or technological support, potentially excluding them from neuroimaging research. A paper by Hayashi and colleagues in Nature Methods offers a solution. They present https://brainlife.io, a freely available, web-based platform for secure neuroimaging data access, processing, visualization and analysis. It leverages ‘opportunistic computing’, which pools processing power from commercial and academic clouds, making it accessible to scientists worldwide. This is a step towards lowering the barriers for entry into big data neuroimaging research…(More)”.

We don’t need an AI manifesto — we need a constitution


Article by Vivienne Ming: “Loans drive economic mobility in America, even as they’ve been a historically powerful tool for discrimination. I’ve worked on multiple projects to reduce that bias using AI. What I learnt, however, is that even if an algorithm works exactly as intended, it is still solely designed to optimise the financial returns to the lender who paid for it. The loan application process is already impenetrable to most, and now your hopes for home ownership or small business funding are dying in a 50-millisecond computation…

In law, the right to a lawyer and judicial review are a constitutional guarantee in the US and an established civil right throughout much of the world. These are the foundations of your civil liberties. When algorithms act as an expert witness, testifying against you but immune to cross examination, these rights are not simply eroded — they cease to exist.

People aren’t perfect. Neither ethics training for AI engineers nor legislation by woefully uninformed politicians can change that simple truth. I don’t need to assume that Big Tech chief executives are bad actors or that large companies are malevolent to understand that what is in their self-interest is not always in mine. The framers of the US Constitution recognised this simple truth and sought to leverage human nature for a greater good. The Constitution didn’t simply assume people would always act towards that greater good. Instead it defined a dynamic mechanism — self-interest and the balance of power — that would force compromise and good governance. Its vision of treating people as real actors rather than better angels produced one of the greatest frameworks for governance in history.

Imagine you were offered an AI-powered test for post-partum depression. My company developed that very test and it has the power to change your life, but you may choose not to use it for fear that we might sell the results to data brokers or activist politicians. You have a right to our AI acting solely for your health. It was for this reason I founded an independent non-profit, The Human Trust, that holds all of the data and runs all of the algorithms with sole fiduciary responsibility to you. No mother should have to choose between a life-saving medical test and her civil rights…(More)”.

A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI


Report by Hannah Chafetz, Sampriti Saxena, and Stefaan G. Verhulst: “Since late 2022, generative AI services and large language models (LLMs) have transformed how many individuals access and process information. However, how generative AI and LLMs can be augmented with open data from official sources, and how open data can be made more accessible with generative AI – potentially enabling a Fourth Wave of Open Data – remains an underexplored area. 

For these reasons, The Open Data Policy Lab (a collaboration between The GovLab and Microsoft) decided to explore the possible intersections between open data from official sources and generative AI. Throughout the last year, the team has conducted a range of research initiatives about the potential of open data and generative AI, including a panel discussion, interviews, and Open Data Action Labs – a series of design sprints with a diverse group of industry experts. 

These initiatives were used to inform our latest report, “A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI,” (May 2024) which provides a new framework and recommendations to support open data providers and other interested parties in making open data “ready” for generative AI…

The report outlines five scenarios in which open data from official sources (e.g. open government and open research data) and generative AI can intersect. Each of these scenarios includes case studies from the field and a specific set of requirements that open data providers can focus on to become ready for a scenario. These include…(More)” (Arxiv).
