The Risks of Empowering “Citizen Data Scientists”


Article by Reid Blackman and Tamara Sipes: “Until recently, the prevailing understanding of artificial intelligence (AI) and its subset machine learning (ML) was that expert data scientists and AI engineers were the only people who could push AI strategy and implementation forward. That was a reasonable view. After all, data science generally, and AI in particular, is a technical field whose expertise takes, among other things, many years of education and training to obtain.

Fast forward to today, however, and the conventional wisdom is rapidly changing. The advent of “auto-ML” — software that provides methods and processes for creating machine learning code — has led to calls to “democratize” data science and AI. The idea is that these tools enable organizations to invite and leverage non-data scientists — say, domain data experts, team members very familiar with the business processes, or heads of various business units — to propel their AI efforts.

In theory, making data science and AI more accessible to non-data scientists (including technologists who are not data scientists) can make a lot of business sense. Centralized and siloed data science units can fail to appreciate the vast array of data the organization has and the business problems that it can solve, particularly with multinational organizations with hundreds or thousands of business units distributed across several continents. Moreover, those in the weeds of business units know the data they have, the problems they’re trying to solve, and can, with training, see how that data can be leveraged to solve those problems. The opportunities are significant.

In short, with great business insight, augmented with auto-ML, can come great analytic responsibility. At the same time, we cannot forget that data science and AI are, in fact, very difficult, and there’s a very long journey from having data to solving a problem. In this article, we’ll lay out the pros and cons of integrating citizen data scientists into your AI strategy and suggest methods for optimizing success and minimizing risks…(More)”.

Policy fit for the future: the Australian Government Futures primer


Primer by Will Hartigan and Arthur Horobin: “Futures is a systematic exploration of probable, possible and preferable future developments to inform present-day policy, strategy and decision-making. It uses multiple plausible scenarios of the future to anticipate and make sense of disruptive change. It is also known as strategic foresight...

This primer provides an overview of Futures methodologies and their practical application to policy development and advice. It is a first step for policy teams and officers interested in Futures: providing you with a range of flexible tools, ideas and advice you can adapt to your own policy challenges and environments.

This primer was developed by the Policy Projects and Taskforce Office in the Department of Prime Minister and Cabinet. We have drawn on expertise from inside and outside of government – including through our project partners, the Futures Hub at the National Security College at the Australian National University.

This primer has been written by policy officers, for policy officers – with a focus on practical and tested approaches that can support you to create policy fit for the future…(More)”.

AI mass surveillance at Paris Olympics


Article by Anne Toomey McKenna: “The 2024 Paris Olympics is drawing the eyes of the world as thousands of athletes and support personnel and hundreds of thousands of visitors from around the globe converge in France. It’s not just the eyes of the world that will be watching. Artificial intelligence systems will be watching, too.

Government and private companies will be using advanced AI tools and other surveillance tech to conduct pervasive and persistent surveillance before, during and after the Games. The Olympic world stage and international crowds pose increased security risks so significant that in recent years authorities and critics have described the Olympics as the “world’s largest security operations outside of war.”

The French government, hand in hand with the private tech sector, has harnessed that legitimate need for increased security as grounds to deploy technologically advanced surveillance and data gathering tools. Its surveillance plans to meet those risks, including controversial use of experimental AI video surveillance, are so extensive that the country had to change its laws to make the planned surveillance legal.

The plan goes beyond new AI video surveillance systems. According to news reports, the prime minister’s office has negotiated a provisional decree that is classified to permit the government to significantly ramp up traditional, surreptitious surveillance and information gathering tools for the duration of the Games. These include wiretapping; collecting geolocation, communications and computer data; and capturing greater amounts of visual and audio data…(More)”.

The Data That Powers A.I. Is Disappearing Fast


Article by Kevin Roose: “For years, the people building powerful artificial intelligence systems have used enormous troves of text, images and videos pulled from the internet to train their models.

Now, that data is drying up.

Over the past year, many of the most important web sources used for training A.I. models have restricted the use of their data, according to a study published this week by the Data Provenance Initiative, an M.I.T.-led research group.

The study, which looked at 14,000 web domains that are included in three commonly used A.I. training data sets, discovered an “emerging crisis in consent,” as publishers and online platforms have taken steps to prevent their data from being harvested.

The researchers estimate that in the three data sets — called C4, RefinedWeb and Dolma — 5 percent of all data, and 25 percent of data from the highest-quality sources, has been restricted. Those restrictions are set up through the Robots Exclusion Protocol, a decades-old method for website owners to prevent automated bots from crawling their pages using a file called robots.txt.
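The restrictions described above are machine-readable: a site publishes a robots.txt file listing which crawler user-agents may fetch which paths, and well-behaved bots check it before crawling. A minimal sketch of that check, using Python's standard-library `urllib.robotparser` (the policy text and URLs below are illustrative, not taken from the study):

```python
from urllib import robotparser

# An illustrative robots.txt policy of the kind the study documents:
# the site blocks OpenAI's "GPTBot" crawler token while allowing all
# other user-agents to fetch any path.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# The AI training crawler is refused; a generic crawler is permitted.
print(rp.can_fetch("GPTBot", "https://example.com/article"))      # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

Note that robots.txt is a voluntary convention: nothing technically prevents a crawler from ignoring it, which is part of why the article pairs it with terms-of-service restrictions.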

The study also found that as much as 45 percent of the data in one set, C4, had been restricted by websites’ terms of service.

“We’re seeing a rapid decline in consent to use data across the web that will have ramifications not just for A.I. companies, but for researchers, academics and noncommercial entities,” said Shayne Longpre, the study’s lead author, in an interview.

Data is the main ingredient in today’s generative A.I. systems, which are fed billions of examples of text, images and videos. Much of that data is scraped from public websites by researchers and compiled in large data sets, which can be downloaded and freely used, or supplemented with data from other sources…(More)”.

Governance of deliberative mini-publics: emerging consensus and divergent views


Paper by Lucy J. Parry, Nicole Curato, and John S. Dryzek: “Deliberative mini-publics are forums for citizen deliberation composed of randomly selected citizens convened to yield policy recommendations. These forums have proliferated in recent years but there are no generally accepted standards to govern their practice. Should there be? We answer this question by bringing the scholarly literature on citizen deliberation into dialogue with the lived experience of the people who study, design and implement mini-publics. We use Q methodology to locate five distinct perspectives on the integrity of mini-publics, and map the structure of agreement and dispute across them. We find that, across the five viewpoints, there is emerging consensus as well as divergence on integrity issues, with disagreement over what might be gained or lost by adopting common standards of practice, and possible sources of integrity risks. This article provides an empirical foundation for further discussion on integrity standards in the future…(More)”.

Precision public health in the era of genomics and big data


Paper by Megan C. Roberts et al: “Precision public health (PPH) considers the interplay between genetics, lifestyle and the environment to improve disease prevention, diagnosis and treatment on a population level—thereby delivering the right interventions to the right populations at the right time. In this Review, we explore the concept of PPH as the next generation of public health. We discuss the historical context of using individual-level data in public health interventions and examine recent advancements in how data from human and pathogen genomics and social, behavioral and environmental research, as well as artificial intelligence, have transformed public health. Real-world examples of PPH are discussed, emphasizing how these approaches are becoming a mainstay in public health, as well as outstanding challenges in their development, implementation and sustainability. Data sciences, ethical, legal and social implications research, capacity building, equity research and implementation science will have a crucial role in realizing the potential for ‘precision’ to enhance traditional public health approaches…(More)”.

Integrating Artificial Intelligence into Citizens’ Assemblies: Benefits, Concerns and Future Pathways


Paper by Sammy McKinney: “Interest in how Artificial Intelligence (AI) could be used within citizens’ assemblies (CAs) is emerging amongst scholars and practitioners alike. In this paper, I make four contributions at the intersection of these burgeoning fields. First, I propose an analytical framework to guide evaluations of the benefits and limitations of AI applications in CAs. Second, I map out eleven ways that AI, especially large language models (LLMs), could be used across a CA’s full lifecycle. This introduces novel ideas for AI integration into the literature and synthesises existing proposals to provide the most detailed analytical breakdown of AI applications in CAs to date. Third, drawing on relevant literature, four key informant interviews, and the Global Assembly on the Ecological and Climate crisis as a case study, I apply my analytical framework to assess the desirability of each application. This provides insight into how AI could be deployed to address existing challenges facing CAs today, as well as the concerns that arise with AI integration. Fourth, bringing my analyses together, I argue that AI integration into CAs brings the potential to enhance their democratic quality and institutional capacity, but realising this requires the deliberative community to proceed cautiously, effectively navigate challenging trade-offs, and mitigate important concerns that arise with AI integration. Ultimately, this paper provides a foundation that can guide future research concerning AI integration into CAs and other forms of democratic innovation…(More)”.

Drivers of Trust in Public Institutions


Press Release: “In an increasingly challenging environment – marked by successive economic shocks, rising protectionism, the war in Europe and ongoing conflicts in the Middle East, as well as structural challenges and disruptions caused by rapid technological developments, climate change and population aging – 44% of respondents now have low or no trust in their national government, surpassing the 39% of respondents who express high or moderately high trust in national government, according to a new OECD report.  

OECD Survey on Drivers of Trust in Public Institutions – 2024 Results presents findings from the second OECD Trust Survey, conducted in October and November 2023 across 30 Member countries. The biennial report offers a comprehensive analysis of current trust levels and their drivers across countries and public institutions.

This edition of the Trust Survey confirms the previous finding that socio-economic and demographic factors, as well as a sense of having a say in decision making, affect trust. For example, 36% of women reported high or moderately high trust in government, compared to 43% of men. The most significant drop in trust since 2021 is seen among women and those with lower levels of education. The trust gap is largest between those who feel they have a say and those who feel they do not have a say in what the government does. Among those who report they have a say, 69% report high or moderately high trust in their national government, whereas among those who feel they do not, only 22% do…(More)”.

Big Tech-driven deliberative projects


Report by Canning Malkin and Nardine Alnemr: “Google, Meta, OpenAI and Anthropic have commissioned projects based on deliberative democracy. What was the purpose of each project? How was deliberation designed and implemented, and what were the outcomes? In this Technical Paper, Malkin and Alnemr describe the commissioning context, the purpose and remit, and the outcomes of these deliberative projects. Finally, they offer insights on contextualising projects within the broader aspirations of deliberative democracy…(More)”.

Mapping the Landscape of AI-Powered Nonprofits


Article by Kevin Barenblat: “Visualize the year 2050. How do you see AI having impacted the world? Whatever you’re picturing… the reality will probably be quite a bit different. Just think about the personal computer. In its early days circa the 1980s, tech companies marketed the devices for the best use cases they could imagine: reducing paperwork, doing math, and keeping track of forgettable things like birthdays and recipes. It was impossible to imagine that decades later, the larger-than-a-toaster-sized devices would be smaller than a Pop-Tart, connect with billions of other devices, and respond to voice and touch.

It can be hard for us to see how new technologies will ultimately be used. The same is true of artificial intelligence. With new use cases popping up every day, we are early in the age of AI. To make sense of all the action, many landscapes have been published to organize the tech stacks and private sector applications of AI. We could not, however, find an overview of how nonprofits are using AI for impact…

AI-powered nonprofits (APNs) are already advancing solutions to many social problems, and Google.org’s recent research brief AI in Action: Accelerating Progress Towards the Sustainable Development Goals shows that AI is driving progress towards all 17 SDGs. Three goals that stand out with especially strong potential to be transformed by AI are SDG 3 (Good Health and Well-Being), SDG 4 (Quality Education), and SDG 13 (Climate Action). As such, this series focuses on how AI-powered nonprofits are transforming the climate, health care, and education sectors…(More)”.