‘Evidence banks’ can drive better decisions in public life


Article by Anjana Ahuja: “Modern life is full of urgent questions to which governments should be seeking answers. Does working from home — WFH — damage productivity? Do LTNs (low-traffic neighbourhoods) cut air pollution? Are policy ideas that can be summed up in three-letter acronyms more palatable to the public than those, say Ulez, requiring four? 

That last one is tongue-in-cheek — but the point stands. While clinical trials can tell us reasonably confidently whether a drug or treatment works, a similar culture of evaluation is generally lacking for other types of intervention, such as crime prevention. Now research funders are stepping into the gap to build “evidence banks” or evidence syntheses: globally accessible one-stop shops for assessing the weight of evidence on a particular topic.

Last month, the Economic and Social Research Council, together with the Wellcome Trust, pledged a total of around £54mn to develop a database and tools that can collate and make sense of evidence in complex areas like climate change and healthy ageing. The announcement, Nature reports, was timed to coincide with the UN Summit of the Future, a conference in New York geared to improving the world for future generations.

Go-to repositories of good-quality information that can feed the policy machine are essential. They normalise the role of robust evidence in public life. This matters: the policy pipeline has too often lacked due diligence. That can mean public money being squandered on ineffective wheezes, or worse.

Take Scared Straight, a crime prevention scheme originating in the US around 40 years ago and adopted in the UK. It was designed to keep teens on the straight and narrow by introducing them to prisoners. Those dalliances with delinquents were counterproductive. A review showed that children taking part were more likely to end up committing crimes than those who did not participate in the scheme…(More)”.

Challenging the neutrality myth in climate science and activism


Article by Christel W. van Eck, Lydia Messling & Katharine Hayhoe: “The myth of a scientist as a purely rational thinker, a “brain in a jar” devoid of emotions and values, still exists in some scientific circles. However, philosophers of science have long shown that it is a fundamental misconception to believe that science can be entirely free of social, political, and ethical values, and function as a neutral entity. As Lynda Walsh explains compellingly in “Scientists as Prophets,” the question of how scientists ought to engage with society is a value judgement itself [3]. This is particularly true in complex crises like climate change where traditional democratic debate alone cannot ascertain the optimal course of action. Scientists often play a crucial role in such crises, not only through conducting rigorous research, but also through engaging in dialogue with society by framing their research in terms of societal values – which includes rejecting the notion of morally neutral engagement.

This school of thought was recently challenged in a comment in npj Climate Action titled “The importance of distinguishing climate science from climate activism”. In it, Ulf Büntgen, a Professor of Environmental Systems Analysis at Cambridge University, communicated his personal concerns about climate scientists engaging in activism. The comment sparked considerable debate on social media, particularly among climate scientists, many of whom reject the views presented by Büntgen.

We believe a response is necessary, as many of Büntgen’s assumptions are unnuanced or unjustified. It is difficult to provide a full critique when Büntgen has not clearly defined what he means by ‘climate activism’, ‘quasi-religious belief’, or ‘a priori interests’, nor given explicit examples of the sort of engagement he finds objectionable. However, whether scientists consider certain activities to be activism, their opinions of colleagues who engage in such activities, and the general public’s perception of these activities have all been the subject of multiple research studies. While the opinion of an individual scientist is interesting, we argue that it neither represents the broader community’s views nor reflects the efficacy of such actions. Furthermore, by making unilateral value-based judgements, we propose that Büntgen is engaging in precisely the activity he deprecates…(More)”

Citizen scientists will be needed to meet global water quality goals


University College London: “Sustainable development goals for water quality will not be met without the involvement of citizen scientists, argues an international team led by a UCL researcher, in a new policy brief.

The policy brief and attached technical brief are published by Earthwatch Europe on behalf of the United Nations Environment Programme (UNEP)-coordinated World Water Quality Alliance, which has supported citizen science projects in Kenya, Tanzania and Sierra Leone. The reports detail how policymakers can learn from examples where citizen scientists (non-professionals engaged in the scientific process, such as by collecting data) are already making valuable contributions.

The report authors focus on how to meet one of the UN’s Sustainable Development Goals around improving water quality, which the UN states is necessary for the health and prosperity of people and the planet…

“Locals who know the water and use the water are both a motivated and knowledgeable resource, so citizen science networks can enable them to provide large amounts of data and act as stewards of their local water bodies and sources. Citizen science has the potential to revolutionize the way we manage water resources to improve water quality.”…

The report authors argue that improving water quality data will require governments and organizations to work collaboratively with locals who collect their own data, particularly where government monitoring is scarce, but also where governments already support citizen science schemes. Water quality is an area where citizen scientists can make an especially strong impact: professionally collected data is often limited by shortages of funding and infrastructure, while effective citizen science monitoring methods already exist that can provide reliable data.

The authors write that the value of citizen science goes beyond the data collected, as there are other benefits pertaining to education of volunteers, increased community involvement, and greater potential for rapid response to water quality issues…(More)”.

The Unaccountability Machine — why do big systems make bad decisions?


FT Review of book by Dan Davies: “The starting point of Davies’ entertaining, insightful book is that the uncontrolled proliferation of accountability sinks is one of the central drivers of what historian Adam Tooze calls the “polycrisis” of the 21st century. Their influence reaches far beyond frustrated customers endlessly on hold to “computer says no” service departments. In finance, banking crises regularly recur — yet few individual bankers are found at fault. If politicians’ promises flop, they complain they have no power; the Deep State is somehow to blame.

The origin of the problem, Davies argues, is the managerial revolution that began after the second world war, abetted by the advent of cheap computing power and the diffusion of algorithmic decision-making into every sphere of life. These systems have ended up “acting like a car’s crumple-zone to shield any individual manager from a disastrous decision”, he writes. While attractive from the individual’s perspective, they scramble the feedback on which society as a whole depends.

Yet the story, Davies continues, is not so simple. Seen from another perspective, accountability sinks are entirely reasonable responses to the ever-increasing complexity of modern economies. Standardisation and explicit policies and procedures offer the only feasible route to meritocratic recruitment, consistent service and efficient work. Relying on the personal discretion of middle managers would simply result in a different kind of mess…(More)”.

Scientists around the world call to protect research on one of humanity’s greatest short-term threats – Disinformation


Forum on Democracy and Information: “At a critical time for understanding digital communications’ impact on societies, research on disinformation is endangered. 

In August, researchers around the world bid farewell to CrowdTangle, the Meta-owned social media monitoring tool. Meta’s decision to close the most widely used platform for tracking mis- and disinformation in a major election year, offering its Meta Content Library and API as a replacement, has been met with a barrage of criticism.

If, as suggested by the World Economic Forum’s 2024 global risk report, disinformation is one of the biggest short-term threats to humanity, our collective ability to understand how it spreads and impacts our society is crucial. Just as we would not impede scientific research into the spread of viruses and disease, into natural ecosystems, or into history and the social sciences, disinformation research must be allowed to proceed unimpeded, with access to the information needed to understand its complexity. Understanding the political economy of disinformation as well as its technological dimensions is also a matter of public health, democratic resilience, and national security.

By directly curtailing the research community’s ability to open social media’s black boxes, this radical decision will in turn hamper public understanding of how technology affects democracy. Public interest scrutiny is also essential for the next era of technology, notably for the world’s largest AI systems, which are similarly proprietary and opaque. The research community is already calling on AI companies to learn from the mistakes of social media and guarantee protections for good faith research. The solution falls on multiple shoulders: the global scientific community, civil society, public institutions and philanthropies must come together to meaningfully foster and protect public interest research on information and democracy…(More)”.

AI-enhanced collective intelligence


Paper by Hao Cui and Taha Yasseri: “Current societal challenges exceed the capacity of humans operating either alone or collectively. As AI evolves, its role within human collectives will vary from an assistive tool to a participatory member. Humans and AI possess complementary capabilities that, together, can surpass the collective intelligence of either humans or AI in isolation. However, the interactions in human-AI systems are inherently complex, involving intricate processes and interdependencies. This review incorporates perspectives from complex network science to conceptualize a multilayer representation of human-AI collective intelligence, comprising cognition, physical, and information layers. Within this multilayer network, humans and AI agents exhibit varying characteristics; humans differ in diversity from surface-level to deep-level attributes, while AI agents range in degrees of functionality and anthropomorphism. We explore how agents’ diversity and interactions influence the system’s collective intelligence and analyze real-world instances of AI-enhanced collective intelligence. We conclude by considering potential challenges and future developments in this field….(More)” See also: Where and When AI and CI Meet: Exploring the Intersection of Artificial and Collective Intelligence
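
To make the multilayer idea concrete, the sketch below shows one way such a network could be encoded. It is an illustration only, not code from the paper: the agent names, attribute values, and edge examples are assumptions, and Python’s networkx library stands in for whatever formalism the authors use.

```python
# Illustrative multilayer human-AI network (assumed encoding, not the paper's).
# Nodes are agents; each edge is tagged with the layer it occupies:
# "cognition", "physical", or "information".
import networkx as nx

G = nx.MultiGraph()  # a MultiGraph lets the same pair link on several layers

# Humans vary in surface- and deep-level diversity; AI agents vary in
# functionality and anthropomorphism (all attribute values here are invented).
G.add_node("human_1", kind="human", deep_level_diversity=0.7)
G.add_node("human_2", kind="human", deep_level_diversity=0.3)
G.add_node("ai_1", kind="ai", functionality=0.9, anthropomorphism=0.2)

G.add_edge("human_1", "ai_1", layer="information")  # e.g., a chat interface
G.add_edge("human_1", "human_2", layer="physical")  # e.g., a shared workspace
G.add_edge("human_2", "ai_1", layer="cognition")    # e.g., a shared task model

# Pull out a single layer as its own graph for layer-specific analysis.
info_layer = nx.MultiGraph(
    [(u, v, d) for u, v, d in G.edges(data=True) if d["layer"] == "information"]
)
print(info_layer.edges(data=True))
```

Analyses of how agents’ diversity and interactions shape collective intelligence would then operate on graphs like this one, layer by layer or across layers.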

Unlocking AI for All: The Case for Public Data Banks


Article by Kevin Frazier: “The data relied on by OpenAI, Google, Meta, and other artificial intelligence (AI) developers is not readily available to other AI labs. Google and Meta relied, in part, on data gathered from their own products to train and fine-tune their models. OpenAI acquired data using tactics that would no longer work or would now be more likely to be found in violation of the law (whether those tactics violated the law when OpenAI first used them is being worked out in the courts). Upstart labs and research outfits alike find themselves with a dearth of data. Fully realizing the positive benefits of AI, such as deployment in costly but publicly useful ways (think tutoring kids or identifying common illnesses), and fully identifying its negative possibilities (think perpetuating cultural biases), require that labs other than the big players have access to sufficient, high-quality data.

The proper response is not to return to an exploitative status quo. Google, for example, may have relied on data from YouTube videos without meaningful consent from users. OpenAI may have hoovered up copyrighted data with little regard for the legal and social ramifications of that approach. In response to these questionable approaches, data has (rightfully) become harder to acquire. Cloudflare has equipped websites with the tools necessary to limit data scraping—the automated extraction of a site’s data by another computer program. Regulators have developed new legal limits on data scraping or enforced old ones. Data owners have become more defensive over their content and, in some cases, more litigious. All of these largely positive developments from the perspective of data creators (which is to say, anyone and everyone who uses the internet) diminish the odds of newcomers entering the AI space. The creation of a public AI training data bank is necessary to ensure the availability of enough data for upstart labs and public research entities. Such banks would prevent those new entrants from having to go down the costly and legally questionable path of trying to hoover up as much data as possible…(More)”.

The Deletion Remedy


Paper by Daniel Wilf-Townsend: “A new remedy has emerged in the world of technology governance. Where someone has wrongfully obtained or used data, this remedy requires them not only to delete that data, but also to delete tools, such as machine learning models, that they have created using the data. Model deletion, also called algorithmic disgorgement or algorithmic destruction, has been increasingly sought in both private litigation and public enforcement actions. As its proponents note, model deletion can improve the regulation of privacy, intellectual property, and artificial intelligence by providing more effective deterrence and better management of ongoing harms.

But, this article argues, model deletion has a serious flaw: in its current form, it can be a grossly disproportionate penalty. Model deletion requires the destruction of models whose training included illicit data in any degree, with no consideration of how much (or even whether) that data contributed to any wrongful gains or ongoing harms. Model deletion could thereby cause unjust losses in litigation and chill useful technologies.

This article works toward a well-balanced doctrine of model deletion by building on the remedy’s equitable origins. It identifies how traditional considerations in equity—such as a defendant’s knowledge and culpability, the balance of the hardships, and the availability of more tailored alternatives—can be applied in model deletion cases to mitigate problems of disproportionality. By accounting for proportionality, courts and agencies can develop a doctrine of model deletion that takes advantage of its benefits while limiting its potential excesses…(More)”.

The Arrival of Field Experiments in Economics


Article by Timothy Taylor: “When most people think of “experiments,” they think of test tubes and telescopes, of Petri dishes and Bunsen burners. But the physical apparatus is not central to what an “experiment” means. Instead, what matters is the ability to specify different conditions–and then to observe how the differences in the underlying conditions alter the outcomes. When “experiments” are understood in this broader way, their range of application expands.

For example, back in 1881 when Louis Pasteur tested his vaccine for sheep anthrax, he gave the vaccine to half of a flock of sheep, exposed the entire group to anthrax, and showed that those with the vaccine survived. More recently, the “Green Revolution” in agricultural technology was essentially a set of experiments, by systematically breeding plant varieties and then looking at the outcomes in terms of yield, water use, pest resistance, and the like.

This understanding of “experiment” can be applied in economics, as well. John A. List explains in “Field Experiments: Here Today Gone Tomorrow?” (American Economist, published online August 6, 2024). By “field experiments,” List is seeking to differentiate his topic from “lab experiments,” which for economists refers to experiments carried out in a classroom context, often with students as the subjects, and to focus instead on experiments that involve people in the “field”–that is, in the context of their actual economic activities, including work, selling and buying, charitable giving, and the like. As List points out, these kinds of economic experiments have been going on for decades, with government agencies among their earliest practitioners…(More)”.
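
To illustrate the core logic List describes, here is a toy simulation, not drawn from his article: conditions are assigned at random, and the difference in average outcomes across conditions estimates the effect of the intervention. All numbers (the sample size, the true effect of 0.5) are invented.

```python
# Toy randomized experiment (invented numbers, not from List's article):
# randomly assign subjects to a condition, then compare average outcomes.
import random

random.seed(42)
n = 1000
true_effect = 0.5  # assumed effect of the intervention on the outcome

treated = [random.random() < 0.5 for _ in range(n)]  # random assignment
outcomes = [random.gauss(1.0, 1.0) + (true_effect if t else 0.0) for t in treated]

n_treated = sum(treated)
mean_treated = sum(y for y, t in zip(outcomes, treated) if t) / n_treated
mean_control = sum(y for y, t in zip(outcomes, treated) if not t) / (n - n_treated)

# Because assignment was random, the groups differ only by chance and by the
# intervention, so the difference in means estimates the causal effect.
print(f"estimated effect: {mean_treated - mean_control:.2f}")  # close to 0.5
```

The same difference-in-means logic underlies Pasteur’s vaccinated versus unvaccinated sheep and the field experiments List surveys; what changes is only the setting in which the conditions are assigned.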

Leveraging AI for Democracy: Civic Innovation on the New Digital Playing Field


Report by Beth Kerley, Carl Miller, and Fernanda Campagnucci: “Like social media before them, new AI tools promise to change the game when it comes to civic engagement. These technologies offer bold new possibilities for investigative journalists, anticorruption advocates, and others working with limited resources to advance democratic norms.

Yet the transformation wrought by AI advances is far from guaranteed to work in democracy’s favor. Potential threats to democracy from AI have drawn wide attention. To better the odds for prodemocratic actors in a fluid technological environment, systematic thinking about how to make AI work for democracy is needed.

The essays in this report outline possible paths toward a prodemocratic vision for AI. An overview essay by Beth Kerley, based on insights from an International Forum for Democratic Studies expert workshop, reflects on the critical questions that confront organizations seeking to deploy AI tools. Fernanda Campagnucci, spotlighting the work of Open Knowledge Brasil to open up government data, explores how AI advances are creating new opportunities for citizens to scrutinize public information. Finally, Demos’s Carl Miller sheds light on how AI technologies that enable new forms of civic deliberation might change the way we think about democratic participation itself…(More)”.