EBP+: Integrating science into policy evaluation using Evidential Pluralism


Article by Joe Jones, Alexandra Trofimov, Michael Wilde & Jon Williamson: “…While the need to integrate scientific evidence in policymaking is clear, there isn’t a universally accepted framework for doing so in practice. Orthodox evidence-based approaches take Randomised Controlled Trials (RCTs) as the gold standard of evidence. Others argue that social policy issues require theory-based methods to understand the complexities of policy interventions. These divisions may only further decrease trust in science at this critical time.

EBP+ offers a broader framework within which both orthodox and theory-based methods can sit. EBP+ also provides a systematic account of how to integrate and evaluate these different types of evidence. EBP+ can offer consistency and objectivity in policy evaluation, and could yield a unified approach that increases public trust in scientifically-informed policy…

EBP+ is motivated by Evidential Pluralism, a philosophical theory of causal enquiry that has been developed over the last 15 years. Evidential Pluralism encompasses two key claims. The first, object pluralism, says that establishing that A is a cause of B (e.g., that a policy intervention causes a specific outcome) requires establishing both that A and B are appropriately correlated and that there is some mechanism which links the two and which can account for the extent of the correlation. The second claim, study pluralism, maintains that assessing whether A is a cause of B requires assessing both association studies (studies that repeatedly measure A and B, together with potential confounders, to measure their association) and mechanistic studies (studies of features of the mechanisms linking A to B), where available…(More)”.

[Figure: A diagrammatic representation of Evidential Pluralism (© Jon Williamson)]
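
To make the two claims concrete, here is a minimal sketch of object pluralism's conjunctive requirement in Python. The theory itself prescribes no formalization, so the class and field names below are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class CausalClaim:
    """Toy model of establishing 'A is a cause of B' under Evidential Pluralism."""
    correlation_established: bool  # supported by association studies
    mechanism_established: bool    # supported by mechanistic studies

    def established(self) -> bool:
        # Object pluralism: causation is established only when BOTH the
        # correlation AND a linking mechanism are established.
        return self.correlation_established and self.mechanism_established

# A correlation alone does not establish causation:
print(CausalClaim(correlation_established=True,
                  mechanism_established=False).established())  # False
```

Study pluralism is the methodological mirror of this: assessing the claim means weighing both association studies and mechanistic studies, where they are available.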

The Power of Noticing What Was Always There


Book by Tali Sharot and Cass R. Sunstein: “Have you ever noticed that what is thrilling on Monday tends to become boring on Friday? Even exciting relationships, stimulating jobs, and breathtaking works of art lose their sparkle after a while. People stop noticing what is most wonderful in their own lives. They also stop noticing what is terrible. They get used to dirty air. They stay in abusive relationships. People grow to accept authoritarianism and take foolish risks. They become unconcerned by their own misconduct, blind to inequality, and more liable to believe misinformation than ever before.

But what if we could find a way to see everything anew? What if you could regain sensitivity, not only to the great things in your life, but also to the terrible things you stopped noticing and so don’t try to change?

Now, neuroscience professor Tali Sharot and Harvard law professor (and presidential advisor) Cass R. Sunstein investigate why we stop noticing both the great and not-so-great things around us and how to “dishabituate” at the office, in the bedroom, at the store, on social media, and in the voting booth. This groundbreaking work, based on decades of research in the psychological and biological sciences, illuminates how we can reignite the sparks of joy, innovate, and recognize where improvements urgently need to be made. The key to this disruption—to seeing, feeling, and noticing again—is change. By temporarily changing your environment, changing the rules, changing the people you interact with—or even just stepping back and imagining change—you regain sensitivity, allowing you to more clearly identify the bad and more deeply appreciate the good…(More)”.

Designing Digital Voting Systems for Citizens


Paper by Joshua C. Yang et al: “Participatory Budgeting (PB) has evolved into a key democratic instrument for resource allocation in cities. Enabled by digital platforms, cities now have the opportunity to let citizens directly propose and vote on urban projects, using different voting input and aggregation rules. However, the choices cities make in terms of the rules of their PB have often not been informed by academic studies on voter behaviour and preferences. Therefore, this work presents the results of behavioural experiments where participants were asked to vote in a fictional PB setting. We identified approaches to designing PB voting that minimise cognitive load and enhance the perceived fairness and legitimacy of the digital process from the citizens’ perspective. In our study, participants preferred voting input formats that are more expressive (like rankings and distributing points) over simpler formats (like approval voting). Participants also indicated a desire for the budget to be fairly distributed across city districts and project categories. Participants found the Method of Equal Shares voting rule to be fairer than the conventional Greedy voting rule. These findings offer actionable insights for digital governance, contributing to the development of fairer and more transparent digital systems and collective decision-making processes for citizens…(More)”.
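
To make the comparison concrete, here is a minimal sketch of the two aggregation rules in their approval-based forms. It is a simplification under assumed data shapes (dicts of costs and approval sets), not the authors' implementation:

```python
def greedy(projects, approvals, total_budget):
    """Greedy rule: fund projects in order of approval count while budget lasts.
    projects: {name: cost}; approvals: {voter: set of approved project names}."""
    score = {p: sum(p in a for a in approvals.values()) for p in projects}
    funded, spent = [], 0.0
    for p in sorted(projects, key=score.get, reverse=True):
        if spent + projects[p] <= total_budget:
            funded.append(p)
            spent += projects[p]
    return funded

def _rho(cost, budgets):
    """Smallest equal payment rho with sum(min(b, rho)) == cost, else None."""
    if sum(budgets) < cost:
        return None  # supporters cannot jointly afford the project
    paid = 0.0
    budgets = sorted(budgets)
    for i, b in enumerate(budgets):
        rho = (cost - paid) / (len(budgets) - i)
        if b >= rho:
            return rho  # everyone from here on can pay the equal share
        paid += b       # poorer supporters pay all they have left
    return None  # unreachable given the affordability check above

def equal_shares(projects, approvals, total_budget):
    """Method of Equal Shares: each voter controls an equal slice of the budget;
    repeatedly fund the affordable project with the smallest equal payment."""
    budget = {v: total_budget / len(approvals) for v in approvals}
    funded = []
    while True:
        best, best_rho = None, float("inf")
        for p, cost in projects.items():
            if p in funded:
                continue
            supporters = [v for v, a in approvals.items() if p in a]
            rho = _rho(cost, [budget[v] for v in supporters]) if supporters else None
            if rho is not None and rho < best_rho:
                best, best_rho = p, rho
        if best is None:
            return funded
        for v, a in approvals.items():
            if best in a:
                budget[v] -= min(budget[v], best_rho)
        funded.append(best)
```

The intuition behind the fairness finding is visible in the code: under Greedy a single broadly popular project can consume the whole budget, while under Equal Shares each voter only ever spends their own slice, so winning projects need support that is both wide and cheap per supporter.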

AI Accountability Policy Report


Report by NTIA: “Artificial intelligence (AI) systems are rapidly becoming part of the fabric of everyday American life. From customer service to image generation to manufacturing, AI systems are everywhere.

Alongside their transformative potential for good, AI systems also pose risks of harm. These risks include inaccurate or false outputs; unlawful discriminatory algorithmic decision making; destruction of jobs and the dignity of work; and compromised privacy, safety, and security. Given their influence and ubiquity, these systems must be subject to security and operational mechanisms that mitigate risk and warrant stakeholder trust that they will not cause harm….


The AI Accountability Policy Report conceives of accountability as a chain of inputs linked to consequences. It focuses on how information flow (documentation, disclosures, and access) supports independent evaluations (including red-teaming and audits), which in turn feed into consequences (including liability and regulation) to create accountability. It concludes with recommendations for federal government action, some of which elaborate on themes in the AI EO, to encourage and possibly require accountability inputs…(More)”.

[Graphic: the AI Accountability Chain model]

A.I.-Generated Garbage Is Polluting Our Culture


Article by Erik Hoel: “Increasingly, mounds of synthetic A.I.-generated outputs drift across our feeds and our searches. The stakes go far beyond what’s on our screens. The entire culture is becoming affected by A.I.’s runoff, an insidious creep into our most important institutions.

Consider science. Right after the blockbuster release of GPT-4, the latest artificial intelligence model from OpenAI and one of the most advanced in existence, the language of scientific research began to mutate. Especially within the field of A.I. itself.

A study published this month examined scientists’ peer reviews — researchers’ official pronouncements on others’ work that form the bedrock of scientific progress — across a number of high-profile and prestigious scientific conferences studying A.I. At one such conference, those peer reviews used the word “meticulous” more than 34 times as often as reviews did the previous year. Use of “commendable” was around 10 times as frequent, and “intricate,” 11 times. Other major conferences showed similar patterns.

Such phrasings are, of course, some of the favorite buzzwords of modern large language models like ChatGPT. In other words, significant numbers of researchers at A.I. conferences were caught handing their peer reviews of others’ work over to A.I. — or, at minimum, writing them with lots of A.I. assistance. And the closer the submitted reviews were to the deadline, the more A.I. usage was found in them.
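
The underlying measurement is simple to picture: compare how often a marker word occurs per million tokens in one year's reviews versus the previous year's. A rough sketch, with hypothetical corpus variables and none of the study's actual controls:

```python
import re
from collections import Counter

def rate_per_million(reviews, word):
    """Occurrences of `word` per million word tokens across a list of texts."""
    tokens = [t for text in reviews for t in re.findall(r"[a-z']+", text.lower())]
    return 1_000_000 * Counter(tokens)[word] / max(len(tokens), 1)

# reviews_prev, reviews_curr: hypothetical lists of peer-review texts from
# consecutive years of the same conference.
# ratio = rate_per_million(reviews_curr, "meticulous") / \
#         rate_per_million(reviews_prev, "meticulous")
# A ratio around 34 would correspond to the jump reported above.
```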

If this makes you uncomfortable — especially given A.I.’s current unreliability — or if you think that maybe it shouldn’t be A.I.s reviewing science but the scientists themselves, those feelings highlight the paradox at the core of this technology: It’s unclear what the ethical line is between scam and regular usage. Some A.I.-generated scams are easy to identify, like the medical journal paper featuring a cartoon rat sporting enormous genitalia. Many others are more insidious, like the mislabeled and hallucinated regulatory pathway described in that same paper — a paper that was peer reviewed as well (perhaps, one might speculate, by another A.I.?)…(More)”.

How Public Polling Has Changed in the 21st Century


Report by Pew Research Center: “The 2016 and 2020 presidential elections left many Americans wondering whether polling was broken and what, if anything, pollsters might do about it. A new Pew Research Center study finds that most national pollsters have changed their approach since 2016, and in some cases dramatically. Most (61%) of the pollsters who conducted and publicly released national surveys in both 2016 and 2022 used methods in 2022 that differed from what they used in 2016. The study also finds the use of multiple methods increasing. Last year 17% of national pollsters used at least three different methods to sample or interview people (sometimes in the same survey), up from 2% in 2016….(More)”.

The Non-Coherence Theory of Digital Human Rights


Book by Mart Susi: “…offers a novel non-coherence theory of digital human rights to explain the change in meaning and scope of human rights rules, principles, ideas and concepts, and the interrelationships and related actors, when moving from the physical domain into the online domain. The transposition into the digital reality can alter the meaning of well-established offline human rights to a wider or narrower extent, impacting core concepts such as transparency, legal certainty and foreseeability. Susi analyses the ‘loss in transposition’ of some core features of the rights to privacy and freedom of expression. The non-coherence theory is used to explore key human rights theoretical concepts, such as the network society approach, the capabilities approach, transversality, and self-normativity, and it is also applied to e-state and artificial intelligence, challenging the idea of the sameness of rights…(More)”.

The Need for Climate Data Stewardship: 10 Tensions and Reflections regarding Climate Data Governance


Paper by Stefaan Verhulst: “Datafication — the increase in data generation and advancements in data analysis — offers new possibilities for governing and tackling worldwide challenges such as climate change. However, employing new data sources in policymaking carries various risks, such as exacerbating inequalities, introducing biases, and creating gaps in access. This paper articulates ten core tensions related to climate data and its implications for climate data governance, ranging from the diversity of data sources and stakeholders to issues of quality, access, and the balancing act between local needs and global imperatives. Through examining these tensions, the article advocates for a paradigm shift towards multi-stakeholder governance, data stewardship, and equitable data practices to harness the potential of climate data for public good. It underscores the critical role of data stewards in navigating these challenges, fostering a responsible data ecology, and ultimately contributing to a more sustainable and just approach to climate action and broader social issues…(More)”.

Meta Kills a Crucial Transparency Tool At the Worst Possible Time


Interview by Vittoria Elliott: “Earlier this month, Meta announced that it would be shutting down CrowdTangle, the social media monitoring and transparency tool that has allowed journalists and researchers to track the spread of mis- and disinformation. It will cease to function on August 14, 2024—just months before the US presidential election.

Meta’s move is just the latest example of a tech company rolling back transparency and security measures as the world enters the biggest global election year in history. The company says it is replacing CrowdTangle with a new Content Library API, which will require researchers and nonprofits to apply for access to the company’s data. But the Mozilla Foundation and 140 other civil society organizations protested last week that the new offering lacks much of CrowdTangle’s functionality, asking the company to keep the original tool operating until January 2025.

Meta spokesperson Andy Stone countered in posts on X that the groups’ claims “are just wrong,” saying the new Content Library will contain “more comprehensive data than CrowdTangle” and be made available to nonprofits, academics, and election integrity experts. When asked why commercial newsrooms, like WIRED, are to be excluded from the Content Library, Meta spokesperson Eric Porterfield said that it was “built for research purposes.” While journalists might not have direct access, he suggested they could use commercial social network analysis tools, or “partner with an academic institution to help answer a research question related to our platforms.”

Brandon Silverman, cofounder and former CEO of CrowdTangle, who continued to work on the tool after Facebook acquired it in 2016, says it’s time to force platforms to open up their data to outsiders. The conversation has been edited for length and clarity…(More)”.

This Chatbot Democratizes Data to Empower India’s Farmers


Article by Abha Malpani Naismith: “…The lack of access to market price information and reliance on intermediaries to sell on their behalf leave farmers vulnerable to price exploitation and uncertain returns on their investments.

To solve this, Gramhal is building a data cooperative in India where farmers contribute their information to a data ecosystem, which all farmers can leverage for better-informed decision-making…

The social enterprise started the project to democratize data first by using the Indian government’s collected data sets from markets and crops across the country. It then built a chatbot (called Bolbhav) and plugged in that data. Soon about 300,000 farmers were accessing this data set via the chatbot on their mobile phones. 

“We spent no money on marketing — this was all just from word of mouth!” Kaleem said. 

[Image: Gramhal’s Bolbhav chatbot provides farmers with market data so they know how to fairly price their crops.]

However, Gramhal started getting feedback from farmers that the chatbot was giving them prices three days old and what they wanted was real-time, reliable data. “That is when we realized that we need to work with the power of community and think about a societal network framework where every farmer who is selling can contribute to the data and have access to it,” Kaleem explained. “We needed to find a way where the farmer can send price information about what they are selling by uploading their receipts, and we can aggregate that data across markets and share it with them.”
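
The aggregation Kaleem describes can be sketched as a small fold over crowdsourced receipts. The schema and field names below are hypothetical illustrations, not Gramhal's actual data model:

```python
from collections import defaultdict
from datetime import date
from statistics import median

def aggregate_prices(reports, day):
    """reports: parsed receipts, e.g.
    {"market": "Indore", "crop": "soybean", "price_per_quintal": 4600,
     "day": date(2023, 4, 5)}
    Returns the median price per (market, crop) for the given day; a median
    resists outliers and the occasional misread receipt better than a mean."""
    buckets = defaultdict(list)
    for r in reports:
        if r["day"] == day:
            buckets[(r["market"], r["crop"])].append(r["price_per_quintal"])
    return {key: median(prices) for key, prices in buckets.items()}

# today_prices = aggregate_prices(parsed_receipts, date.today())
```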

The solution was an upgraded version of the chatbot called Bolbhav Plus, which Gramhal launched in April 2023…(More)”