EBP+: Integrating science into policy evaluation using Evidential Pluralism


Article by Joe Jones, Alexandra Trofimov, Michael Wilde & Jon Williamson: “…While the need to integrate scientific evidence in policymaking is clear, there isn’t a universally accepted framework for doing so in practice. Orthodox evidence-based approaches take Randomised Controlled Trials (RCTs) as the gold standard of evidence. Others argue that social policy issues require theory-based methods to understand the complexities of policy interventions. These divisions may only further decrease trust in science at this critical time.

EBP+ offers a broader framework within which both orthodox and theory-based methods can sit. EBP+ also provides a systematic account of how to integrate and evaluate these different types of evidence. EBP+ can offer consistency and objectivity in policy evaluation, and could yield a unified approach that increases public trust in scientifically-informed policy…

EBP+ is motivated by Evidential Pluralism, a philosophical theory of causal enquiry that has been developed over the last 15 years. Evidential Pluralism encompasses two key claims. The first, object pluralism, says that establishing that A is a cause of B (e.g., that a policy intervention causes a specific outcome) requires establishing both that A and B are appropriately correlated and that there is some mechanism which links the two and which can account for the extent of the correlation. The second claim, study pluralism, maintains that assessing whether A is a cause of B requires assessing both association studies (studies that repeatedly measure A and B, together with potential confounders, to measure their association) and mechanistic studies (studies of features of the mechanisms linking A to B), where available…(More)”.

A diagrammatic representation of Evidential Pluralism (© Jon Williamson)

A.I.-Generated Garbage Is Polluting Our Culture


Article by Erik Hoel: “Increasingly, mounds of synthetic A.I.-generated outputs drift across our feeds and our searches. The stakes go far beyond what’s on our screens. The entire culture is becoming affected by A.I.’s runoff, an insidious creep into our most important institutions.

Consider science. Right after the blockbuster release of GPT-4, the latest artificial intelligence model from OpenAI and one of the most advanced in existence, the language of scientific research began to mutate. Especially within the field of A.I. itself.

A study published this month examined scientists’ peer reviews — researchers’ official pronouncements on others’ work that form the bedrock of scientific progress — across a number of high-profile and prestigious scientific conferences studying A.I. At one such conference, those peer reviews used the word “meticulous” more than 34 times as often as reviews did the previous year. Use of “commendable” was around 10 times as frequent, and “intricate,” 11 times. Other major conferences showed similar patterns.
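
A minimal sketch of the kind of year-over-year word-frequency comparison the study describes (the toy corpora, hand-picked word list, and rate-per-million metric are assumptions for illustration, not the study's actual data or code):

```python
from collections import Counter
import re

# Toy stand-ins for two years of peer-review text; the study used real
# reviews from consecutive editions of A.I. conferences.
reviews_previous_year = "The paper is clear. The analysis is intricate and the experiments are adequate."
reviews_current_year = ("This meticulous, commendable work offers an intricate and "
                        "meticulous analysis of an intricate problem.")

def rate_per_million(text: str, word: str) -> float:
    """Occurrences of `word` per million tokens in `text`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return 1e6 * Counter(tokens)[word] / max(len(tokens), 1)

for w in ["meticulous", "commendable", "intricate"]:
    before = rate_per_million(reviews_previous_year, w)
    after = rate_per_million(reviews_current_year, w)
    ratio = after / before if before else float("inf")
    print(f"{w}: {before:.0f} -> {after:.0f} per million tokens (ratio {ratio:.1f})")
```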

Such phrasings are, of course, some of the favorite buzzwords of modern large language models like ChatGPT. In other words, significant numbers of researchers at A.I. conferences were caught handing their peer reviews of others’ work over to A.I. — or, at minimum, writing them with lots of A.I. assistance. And the closer to the deadline the submitted reviews were received, the more A.I. usage was found in them.

If this makes you uncomfortable — especially given A.I.’s current unreliability — or if you think that maybe it shouldn’t be A.I.s reviewing science but the scientists themselves, those feelings highlight the paradox at the core of this technology: It’s unclear what the ethical line is between scam and regular usage. Some A.I.-generated scams are easy to identify, like the medical journal paper featuring a cartoon rat sporting enormous genitalia. Many others are more insidious, like the mislabeled and hallucinated regulatory pathway described in that same paper — a paper that was peer reviewed as well (perhaps, one might speculate, by another A.I.?)…(More)”.

Meta Kills a Crucial Transparency Tool At the Worst Possible Time


Interview by Vittoria Elliott: “Earlier this month, Meta announced that it would be shutting down CrowdTangle, the social media monitoring and transparency tool that has allowed journalists and researchers to track the spread of mis- and disinformation. It will cease to function on August 14, 2024—just months before the US presidential election.

Meta’s move is just the latest example of a tech company rolling back transparency and security measures as the world enters the biggest global election year in history. The company says it is replacing CrowdTangle with a new Content Library API, which will require researchers and nonprofits to apply for access to the company’s data. But the Mozilla Foundation and 140 other civil society organizations protested last week that the new offering lacks much of CrowdTangle’s functionality, asking the company to keep the original tool operating until January 2025.

Meta spokesperson Andy Stone countered in posts on X that the groups’ claims “are just wrong,” saying the new Content Library will contain “more comprehensive data than CrowdTangle” and be made available to nonprofits, academics, and election integrity experts. When asked why commercial newsrooms, like WIRED, are to be excluded from the Content Library, Meta spokesperson Eric Porterfield said that it was “built for research purposes.” While journalists might not have direct access, he suggested they could use commercial social network analysis tools, or “partner with an academic institution to help answer a research question related to our platforms.”

Brandon Silverman, cofounder and former CEO of CrowdTangle, who continued to work on the tool after Facebook acquired it in 2016, says it’s time to force platforms to open up their data to outsiders. The conversation has been edited for length and clarity…(More)”.

AI Is Building Highly Effective Antibodies That Humans Can’t Even Imagine


Article by Amit Katwala: “Robots, computers, and algorithms are hunting for potential new therapies in ways humans can’t—by processing huge volumes of data and building previously unimagined molecules. At an old biscuit factory in South London, giant mixers and industrial ovens have been replaced by robotic arms, incubators, and DNA sequencing machines.

James Field and his company LabGenius aren’t making sweet treats; they’re cooking up a revolutionary, AI-powered approach to engineering new medical antibodies. In nature, antibodies are the body’s response to disease and serve as the immune system’s front-line troops. They’re strands of protein that are specially shaped to stick to foreign invaders so that they can be flushed from the system. Since the 1980s, pharmaceutical companies have been making synthetic antibodies to treat diseases like cancer, and to reduce the chance of transplanted organs being rejected. But designing these antibodies is a slow process for humans—protein designers must wade through the millions of potential combinations of amino acids to find the ones that will fold together in exactly the right way, and then test them all experimentally, tweaking some variables to improve some characteristics of the treatment while hoping that doesn’t make it worse in other ways. “If you want to create a new therapeutic antibody, somewhere in this infinite space of potential molecules sits the molecule you want to find,” says Field, the founder and CEO of LabGenius…(More)”.
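
To make the search problem concrete, here is a toy illustration of an iterative propose-test-select loop over amino acid sequences (random single-residue mutation and a made-up scoring function; the excerpt does not describe LabGenius's actual machine-learning method, so everything below is an assumption for illustration):

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def binding_score(sequence: str) -> float:
    """Toy stand-in for a wet-lab assay or learned model scoring a candidate."""
    target = "MKTAYIAKQR"  # arbitrary 10-residue motif, purely illustrative
    return sum(1.0 for a, b in zip(sequence, target) if a == b)

def mutate(sequence: str) -> str:
    """Propose a new candidate by changing one residue at random."""
    i = random.randrange(len(sequence))
    return sequence[:i] + random.choice(AMINO_ACIDS) + sequence[i + 1:]

best = "".join(random.choice(AMINO_ACIDS) for _ in range(10))
for _ in range(500):  # each round stands in for one design-test-learn cycle
    candidate = mutate(best)
    if binding_score(candidate) > binding_score(best):
        best = candidate

print(best, binding_score(best))
```

Even this toy loop shows why the space is unmanageable by hand: with 20 amino acids at each of just 10 positions there are 20^10 (roughly ten trillion) possible sequences, which is why the protein designers in the article rely on automation rather than exhaustive testing.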

Whatever Happened to All Those Care Robots?


Article by Stephanie H. Murray: “So far, companion robots haven’t lived up to the hype—and might even exacerbate the problems they’re meant to solve…There are likely many reasons that the long-predicted robot takeover of elder care has yet to take off. Robots are expensive, and cash-strapped care homes don’t have money lying around to purchase a robot, let alone to pay for the training needed to actually use one effectively. And at least so far, social robots just aren’t worth the investment, Wright told me. Pepper can’t do a lot of the things people claimed he could—and he relies heavily on humans to help him do what he can. Despite some research suggesting they can boost well-being among the elderly, there is little evidence that robots make life easier for human caregivers. In fact, they require quite a bit of care themselves. Perhaps robots of the future will revolutionize caregiving as hoped. But the care robots we have now don’t even come close, and might even exacerbate the problems they’re meant to solve…(More)”.

Why we’re fighting to make sure labor unions have a voice in how AI is implemented


Article by Liz Shuler and Mike Kubzansky: “Earlier this month, Google’s co-founder admitted that the company had “definitely messed up” after its AI tool, Gemini, produced historically inaccurate images—including depictions of racially diverse Nazis. Sergey Brin cited a lack of “thorough testing” of the AI tool, but the incident is a good reminder that, despite all the hype around generative AI replacing human output, the technology still has a long way to go. 

Of course, that hasn’t stopped companies from deploying AI in the workplace. Some even use the technology as an excuse to lay workers off. Since last May, at least 4,000 people have lost their jobs to AI, and 70% of workers across the country live with the fear that AI is coming for theirs next. And while the technology may still be in its infancy, it’s developing fast. Earlier this year, AI pioneer Mustafa Suleyman said that “left completely to the market and to their own devices, [AI tools are] fundamentally labor-replacing.” Without changes now, AI could be coming to replace a lot of people’s jobs.

It doesn’t have to be this way. AI has enormous potential to build prosperity and unleash human creativity, but only if it also works for working people. Ensuring that happens requires giving the voice of workers—the people who will engage with these technologies every day, and whose lives, health, and livelihoods are increasingly affected by AI and automation—a seat at the decision-making table. 

As president of the AFL-CIO, representing 12.5 million working people across 60 unions, and CEO of Omidyar Network, a social change philanthropy that supports responsible technology, we believe that the single best movement to give everyone a voice is the labor movement. Empowering workers—from warehouse associates to software engineers—is the most powerful tactic we have to ensure that AI develops in the interests of the many, not the few…(More)”.

Central banks use AI to assess climate-related risks


Article by Huw Jones: “Central bankers said on Tuesday they have broken new ground by using artificial intelligence to collect data for assessing climate-related financial risks, just as the volume of disclosures from banks and other companies is set to rise.

The Bank for International Settlements, a forum for central banks, the Bank of Spain, Germany’s Bundesbank and the European Central Bank said their experimental Gaia AI project was used to analyse company disclosures on carbon emissions, green bond issuance and voluntary net-zero commitments.

Regulators of banks, insurers and asset managers need high-quality data to assess the impact of climate-change on financial institutions. However, the absence of a single reporting standard confronts them with a patchwork of public information spread across text, tables and footnotes in annual reports.

Gaia was able to overcome differences in definitions and disclosure frameworks across jurisdictions to offer much-needed transparency, and make it easier to compare indicators on climate-related financial risks, the central banks said in a joint statement.

Despite variations in how the same data is reported by companies, Gaia focuses on the definition of each indicator, rather than how the data is labelled.

Furthermore, with the traditional approach, each additional key performance indicator, or KPI, and each new institution requires the analyst to either search for the information in public corporate reports or contact the institution for information…(More)”.
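
A minimal sketch of the definition-driven extraction described above, assuming an LLM is prompted with the indicator's definition rather than any particular label (the OpenAI Python client, the model name, and the indicator and report excerpt below are all placeholders; Gaia's actual architecture is not detailed in the article):

```python
from openai import OpenAI  # assumed client; Gaia's actual stack is not public

client = OpenAI()

# Hypothetical indicator definition and report excerpt, for illustration only.
indicator_definition = (
    "Direct greenhouse gas emissions from sources owned or controlled by the "
    "company (Scope 1), in tonnes of CO2 equivalent."
)
report_excerpt = (
    "In FY2023 our direct operational emissions amounted to 412,000 tCO2e, "
    "down 8% year on year."
)

prompt = (
    f"Indicator definition: {indicator_definition}\n\n"
    f"Report excerpt:\n{report_excerpt}\n\n"
    "If the excerpt reports this indicator, return JSON with keys 'value', "
    "'unit' and 'reported', regardless of the label the company uses; "
    "otherwise set 'reported' to false."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Keying the prompt to a definition means the same call works whether a company labels the figure "Scope 1 emissions", "direct emissions" or something else entirely, which is how the statement above describes Gaia sidestepping the patchwork of disclosure formats.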

Bring on the Policy Entrepreneurs


Article by Erica Goldman: “Teaching early-career researchers the skills to engage in the policy arena could prepare them for a lifetime of high-impact engagement and invite new perspectives into the democratic process.

In the first six months of the COVID-19 pandemic, the scientific literature worldwide was flooded with research articles, letters, reviews, notes, and editorials related to the virus. One study estimates that a staggering 23,634 unique documents were published between January 1 and June 30, 2020, alone.

Making sense of that emerging science was an urgent challenge. As governments all over the world scrambled to get up-to-date guidelines to hospitals and information to an anxious public, Australia stood apart in its readiness to engage scientists and decisionmakers collaboratively. The country used what was called a “living evidence” approach to synthesizing new information, making it available—and helpful—in real time.

Each week during the pandemic, the Australian National COVID‑19 Clinical Evidence Taskforce came together to evaluate changes in the scientific literature base. They then spoke with a single voice to the Australian clinical community so clinicians had rapid, evidence-based, and nationally agreed-upon guidelines to provide the clarity they needed to care for people with COVID-19.

This new model for consensus-aligned, evidence-based decisionmaking helped Australia navigate the pandemic and build trust in the scientific enterprise, but it did not emerge overnight. It took years of iteration and effort to get the living evidence model ready to meet the moment; the crisis of the pandemic opened a policy window that living evidence was poised to surge through. Australia’s example led the World Health Organization and the United Kingdom’s National Institute for Health and Care Excellence to move toward making living evidence models a pillar of decisionmaking for all their health care guidelines. On its own, this is an incredible story, but it also reveals a tremendous amount about how policies get changed…(More)”.

Meta to shut off data access to journalists


Article by Sara Fischer: “Meta plans to officially shutter CrowdTangle, the analytics tool widely used by journalists and researchers to see what’s going viral on Facebook and Instagram, the company’s president of global affairs Nick Clegg told Axios in an interview.

Why it matters: The company plans to instead offer select researchers access to a set of new data tools, but news publishers, journalists or anyone with commercial interests will not be granted access to that data.

The big picture: The effort comes amid a broader pivot from Meta away from news and politics and more toward user-generated viral videos.

  • Meta acquired CrowdTangle in 2016 at a time when publishers were heavily reliant on the tech giant for traffic.
  • In recent years, it’s stopped investing in the tool, making it less reliable.

The new research tools include Meta’s Content Library, which it launched last year, and an API, or backend interface used by developers.

  • Both tools offer researchers access to huge swaths of data from publicly accessible content across Facebook and Instagram.
  • The tools are available in 180 languages and offer global data.
  • Researchers must apply for access to those tools through the Inter-university Consortium for Political and Social Research at the University of Michigan, which will vet their requests…(More)”

How artificial intelligence can facilitate investigative journalism


Article by Luiz Fernando Toledo: “A few years ago, I worked on a project for a large Brazilian television channel whose objective was to analyze the profiles of more than 250 guardianship counselors in the city of São Paulo. These elected professionals have the mission of protecting the rights of children and adolescents in Brazil.

Critics had pointed out that some counselors did not have any expertise or prior experience working with young people and were only elected with the support of religious communities. The investigation sought to verify whether these elected counselors had professional training in working with children and adolescents or had any relationships with churches.

After requesting the counselors’ resumes through Brazil’s access to information law, a small team combed through each resume in depth—a laborious and time-consuming task. But today, this project might have required far less time and labor. Rapid developments in generative AI hold potential to significantly scale access and analysis of data needed for investigative journalism.

Many articles address the potential risks of generative AI for journalism and democracy, such as threats AI poses to the business model for journalism and its ability to facilitate the creation and spread of mis- and disinformation. No doubt there is cause for concern. But technology will continue to evolve, and it is up to journalists and researchers to understand how to use it in favor of the public interest.

I wanted to test how generative AI can help journalists, especially those who work with public documents and data. I tested several tools, including Ask Your PDF (ask questions of any document on your computer), Chatbase (create your own chatbot), and Document Cloud (upload documents and ask GPT-like questions to hundreds of documents simultaneously).

These tools make use of the same mechanism that operates OpenAI’s famous ChatGPT—large language models that create human-like text. But they analyze the user’s own documents rather than information on the internet, ensuring more accurate answers by using specific, user-provided sources…(More)”.
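
A minimal sketch of the document-grounded question answering these tools perform, assuming naive keyword retrieval over user-supplied text plus an LLM call (the OpenAI client, the model name, the file names, and the resume snippets are all placeholders; none of the named tools publish their internals):

```python
import re
from openai import OpenAI  # assumed client; the tools above use their own backends

client = OpenAI()

# Hypothetical resumes obtained via an access-to-information request,
# already converted from PDF to plain text (extraction step not shown).
documents = {
    "counselor_01.txt": "Degree in theology; ten years coordinating parish youth groups...",
    "counselor_02.txt": "Social worker since 2010; child-protection casework experience...",
}

question = "Does this resume mention professional training in child and adolescent care?"

def top_chunk(text: str, query: str, size: int = 400) -> str:
    """Naive retrieval: return the chunk sharing the most words with the query."""
    chunks = [text[i:i + size] for i in range(0, len(text), size)]
    query_words = set(re.findall(r"\w+", query.lower()))
    return max(chunks, key=lambda c: len(query_words & set(re.findall(r"\w+", c.lower()))))

for name, text in documents.items():
    context = top_chunk(text, question)
    answer = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Answer using only this excerpt from {name}:\n{context}\n\nQuestion: {question}",
        }],
    ).choices[0].message.content
    print(name, "->", answer)
```

Grounding each prompt in a retrieved excerpt, rather than letting the model answer from whatever it absorbed during training, is what makes the answers auditable against the user's own sources.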