Hypotheses devised by AI could find ‘blind spots’ in research


Article by Matthew Hutson: “One approach is to use AI to help scientists brainstorm. This is a task that large language models — AI systems trained on large amounts of text to produce new text — are well suited for, says Yolanda Gil, a computer scientist at the University of Southern California in Los Angeles who has worked on AI scientists. Language models can produce inaccurate information and present it as real, but this ‘hallucination’ isn’t necessarily bad, says Sendhil Mullainathan, an economist at the University of Chicago. It signifies, he says, “‘here’s a kind of thing that looks true’. That’s exactly what a hypothesis is.”

Blind spots are where AI might prove most useful. James Evans, a sociologist at the University of Chicago, has pushed AI to make ‘alien’ hypotheses — those that a human would be unlikely to make. In a paper published earlier this year in Nature Human Behaviour, he and his colleague Jamshid Sourati built knowledge graphs containing not just materials and properties, but also researchers. Evans and Sourati’s algorithm traversed these networks, looking for hidden shortcuts between materials and properties. The aim was to maximize the plausibility of AI-devised hypotheses being true while minimizing the chances that researchers would hit on them naturally. For instance, if scientists who are studying a particular drug are only distantly connected to those studying a disease that it might cure, then the drug’s potential would ordinarily take much longer to discover.
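The search Evans and Sourati describe can be caricatured in a few lines. The sketch below is a hypothetical toy, not their actual algorithm or data: it builds a small undirected graph of drugs, diseases, and researchers, then contrasts the short path in the content graph (scientific plausibility) with the longer path between the research communities (low chance of human discovery).

```python
from collections import deque

def bfs_distance(graph, start, goal):
    """Shortest hop count between two nodes; None if unreachable."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

# Invented toy knowledge graph: drugs, diseases, and the researchers who study them.
edges = [
    ("drug_A", "protein_X"), ("protein_X", "disease_1"),   # hidden scientific shortcut
    ("researcher_1", "drug_A"), ("researcher_2", "disease_1"),
    ("researcher_1", "researcher_3"), ("researcher_3", "researcher_4"),
    ("researcher_4", "researcher_2"),                      # socially distant communities
]
graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

# Scientific plausibility: drug and disease are close in the content graph.
content_dist = bfs_distance(graph, "drug_A", "disease_1")
# Human discoverability: the researchers studying each are farther apart.
social_dist = bfs_distance(graph, "researcher_1", "researcher_2")
print(content_dist, social_dist)  # prints: 2 3
```

A real system would rank candidate drug-disease pairs by maximizing the first distance's inverse while maximizing the second, which is the trade-off the paragraph above describes.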

When Evans and Sourati fed data published up to 2001 to their AI, they found that about 30% of its predictions about drug repurposing and the electrical properties of materials had been uncovered by researchers, roughly six to ten years later. The system can be tuned to make predictions that are more likely to be correct but also less of a leap, on the basis of concurrent findings and collaborations, Evans says. But “if we’re predicting what people are going to do next year, that just feels like a scoop machine”, he adds. He’s more interested in how the technology can take science in entirely new directions…(More)”

Science and the State 


Introduction to Special Issue by Alondra Nelson et al: “…Current events have thrown these debates into high relief. Pressing issues from the pandemic to anthropogenic climate change, and the new and old inequalities they exacerbate, have intensified calls to critique but also imagine otherwise the relationship between scientific and state authority. Many of the subjects and communities whose well-being these authorities claim to promote have resisted, doubted, and mistrusted technoscientific experts and government officials. How might our understanding of the relationship change if the perspectives and needs of those most at risk from state and/or scientific violence or neglect were to be centered? Likewise, the pandemic and climate change have reminded scientists and state officials that relations among states matter at home and in the world systems that support supply chains, fuel technology, and undergird capitalism and migration. How does our understanding of the relationship between science and the state change if we eschew the nationalist framing of the classic Mertonian formulation and instead account for states in different parts of the world, as well as trans-state relationships?
This special issue began as a yearlong seminar on Science and the State convened by Alondra Nelson and Charis Thompson at the Institute for Advanced Study in Princeton, New Jersey. During the 2020–21 academic year, seventeen scholars from four continents met on a biweekly basis to read, discuss, and interrogate historical and contemporary scholarship on the origins, transformations, and sociopolitical consequences of different configurations of science, technology, and governance. Our group consisted of scholars from different disciplines, including sociology, anthropology, philosophy, economics, history, political science, and geography. Examining technoscientific expertise and political authority while living through the conditions of the pandemic lent a heightened sense of the stakes involved and forced us to rethink easy critiques of scientific knowledge and state power. Our affective and lived experiences of the pandemic posed questions about what good science and good statecraft could be. How do we move beyond a presumption of isomorphism between “good” states and “good” science to understand and study the uneven experiences and sometimes exploitative practices of different configurations of science and the state?…(More)”.

Overcoming the Challenges of Using Automated Technologies for Public Health Evidence Synthesis


Article by Lucy Hocking et al: “Many organisations struggle to keep pace with public health evidence due to the volume of published literature and length of time it takes to conduct literature reviews. New technologies that help automate parts of the evidence synthesis process can help conduct reviews more quickly and efficiently to better provide up-to-date evidence for public health decision making. To date, automated approaches have seldom been used in public health due to significant barriers to their adoption. In this Perspective, we reflect on the findings of a study exploring experiences of adopting automated technologies to conduct evidence reviews within the public health sector. The study, funded by the European Centre for Disease Prevention and Control, consisted of a literature review and qualitative data collection from public health organisations and researchers in the field. We specifically focus on outlining the challenges associated with the adoption of automated approaches and potential solutions and actions that can be taken to mitigate these. We explore these in relation to actions that can be taken by tool developers (e.g. improving tool performance and transparency), public health organisations (e.g. developing staff skills, encouraging collaboration) and funding bodies/the wider research system (e.g. researchers, funding bodies, academic publishers and scholarly journals)…(More)”

Matchmaking Research To Policy: Introducing Britain’s Areas Of Research Interest Database


Article by Kathryn Oliver: “Areas of research interest (ARIs) were originally recommended in the 2015 Nurse Review, which argued that if government stated what it needed to know more clearly and more regularly, then it would be easier for policy-relevant research to be produced.

During our time in government, Annette Boaz and I worked to develop these areas of research interest, mobilize experts, and produce evidence syntheses and other outputs addressing them, largely in response to the COVID-19 pandemic. As readers of this blog will know, we have learned a lot about what it takes to mobilize evidence – the hard and often hidden labor of creating and sustaining relationships, being part of transient teams, managing group dynamics, and honing listening and diplomatic skills.

Some of the challenges we encountered, such as the oft-cited cultural gap between research and policy, questions about the relevance of evidence, and the difficulty of resourcing knowledge mobilization and evidence synthesis, require systemic responses. One challenge, however, offered a simpler solution: the information gap, noted by Nurse, between researchers and what government departments actually want to know.

Up until September 2023, departmental ARIs were published on gov.uk in PDF or HTML format. Although this was a good start, we felt that having all the ARIs in one searchable database would make them more interactive and accessible. So, working with Overton, we developed the new ARI database. The primary benefits of the database will be to raise awareness of ARIs (through email alerts about new ARIs) and to improve accessibility (by holding all ARIs in one easily searchable place)…(More)”.

Does the sun rise for ChatGPT? Scientific discovery in the age of generative AI


Paper by David Leslie: “In the current hype-laden climate surrounding the rapid proliferation of foundation models and generative AI systems like ChatGPT, it is becoming increasingly important for societal stakeholders to reach sound understandings of their limitations and potential transformative effects. This is especially true in the natural and applied sciences, where magical thinking among some scientists about the take-off of “artificial general intelligence” has arisen simultaneously as the growing use of these technologies is putting longstanding norms, policies, and standards of good research practice under pressure. In this analysis, I argue that a deflationary understanding of foundation models and generative AI systems can help us sense check our expectations of what role they can play in processes of scientific exploration, sense-making, and discovery. I claim that a more sober, tool-based understanding of generative AI systems as computational instruments embedded in warm-blooded research processes can serve several salutary functions. It can play a crucial bubble-bursting role that mitigates some of the most serious threats to the ethos of modern science posed by an unreflective overreliance on these technologies. It can also strengthen the epistemic and normative footing of contemporary science by helping researchers circumscribe the part to be played by machine-led prediction in communicative contexts of scientific discovery while concurrently prodding them to recognise that such contexts are principal sites for human empowerment, democratic agency, and creativity. Finally, it can help spur ever richer approaches to collaborative experimental design, theory-construction, and scientific world-making by encouraging researchers to deploy these kinds of computational tools to heuristically probe unbounded search spaces and patterns in high-dimensional biophysical data that would otherwise be inaccessible to human-scale examination and inference…(More)”.

Open-access reformers launch next bold publishing plan


Article by Layal Liverpool: “The group behind the radical open-access initiative Plan S has announced its next big plan to shake up research publishing — and this one could be bolder than the first. It wants all versions of an article and its associated peer-review reports to be published openly from the outset, without authors paying any fees, and for authors, rather than publishers, to decide when and where to first publish their work.

The group of influential funding agencies, called cOAlition S, has over the past five years already caused upheaval in the scholarly publishing world by pressuring more journals to allow immediate open-access publishing. Its new proposal, prepared by a working group of publishing specialists and released on 31 October, puts forward an even broader transformation in the dissemination of research.

It outlines a future “community-based” and “scholar-led” open-research communication system (see go.nature.com/45zyjh) in which publishers are no longer gatekeepers that reject submitted work or determine first publication dates. Instead, authors would decide when and where to publish the initial accounts of their findings, both before and after peer review. Publishers would become service providers, paid to conduct processes such as copy-editing, typesetting and handling manuscript submissions…(More)”.

Can Indigenous knowledge and Western science work together? New center bets yes


Article by Jeffrey Mervis: “For millennia, the Passamaquoddy people used their intimate understanding of the coastal waters along the Gulf of Maine to sustainably harvest the ocean’s bounty. Anthropologist Darren Ranco of the University of Maine hoped to blend their knowledge of tides, water temperatures, salinity, and more with a Western approach in a project to study the impact of coastal pollution on fish, shellfish, and beaches.

But the Passamaquoddy were never really given a seat at the table, says Ranco, a member of the Penobscot Nation, which, along with the Passamaquoddy, is part of the Wabanaki Confederacy of tribes in Maine and eastern Canada. The Passamaquoddy thought water quality and environmental protection should be the top priority; the state emphasized forecasting models and monitoring. “There was a disconnect over who were the decision-makers, what knowledge would be used in making decisions, and what participation should look like,” Ranco says about the 3-year project, begun in 2015 and funded by the National Science Foundation (NSF).

Last month, NSF aimed to bridge such disconnects with a 5-year, $30 million grant designed to weave together traditional ecological knowledge (TEK) and Western science. Based at the University of Massachusetts (UMass) Amherst, the Center for Braiding Indigenous Knowledges and Science (CBIKS) aims to fundamentally change the way scholars from both traditions select and carry out joint research projects and manage data…(More)”.

How ChatGPT and other AI tools could disrupt scientific publishing


Article by Gemma Conroy: “When radiologist Domenico Mastrodicasa finds himself stuck while writing a research paper, he turns to ChatGPT, the chatbot that produces fluent responses to almost any query in seconds. “I use it as a sounding board,” says Mastrodicasa, who is based at the University of Washington School of Medicine in Seattle. “I can produce a publication-ready manuscript much faster.”

Mastrodicasa is one of many researchers experimenting with generative artificial-intelligence (AI) tools to write text or code. He pays for ChatGPT Plus, the subscription version of the bot based on the large language model (LLM) GPT-4, and uses it a few times a week. He finds it particularly useful for suggesting clearer ways to convey his ideas. Although a Nature survey suggests that scientists who use LLMs regularly are still in the minority, many expect that generative AI tools will become regular assistants for writing manuscripts, peer-review reports and grant applications.

Those are just some of the ways in which AI could transform scientific communication and publishing. Science publishers are already experimenting with generative AI in scientific search tools and for editing and quickly summarizing papers. Many researchers think that non-native English speakers could benefit most from these tools. Some see generative AI as a way for scientists to rethink how they interrogate and summarize experimental results altogether — they could use LLMs to do much of this work, meaning less time writing papers and more time doing experiments…(More)”.

Seven routes to experimentation in policymaking: a guide to applied behavioural science methods


OECD Resource: “…offers guidelines and a visual roadmap to help policymakers choose the most fit-for-purpose evidence collection method for their specific policy challenge.

Source: Elaboration of the authors: Varazzani, C., Emmerling, T., Brusoni, S., Fontanesi, L., and Tuomaila, H. (2023), “Seven routes to experimentation: A guide to applied behavioural science methods”, OECD Working Papers on Public Governance, OECD Publishing, Paris. Note: The authors elaborated the map based on a previous map ideated, researched, and designed by Laura Castro Soto, Judith Wagner, and Torben Emmerling (sevenroutes.com).

The seven applied behavioural science methods:

  • Randomised Controlled Trials (RCTs) are experiments that can demonstrate a causal relationship between an intervention and an outcome, by randomly assigning individuals to an intervention group and a control group.
  • A/B testing tests two or more manipulations (such as variants of a webpage) to assess which performs better in terms of a specific goal or metric.
  • Difference-in-Differences is a quasi-experimental method that estimates the causal effect of an intervention by comparing changes in outcomes between an intervention group and a control group before and after the intervention.
  • Before-After studies assess the impact of an intervention or event by comparing outcomes or measurements before and after its occurrence, without a control group.
  • Longitudinal studies collect data from the same individuals or groups over an extended period to assess trends over time.
  • Correlational studies help to investigate the relationship between two or more variables to determine if they vary together (without implying causation).
  • Qualitative studies explore the underlying meanings and nuances of a phenomenon through interviews, focus group sessions, or other exploratory methods based on conversations and observations…(More)”.
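As a concrete illustration of the difference-in-differences logic in the list above, here is a minimal sketch with invented numbers: the control group's change over time stands in for what would have happened to the intervention group anyway, so the estimated effect is the difference of the two differences.

```python
# Toy difference-in-differences: invented outcome means, for illustration only.
treated_pre, treated_post = 10.0, 16.0   # intervention group, before/after
control_pre, control_post = 10.0, 12.0   # control group, before/after

# Change in each group over time.
treated_change = treated_post - treated_pre   # 6.0
control_change = control_post - control_pre   # 2.0

# The control group's change proxies the background trend, so the
# estimated causal effect is the *difference* of the differences.
did_estimate = treated_change - control_change
print(did_estimate)  # prints: 4.0
```

The same subtraction underlies regression formulations of the method, which additionally give standard errors and allow covariate adjustment.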

Machine-assisted mixed methods: augmenting humanities and social sciences with artificial intelligence


Paper by Andres Karjus: “The increasing capacities of large language models (LLMs) present an unprecedented opportunity to scale up data analytics in the humanities and social sciences, augmenting and automating qualitative analytic tasks previously typically allocated to human labor. This contribution proposes a systematic mixed methods framework to harness qualitative analytic expertise, machine scalability, and rigorous quantification, with attention to transparency and replicability. Sixteen machine-assisted case studies are showcased as proof of concept. Tasks include linguistic and discourse analysis, lexical semantic change detection, interview analysis, historical event cause inference and text mining, detection of political stance, text and idea reuse, genre composition in literature and film, social network inference, automated lexicography, missing metadata augmentation, and multimodal visual cultural analytics. In contrast to the focus on English in the emerging LLM applicability literature, many examples here deal with scenarios involving smaller languages and historical texts prone to digitization distortions. In all but the most difficult tasks requiring expert knowledge, generative LLMs can demonstrably serve as viable research instruments. LLM (and human) annotations may contain errors and variation, but the agreement rate can and should be accounted for in subsequent statistical modeling; a bootstrapping approach is discussed. The replications among the case studies illustrate how tasks previously requiring potentially months of team effort and complex computational pipelines can now be accomplished by an LLM-assisted scholar in a fraction of the time. Importantly, this approach is not intended to replace, but to augment, researcher knowledge and skills. With these opportunities in sight, qualitative expertise and the ability to pose insightful questions have arguably never been more critical…(More)”.
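The paper's point that annotation error "can and should be accounted for" can be illustrated with a percentile bootstrap over LLM-human agreement labels. The data below are invented for illustration, and this is a generic sketch of the technique, not the paper's specific pipeline:

```python
import random
import statistics

random.seed(0)  # reproducible resampling

# Hypothetical labels: 1 where the LLM and a human annotator agree on an
# item, 0 where they disagree (83% observed agreement on 100 items).
agreements = [1] * 83 + [0] * 17

def bootstrap_ci(data, n_boot=5000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean agreement rate."""
    means = []
    for _ in range(n_boot):
        sample = random.choices(data, k=len(data))  # resample with replacement
        means.append(sum(sample) / len(sample))
    means.sort()
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

lo, hi = bootstrap_ci(agreements)
print(f"agreement = {statistics.mean(agreements):.2f}, 95% CI ~ [{lo:.2f}, {hi:.2f}]")
```

Carrying the interval, rather than the point estimate alone, into downstream statistical models is what keeps machine-assisted annotation honest about its uncertainty.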