City/Science Intersections: A Scoping Review of Science for Policy in Urban Contexts


Paper by Gabriela Manrique Rueda et al.: “Science is essential for cities to understand and intervene on the increasing global risks. However, challenges in effectively utilizing scientific knowledge in decision-making processes limit cities’ abilities to address these risks. This scoping review examines the development of science for urban policy, exploring the contextual factors, organizational structures, and mechanisms that facilitate or hinder the integration of science and policy. It investigates the challenges faced and the outcomes achieved. The findings reveal that science has gained influence in United Nations (UN) policy discourses, leading to the expansion of international, regional, and national networks connecting science and policy. Boundary-spanning organizations and collaborative research initiatives with stakeholders have emerged, creating platforms for dialogue, knowledge sharing, and experimentation. However, cultural differences between the science and policy realms impede the effective utilization of scientific knowledge in decision-making. While efforts are being made to develop methods and tools for knowledge co-production, translation, and mobilization, more attention is needed to establish science-for-policy organizational structures and address power imbalances in research processes that give rise to ethical challenges…(More)”.

Do People Like Algorithms? A Research Strategy


Paper by Cass R. Sunstein and Lucia Reisch: “Do people like algorithms? In this study, intended as a promissory note and a description of a research strategy, we offer the following highly preliminary findings. (1) In a simple choice between a human being and an algorithm, across diverse settings and without information about the human being or the algorithm, people in our tested groups are about equally divided in their preference. (2) When people are given a very brief account of the data on which an algorithm relies, there is a large shift in favor of the algorithm over the human being. (3) When people are given a very brief account of the experience of the relevant human being, without an account of the data on which the relevant algorithm relies, there is a moderate shift in favor of the human being. (4) When people are given both (a) a very brief account of the experience of the relevant human being and (b) a very brief account of the data on which the relevant algorithm relies, there is a large shift in favor of the algorithm over the human being. One lesson is that in the tested groups, at least one-third of people seem to have a clear preference for either a human being or an algorithm – a preference that is unaffected by brief information that seems to favor one or the other. Another lesson is that a brief account of the data on which an algorithm relies does have a significant effect on a large percentage of the tested groups, whether or not people are also given positive information about the human alternative. Across the various surveys, we do not find persistent demographic differences, with one exception: men appear to like algorithms more than women do. These initial findings are meant as proof of concept, or more accurately as a suggestion of concept, intended to inform a series of larger and more systematic studies of whether and when people prefer to rely on algorithms or human beings, and also of international and demographic differences…(More)”.

The Early History of Counting


Essay by Keith Houston: “Figuring out when humans began to count systematically, with purpose, is not easy. Our first real clues are a handful of curious, carved bones dating from the final few millennia of the three-million-year expanse of the Old Stone Age, or Paleolithic era. Those bones are humanity’s first pocket calculators: For the prehistoric humans who carved them, they were mathematical notebooks and counting aids rolled into one. For the anthropologists who unearthed them thousands of years later, they were proof that our ability to count had manifested itself no later than 40,000 years ago.

In 1973, while excavating a cave in the Lebombo Mountains, near South Africa’s border with Swaziland, Peter Beaumont found a small, broken bone with twenty-nine notches carved across it. The so-called Border Cave had been known to archaeologists since 1934, but the discovery during World War II of skeletal remains dating to the Middle Stone Age heralded a site of rare importance. It was not until Beaumont’s dig in the 1970s, however, that the cave gave up its most significant treasure: the earliest known tally stick, in the form of a notched, three-inch-long baboon fibula.

On the face of it, the numerical instrument known as the tally stick is exceedingly mundane. Used since before recorded history—still used, in fact, by some cultures—to mark the passing days, or to account for goods or monies given or received, most tally sticks are no more than wooden rods incised with notches along their length. They help their users to count, to remember, and to transfer ownership. All of which is reminiscent of writing, except that writing did not arrive until a scant 5,000 years ago—and so, when the Lebombo bone was determined to be some 42,000 years old, it instantly became one of the most intriguing archaeological artifacts ever found. Not only does it put a date on when Homo sapiens started counting, it also marks the point at which we began to delegate our memories to external devices, thereby unburdening our minds so that they might be used for something else instead. Writing in 1776, the German historian Justus Möser knew nothing of the Lebombo bone, but his musings on tally sticks in general are strikingly apposite:

The notched tally stick itself testifies to the intelligence of our ancestors. No invention is simpler and yet more significant than this…(More)”.

Philosophy of Open Science


Book by Sabina Leonelli: “The Open Science [OS] movement aims to foster the wide dissemination, scrutiny and re-use of research components for the good of science and society. This Element examines the role played by OS principles and practices within contemporary research and how this relates to the epistemology of science. After reviewing some of the concerns that have prompted calls for more openness, it highlights how the interpretation of openness as the sharing of resources, so often encountered in OS initiatives and policies, may have the unwanted effect of constraining epistemic diversity and worsening epistemic injustice, resulting in unreliable and unethical scientific knowledge. By contrast, this Element proposes to frame openness as the effort to establish judicious connections among systems of practice, predicated on a process-oriented view of research as a tool for effective and responsible agency…(More)”.

AI tools are designing entirely new proteins that could transform medicine


Article by Ewen Callaway: “‘OK. Here we go.’ David Juergens, a computational chemist at the University of Washington (UW) in Seattle, is about to design a protein that, in 3-billion-plus years of tinkering, evolution has never produced.

On a video call, Juergens opens a cloud-based version of an artificial intelligence (AI) tool he helped to develop, called RFdiffusion. This neural network, and others like it, are helping to bring the creation of custom proteins — until recently a highly technical and often unsuccessful pursuit — to mainstream science.

These proteins could form the basis for vaccines, therapeutics and biomaterials. “It’s been a completely transformative moment,” says Gevorg Grigoryan, the co-founder and chief technical officer of Generate Biomedicines in Somerville, Massachusetts, a biotechnology company applying protein design to drug development.

The tools are inspired by AI software that synthesizes realistic images, such as the Midjourney software that, this year, was famously used to produce a viral image of Pope Francis wearing a designer white puffer jacket. A similar conceptual approach, researchers have found, can churn out realistic protein shapes to criteria that designers specify — meaning, for instance, that it’s possible to speedily draw up new proteins that should bind tightly to another biomolecule. And early experiments show that when researchers manufacture these proteins, a useful fraction do perform as the software suggests.

The tools have revolutionized the process of designing proteins in the past year, researchers say. “It is an explosion in capabilities,” says Mohammed AlQuraishi, a computational biologist at Columbia University in New York City, whose team has developed one such tool for protein design. “You can now create designs that have sought-after qualities.”

“You’re building a protein structure customized for a problem,” says David Baker, a computational biophysicist at UW whose group, which includes Juergens, developed RFdiffusion. The team released the software in March 2023, and a paper describing the neural network appears this week in Nature¹. (A preprint version was released in late 2022, at around the same time that several other teams, including AlQuraishi’s² and Grigoryan’s³, reported similar neural networks)…(More)”.

Just Citation


Paper by Amanda Levendowski: “Contemporary citation practices are often unjust. Data cartels, like Google, Westlaw, and Lexis, prioritize profits and efficiency in ways that threaten people’s autonomy, particularly that of pregnant people and immigrants. Women and people of color have been legal scholars for more than a century, yet colleagues consistently under-cite and under-acknowledge their work. Other citations frequently lead to materials that cannot be accessed by disabled people, poor people or the public due to design, paywalls or link rot. Yet scholars and students often understand citation practices as “just” citation and perpetuate these practices unknowingly. This Article is an intervention. Using an intersectional feminist framework for understanding how cyberlaws oppress and liberate the oppressed, an emerging movement known as feminist cyberlaw, this Article investigates problems posed by prevailing citation practices and introduces practical methods that bring citation into closer alignment with the feminist values of safety, equity, and accessibility. Escaping data cartels, engaging marginalized scholars, embracing free and public resources, and ensuring that those resources remain easily available represent small, radical shifts that promote just citation. This Article provides powerful, practical tools for pursuing all of them…(More)”.

Engaging Scientists to Prevent Harmful Exploitation of Advanced Data Analytics and Biological Data


Proceedings from the National Academies of Sciences: “Artificial intelligence (AI), facial recognition, and other advanced computational and statistical techniques are accelerating advancements in the life sciences and many other fields. However, these technologies and the scientific developments they enable also hold the potential for unintended harm and malicious exploitation. To examine these issues and to discuss practices for anticipating and preventing the misuse of advanced data analytics and biological data in a global context, the National Academies of Sciences, Engineering, and Medicine convened two virtual workshops on November 15, 2022, and February 9, 2023. The workshops engaged scientists from the United States, South Asia, and Southeast Asia through a series of presentations and scenario-based exercises to explore emerging applications and areas of research, their potential benefits, and the ethical issues and security risks that arise when AI applications are used in conjunction with biological data. This publication highlights the presentations and discussions of the workshops…(More)”.

How should a robot explore the Moon? A simple question shows the limits of current AI systems


Article by Sally Cripps, Edward Santow, Nicholas Davis, Alex Fischer and Hadi Mohasel Afshar: “…Ultimately, AI systems should help humans make better, more accurate decisions. Yet even the most impressive and flexible of today’s AI tools – such as the large language models behind the likes of ChatGPT – can have the opposite effect.

Why? They have two crucial weaknesses. They do not help decision-makers understand causation or uncertainty. And they create incentives to collect huge amounts of data and may encourage a lax attitude to privacy, legal and ethical questions and risks…

ChatGPT and other “foundation models” use an approach called deep learning to trawl through enormous datasets and identify associations between factors contained in that data, such as the patterns of language or links between images and descriptions. Consequently, they are great at interpolating – that is, predicting or filling in the gaps between known values.

Interpolation is not the same as creation. It does not generate knowledge, nor the insights necessary for decision-makers operating in complex environments.

However, these approaches require huge amounts of data. As a result, they encourage organisations to assemble enormous repositories of data – or trawl through existing datasets collected for other purposes. Dealing with “big data” brings considerable risks around security, privacy, legality and ethics.

In low-stakes situations, predictions based on “what the data suggest will happen” can be incredibly useful. But when the stakes are higher, there are two more questions we need to answer.

The first is about how the world works: “what is driving this outcome?” The second is about our knowledge of the world: “how confident are we about this?”…(More)”.
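The interpolation point in the excerpt can be made concrete with a minimal sketch (entirely hypothetical data, not from the article): a model fitted only to observed points predicts tolerably between them, degrades sharply outside them, and in neither case explains why the outcome occurs.

```python
# Minimal sketch (hypothetical data): a model "learns" from samples of an
# unknown process, here y = x**2 observed only at x = 0..3.

def piecewise_linear(xs, ys, x):
    """Interpolate linearly between known points; beyond the data range,
    extend the nearest segment, i.e. extrapolate."""
    if x <= xs[0]:
        i = 0
    elif x >= xs[-1]:
        i = len(xs) - 2
    else:
        i = max(j for j in range(len(xs) - 1) if xs[j] <= x)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

xs = [0, 1, 2, 3]
ys = [x * x for x in xs]              # the true process, unknown to the model

print(piecewise_linear(xs, ys, 1.5))  # inside the data: 2.5 (true value 2.25)
print(piecewise_linear(xs, ys, 10))   # outside the data: 44.0 (true value 100)
```

The model never discovers that the process is quadratic; it only memorises associations between observed values, which is exactly why predictions beyond the data, and decisions based on them, carry unquantified uncertainty.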

Diversity of Expertise is Key to Scientific Impact


Paper by Angelo Salatino, Simone Angioni, Francesco Osborne, Diego Reforgiato Recupero, Enrico Motta: “Understanding the relationship between the composition of a research team and the potential impact of their research papers is crucial as it can steer the development of new science policies for improving the research enterprise. Numerous studies assess how the characteristics and diversity of research teams can influence their performance across several dimensions: ethnicity, internationality, size, and others. In this paper, we explore the impact of diversity in terms of the authors’ expertise. To this end, we retrieved 114K papers in the field of Computer Science and analysed how the diversity of research fields within a research team relates to the number of citations their papers received over the subsequent 5 years. The results show that two different metrics we defined, reflecting the diversity of expertise, are significantly associated with the number of citations. This suggests that, at least in Computer Science, diversity of expertise is key to scientific impact…(More)”.
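The excerpt does not reproduce the paper’s two metrics, so as a hedged illustration of the kind of measure such studies use, the sketch below scores a team’s diversity of expertise as the Shannon entropy of its members’ field labels. The function name, the input format, and the choice of entropy are our assumptions, not the authors’ definitions.

```python
import math
from collections import Counter

def expertise_diversity(author_fields):
    """Shannon entropy (in bits) of the distribution of research fields
    across a team. Higher entropy = expertise spread more evenly over
    more distinct fields; a single-field team scores 0."""
    counts = Counter(author_fields)
    n = sum(counts.values())
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

# A team drawn from three fields is more diverse than a single-field team.
print(expertise_diversity(["ML", "NLP", "HCI"]))  # ≈ 1.585 bits
print(expertise_diversity(["ML", "ML", "ML"]))    # 0.0 bits
```

A study in this vein would then test whether such a score correlates with citation counts, e.g. via regression over a large paper corpus.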