Paper by D. Max Crowley et al: “This study is an experimental trial that demonstrates the potential for formal outreach strategies to change congressional use of research. Our results show that collaboration between policy and research communities can change how policymakers value science and can result in legislation that appears more inclusive of research evidence. The findings also demonstrate changes in researchers’ knowledge and motivation to engage with policymakers, as well as in their actual policy engagement behavior. Together, the observed changes in both policymakers and researchers randomized to receive an intervention for supporting legislative use of research evidence (i.e., the Research-to-Policy Collaboration model) provide support for the underlying theories about the social nature of research translation and evidence use….(More)”.
The speed of science
Essay by Saloni Dattani & Nathaniel Bechhofer: “The 21st century has seen some phenomenal advances in our ability to make scientific discoveries. Scientists have developed new technology to build vaccines swiftly, new algorithms to predict the structure of proteins accurately, new equipment to sequence DNA rapidly, and new engineering solutions to harvest energy efficiently. But in many fields of science, reliable knowledge and progress advance staggeringly slowly. What slows it down? And what can we learn from individual fields of science to pick up the pace across the board – without compromising on quality?
By and large, scientific research is published in journals in the form of papers – static documents that do not update with new data or new methods. Instead of sharing the data and code that produce their results, most scientists simply publish a textual description of their research in online publications. These publications are usually hidden behind paywalls, making it harder for outsiders to verify their claims.
When a reader suspects a discrepancy in the data or an error in the methods, they must scrutinize the intricate details of the study’s methods and cross-check the statistics manually. When scientists don’t openly share the data behind their results, the task becomes even harder. The process of error correction – from scientists publishing a paper, to readers spotting errors, to having the paper corrected or retracted – can take years, assuming those errors are spotted at all.
When scientists reference previous research, they cite entire papers, not specific results or values from them. And although there is evidence that scientists hold back from citing papers once they have been retracted, the problem is compounded over time – consider, for example, a researcher who cites a study that itself derives its data or assumptions from prior research that has been disputed, corrected or retracted. The longer it takes to sift through the science, to identify which results are accurate, the longer it takes to gather an understanding of scientific knowledge.
What makes the problem even more challenging is that flaws in a study are not necessarily mathematical errors. In many situations, researchers make fairly arbitrary decisions as to how they collect their data, which methods they apply to analyse them, and which results they report – altogether leaving readers blind to the impact of these decisions on the results.
This murkiness can result in what is known as p-hacking: when researchers selectively apply arbitrary methods in order to achieve a particular result. For example, in a study that compares the well-being of overweight people to that of underweight people, researchers may find that certain cut-offs of weight (or certain subgroups in their sample) provide the result they’re looking for, while others don’t. And they may decide to only publish the particular methods that provided that result…(More)”.
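A short simulation makes the cut-off problem concrete. The sketch below is illustrative only: it generates data in which weight is unrelated to well-being, so any "significant" group difference is a false positive, and it shows how trying several arbitrary cut-offs and reporting whichever one "worked" inflates the false-positive rate well beyond the nominal 5%. The cut-offs, sample sizes and threshold are hypothetical, not taken from the study described above.

```python
import random
import statistics

def welch_t(a, b):
    """Welch t statistic for two independent samples."""
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / (var_a / len(a) + var_b / len(b)) ** 0.5

def significant(a, b, threshold=1.96):
    """Approximate two-sided test at the 5% level (large-sample normal cut-off)."""
    return abs(welch_t(a, b)) > threshold

rng = random.Random(42)
cutoffs = [22, 25, 28, 30]        # arbitrary BMI cut-offs a researcher might try
n_people, n_studies = 400, 1000
single_hits = multi_hits = 0

for _ in range(n_studies):
    bmi = [rng.gauss(26, 4) for _ in range(n_people)]
    wellbeing = [rng.gauss(0, 1) for _ in range(n_people)]   # unrelated to BMI by construction
    tests = []
    for cut in cutoffs:
        lighter = [w for w, b in zip(wellbeing, bmi) if b < cut]
        heavier = [w for w, b in zip(wellbeing, bmi) if b >= cut]
        tests.append(significant(lighter, heavier))
    single_hits += tests[cutoffs.index(25)]   # honest analysis: one pre-chosen cut-off
    multi_hits += any(tests)                  # p-hacked analysis: keep whichever cut-off "worked"

print(f"false-positive rate, single pre-registered cut-off: {single_hits / n_studies:.1%}")
print(f"false-positive rate, best of {len(cutoffs)} cut-offs:  {multi_hits / n_studies:.1%}")
```

The single pre-registered cut-off produces false positives at roughly the nominal rate, while picking the best of several cut-offs produces them noticeably more often, which is the essence of p-hacking.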
The Mathematics of How Connections Become Global
Kelsey Houston-Edwards at Scientific American: “When you hit “send” on a text message, it is easy to imagine that the note will travel directly from your phone to your friend’s. In fact, it typically goes on a long journey through a cellular network or the Internet, both of which rely on centralized infrastructure that can be damaged by natural disasters or shut down by repressive governments. For fear of state surveillance or interference, tech-savvy protesters in Hong Kong avoided the Internet by using software such as FireChat and Bridgefy to send messages directly between nearby phones.
These apps let a missive hop silently from one phone to the next, eventually connecting the sender to the receiver—the only users capable of viewing the message. The collections of linked phones, known as mesh networks or mobile ad hoc networks, enable a flexible and decentralized mode of communication. But for any two phones to communicate, they need to be linked via a chain of other phones. How many people scattered throughout Hong Kong need to be connected via the same mesh network before we can be confident that crosstown communication is possible?

A branch of mathematics called percolation theory offers a surprising answer: just a few people can make all the difference. As users join a new network, isolated pockets of connected phones slowly emerge. But full east-to-west or north-to-south communication appears all of a sudden as the density of users passes a critical and sharp threshold. Scientists describe such a rapid change in a network’s connectivity as a phase transition—the same concept used to explain abrupt changes in the state of a material such as the melting of ice or the boiling of water.

Percolation theory examines the consequences of randomly creating or removing links in such networks, which mathematicians conceive of as a collection of nodes (represented by points) linked by “edges” (lines). Each node represents an object such as a phone or a person, and the edges represent a specific relation between two of them. The fundamental insight of percolation theory, which dates back to the 1950s, is that as the number of links in a network gradually increases, a global cluster of connected nodes will suddenly emerge….(More)”.
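A small simulation makes the threshold effect concrete. The sketch below is illustrative only: it uses a simple Erdős–Rényi random graph rather than a realistic model of phone locations and radio range, and it tracks the largest connected cluster as random links are added. The cluster jumps from a sliver of the network to most of it as the average number of links per node passes roughly one.

```python
import random

def largest_cluster_fraction(n, avg_degree, seed=0):
    """Fraction of nodes in the largest connected cluster after adding
    n * avg_degree / 2 random links between pairs of nodes."""
    rng = random.Random(seed)
    parent = list(range(n))  # union-find: each node starts in its own cluster

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps lookups fast
            x = parent[x]
        return x

    def union(a, b):
        root_a, root_b = find(a), find(b)
        if root_a != root_b:
            parent[root_a] = root_b

    for _ in range(int(n * avg_degree / 2)):
        union(rng.randrange(n), rng.randrange(n))  # link two random phones

    sizes = {}
    for node in range(n):
        root = find(node)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n

# Sweep the average number of links per phone across the critical value (~1).
for avg_degree in (0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 3.0):
    frac = largest_cluster_fraction(10_000, avg_degree)
    print(f"avg links per phone = {avg_degree:.1f} -> largest cluster spans {frac:.1%} of phones")
```

Real mesh networks depend on phone locations and radio range rather than purely random links, but percolation models of those networks show the same kind of sharp transition.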
What the drive for open science data can learn from the evolving history of open government data
Stefaan Verhulst, Andrew Young, and Andrew Zahuranec at The Conversation: “Nineteen years ago, a group of international researchers met in Budapest to discuss a persistent problem. While experts published an enormous amount of scientific and scholarly material, few of these works were accessible. New research remained locked behind paywalls run by academic journals. The result was that researchers struggled to learn from one another. They could not build on one another’s findings to achieve new insights. In response to these problems, the group developed the Budapest Open Access Initiative, a declaration calling for free and unrestricted access to scholarly journal literature in all academic fields.
In the years since, open access has become a priority for a growing number of universities, governments, and journals. But while access to scientific literature has increased, access to the scientific data underlying this research remains extremely limited. Researchers can increasingly see what their colleagues are doing but, in an era defined by the replication crisis, they cannot access the data to reproduce the findings or analyze it to produce new findings. In some cases there are good reasons to keep access to the data limited – such as confidentiality or sensitivity concerns – yet in many other cases data hoarding still reigns.
To make scientific research data open to citizens and scientists alike, open science data advocates can learn from open data efforts in other domains. By looking at the evolving history of the open government data movement, scientists can both recognize the limitations of current approaches and identify ways to move forward from them….(More) (French version)”.
The Future of Nudging Will Be Personal
Essay by Stuart Mills: “Nudging, now more than a decade old as an intervention tool, has become something of a poster child for the behavioral sciences. We know that people don’t always act in their own best interest—sometimes spectacularly so—and nudges have emerged as a noncoercive way to live better in a world shaped by our behavioral foibles.
But with nudging’s maturity, we’ve also begun to understand some of the ways that it falls short. Take, for instance, research by Linda Thunström and her colleagues. They found that “successful” nudges can actually harm subgroups of a population. In their research, spendthrifts (those who spend freely) spent less when nudged, bringing them closer to optimal spending. But when given the same nudge, tightwads also spent less, taking them further from the optimal.
While a nudge might appear effective because a population benefited on average, at the individual level the story could be different. Should nudging penalize people who differ from the average just because, on the whole, a policy would benefit the population? Though individual versus population trade-offs are part and parcel of policymaking, as our ability to personalize through technology and data advances, these trade-offs seem less and less appealing….(More)”.
DNA databases are too white, so genetics doesn’t help everyone. How do we fix that?
Tina Hesman Saey at ScienceNews: “It’s been two decades since the Human Genome Project first unveiled a rough draft of our genetic instruction book. The promise of that medical moon shot was that doctors would soon be able to look at an individual’s DNA and prescribe the right medicines for that person’s illness or even prevent certain diseases.
That promise, known as precision medicine, has yet to be fulfilled in any widespread way. True, researchers are getting clues about some genetic variants linked to certain conditions and some that affect how drugs work in the body. But many of those advances have benefited just one group: people whose ancestral roots stem from Europe. In other words, white people.
Instead of a truly human genome that represents everyone, “what we have is essentially a European genome,” says Constance Hilliard, an evolutionary historian at the University of North Texas in Denton. “That data doesn’t work for anybody apart from people of European ancestry.”
She’s talking about more than the Human Genome Project’s reference genome. That database is just one of many that researchers are using to develop precision medicine strategies. Often those genetic databases draw on data mainly from white participants. But race isn’t the issue. The problem is that collectively, those data add up to a catalog of genetic variants that don’t represent the full range of human genetic diversity.
When people of African, Asian, Native American or Pacific Island ancestry get a DNA test to determine if they inherited a variant that may cause cancer or if a particular drug will work for them, they’re often left with more questions than answers. The results often reveal “variants of uncertain significance,” leaving doctors with too little useful information. This happens less often for people of European descent. That disparity could change if genetics research included a more diverse group of participants, researchers agree (SN: 9/17/16, p. 8).
One solution is to make customized reference genomes for populations whose members die from cancer or heart disease at higher rates than other groups, for example, or who face other worse health outcomes, Hilliard suggests….(More)”.
Revenge of the Experts: Will COVID-19 Renew or Diminish Public Trust in Science?
Paper by Barry Eichengreen, Cevat Aksoy and Orkun Saka: “It is sometimes said that an effect of the COVID-19 pandemic will be heightened appreciation of the importance of scientific research and expertise. We test this hypothesis by examining how exposure to previous epidemics affected trust in science and scientists. Building on the “impressionable years hypothesis” that attitudes are durably formed during the ages 18 to 25, we focus on individuals exposed to epidemics in their country of residence at this particular stage of the life course. Combining data from a 2018 Wellcome Trust survey of more than 75,000 individuals in 138 countries with data on global epidemics since 1970, we show that such exposure has no impact on views of science as an endeavor but that it significantly reduces trust in scientists and in the benefits of their work. We also illustrate that the decline in trust is driven by individuals with little previous training in science subjects. Finally, our evidence suggests that epidemic-induced distrust translates into lower compliance with health-related policies in the form of negative views towards vaccines and lower rates of child vaccination….(More)”.
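The key explanatory variable is a respondent's exposure to an epidemic during their "impressionable years." A minimal sketch of how such an indicator could be constructed is below; it is not the authors' code, and the country names and epidemic years are hypothetical placeholders.

```python
# Hypothetical epidemic records: country -> years with a major epidemic.
epidemics = {
    "CountryA": [1994, 2009],
    "CountryB": [2003],
}

def exposed_in_impressionable_years(country, birth_year):
    """True if an epidemic hit the respondent's country while they were aged 18-25."""
    window = range(birth_year + 18, birth_year + 26)   # ages 18 through 25
    return any(year in window for year in epidemics.get(country, []))

print(exposed_in_impressionable_years("CountryA", 1990))  # 2009 falls in 2008-2015 -> True
print(exposed_in_impressionable_years("CountryB", 1990))  # 2003 outside 2008-2015 -> False
```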
Connected papers
About: “Connected Papers is a unique, visual tool to help researchers and applied scientists find and explore papers relevant to their field of work.
How does it work?
- To create each graph, we analyze on the order of 50,000 papers and select the few dozen with the strongest connections to the origin paper.
- In the graph, papers are arranged according to their similarity. That means that even papers that do not directly cite each other can be strongly connected and very closely positioned. Connected Papers is not a citation tree.
- Our similarity metric is based on the concepts of Co-citation and Bibliographic Coupling. According to this measure, two papers that have highly overlapping citations and references are presumed to have a higher chance of treating related subject matter (a toy illustration of these two concepts appears after this list).
- Our algorithm then builds a Force Directed Graph to distribute the papers in a way that visually clusters similar papers together and pushes less similar papers away from each other. Upon node selection we highlight the shortest path from each node to the origin paper in similarity space.
- Our database is connected to the Semantic Scholar Paper Corpus (licensed under ODC-BY). Their team has done an amazing job of compiling hundreds of millions of published papers across many scientific fields.…(More)”.
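To make the two bulleted concepts concrete, here is a minimal sketch; it is not Connected Papers' actual metric or code, and the paper IDs and weights are hypothetical.

```python
def jaccard(a, b):
    """Overlap between two sets, from 0 (disjoint) to 1 (identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Bibliographic coupling: two papers cite many of the same references.
references = {
    "paper_A": {"ref1", "ref2", "ref3", "ref4"},
    "paper_B": {"ref2", "ref3", "ref4", "ref5"},
}
coupling = jaccard(references["paper_A"], references["paper_B"])

# Co-citation: two papers are cited together by many later papers.
cited_by = {
    "paper_A": {"later1", "later2", "later3"},
    "paper_B": {"later2", "later3", "later4"},
}
co_citation = jaccard(cited_by["paper_A"], cited_by["paper_B"])

# One simple way to combine the two signals into a single similarity score.
similarity = 0.5 * coupling + 0.5 * co_citation
print(f"coupling={coupling:.2f}  co-citation={co_citation:.2f}  similarity={similarity:.2f}")
```

In a real system, scores like these would be computed across millions of papers and then fed into a force-directed layout of the kind described above, so that highly similar papers cluster together visually.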
Citizen Scientists Are Filling Research Gaps Created by the Pandemic
Article by Theresa Crimmins, Erin Posthumus, and Kathleen Prudic: “The rapid spread of COVID-19 in 2020 disrupted field research and environmental monitoring efforts worldwide. Travel restrictions and social distancing forced scientists to cancel studies or pause their work for months. These limits measurably reduced the accuracy of weather forecasts and created data gaps on issues ranging from bird migration to civil rights in U.S. public schools.
Our work relies on this kind of information to track seasonal events in nature and understand how climate change is affecting them. We also recruit and train citizens for community science – projects that involve amateur or volunteer scientists in scientific research, also known as citizen science. This often involves collecting observations of phenomena such as plants and animals, daily rainfall totals, water quality or asteroids.
Participation in many community science programs has skyrocketed during COVID-19 lockdowns, with some programs reporting record numbers of contributors. We believe these efforts can help to offset data losses from the shutdown of formal monitoring activities….(More)”.
Politics and Open Science: How the European Open Science Cloud Became Reality (the Untold Story)
Jean-Claude Burgelman at Data Intelligence: “This article will document how the European Open Science Cloud (EOSC) emerged as one of the key policy intentions to foster Open Science (OS) in Europe. It will describe some of the typical, non-rational roadblocks on the way to implementing EOSC. The article will also argue that the only way Europe can take care of its research data in a way that fits European specificities fully is by supporting EOSC.
It is fair to say—note the word FAIR here—that realizing the European Open Science Cloud (EOSC) is now part and parcel of European Data Science (DS) policy, in particular since EOSC will, from 2021, be in the hands of the independent EOSC Association and thus potentially well outside the so-called “Brussels Bubble”….(More)”