DNA databases are too white, so genetics doesn’t help everyone. How do we fix that?


Tina Hesman Saey at ScienceNews: “It’s been two decades since the Human Genome Project first unveiled a rough draft of our genetic instruction book. The promise of that medical moon shot was that doctors would soon be able to look at an individual’s DNA and prescribe the right medicines for that person’s illness or even prevent certain diseases.

That promise, known as precision medicine, has yet to be fulfilled in any widespread way. True, researchers are getting clues about some genetic variants linked to certain conditions and some that affect how drugs work in the body. But many of those advances have benefited just one group: people whose ancestral roots stem from Europe. In other words, white people.

Instead of a truly human genome that represents everyone, “what we have is essentially a European genome,” says Constance Hilliard, an evolutionary historian at the University of North Texas in Denton. “That data doesn’t work for anybody apart from people of European ancestry.”

She’s talking about more than the Human Genome Project’s reference genome. That database is just one of many that researchers are using to develop precision medicine strategies. Often those genetic databases draw on data mainly from white participants. But race isn’t the issue. The problem is that collectively, those data add up to a catalog of genetic variants that don’t represent the full range of human genetic diversity.

When people of African, Asian, Native American or Pacific Island ancestry get a DNA test to determine if they inherited a variant that may cause cancer, or whether a particular drug will work for them, they’re often left with more questions than answers. The results often reveal “variants of uncertain significance,” leaving doctors with too little useful information. This happens less often for people of European descent. That disparity could change if genetics research included a more diverse group of participants, researchers agree (SN: 9/17/16, p. 8).

One solution, Hilliard suggests, is to build customized reference genomes for populations that face worse health outcomes than other groups, such as higher death rates from cancer or heart disease….(More)”.

Revenge of the Experts: Will COVID-19 Renew or Diminish Public Trust in Science?


Paper by Barry Eichengreen, Cevat Aksoy and Orkun Saka: “It is sometimes said that an effect of the COVID-19 pandemic will be heightened appreciation of the importance of scientific research and expertise. We test this hypothesis by examining how exposure to previous epidemics affected trust in science and scientists. Building on the “impressionable years hypothesis” that attitudes are durably formed during the ages 18 to 25, we focus on individuals exposed to epidemics in their country of residence at this particular stage of the life course. Combining data from a 2018 Wellcome Trust survey of more than 75,000 individuals in 138 countries with data on global epidemics since 1970, we show that such exposure has no impact on views of science as an endeavor but that it significantly reduces trust in scientists and in the benefits of their work. We also illustrate that the decline in trust is driven by the individuals with little previous training in science subjects. Finally, our evidence suggests that epidemic-induced distrust translates into lower compliance with health-related policies in the form of negative views towards vaccines and lower rates of child vaccination….(More)”.

Connected papers


About: “Connected Papers is a unique, visual tool to help researchers and applied scientists find and explore papers relevant to their field of work.

How does it work?

  • To create each graph, we analyze on the order of 50,000 papers and select the few dozen with the strongest connections to the origin paper.
  • In the graph, papers are arranged according to their similarity. That means that even papers that do not directly cite each other can be strongly connected and very closely positioned. Connected Papers is not a citation tree.
  • Our similarity metric is based on the concepts of Co-citation and Bibliographic Coupling. According to this measure, two papers that have highly overlapping citations and references are presumed to have a higher chance of treating a related subject matter.
  • Our algorithm then builds a Force Directed Graph to distribute the papers in a way that visually clusters similar papers together and pushes less similar papers away from each other. Upon node selection we highlight the shortest path from each node to the origin paper in similarity space.
  • Our database is connected to the Semantic Scholar Paper Corpus (licensed under ODC-BY). Their team has done an amazing job of compiling hundreds of millions of published papers across many scientific fields.…(More)”.
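The two similarity notions named above can be made concrete. The sketch below is illustrative only, not Connected Papers’ actual algorithm: the Jaccard overlap and the equal-weight combination of the two signals are assumptions chosen for clarity.

```python
# Illustrative sketch of co-citation and bibliographic coupling.
# NOT Connected Papers' real metric; Jaccard overlap and the 50/50
# combination are assumptions made for this example.

def jaccard(a: set, b: set) -> float:
    """Set overlap, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def similarity(refs: dict, cited_by: dict, p: str, q: str) -> float:
    """Combine bibliographic coupling (shared reference lists) with
    co-citation (shared sets of citing papers) into one score."""
    coupling = jaccard(refs.get(p, set()), refs.get(q, set()))
    co_citation = jaccard(cited_by.get(p, set()), cited_by.get(q, set()))
    return (coupling + co_citation) / 2

# Toy corpus: p1 and p2 never cite each other, yet they share
# references and are cited together, so they score as connected --
# exactly the "not a citation tree" point made above.
refs = {"p1": {"r1", "r2", "r3"}, "p2": {"r1", "r2", "r4"}, "p3": {"r9"}}
cited_by = {"p1": {"c1", "c2"}, "p2": {"c1", "c2"}, "p3": {"c3"}}

print(similarity(refs, cited_by, "p1", "p2"))  # → 0.75
print(similarity(refs, cited_by, "p1", "p3"))  # → 0.0
```

Scores like these would then feed the force-directed layout as edge weights, pulling similar papers together on the graph.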

Citizen Scientists Are Filling Research Gaps Created by the Pandemic


Article by  Theresa Crimmins, Erin Posthumus, and Kathleen Prudic: “The rapid spread of COVID-19 in 2020 disrupted field research and environmental monitoring efforts worldwide. Travel restrictions and social distancing forced scientists to cancel studies or pause their work for months. These limits measurably reduced the accuracy of weather forecasts and created data gaps on issues ranging from bird migration to civil rights in U.S. public schools.

Our work relies on this kind of information to track seasonal events in nature and understand how climate change is affecting them. We also recruit and train citizens for community science – projects that involve amateur or volunteer scientists in scientific research, also known as citizen science. This often involves collecting observations of phenomena such as plants and animals, daily rainfall totals, water quality or asteroids.

Participation in many community science programs has skyrocketed during COVID-19 lockdowns, with some programs reporting record numbers of contributors. We believe these efforts can help to offset data losses from the shutdown of formal monitoring activities….(More)”.

Politics and Open Science: How the European Open Science Cloud Became Reality (the Untold Story)


Jean-Claude Burgelman at Data Intelligence: “This article will document how the European Open Science Cloud (EOSC) emerged as one of the key policy intentions to foster Open Science (OS) in Europe. It will describe some of the typical, non-rational roadblocks on the way to implement EOSC. The article will also argue that the only way Europe can take care of its research data in a way that fits the European specificities fully, is by supporting EOSC.

It is fair to say—note the word FAIR here—that realizing the European Open Science Cloud (EOSC) is now part and parcel of European Data Science (DS) policy, particularly since, from 2021, EOSC will be in the hands of the independent EOSC Association and thus potentially well outside the so-called “Brussels Bubble”.

The article documents that story in full: how EOSC emerged in this “bubble” as a policy intention to foster Open Science, the non-rational roadblocks encountered along the way, and the case that supporting EOSC is the only way Europe can take care of its research data in a manner that fits European specificities….(More)”

How public should science be?


Discussion Report by Edel and Kübler: “Since the outbreak of the COVID-19 pandemic, the question of what role science should play in political discourse has moved into the focus of public interest with unprecedented vehemence. In addition to governments directly consulting individual virologists or (epidemiological) research institutes, major scientific institutions such as the German National Academy of Sciences Leopoldina and the presidents of four non-university research organisations have actively participated in the discussion by providing recommendations. More than ever before, scientific problem descriptions, data and evaluations are influencing political measures. It seems as if the relationship between science, politics and the public is currently being reassessed.

The current crisis has not created a new phenomenon; it has only reinforced a long-observed trend of mutual reliance between science, politics and the public. Decision-makers in politics and business were already looking to better substantiate and legitimise their decisions through external scientific expertise when faced with major societal challenges, for example when dealing with increasing immigration or climate protection, when preparing far-reaching reforms (e.g. of the labour market or the pension system), or in economic crises. Research is also held in high esteem within society: the special edition of the ‘Science Barometer’ demonstrated in its surveys increased trust in science during the current COVID-19 pandemic. Conversely, scientists have always been, and continue to be, active in the public sphere. For some time now, research experts have frequently been guests on talk shows, and authors from the field of science often write opinion pieces and guest contributions in daily newspapers and magazines. However, this role of research is by no means uncontroversial….(More)”.

Ten computer codes that transformed science


Jeffrey M. Perkel at Nature: “From Fortran to arXiv.org, these advances in programming and platforms sent biology, climate science and physics into warp speed….In 2019, the Event Horizon Telescope team gave the world the first glimpse of what a black hole actually looks like. But the image of a glowing, ring-shaped object that the group unveiled wasn’t a conventional photograph. It was computed — a mathematical transformation of data captured by radio telescopes in the United States, Mexico, Chile, Spain and the South Pole. The team released the programming code it used to accomplish that feat alongside the articles that documented its findings, so the scientific community could see — and build on — what it had done.

It’s an increasingly common pattern. From astronomy to zoology, behind every great scientific finding of the modern age, there is a computer. Michael Levitt, a computational biologist at Stanford University in California who won a share of the 2013 Nobel Prize in Chemistry for his work on computational strategies for modelling chemical structure, notes that today’s laptops have about 10,000 times the memory and clock speed that his lab-built computer had in 1967, when he began his prizewinning work. “We really do have quite phenomenal amounts of computing at our hands today,” he says. “Trouble is, it still requires thinking.”

Enter the scientist-coder. A powerful computer is useless without software capable of tackling research questions — and researchers who know how to write it and use it. “Research is now fundamentally connected to software,” says Neil Chue Hong, director of the Software Sustainability Institute, headquartered in Edinburgh, UK, an organization dedicated to improving the development and use of software in science. “It permeates every aspect of the conduct of research.”

Scientific discoveries rightly get top billing in the media. But Nature this week looks behind the scenes, at the key pieces of code that have transformed research over the past few decades.

Although no list like this can be definitive, we polled dozens of researchers over the past year to develop a diverse line-up of ten software tools that have had a big impact on the world of science. You can weigh in on our choices at the end of the story….(More)”.

Scholarly publishing needs regulation


Essay by Jean-Claude Burgelman: “The world of scientific communication has changed significantly over the past 12 months. Understandably, the amazing mobilisation of research and scholarly publishing in an effort to mitigate the effects of Covid-19 and find a vaccine has overshadowed everything else. But two other less-noticed events could also have profound implications for the industry and the researchers who rely on it.

On 10 January 2020, Taylor and Francis announced its acquisition of one of the most innovative small open-access publishers, F1000 Research. A year later, on 5 January 2021, another of the big commercial scholarly publishers, Wiley, paid nearly $300 million for Hindawi, a significant open-access publisher in London.

These acquisitions come alongside rapid change in publishers’ functions and business models. Scientific publishing is no longer only about publishing articles. It’s a knowledge industry—and it’s increasingly clear it needs to be regulated like one.

The two giant incumbents, Springer Nature and Elsevier, are already a long way down the road to open access, and have built up impressive in-house capacity. But Wiley, and Taylor and Francis, had not. That’s why they decided to buy young open-access publishers. Buying up a smaller, innovative competitor is a well-established way for an incumbent in any industry to expand its reach, gain the ability to do new things and reinvent its business model—it’s why Facebook bought WhatsApp and Instagram, for example.

New regulatory approach

To understand why this dynamic demands a new regulatory approach in scientific publishing, we need to set such acquisitions alongside a broader perspective of the business’s transformation into a knowledge industry. 

Monopolies, cartels and oligopolies in any industry are a cause for concern. By reducing competition, they stifle innovation and push up prices. But for science, the implications of such a course are particularly worrying. 

Science is a common good. Its products—and especially its spillovers, the insights and applications that cannot be monopolised—are vital to our knowledge societies. This means that having four companies control the worldwide production of car tyres, as they do, has very different implications to an oligopoly in the distribution of scientific outputs. The latter situation would give the incumbents a tight grip on the supply of knowledge.

Scientific publishing is not yet a monopoly, but Europe at least is witnessing the emergence of an oligopoly, in the shape of Elsevier, Springer Nature, Wiley, and Taylor and Francis. The past year’s acquisitions have left only two significant independent players in open-access publishing—Frontiers and MDPI, both based in Switzerland….(More)”.

Enabling the future of academic research with the Twitter API


Twitter Developer Blog: “When we introduced the next generation of the Twitter API in July 2020, we also shared our plans to invest in the success of the academic research community with tailored solutions that better serve their goals. Today, we’re excited to launch the Academic Research product track on the new Twitter API. 

Why we’re launching this & how we got here

Since the Twitter API was first introduced in 2006, academic researchers have used data from the public conversation to study topics as diverse as the conversation on Twitter itself – from state-backed efforts to disrupt the public conversation to floods and climate change, from attitudes and perceptions about COVID-19 to efforts to promote healthy conversation online. Today, academic researchers are one of the largest groups of people using the Twitter API. 

Our developer platform hasn’t always made it easy for researchers to access the data they need, and many have had to rely on their own resourcefulness to find the right information. Despite this, for over a decade, academic researchers have used Twitter data for discoveries and innovations that help make the world a better place.

Over the past couple of years, we’ve taken iterative steps to improve the experience for researchers, like when we launched a webpage dedicated to Academic Research, and updated our Twitter Developer Policy to make it easier to validate or reproduce others’ research using Twitter data.

We’ve also made improvements to help academic researchers use Twitter data to advance their disciplines, answer urgent questions during crises, and even help us improve Twitter. For example, in April 2020, we released the COVID-19 stream endpoint – the first free, topic-based stream built solely for researchers to use data from the global conversation for the public good. Researchers from around the world continue to use this endpoint for a number of projects.

Over two years ago, we started our own extensive research to better understand the needs, constraints and challenges that researchers have when studying the public conversation. In October 2020, we tested this product track in a private beta program where we gathered additional feedback. This gave us a glimpse into some of the important work that the free Academic Research product track we’re launching today can now enable….(More)”.
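For researchers curious what access on this track looks like in practice, the sketch below shows a minimal full-archive search request against the v2 API. It is a hedged illustration: the endpoint path, parameter names and bearer-token scheme reflect the v2 documentation as we understand it and should be checked against the current API reference before use.

```python
# Minimal sketch of a v2 full-archive search call (Academic Research
# track). Endpoint path and parameters are assumptions based on the
# v2 docs; verify against the current API reference.
import json
import urllib.parse
import urllib.request

API_URL = "https://api.twitter.com/2/tweets/search/all"

def build_url(query: str, max_results: int = 10) -> str:
    """Assemble the search URL with a percent-encoded query string."""
    params = urllib.parse.urlencode(
        {"query": query, "max_results": max_results}
    )
    return f"{API_URL}?{params}"

def search_all(query: str, bearer_token: str, max_results: int = 10) -> dict:
    """Run one full-archive search page and return the parsed JSON."""
    req = urllib.request.Request(
        build_url(query, max_results),
        headers={"Authorization": f"Bearer {bearer_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A call such as `search_all('"climate change" lang:en -is:retweet', token)` would return one page of matching Tweets; real research pipelines would add pagination and rate-limit handling on top.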

The Hidden Cost of Using Amazon Mechanical Turk for Research


Paper by Antonios Saravanos: “This work shares unexpected findings obtained from the use of the Amazon Mechanical Turk platform as a source of participants for the study of technology adoption. Specifically, of the 564 participants from the United States, 126 (22.34%) failed at least one of three forms of attention check (logic, honesty, and time). We also examined whether characteristics such as gender, age, education, and income affected participant attention. Amongst all characteristics assessed, only prior experience with the technology being studied was found to be related to attentiveness. We conclude this work by reaffirming the need for multiple forms of attention checks to gauge participant attention. Furthermore, we propose that researchers adjust their budgets accordingly to account for the possibility of having to discard responses from participants determined not to be displaying adequate attention….(More)”.
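The budgeting advice above is easy to operationalise. The sketch below is a back-of-the-envelope illustration, not the paper’s code: the field names and boolean pass/fail encoding are assumptions, but the final line reproduces the reported failure rate from the paper’s own counts.

```python
# Illustrative filter: drop any participant who fails one of the
# three attention checks (logic, honesty, time). Field names and the
# boolean encoding are assumptions for this example.

def attentive(p: dict) -> bool:
    """A participant counts as attentive only if all checks pass."""
    return p["logic"] and p["honesty"] and p["time"]

def failure_rate(participants: list) -> float:
    """Fraction of participants failing at least one check."""
    failed = sum(1 for p in participants if not attentive(p))
    return failed / len(participants)

sample = [
    {"logic": True, "honesty": True, "time": True},
    {"logic": True, "honesty": False, "time": True},
    {"logic": False, "honesty": True, "time": True},
    {"logic": True, "honesty": True, "time": True},
]
print(failure_rate(sample))  # → 0.5

# The paper's own numbers: 126 failures out of 564 participants.
print(round(126 / 564 * 100, 2))  # → 22.34
```

At that rate, a researcher budgeting for 500 usable responses would need to recruit roughly 500 / (1 − 0.2234) ≈ 644 participants.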