Peer-Reviewed Scientific Journals Don’t Really Do Their Job


Article by Simine Vazire: “The rush for scientific cures and treatments for Covid-19 has opened the floodgates of direct communication between scientists and the public. Instead of waiting for their work to go through the slow process of peer review at scientific journals, scientists are now often going straight to print themselves, posting write-ups of their work to public servers as soon as they’re complete. This disregard for the traditional gatekeepers has led to grave concerns among both scientists and commentators: Might not shoddy science—and dangerous scientific errors—make their way into the media, and spread before an author’s fellow experts can correct it? As two journalism professors suggested in an op-ed last month for The New York Times, it’s possible the recent spread of so-called preprints has only “sown confusion and discord with a general public not accustomed to the high level of uncertainty inherent in science.”

There’s another way to think about this development, however. Instead of showing (once again) that formal peer review is vital for good science, the last few months could just as well suggest the opposite. To me, at least—someone who’s served as an editor at seven different journals, and editor in chief at two—the recent spate of decisions to bypass traditional peer review gives the lie to a pair of myths that researchers have encouraged the public to believe for years: First, that peer-reviewed journals publish only trustworthy science; and second, that trustworthy science is published only in peer-reviewed journals.

Scientists allowed these myths to spread because it was convenient for us. Peer-reviewed journals came into existence largely to keep government regulators off our backs. Scientists believe that we are the best judges of the validity of each other’s work. That’s very likely true, but it’s a huge leap from that to “peer-reviewed journals publish only good science.” The most selective journals still allow flawed studies—even really terribly flawed ones—to be published all the time. Earlier this month, for instance, the journal Proceedings of the National Academy of Sciences put out a paper claiming that mandated face coverings are “the determinant in shaping the trends of the pandemic.” PNAS is a very prestigious journal, and its website claims that it is an “authoritative source” that works “to publish only the highest quality scientific research.” However, this paper was quickly and thoroughly criticized on social media; by last Thursday, 45 researchers had signed a letter formally calling for its retraction.

Now the jig is up. Scientists are writing papers that they want to share as quickly as possible, without waiting the months or sometimes years it takes to go through journal peer review. So they’re ditching the pretense that journals are a sure-fire quality control filter, and sharing their papers as self-published PDFs. This might be just the shakeup that peer review needs….(More)”.

The Data Assembly


Press Release: “The Governance Lab (The GovLab), an action research center at New York University Tandon School of Engineering, with the support of the Henry Luce Foundation, announced the creation of The Data Assembly. Beginning in New York City, the effort will explore how communities perceive the risks and benefits of data re-use for COVID-19. Understanding that policymakers often lack information about the concerns of different stakeholders, The Data Assembly’s deliberations will inform the creation of a responsible data re-use framework to guide the use of data and technology at the city and state level to fight COVID-19’s many consequences.

The Data Assembly will hold deliberations with civil rights organizations, key data holders and policymakers, and the public at large. Consultations with these stakeholders will take place through a series of remote engagements, including surveys and an online town hall meeting. This work will allow the project to consider the perspectives of people from different strata of society and how they might exercise some control over the flow of data.

After the completion of these data re-use deliberations, The Data Assembly will create a path forward for using data responsibly to solve public challenges. The first phases of the project will commence in New York City, seeking to engage with city residents and their leaders on data governance issues. 

“Data is increasingly the primary format for sharing information to understand crises and plan recovery efforts; empowering everyone to better understand how data is collected and how it should be used is paramount,” said Adrienne Schmoeker, Director of Civic Engagement & Strategy and Deputy Chief Analytics Officer at the NYC Mayor’s Office of Data Analytics. “We look forward to learning from the insights gathered by the GovLab through The Data Assembly work they are conducting in New York City.”…(More)”.

A Practical Guide for Establishing an Evidence Centre


Report by the Alliance for Useful Evidence: “Since 2013, Nesta and the Alliance for Useful Evidence have supported the development of more than eight evidence centres. This report draws on insight from our own experience, published material and interviews with senior leaders from a range of evidence intermediaries.

The report identifies five common ingredients that contribute to successful evidence centres:

  1. Clear objectives: Good knowledge of the centre’s intended user group(s), clear outcomes to work towards and an evidence-informed theory of change.
  2. Robust organisational development: Commitment to create an independent and sustainable organisation with effective governance and the right mix of skills and experience, over a timescale that will be sufficient to make a difference.
  3. Engaged users: Understanding users’ evidence needs and working collaboratively with them to increase their capability, motivation and opportunity to use evidence in their decision-making.
  4. Rigorous curation and creation of evidence: A robust and transparent approach to selecting and generating high-quality evidence for the centre’s users.
  5. A focus on impact: Commitment to learn from the centre’s activities, including successes and failures, so that the centre can increase its effectiveness in achieving its objectives…(More)”.

UK parliamentary select committees: crowdsourcing for evidence-based policy or grandstanding?


Paper by the LSE GV314 Group: “In the United Kingdom, the influence of parliamentary select committees on policy depends substantially on the ‘seriousness’ with which they approach the task of gathering and evaluating a wide range of evidence and producing reports and recommendations based on it. However, select committees are often charged with being concerned with ‘political theatre’ and ‘grandstanding’ rather than producing evidence-based policy recommendations. This study, based on a survey of 919 ‘discretionary’ witnesses, including those submitting written and oral evidence, examines the case for arguing that there is political bias and grandstanding in the way select committees go about selecting witnesses, interrogating them and using their evidence to put reports together. While the research finds some evidence of such ‘grandstanding’, it does not appear to be strong enough to suggest that the role of select committees as crowdsourcers of evidence is compromised….(More)”.

German humanities scholars enlisted to end coronavirus lockdown


David Matthews at THE: “In contrast to other countries, philosophers, historians, theologians and jurists have played a major role advising the state as it seeks to loosen restrictions…

In the struggle against the new coronavirus, humanities academics have entered the fray – in Germany at least.

Arguably to a greater extent than has happened in the UK, France or the US, the country has enlisted the advice of philosophers, historians of science, theologians and jurists as it navigates the delicate ethical balancing act of reopening society while safeguarding the health of the public.

When the German federal government announced a slight loosening of restrictions on 15 April – allowing small shops to open and some children to return to school in May – it had been eagerly awaiting a report written by a 26-strong expert group containing only a minority of natural scientists and barely a handful of virologists and medical specialists.

Instead, this working group from the Leopoldina – Germany’s independent National Academy of Sciences dating back to 1652 – included historians of industrialisation and early Christianity, a specialist on the philosophy of law and several pedagogical experts.

This paucity of virologists earned the group a swipe from Markus Söder, minister-president of badly hit Bavaria, who has led calls in Germany for a tough lockdown (although earlier in the pandemic the Leopoldina did release a report written by more medically focused specialists).

But “the crisis is a complex one, it’s a systemic crisis” and so it needs to be dissected from every angle, argued Jürgen Renn, director of the Max Planck Institute for the History of Science, and one of those who wrote the crucial recommendations.

And Professor Renn – who earlier this year published a book on rethinking science in the Anthropocene – made the argument for green post-virus reconstruction. Urbanisation and deforestation have squashed mankind and wildlife together, making other animal-to-human disease transmissions ever more likely, he argued. “It’s not the only virus waiting out there,” he said.

Germany’s Ethics Council – which traces its roots back to the stem cell debates of the early 2000s and is composed of theologians, jurists, philosophers and other ethical thinkers – also contributed to a report at the end of March, warning that it was up to elected politicians, not scientists, to make the “painful decisions” weighing up the lockdown’s effect on health and its other side-effects….(More)”.

Experts and the Will of the People


Book by Harry Collins, Robert Evans, Darrin Durant and Martin Weinel: “The rise of populism in the West has led to attacks on the legitimacy of scientific expertise in political decision making. This book explores the differences between populism and pluralist democracy and their relationship with science. Pluralist democracy is characterised by respect for minority choices and a system of checks and balances that prevents power being concentrated in one group, while populism treats minorities as traitorous so as to concentrate power in the government. The book argues that scientific expertise – and science more generally – should be understood as one of the checks and balances in pluralist democracies. It defends science as ‘craftwork with integrity’ and shows how its crucial role in democratic societies can be rethought and that it must be publicly explained. This book will be of value to scholars and practitioners working across STS as well as to anyone interested in decoding the populist agenda against science….(More)”.

Comparative Constitution Making


Book edited by Hanna Lerner and David Landau: “In a seminal article more than two decades ago, Jon Elster lamented that despite the large volume of scholarship in related fields, such as comparative constitutional law and constitutional design, there was a severe dearth of work on the process and context of constitution making. Happily, his point no longer holds. Recent years have witnessed a near-explosion of high-quality work on constitution-making processes, across a range of fields including law, political science, and history. This volume attempts to synthesize and expand upon this literature. It offers a number of different perspectives and methodologies aimed at understanding the contexts in which constitution making takes place, its motivations, the theories and processes that guide it, and its effects. The goal of the contributors is not simply to explain the existing state of the field, but also to provide new research on these key questions.

Our aims in this introduction are relatively modest. First, we seek to set up some of the major questions treated by recent research in order to explain how the chapters in this volume contribute to them. We do not aim to give a complete state of the field, but we do lay out what we see as several of the biggest challenges and questions posed by recent scholarship. …(More)”.

Why policy networks don’t work (the way we think they do)


Blog by James Georgalakis: “Is it who you know or what you know? The literature on evidence uptake and the role of communities of experts mobilised at times of crisis convinced me that a useful approach would be to map the social network that emerged around the UK-led mission to Sierra Leone so it could be quantitatively analysed. Despite the well-deserved plaudits for my colleagues at IDS and their partners at the London School of Hygiene and Tropical Medicine, the UK Department for International Development (DFID), the Wellcome Trust and elsewhere, I was curious to know why they had still met real resistance to some of their policy advice. This included the provision of home care kits for victims of the virus who could not access government- or NGO-run Ebola Treatment Units (ETUs).

It seemed unlikely these challenges were related to poor communications. The timely provision of accessible research knowledge by the Ebola Response Anthropology Platform has been one of the most celebrated aspects of the mobilisation of anthropological expertise. This approach is now being replicated in the current Ebola response in the Democratic Republic of Congo (DRC). Perhaps the answer was in the network itself. This was certainly indicated by some of the accounts of the crisis by those directly involved.

Social network analysis

I started by identifying the most important-looking policy interactions that took place between March 2014, prior to the UK assuming leadership of the Sierra Leone international response, and mid-2016, when West Africa was finally declared Ebola free. They had to be central to the efforts to coordinate the UK response and harness the use of evidence. I then looked for documents related to these events, a mixture of committee minutes, reports and correspondence, that could confirm who was an active participant in each. This analysis of secondary sources related to eight separate policy processes and produced a list of 129 individuals. However, I later removed a large UK conference that took place in early 2016 at which learning from the crisis was shared. It appeared that most delegates had no significant involvement in giving policy advice during the crisis. This reduced the network to 77….(More)”.
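
The post does not include the underlying analysis, but the two-mode mapping it describes (people linked to the policy events they participated in) is straightforward to sketch. Below is a minimal illustration in Python using networkx; the event names and participant lists are hypothetical placeholders, not data from the study.

```python
# A minimal sketch of the two-mode (person-event) network analysis described
# above, using networkx. Event names and participants are hypothetical.
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical mapping of policy events to documented participants
events = {
    "SAGE_meeting_2014": ["A. Smith", "B. Jones", "C. Okoro"],
    "DFID_task_force": ["B. Jones", "D. Patel"],
    "ETU_policy_review": ["A. Smith", "D. Patel", "E. Conteh"],
}

# Build the bipartite graph: one node set for events, one for people
G = nx.Graph()
for event, people in events.items():
    G.add_node(event, kind="event")
    for person in people:
        G.add_node(person, kind="person")
        G.add_edge(event, person)

# Project onto the person set: two people are linked if they took part in
# at least one policy process together (edge weights count shared events)
people_nodes = {n for n, d in G.nodes(data=True) if d["kind"] == "person"}
P = bipartite.weighted_projected_graph(G, people_nodes)

# Degree centrality surfaces the best-connected actors in the network
for person, score in sorted(nx.degree_centrality(P).items(),
                            key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
```

Projecting the bipartite graph onto the person set yields the kind of co-participation network the author analysed; centrality scores are one way to surface the best-connected advisers, though, as the post argues, connectivity alone does not explain why advice met resistance.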

For academics, what matters more: journal prestige or readership?


Katie Langin at Science: “With more than 30,000 academic journals now in circulation, academics can have a hard time figuring out where to submit their work for publication. The decision is made all the more difficult by the sky-high pressure of today’s academic environment—including working toward tenure and trying to secure funding, which can depend on a researcher’s publication record. So, what does a researcher prioritize?

According to a new study posted on the bioRxiv preprint server, faculty members say they care most about whether the journal is read by the people they most want to reach—but they think their colleagues care most about journal prestige. Perhaps unsurprisingly, prestige also held more sway for untenured faculty members than for their tenured colleagues.

“I think that it is about the security that comes with being later in your career,” says study co-author Juan Pablo Alperin, an assistant professor in the publishing program at Simon Fraser University in Vancouver, Canada. “It means you can stop worrying so much about the specifics of what is being valued; there’s a lot less at stake.”

According to a different preprint that Alperin and his colleagues posted on PeerJ in April, 40% of research-intensive universities in the United States and Canada explicitly mention that journal impact factors can be considered in promotion and tenure decisions. More universities likely do so unofficially, with faculty members using journal names on a CV as a kind of shorthand for how “good” a candidate’s publication record is. “You can’t ignore the fact that journal impact factor is a reality that gets looked at,” Alperin says. But some argue that journal prestige and impact factor are overemphasized and harm science, and that academics should focus on the quality of individual work rather than journal-wide metrics.

In the new study, only 31% of the 338 faculty members who were surveyed—all from U.S. and Canadian institutions and from a variety of disciplines, including 38% in the life and physical sciences and math—said that journal prestige was “very important” to them when deciding where to submit a manuscript. The highest priority was journal readership, which half said was very important. Fewer respondents felt that publication costs (24%) and open access (10%) deserved the highest importance rating.

But, when those same faculty members were asked to assess how their colleagues make the same decision, journal prestige shot to the top of the list, with 43% of faculty members saying that it was very important to their peers when deciding where to submit a manuscript. Only 30% of faculty members thought the same thing about journal readership—a drop of 20 percentage points compared with how faculty members assessed their own motivations….(More)”.

New technology and ‘old’ think tanks


Article by Tom Ascott: ‘Expert or academic carries out research. Generates rigorous 40-page report. Comms officer is asked to promote said report. Launch event, press release, tweets. Maybe a video. Maybe an infographic’. This is the formula for how think tanks seek to influence policy matters. It is how they build, maintain and increase their credibility. While it has arguably worked since the expansion of the think tank community following the Cold War, this model of disseminating information is now fraying.

It is not a sustainable model because it is largely, and in some ways even designed to be, inaccessible to a larger and now increasingly inquisitive public. This inaccessibility is only accentuated by the large number of institutes specialising in niche subjects, which are often more agile and better able to leverage technology to their advantage. Tastes also change: for many of today’s potential punters, the enforced networking associated with think tank events may be considered a negative experience; being able to watch lectures and conferences from home, alone, may now be considered of greater benefit.

Publishing written reports and holding launch events, unlike broader communications methods, often target specific policymakers or stakeholders. In the short term, this strategy may work for think tanks, in the sense that they can address their core audiences. Still, the model faces two main hurdles.

One is providing policymakers with what they need. Paul C Avey and Michael C Desch, two US-based academics, found in their study ‘What Do Policymakers Want From Us?’ that ‘the only methodology that more than half of the respondents characterised as “not very useful” or “not useful at all” was formal models’. The respondents in their study thought that the best policy advice came from practitioners or journalists, those looking at underlying causes. Yet some ‘think tankers’ continue to take a dim view of journalism, for the very reasons which make journalism important: rapidly responding to developing events, and offering a broader perspective, usually shorn of the uncertainties inherent in deeper knowledge or analysis.

The second, broader, problem is how think tanks are perceived. US President Barack Obama famously ‘disdain[ed]’ foreign policy establishments and institutes, and those who are not engaged with them perceive them as being elitist….(More)”.