Book by Alex John London: “The foundations of research ethics are riven with fault lines emanating from a fear that if research is too closely connected to weighty social purposes an imperative to advance the common good through research will justify abrogating the rights and welfare of study participants. The result is an impoverished conception of the nature of research, an incomplete focus on actors who bear important moral responsibilities, and a system of ethics and oversight highly attuned to the dangers of research but largely silent about threats of ineffective, inefficient, and inequitable medical practices and health systems. In For the Common Good: Philosophical Foundations of Research Ethics, Alex John London defends a conception of the common good that grounds a moral imperative with two requirements. The first is to promote research that generates the information necessary to enable key social institutions to effectively, efficiently, and equitably safeguard the basic interests of individuals. The second is to ensure that research is organized as a voluntary scheme of social cooperation that respects its various contributors’ moral claims to be treated as free and equal. Connecting research to the goals of a just social order grounds a framework for assessing and managing research risk that reconciles these requirements and justifies key oversight practices in non-paternalistic terms. Reconceiving research ethics as resolving coordination problems and providing credible assurance that these requirements are being met expands the issues and actors that fall within the purview of the field and provides the foundation for a more unified and coherent approach to domestic and international research…(More)”.
The West already monopolized scientific publishing. Covid made it worse.
Samanth Subramanian at Quartz: “For nearly a decade, Jorge Contreras has been railing against the broken system of scientific publishing. Academic journals are dominated by Western scientists, who not only fill their pages but also work for institutions that can afford the hefty subscription fees to these journals. “These issues have been brewing for decades,” said Contreras, a professor at the University of Utah’s College of Law who specializes in intellectual property in the sciences. “The covid crisis has certainly exacerbated things, though.”
The coronavirus pandemic triggered a torrent of academic papers. By August 2021, at least 210,000 new papers on covid-19 had been published, according to a Royal Society study. Of the 720,000-odd authors of these papers, nearly 270,000 were from the US, the UK, Italy or Spain.
These papers carry research forward, of course—but they also advance their authors’ careers, earning them grants and patents. Yet many of these papers are based on data gathered in the global south, by scientists who may lack the resources to expand on their research and publish. Such scientists aren’t always credited in the papers their data give rise to; to make things worse, the papers appear in journals that are out of the financial reach of these scientists and their institutes.
These imbalances have, as Contreras said, been a part of the publishing landscape for years. (And they don’t occur just in the sciences; economists from the US or the UK, for instance, tend to study countries where English is the most common language.) But the pace and pressures of covid-19 have rendered these iniquities especially stark.
Scientists have paid to publish their covid-19 research—sometimes as much as $5,200 per article. Subscriber-only journals maintain their high fees, running into thousands of dollars a year; in 2020, the Dutch publishing house Elsevier, which puts out journals such as Cell and Gene, reported a profit of nearly $1 billion, at a margin higher than that of Apple or Amazon. And Western scientists are pressing to keep data out of GISAID, a genome database that compels users to acknowledge or collaborate with anyone who deposits the data…(More)”
The Attack of Zombie Science
Article by Natalia Pasternak, Carlos Orsi, Aaron F. Mertz, & Stuart Firestein: “When we think about how science is distorted, we usually think about concepts that have ample currency in public discourse, such as pseudoscience and junk science. Practices like astrology and homeopathy come wrapped in scientific concepts and jargon that can’t meet the methodological requirements of actual sciences. During the COVID-19 pandemic, pseudoscience has had a field day. Bleach, anyone? Bear bile? Yet the pandemic has brought a newer, more subtle form of distortion to light. To the philosophy of science, we humbly submit a new concept: “zombie science.”
We think of zombie science as mindless science: it goes through the motions of scientific research without a real research question to answer, follows all the correct methodology, but doesn’t aspire to advance knowledge in the field. Practically all the information about hydroxychloroquine during the pandemic falls into that category, including not just the living dead found in preprint repositories, but also papers published in journals that ought to have been caught by a more discerning eye. Journals, after all, invest their reputation in every piece they choose to publish. And every investment in useless science is a net loss.
From a social and historical stance, it seems almost inevitable that the penchant for productivism in the academic and scientific world would end up encouraging zombie science. If those who do not publish perish, then publishing—even nonsense or irrelevancies—is a matter of life or death. The peer-review process and the criteria for editorial importance are filters, for sure, but they are limited. Not only do they get clogged and overwhelmed due to excess submissions, they have to deal with the weaknesses of the human condition, including feelings of personal loyalty, prejudice, and vanity. Additionally, these filters fail, as the proliferation of predatory journals shows us all too well…(More)”.
Deliberate Ignorance: Choosing Not to Know
Book edited by Ralph Hertwig and Christoph Engel: “The history of intellectual thought abounds with claims that knowledge is valued and sought, yet individuals and groups often choose not to know. We call the conscious choice not to seek or use knowledge (or information) deliberate ignorance. When is this a virtue, when is it a vice, and what can be learned from formally modeling the underlying motives? On which normative grounds can it be judged? Which institutional interventions can promote or prevent it? In this book, psychologists, economists, historians, computer scientists, sociologists, philosophers, and legal scholars explore the scope of deliberate ignorance.
Drawing from multiple examples, including the right not to know in genetic testing, collective amnesia in transformational societies, blind orchestral auditions, and “don’t ask, don’t tell” policies, the contributors offer novel insights and outline avenues for future research into this elusive yet fascinating aspect of human nature…(More)”.
Surveillance Publishing
Working paper by Jefferson D. Pooley: “…This essay lingers on a prediction too: Clarivate’s business model is coming for scholarly publishing. Google is one peer, but the company’s real competitors are Elsevier, Springer Nature, Wiley, Taylor & Francis, and SAGE. Elsevier, in particular, has been moving into predictive analytics for years now. Of course the publishing giants have long profited off of academics and our university employers—by packaging scholars’ unpaid writing-and-editing labor only to sell it back to us as usuriously priced subscriptions or APCs. That’s a lucrative business that Elsevier and the others won’t give up. But they’re layering another business on top of their legacy publishing operations, in the Clarivate mold. The data trove that publishers are sitting on is, if anything, far richer than the citation graph alone. Why worry about surveillance publishing? One reason is the balance sheet, since the companies’ trading in academic futures will further pad profits at the expense of taxpayers and students. The bigger reason is that our behavior—once alienated from us and abstracted into predictive metrics—will double back onto our work lives. Existing biases, like male academics’ propensity for self-citation, will receive a fresh coat of algorithmic legitimacy. More broadly, the academic reward system is already distorted by metrics. To the extent that publishers’ tallies and indices get folded into grant-making, tenure-and-promotion, and other evaluative decisions, the metric tide will gain power. The biggest risk is that scholars will internalize an analytics mindset, one already encouraged by citation counts and impact factors…(More)”.
Are we witnessing the dawn of post-theory science?
Essay by Laura Spinney: “Does the advent of machine learning mean the classic methodology of hypothesise, predict and test has had its day?…
Isaac Newton apocryphally discovered his second law of motion after an apple fell on his head. Much experimentation and data analysis later, he realised there was a fundamental relationship between force, mass and acceleration. He formulated a theory to describe that relationship – one that could be expressed as an equation, F=ma – and used it to predict the behaviour of objects other than apples. His predictions turned out to be right (if not always precise enough for those who came later).
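The predictive character the essay describes can be made concrete in a few lines: given a force and a mass, F=ma yields a testable acceleration. A minimal sketch (the function name and numbers are illustrative, not from the essay):

```python
def predicted_acceleration(force_newtons: float, mass_kg: float) -> float:
    """Newton's second law, F = ma, rearranged to predict acceleration (a = F/m)."""
    return force_newtons / mass_kg

# A 10 N force applied to a 2 kg object should produce 5 m/s^2 --
# a prediction that can then be checked against measurement.
print(predicted_acceleration(10.0, 2.0))  # → 5.0
```

The point of contrast with machine learning systems is that this one-line rule is itself the explanation: the same compact relationship both predicts and accounts for the behaviour.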
Contrast how science is increasingly done today. Facebook’s machine learning tools predict your preferences better than any psychologist. AlphaFold, a program built by DeepMind, has produced the most accurate predictions yet of protein structures based on the amino acids they contain. Both are completely silent on why they work: why you prefer this or that information; why this sequence generates that structure.
You can’t lift a curtain and peer into the mechanism. They offer up no explanation, no set of rules for converting this into that – no theory, in a word. They just work, and they work well. We witness the social effects of Facebook’s predictions daily. AlphaFold has yet to make its impact felt, but many are convinced it will change medicine.
Somewhere between Newton and Mark Zuckerberg, theory took a back seat. In 2008, Chris Anderson, the then editor-in-chief of Wired magazine, predicted its demise. So much data had accumulated, he argued, and computers were already so much better than us at finding relationships within it, that our theories were being exposed for what they were – oversimplifications of reality. Soon, the old scientific method – hypothesise, predict, test – would be relegated to the dustbin of history. We’d stop looking for the causes of things and be satisfied with correlations.
With the benefit of hindsight, we can say that what Anderson saw is true (he wasn’t alone). The complexity that this wealth of data has revealed to us cannot be captured by theory as traditionally understood. “We have leapfrogged over our ability to even write the theories that are going to be useful for description,” says computational neuroscientist Peter Dayan, director of the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. “We don’t even know what they would look like.”
But Anderson’s prediction of the end of theory looks to have been premature – or maybe his thesis was itself an oversimplification. There are several reasons why theory refuses to die, despite the successes of such theory-free prediction engines as Facebook and AlphaFold. All are illuminating, because they force us to ask: what’s the best way to acquire knowledge and where does science go from here?…(More)”
‘In Situ’ Data Rights
Essay by Marshall W Van Alstyne, Georgios Petropoulos, Geoffrey Parker, and Bertin Martens: “…Data portability sounds good in theory—number portability improved telephony—but this theory has its flaws.
- Context: The value of data depends on context. Removing data from that context removes value. A portability exercise by experts at the ProgrammableWeb succeeded in downloading basic Facebook data but failed on a re-upload. Individual posts shed the prompts that preceded them and the replies that followed them. After all, that data concerns others.
- Stagnation: Without a flow of updates, a captured stock depreciates. Data must be refreshed to stay current, and potential users must see those data updates to stay informed.
- Impotence: Facts removed from their place of residence become less actionable. We cannot use them to make a purchase when removed from their markets or reach a friend when they are removed from their social networks. Data must be reconnected to be reanimated.
- Market Failure: Innovation is slowed. Consider how markets for business analytics and B2B services develop. Lacking complete context, third parties can only offer incomplete benchmarking and analysis. Platforms that do offer market overview services can charge monopoly prices because they have context that partners and competitors do not.
- Moral Hazard: Proposed laws seek to give merchants data portability rights but these entail a problem that competition authorities have not anticipated. Regulators seek to help merchants “multihome,” to affiliate with more than one platform. Merchants can take their earned ratings from one platform to another and foster competition. But, when a merchant gains control over its ratings data, magically, low reviews can disappear! Consumers fraudulently edited their personal records under early U.K. open banking rules. With data editing capability, either side can increase fraud, surely not the goal of data portability.
Evidence suggests that following GDPR, E.U. ad effectiveness fell, E.U. web revenues fell, investment in E.U. startups fell, and the stock and flow of apps available in the E.U. fell, while Google and Facebook, which already had user data, gained rather than lost market share as small firms faced new hurdles the incumbents managed to avoid. To date, the results are far from regulators’ intentions.
We propose a new in situ data right for individuals and firms, and a new theory of benefits. Rather than take data from the platform, or ex situ as portability implies, let us grant users the right to use their data in the location where it resides. Bring the algorithms to the data instead of bringing the data to the algorithms. Users determine when and under what conditions third parties access their in situ data in exchange for new kinds of benefits. Users can revoke access at any time and third parties must respect that. This patches and repairs the portability problems…(More).”
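The mechanism the authors propose—algorithms travel to the data, users grant and revoke third-party access—can be sketched as a simple access-control pattern. Everything below (the class name, methods, and example parties) is a hypothetical illustration of the idea, not an API from the essay:

```python
class InSituDataStore:
    """Hypothetical sketch: data never leaves the platform; approved
    third-party algorithms are run against it in place."""

    def __init__(self, data):
        self._data = data      # the data stays resident on the platform
        self._grants = set()   # third parties the user has approved

    def grant_access(self, third_party: str) -> None:
        """User permits a named third party to compute on the data in situ."""
        self._grants.add(third_party)

    def revoke_access(self, third_party: str) -> None:
        """Access is revocable at any time, as the proposal requires."""
        self._grants.discard(third_party)

    def run(self, third_party: str, algorithm):
        """Bring the algorithm to the data, never the data to the algorithm."""
        if third_party not in self._grants:
            raise PermissionError(f"{third_party} has no in situ access")
        return algorithm(self._data)  # only the result leaves, not the raw data

store = InSituDataStore([4, 8, 15, 16])
store.grant_access("analytics-co")
print(store.run("analytics-co", lambda d: sum(d) / len(d)))  # aggregate leaves; data does not
store.revoke_access("analytics-co")
```

Because only computed results cross the boundary, the context, freshness, and connectivity problems of ex situ portability listed above do not arise in this pattern.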
The Quiet Before
Book by Gal Beckerman: “We tend to think of revolutions as loud: frustrations and demands shouted in the streets. But the ideas fueling them have traditionally been conceived in much quieter spaces, in the small, secluded corners where a vanguard can whisper among themselves, imagine alternate realities, and deliberate about how to achieve their goals. This extraordinary book is a search for those spaces, over centuries and across continents, and a warning that—in a world dominated by social media—they might soon go extinct.
Gal Beckerman, an editor at The New York Times Book Review, takes us back to the seventeenth century, to the correspondence that jump-started the scientific revolution, and then forward through time to examine engines of social change: the petitions that secured the right to vote in 1830s Britain, the zines that gave voice to women’s rage in the early 1990s, and even the messaging apps used by epidemiologists fighting the pandemic in the shadow of an inept administration. In each case, Beckerman shows that our most defining social movements—from decolonization to feminism—were formed in quiet, closed networks that allowed a small group to incubate their ideas before broadcasting them widely.
But Facebook and Twitter are replacing these productive, private spaces, to the detriment of activists around the world. Why did the Arab Spring fall apart? Why did Occupy Wall Street never gain traction? Has Black Lives Matter lived up to its full potential? Beckerman reveals what this new social media ecosystem lacks—everything from patience to focus—and offers a recipe for growing radical ideas again…(More)”.
Incentivising research data sharing: a scoping review
Paper by Helen Buckley Woods and Stephen Pinfield: “Numerous mechanisms exist to incentivise researchers to share their data. This scoping review aims to identify and summarise evidence of the efficacy of different interventions to promote open data practices and provide an overview of current research… Seven major themes in the literature were identified: publisher/journal data sharing policies, metrics, software solutions, research data sharing agreements in general, open science ‘badges’, funder mandates, and initiatives…
A number of key messages for data sharing include: the need to build on existing cultures and practices, meeting people where they are and tailoring interventions to support them; the importance of publicising and explaining the policy/service widely; the need to have disciplinary data champions to model good practice and drive cultural change; the requirement to resource interventions properly; and the imperative to provide robust technical infrastructure and protocols, such as labelling of data sets, use of DOIs, data standards and use of data repositories….(More)”.
The 2021 Good Tech Awards
Kevin Roose at the New York Times: “…Especially at a time when many of tech’s leaders seem more interested in building new, virtual worlds than improving the world we live in, it’s worth praising the technologists who are stepping up to solve some of our biggest problems.
So here, without further ado, are this year’s Good Tech Awards…
To DeepMind, for cracking the protein problem (and publishing its work)
One of the year’s most exciting A.I. breakthroughs came in July when DeepMind — a Google-owned artificial intelligence company — published data and open-source code from its groundbreaking AlphaFold project.
The project, which used A.I. to predict the structures of proteins, solved a problem that had vexed scientists for decades, and was hailed by experts as one of the greatest scientific discoveries of all time. And by publishing its data freely, AlphaFold set off a frenzy among researchers, some of whom are already using it to develop new drugs and better understand the proteins involved in viruses like SARS-CoV-2.
Google’s overall A.I. efforts have been fraught with controversy and missteps, but AlphaFold seems like an unequivocally good use of the company’s vast expertise and resources…
To Recidiviz and Ameelio, for bringing better tech to the criminal justice system
Prisons aren’t known as hotbeds of innovation. But two tech projects this year tried to make our criminal justice system more humane.
Recidiviz is a nonprofit tech start-up that builds open-source data tools for criminal justice reform. It was started by Clementine Jacoby, a former Google employee who saw an opportunity to corral data about the prison system and make it available to prison officials, lawmakers, activists and researchers to inform their decisions. Its tools are in use in seven states, including North Dakota, where the data tools helped prison officials assess the risk of Covid-19 outbreaks and identify incarcerated people who were eligible for early release….(More)”.