Statistics, lies and the virus: lessons from a pandemic


Tim Harford at the Financial Times: “Will this year be 1954 all over again? Forgive me, I have become obsessed with 1954, not because it offers another example of a pandemic (that was 1957) or an economic disaster (there was a mild US downturn in 1953), but for more parochial reasons. Nineteen fifty-four saw the appearance of two contrasting visions for the world of statistics — visions that have shaped our politics, our media and our health. This year confronts us with a similar choice.

The first of these visions was presented in How to Lie with Statistics, a book by a US journalist named Darrell Huff. Brisk, intelligent and witty, it is a little marvel of numerical communication. The book received rave reviews at the time, has been praised by many statisticians over the years and is said to be the best-selling work on the subject ever published. It is also an exercise in scorn: read it and you may be disinclined to believe a number-based claim ever again….

But they can — and back in 1954, the alternative perspective was embodied in the publication of an academic paper by the British epidemiologists Richard Doll and Austin Bradford Hill. They marshalled some of the first compelling evidence that smoking cigarettes dramatically increases the risk of lung cancer. The data they assembled persuaded both men to quit smoking and helped save tens of millions of lives by prompting others to do likewise. This was no statistical trickery, but a contribution to public health that is almost impossible to exaggerate…

As described in books such as Merchants of Doubt by Erik Conway and Naomi Oreskes, this industry perfected the tactics of spreading uncertainty: calling for more research, emphasising doubt and the need to avoid drastic steps, highlighting disagreements between experts and funding alternative lines of inquiry. The same tactics, and sometimes even the same personnel, were later deployed to cast doubt on climate science. These tactics are powerful in part because they echo the ideals of science.

It is a short step from the Royal Society’s motto, “nullius in verba” (take nobody’s word for it), to the corrosive nihilism of “nobody knows anything”. So will 2020 be another 1954? From the point of view of statistics, we seem to be standing at another fork in the road.

The disinformation is still out there, as the public understanding of Covid-19 has been muddied by conspiracy theorists, trolls and government spin doctors. Yet the information is out there too. The value of gathering and rigorously analysing data has rarely been more evident. Faced with a complete mystery at the start of the year, statisticians, scientists and epidemiologists have been working miracles. I hope that we choose the right fork, because the pandemic has lessons to teach us about statistics — and vice versa — if we are willing to learn…(More)”.
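The Doll and Hill study Harford cites was a case-control design: compare the smoking histories of lung-cancer patients with those of controls and ask how strongly exposure and disease are associated. As a minimal sketch of that style of reasoning, the standard summary statistic is the odds ratio; the counts below are invented purely for illustration and are not the study's actual figures.

```python
# Hypothetical case-control counts, invented purely to show the
# arithmetic; these are NOT the figures from Doll & Hill (1950).
import math

# 2x2 table: rows = exposure (smoker / non-smoker),
#            columns = outcome (lung-cancer case / healthy control)
cases_smokers, cases_nonsmokers = 280, 20
controls_smokers, controls_nonsmokers = 220, 80

# Odds ratio: odds of exposure among cases over odds among controls
odds_ratio = (cases_smokers * controls_nonsmokers) / (
    cases_nonsmokers * controls_smokers
)

# Approximate 95% confidence interval on the log scale (Woolf's method)
se_log_or = math.sqrt(
    1 / cases_smokers + 1 / cases_nonsmokers
    + 1 / controls_smokers + 1 / controls_nonsmokers
)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"odds ratio = {odds_ratio:.1f}, 95% CI ({ci_low:.1f}, {ci_high:.1f})")
```

The point of the exercise is Harford's: a single well-constructed 2×2 table, honestly analysed, carries evidential weight that no rhetorical trick can match.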

Computational social science: Obstacles and opportunities


Paper by David M. J. Lazer et al: “The field of computational social science (CSS) has exploded in prominence over the past decade, with thousands of papers published using observational data, experimental designs, and large-scale simulations that were once unfeasible or unavailable to researchers. These studies have greatly improved our understanding of important phenomena, ranging from social inequality to the spread of infectious diseases. The institutions supporting CSS in the academy have also grown substantially, as evidenced by the proliferation of conferences, workshops, and summer schools across the globe, across disciplines, and across sources of data. But the field has also fallen short in important ways. Many institutional structures around the field—including research ethics, pedagogy, and data infrastructure—are still nascent. We suggest opportunities to address these issues, especially in improving the alignment between the organization of the 20th-century university and the intellectual requirements of the field….(More)”.

Law and Technology Realism


Paper by Thibault Schrepel: “One may identify two current trends in the field of “Law and Technology.” The first trend concerns technological determinism. Some argue that technology is deterministic: the state of technological advancement is the determining factor of society. Others oppose that view, claiming it is society that affects technology. The second trend concerns technological neutrality. Some say that technology is neutral, meaning the effects of technology depend entirely on the social context. Others defend the opposite: they view the effects of technology as being inevitable (regardless of the society in which it is used).

Figure 1

While it is commonly accepted that technology is deterministic, I am under the impression that a majority of “Law and Technology” scholars also believe that technology is non-neutral. It follows that, according to this dominant view, (1) technology drives society in good or bad directions (determinism), and that (2) certain uses of technology may lead to the reduction or enhancement of the common good (non-neutrality). Consequently, this leads to top-down tech policies where the regulator has the impossible burden of helping society control and orient technology to the best possible extent.

This article is deterministic and non-neutral.

But, here’s the catch. Most of today’s doctrine focuses almost exclusively on the negativity brought by technology (read Nick Bostrom, Frank Pasquale, Evgeny Morozov). Sure, these authors mention a few positive aspects, but still end up focusing on the negative ones. They’re asking to constrain technology on that sole basis. With this article, I want to raise another point: technological determinism can also drive society by providing solutions to centuries-old problems. In and of itself. This is not technological solutionism, as I am not arguing that technology can solve all of mankind’s problems, but it is not anti-solutionism either. I fear the extremes, anyway.

To make my point, I will discuss the issue addressed by Albert Hirschman in his famous book Exit, Voice, and Loyalty (Harvard University Press, 1970). Hirschman, at the time Professor of Economics at Harvard University, introduces the distinction between “exit” and “voice.” With exit, an individual exhibits her or his disagreement as a member of a group by leaving the group. With voice, the individual stays in the group but expresses her or his dissatisfaction in the hope of changing its functioning. Hirschman summarizes his theory on page 121, with the understanding that the optimal situation for any individual is to be capable of both “exit” and “voice”….(More)”.
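Hirschman's summary is in effect a two-by-two grid: each member of a group either has or lacks each of the two options. A toy sketch of that grid follows; the quadrant glosses are my own illustrative paraphrases, not Hirschman's or Schrepel's wording.

```python
# A toy rendering of the exit/voice grid; the quadrant glosses are my
# own illustrative paraphrases, not Hirschman's or Schrepel's wording.
from itertools import product

def describe(can_exit: bool, can_voice: bool) -> str:
    """Characterise a group member's position given the options open."""
    if can_exit and can_voice:
        return "optimal: may leave, or push for change from within"
    if can_exit:
        return "may only defect; no channel to improve the group"
    if can_voice:
        return "locked in, but able to protest and seek reform"
    return "captive and silent: neither escape nor influence"

for can_exit, can_voice in product([True, False], repeat=2):
    print(f"exit={str(can_exit):5}  voice={str(can_voice):5}  ->  "
          f"{describe(can_exit, can_voice)}")
```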

Why hypothesis testers should spend less time testing hypotheses


Paper by Anne M. Scheel, Leonid Tiokhin, Peder M. Isager, and Daniel Lakens: “For almost half a century, Paul Meehl educated psychologists about how the mindless use of null-hypothesis significance tests made research on theories in the social sciences basically uninterpretable (Meehl, 1990). In response to the replication crisis, reforms in psychology have focused on formalising procedures for testing hypotheses. These reforms were necessary and impactful. However, as an unexpected consequence, psychologists have begun to realise that they may not be ready to test hypotheses. Forcing researchers to prematurely test hypotheses before they have established a sound ‘derivation chain’ between test and theory is counterproductive. Instead, various non-confirmatory research activities should be used to obtain the inputs necessary to make hypothesis tests informative.

Before testing hypotheses, researchers should spend more time forming concepts, developing valid measures, establishing the causal relationships between concepts and their functional form, and identifying boundary conditions and auxiliary assumptions. Providing these inputs should be recognised and incentivised as a crucial goal in and of itself.

In this article, we discuss how shifting the focus to non-confirmatory research can tie together many loose ends of psychology’s reform movement and help us lay the foundation to develop strong, testable theories, as Paul Meehl urged us to….(More)”
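One of the inputs the authors name, valid measures, lends itself to a concrete demonstration. Under classical test theory, measurement error attenuates an observed correlation by the square root of the product of the two measures' reliabilities (Spearman's attenuation formula), so a significance test run on noisy instruments quietly tests a much smaller effect than the theory specifies. Below is a minimal simulation; the parameters are my own illustrative choices, not values from the paper.

```python
# Illustrative simulation: unreliable measures attenuate an effect, so a
# bare "significant / not significant" verdict may say more about the
# instruments than about the theory. All parameters are my own
# illustrative choices, not values from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, true_r = 100, 0.40
reliability_x, reliability_y = 0.50, 0.50  # share of variance that is signal

def noisy(latent, reliability, rng):
    """Add error so that var(signal) / var(total) == reliability."""
    error_sd = np.sqrt((1 - reliability) / reliability)
    return latent + rng.normal(0.0, error_sd, size=latent.shape)

# Latent (error-free) variables correlated at true_r
cov = [[1.0, true_r], [true_r, 1.0]]
x_latent, y_latent = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

r_latent, p_latent = stats.pearsonr(x_latent, y_latent)
r_observed, p_observed = stats.pearsonr(
    noisy(x_latent, reliability_x, rng), noisy(y_latent, reliability_y, rng)
)

# Spearman's attenuation formula: r_obs ~= r_true * sqrt(rel_x * rel_y)
expected = true_r * np.sqrt(reliability_x * reliability_y)
print(f"attenuation formula predicts r_obs ~= {expected:.2f}")
print(f"latent measures:   r = {r_latent:.2f}, p = {p_latent:.4f}")
print(f"observed measures: r = {r_observed:.2f}, p = {p_observed:.4f}")
```

With both reliabilities at 0.50, a true correlation of 0.40 is tested as if it were 0.20, which is the authors' point in miniature: without valid measures, the hypothesis test is not testing the theory.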

The Open Innovation in Science research field: a collaborative conceptualisation approach


Paper by Susanne Beck et al: “Openness and collaboration in scientific research are attracting increasing attention from scholars and practitioners alike. However, a common understanding of these phenomena is hindered by disciplinary boundaries and disconnected research streams. We link dispersed knowledge on Open Innovation, Open Science, and related concepts such as Responsible Research and Innovation by proposing a unifying Open Innovation in Science (OIS) Research Framework. This framework captures the antecedents, contingencies, and consequences of open and collaborative practices along the entire process of generating and disseminating scientific insights and translating them into innovation. Moreover, it elucidates individual-, team-, organisation-, field-, and society-level factors shaping OIS practices. To conceptualise the framework, we employed a collaborative approach involving 47 scholars from multiple disciplines, highlighting both tensions and commonalities between existing approaches. The OIS Research Framework thus serves as a basis for future research, informs policy discussions, and provides guidance to scientists and practitioners….(More)”.

Building and maintaining trust in research


Daniel Nunan at the International Journal of Market Research: “One of the many indirect consequences of the COVID pandemic for the research sector may be the impact upon consumers’ willingness to share data. This is reflected in concerns that government-mandated “apps” designed to facilitate COVID testing and tracking schemes will undermine trust in the commercial collection of personal data (WARC, 2020). For example, uncertainty over the consequences of handing over data and the ways in which it might be used could reduce consumers’ willingness to share data with organizations, and reverse a trend that has seen growing levels of consumer confidence in Data Protection Regulations (Data & Direct Marketing Association [DMA], 2020). This highlights how central the role of trust has become in contemporary research practice, and how fragile the process of building trust can be due to the ever-competing demands of public and private data collectors.

For researchers, there are two sides to trust. One relates to building sufficient trust with research participants to facilitate data collection, and the second is building trust with the users of research. Trust has long been understood as a key factor in effective research relationships, with trust between researchers and users of research the key factor in determining the extent to which research is actually used (Moorman et al., 1993). In other words, a trusted messenger is just as important as the contents of the message. In recent years, there has been growing concern over declining trust in research from research participants and the general public, manifested in declining response rates and challenges in gaining participation. Understanding how to build consumer trust is more important than ever, as the shift of communication and commercial activity to digital platforms alters the mechanisms through which trust is built. Trust is therefore essential both for ensuring that accurate data can be collected, and that insights from research have the necessary legitimacy to be acted upon. The two research notes in this issue provide an insight into new areas where the issue of trust needs to be considered within research practice….(More)”.

Morphing Intelligence: From IQ Measurement to Artificial Brains


Book by Catherine Malabou: “What is intelligence? The concept crosses and blurs the boundaries between natural and artificial, bridging the human brain and the cybernetic world of AI. In this book, the acclaimed philosopher Catherine Malabou ventures a new approach that emphasizes the intertwined, networked relationships among the biological, the technological, and the symbolic.

Malabou traces the modern metamorphoses of intelligence, seeking to understand how neurobiological and neurotechnological advances have transformed our view. She considers three crucial developments: the notion of intelligence as an empirical, genetically based quality measurable by standardized tests; the shift to the epigenetic paradigm, with its emphasis on neural plasticity; and the dawn of artificial intelligence, with its potential to simulate, replicate, and ultimately surpass the workings of the brain. Malabou concludes that a dialogue between human and cybernetic intelligence offers the best if not the only means to build a democratic future. A strikingly original exploration of our changing notions of intelligence and the human and their far-reaching philosophical and political implications, Morphing Intelligence is an essential analysis of the porous border between symbolic and biological life at a time when once-clear distinctions between mind and machine have become uncertain….(More)”.

Science Fictions: Exposing Fraud, Bias, Negligence and Hype in Science


Book by Stuart Ritchie: “So much relies on science. But what if science itself can’t be relied on?

Medicine, education, psychology, health, parenting – wherever it really matters, we look to science for guidance. Science Fictions reveals the disturbing flaws that undermine our understanding of all of these fields and more.

While the scientific method will always be our best and only way of knowing about the world, in reality the current system of funding and publishing science not only fails to safeguard against scientists’ inescapable biases and foibles, it actively encourages them. Many widely accepted and highly influential theories and claims – about ‘priming’ and ‘growth mindset’, sleep and nutrition, genes and the microbiome, as well as a host of drugs, allergies and therapies – turn out to be based on unreliable, exaggerated and even fraudulent papers. We can trace their influence in everything from austerity economics to the anti-vaccination movement, and occasionally count the cost of them in human lives….(More)”.
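One mechanism behind such unreliable findings is easy to demonstrate: run enough analyses on pure noise and something will cross p < .05. The sketch below is a generic illustration of this arithmetic, not an analysis from the book; with twenty independent tests at the conventional threshold, the chance of at least one spurious "significant" result is about 64 percent.

```python
# A generic illustration, not an analysis from the book: with 20 shots
# at alpha = 0.05 on pure noise, a spurious "discovery" is more likely
# than not.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_simulations, n_tests, n, alpha = 2_000, 20, 30, 0.05

runs_with_false_positive = 0
for _ in range(n_simulations):
    # 20 independent two-sample t-tests in which the null is true by design
    a = rng.normal(size=(n_tests, n))
    b = rng.normal(size=(n_tests, n))
    p_values = stats.ttest_ind(a, b, axis=1).pvalue
    if (p_values < alpha).any():
        runs_with_false_positive += 1

print(f"simulated rate of >=1 'significant' result: "
      f"{runs_with_false_positive / n_simulations:.1%}")
print(f"theoretical rate: {1 - (1 - alpha) ** n_tests:.1%}")  # about 64%
```

This arithmetic is the core of what is usually called p-hacking, and it is one reason pre-registration and corrections for multiple comparisons loom so large in the reform debates the book engages with.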

What science can do for democracy: a complexity science approach


Paper by Tina Eliassi-Rad et al: “Political scientists have conventionally assumed that achieving democracy is a one-way ratchet. Only very recently has the question of “democratic backsliding” attracted any research attention. We argue that democratic instability is best understood with tools from complexity science. The explanatory power of complexity science arises from several features of complex systems. Their relevance in the context of democracy is discussed. Several policy recommendations are offered to help (re)stabilize current systems of representative democracy…(More)”.
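The paper is conceptual, but the flavour of the tools it points to can be suggested with a toy model of my own devising (not taken from the paper): a one-dimensional dynamic in which support for democratic norms reinforces itself above a threshold and erodes below it has two stable equilibria, so a small shock near the tipping point is enough to flip a system from consolidation to backsliding.

```python
# A toy bistable dynamic offered as an illustration of complexity-science
# reasoning; the model and every parameter in it are my own invention,
# not taken from the paper. Here x is the share of citizens actively
# upholding democratic norms: support reinforces itself above a
# threshold and erodes below it.

def step(x, rate=1.0, threshold=0.35):
    """One update of x' = x + rate * x * (1 - x) * (x - threshold).
    Stable equilibria at x = 0 and x = 1; unstable tipping point at
    x = threshold."""
    return x + rate * x * (1 - x) * (x - threshold)

def final_state(x0, shocks=None, steps=80):
    """Iterate the dynamic, applying exogenous shocks {time: size}."""
    shocks = shocks or {}
    x = x0
    for t in range(steps):
        x = min(1.0, max(0.0, step(x) + shocks.get(t, 0.0)))
    return x

# Identical starting conditions; one small, early shock flips the outcome.
print(f"no shock:           x -> {final_state(0.45):.3f}")              # ~1.0
print(f"-0.15 shock at t=0: x -> {final_state(0.45, {0: -0.15}):.3f}")  # ~0.0
```

Bistability and sensitivity to perturbations near an unstable equilibrium are standard examples of the complex-systems features the abstract alludes to, and they illustrate why "democratic backsliding" can look abrupt even when the underlying drift is gradual.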