Quantified Storytelling: A Narrative Analysis of Metrics on Social Media


Book by Alex Georgakopoulou, Stefan Iversen and Carsten Stage: “This book interrogates the role of quantification in stories on social media: how do visible numbers (e.g. of views, shares, likes) and invisible algorithmic measurements shape the stories we post and engage with? The links of quantification with stories have not been explored sufficiently in storytelling research or in social media studies, despite the fact that platforms have been integrating sophisticated metrics into developing facilities for sharing stories, with a massive appeal to ordinary users, influencers and businesses alike.

With case-studies from Instagram, Reddit and Snapchat, the authors show how three types of metrics, namely content metrics, interface metrics and algorithmic metrics, affect the ways in which cancer patients share their experiences, the circulation of specific stories that mobilize counter-publics and the design of stories as facilities on platforms. The analyses document how numbers structure elements in stories, indicate and produce engagement and become resources for the tellers’ self-presentation….(More)”.

Improving data access democratizes and diversifies science


Research article by Abhishek Nagaraj, Esther Shears, and Mathijs de Vaan: “Data access is critical to empirical research, but past work on open access is largely restricted to the life sciences and has not directly analyzed the impact of data access restrictions. We analyze the impact of improved data access on the quantity, quality, and diversity of scientific research. We focus on the effects of a shift in the accessibility of satellite imagery data from Landsat, a NASA program that provides valuable remote-sensing data. Our results suggest that improved access to scientific data can lead to a large increase in the quantity and quality of scientific research. Further, better data access disproportionately enables the entry of scientists with fewer resources, and it promotes diversity of scientific research….(More)”

Research 4.0: research in the age of automation


Report by Rob Procter, Ben Glover, and Elliot Jones: “There is a growing consensus that we are at the start of a fourth industrial revolution, driven by developments in Artificial Intelligence, machine learning, robotics, the Internet of Things, 3-D printing, nanotechnology, biotechnology, 5G, new forms of energy storage and quantum computing. This report seeks to understand what impact AI is having on the UK’s research sector and what implications it has for its future, with a particular focus on academic research.

Building on our interim report, we find that AI is increasingly deployed in academic research in the UK in a broad range of disciplines. The combination of an explosion of new digital data sources with powerful new analytical tools represents a ‘double dividend’ for researchers. This is allowing researchers to investigate questions that would have been unanswerable just a decade ago. Whilst there has been considerable take-up of AI in academic research, the report highlights that steps could be taken to ensure even wider adoption of these new techniques and technologies, including wider training in the necessary skills for effective utilisation of AI, faster routes to culture change and greater multi-disciplinary collaboration.

This report recognises that the Covid-19 pandemic means universities are currently facing significant pressures, with considerable demands on their resources whilst simultaneously facing threats to income. But as we emerge from the current crisis, we urge policy makers and universities to consider the report’s recommendations and take steps to fortify the UK’s position as a place of world-leading research. Indeed, the current crisis has only reminded us of the critical importance of a highly functioning and flourishing research sector. The report recommends:

The current post-16 curriculum should be reviewed to ensure all pupils receive a grounding in the basic digital, quantitative and ethical skills necessary for the effective and appropriate utilisation of AI.

A UK-wide audit of research computing and data infrastructure provision should be conducted to consider how access might be levelled up.

UK Research and Innovation (UKRI) should consider incentivising institutions to utilise AI wherever it can offer benefits to the economy and society in their future spending on research and development.

Universities should take steps to ensure that it is easier for researchers to move between academia and industry, for example, by putting less emphasis on publications and recognising other outputs and measures of achievement when hiring for academic posts….(More)”.

Statistics, lies and the virus: lessons from a pandemic


Tim Harford at the Financial Times: “Will this year be 1954 all over again? Forgive me, I have become obsessed with 1954, not because it offers another example of a pandemic (that was 1957) or an economic disaster (there was a mild US downturn in 1953), but for more parochial reasons. Nineteen fifty-four saw the appearance of two contrasting visions for the world of statistics — visions that have shaped our politics, our media and our health. This year confronts us with a similar choice.

The first of these visions was presented in How to Lie with Statistics, a book by a US journalist named Darrell Huff. Brisk, intelligent and witty, it is a little marvel of numerical communication. The book received rave reviews at the time, has been praised by many statisticians over the years and is said to be the best-selling work on the subject ever published. It is also an exercise in scorn: read it and you may be disinclined to believe a number-based claim ever again….

But they can — and back in 1954, the alternative perspective was embodied in the publication of an academic paper by the British epidemiologists Richard Doll and Austin Bradford Hill. They marshalled some of the first compelling evidence that smoking cigarettes dramatically increases the risk of lung cancer. The data they assembled persuaded both men to quit smoking and helped save tens of millions of lives by prompting others to do likewise. This was no statistical trickery, but a contribution to public health that is almost impossible to exaggerate…

As described in books such as Merchants of Doubt by Erik Conway and Naomi Oreskes, this industry perfected the tactics of spreading uncertainty: calling for more research, emphasising doubt and the need to avoid drastic steps, highlighting disagreements between experts and funding alternative lines of inquiry. The same tactics, and sometimes even the same personnel, were later deployed to cast doubt on climate science. These tactics are powerful in part because they echo the ideals of science.

It is a short step from the Royal Society’s motto, “nullius in verba” (take nobody’s word for it), to the corrosive nihilism of “nobody knows anything”.  So will 2020 be another 1954? From the point of view of statistics, we seem to be standing at another fork in the road.

The disinformation is still out there, as the public understanding of Covid-19 has been muddied by conspiracy theorists, trolls and government spin doctors.  Yet the information is out there too. The value of gathering and rigorously analysing data has rarely been more evident. Faced with a complete mystery at the start of the year, statisticians, scientists and epidemiologists have been working miracles. I hope that we choose the right fork, because the pandemic has lessons to teach us about statistics — and vice versa — if we are willing to learn…(More)”.

Computational social science: Obstacles and opportunities


Paper by David M. J. Lazer et al: “The field of computational social science (CSS) has exploded in prominence over the past decade, with thousands of papers published using observational data, experimental designs, and large-scale simulations that were once unfeasible or unavailable to researchers. These studies have greatly improved our understanding of important phenomena, ranging from social inequality to the spread of infectious diseases. The institutions supporting CSS in the academy have also grown substantially, as evidenced by the proliferation of conferences, workshops, and summer schools across the globe, across disciplines, and across sources of data. But the field has also fallen short in important ways. Many institutional structures around the field—including research ethics, pedagogy, and data infrastructure—are still nascent. We suggest opportunities to address these issues, especially in improving the alignment between the organization of the 20th-century university and the intellectual requirements of the field….(More)”.

Law and Technology Realism


Paper by Thibault Schrepel: “One may identify two current trends in the field of “Law and Technology.” The first trend concerns technological determinism. Some argue that technology is deterministic: the state of technological advancement is the determining factor of society. Others oppose that view, claiming it is society that affects technology. The second trend concerns technological neutrality. Some say that technology is neutral, meaning the effects of technology depend entirely on the social context. Others defend the opposite: they view the effects of technology as being inevitable (regardless of the society in which it is used).

Figure 1

While it is commonly accepted that technology is deterministic, I am under the impression that a majority of “Law and Technology” scholars also believe that technology is non-neutral. It follows that, according to this dominant view, (1) technology drives society in good or bad directions (determinism), and that (2) certain uses of technology may lead to the reduction or enhancement of the common good (non-neutrality). Consequently, this leads to top-down tech policies where the regulator has the impossible burden of helping society control and orient technology to the best possible extent.

This article is deterministic and non-neutral.

But, here’s the catch. Most of today’s doctrine focuses almost exclusively on the negativity brought by technology (read Nick Bostrom, Frank Pasquale, Evgeny Morozov). Sure, these authors mention a few positive aspects, but they still end up focusing on the negative ones and call for constraining technology on that basis alone. With this article, I want to raise another point: technological determinism can also drive society by providing solutions to centuries-old problems, in and of itself. This is not technological solutionism, as I am not arguing that technology can solve all of mankind’s problems, but it is not anti-solutionism either. I fear the extremes, anyway.

To make my point, I will discuss the issue addressed by Albert Hirschman in his famous book Exit, Voice, and Loyalty (Harvard University Press, 1970). Hirschman, at the time Professor of Economics at Harvard University, introduces the distinction between “exit” and “voice.” With exit, an individual exhibits her or his disagreement as a member of a group by leaving the group. With voice, the individual stays in the group but expresses her or his dissatisfaction in the hope of changing its functioning. Hirschman summarizes his theory on page 121, with the understanding that the optimal situation for any individual is to be capable of both “exit” and “voice“….(More)”.

Why hypothesis testers should spend less time testing hypotheses


Paper by Scheel, Anne M., Leonid Tiokhin, Peder M. Isager, and Daniel Lakens: “For almost half a century, Paul Meehl educated psychologists about how the mindless use of null-hypothesis significance tests made research on theories in the social sciences basically uninterpretable (Meehl, 1990). In response to the replication crisis, reforms in psychology have focused on formalising procedures for testing hypotheses. These reforms were necessary and impactful. However, as an unexpected consequence, psychologists have begun to realise that they may not be ready to test hypotheses. Forcing researchers to prematurely test hypotheses before they have established a sound ‘derivation chain’ between test and theory is counterproductive. Instead, various non-confirmatory research activities should be used to obtain the inputs necessary to make hypothesis tests informative.

Before testing hypotheses, researchers should spend more time forming concepts, developing valid measures, establishing the causal relationships between concepts and their functional form, and identifying boundary conditions and auxiliary assumptions. Providing these inputs should be recognised and incentivised as a crucial goal in and of itself.

In this article, we discuss how shifting the focus to non-confirmatory research can tie together many loose ends of psychology’s reform movement and help us lay the foundation to develop strong, testable theories, as Paul Meehl urged us to….(More)”

The Open Innovation in Science research field: a collaborative conceptualisation approach


Paper by Susanne Beck et al: “Openness and collaboration in scientific research are attracting increasing attention from scholars and practitioners alike. However, a common understanding of these phenomena is hindered by disciplinary boundaries and disconnected research streams. We link dispersed knowledge on Open Innovation, Open Science, and related concepts such as Responsible Research and Innovation by proposing a unifying Open Innovation in Science (OIS) Research Framework. This framework captures the antecedents, contingencies, and consequences of open and collaborative practices along the entire process of generating and disseminating scientific insights and translating them into innovation. Moreover, it elucidates individual-, team-, organisation-, field-, and society‐level factors shaping OIS practices. To conceptualise the framework, we employed a collaborative approach involving 47 scholars from multiple disciplines, highlighting both tensions and commonalities between existing approaches. The OIS Research Framework thus serves as a basis for future research, informs policy discussions, and provides guidance to scientists and practitioners….(More)”.

Building and maintaining trust in research


Daniel Nunan at the International Journal of Market Research: “One of the many indirect consequences of the COVID pandemic for the research sector may be the impact upon consumers’ willingness to share data. This is reflected in concerns that government mandated “apps” designed to facilitate COVID testing and tracking schemes will undermine trust in the commercial collection of personal data (WARC, 2020). For example, uncertainty over the consequences of handing over data and the ways in which it might be used could reduce consumers’ willingness to share data with organizations, and reverse a trend that has seen growing levels of consumer confidence in Data Protection Regulations (Data & Direct Marketing Association [DMA], 2020). This highlights how central the role of trust has become in contemporary research practice, and how fragile the process of building trust can be due to the ever competing demands of public and private data collectors.

For researchers, there are two sides to trust. One relates to building sufficient trust with research participants to facilitate data collection, and the second is building trust with the users of research. Trust has long been understood as a key factor in effective research relationships, with trust between researchers and users of research being the key factor in determining the extent to which research is actually used (Moorman et al., 1993). In other words, a trusted messenger is just as important as the contents of the message. In recent years, there has been growing concern over declining trust in research from research participants and the general public, manifested in declining response rates and challenges in gaining participation. Understanding how to build consumer trust is more important than ever, as the shift of communication and commercial activity to digital platforms alters the mechanisms through which trust is built. Trust is therefore essential both for ensuring that accurate data can be collected, and that insights from research have the necessary legitimacy to be acted upon. The two research notes in this issue provide an insight into new areas where the issue of trust needs to be considered within research practice….(More)”.
