Use of science in public policy: Lessons from the COVID-19 pandemic efforts to ‘Follow the Science’


Paper by Barry Bozeman: “The paper asks: ‘What can we learn from the COVID-19 pandemic about effective use of scientific and technical information (STI) in policymaking and how might the lessons be put to use?’ The paper employs the political rhetoric of ‘follow the science’ as a lens for examining contemporary concerns in the use of STI, including (1) ‘Breadth of Science Products’, the necessity of a broader concept of STI that includes the by-products of science, (2) ‘Science Dynamism’, emphasizing the uncertainty and impeachability of science, (3) ‘STI Urgency’, suggesting that STI use during widespread calamities differs from more routine applications, and (4) ‘Hyper-politicization of Science’, arguing that a step-change in the contentiousness of politics affects uses and misuses of STI. The paper concludes with a discussion of STI Curation as a possible ingredient in improving effective use. With more attention to the credibility and trust of STI and to the institutional legitimacy of curators, it should prove possible to improve the effective use of STI in public policy….(More)”.

Citizen power mobilized to fight against mosquito-borne diseases


GigaBlog: “Just out in GigaByte is the latest data release from Mosquito Alert, a citizen science system for investigating and managing disease-carrying mosquitoes, part of our WHO-sponsored series on vector-borne human diseases. It presents 13,700 new database records in the Global Biodiversity Information Facility (GBIF) repository, all linked to photographs submitted by citizen volunteers and validated by entomological experts to determine whether they provide evidence of the presence of any of the mosquito vectors of top concern in Europe. This is the latest paper in a new special issue presenting biodiversity data for research on human disease and health, incentivising data sharing to fill important species and geographic gaps. As big fans of citizen science (and Mosquito Alert), it’s great to see this new data showcased in the series.

Vector-borne diseases account for more than 17% of all infectious diseases in humans. There are large gaps in knowledge related to these vectors, and data mobilization campaigns are required to improve data coverage to help research on vector-borne diseases and human health. As part of these efforts, GigaScience Press has partnered with GBIF, supported by TDR, the Special Programme for Research and Training in Tropical Diseases hosted at the World Health Organization, to launch the “Vectors of human disease” thematic series. To incentivise the sharing of this extremely important data, Article Processing Charges have been waived to assist with the global call for novel data. This effort has already led to the release of newly digitised location data for over 600,000 vector specimens observed across the Americas and Europe.

Beyond paying credit to such a large number of volunteers, creating such a large public collection of validated mosquito images allows this dataset to be used to train machine-learning models for vector detection and classification. Sharing the data in this novel manner meant the authors of these papers had to set up a new credit system to evaluate contributions from multiple and diverse collaborators, which included university researchers, entomologists, and other non-academics such as independent researchers and citizen scientists. In the GigaByte paper these are acknowledged through collaborative authorship for the Mosquito Alert Digital Entomology Network and the Mosquito Alert Community…(More)”.
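For readers who want to explore the records themselves, GBIF exposes a public occurrence-search API. Below is a minimal Python sketch, assuming the `requests` library is installed; the `datasetKey` shown is a placeholder UUID, not the real Mosquito Alert dataset identifier, which can be looked up on gbif.org.

```python
import requests

# Query GBIF's public occurrence API for records from a given dataset.
# NOTE: the datasetKey below is a placeholder -- look up the real
# Mosquito Alert dataset key on gbif.org before running.
GBIF_API = "https://api.gbif.org/v1/occurrence/search"
params = {
    "datasetKey": "00000000-0000-0000-0000-000000000000",  # placeholder UUID
    "mediaType": "StillImage",  # only records with linked photographs
    "limit": 20,
}

resp = requests.get(GBIF_API, params=params, timeout=30)
resp.raise_for_status()
data = resp.json()

print(f"Total matching records: {data['count']}")
for rec in data["results"]:
    # Each record carries the species name and the observation coordinates.
    print(rec.get("species"), rec.get("decimalLatitude"), rec.get("decimalLongitude"))
```

Filtering on `mediaType` restricts results to records with linked photographs, the kind of expert-validated images the paper describes.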

Addressing the socioeconomic divide in computational modeling for infectious diseases


Paper by Michele Tizzoni et al: “The COVID-19 pandemic has highlighted how structural social inequities fundamentally shape disease dynamics, yet these concepts are often at the margins of the computational modeling community. Building on recent research in digital and computational epidemiology, we provide a set of practical and methodological recommendations to address socioeconomic vulnerabilities in epidemic models…(More)”.

Operationalising AI governance through ethics-based auditing: an industry case study


Paper by Jakob Mökander & Luciano Floridi: “Ethics-based auditing (EBA) is a structured process whereby an entity’s past or present behaviour is assessed for consistency with moral principles or norms. Recently, EBA has attracted much attention as a governance mechanism that may help to bridge the gap between principles and practice in AI ethics. However, important aspects of EBA—such as the feasibility and effectiveness of different auditing procedures—have yet to be substantiated by empirical research. In this article, we address this knowledge gap by providing insights from a longitudinal industry case study. Over 12 months, we observed and analysed the internal activities of AstraZeneca, a biopharmaceutical company, as it prepared for and underwent an ethics-based AI audit. While previous literature concerning EBA has focussed on proposing or analysing evaluation metrics or visualisation techniques, our findings suggest that the main difficulties large multinational organisations face when conducting EBA mirror classical governance challenges. These include ensuring harmonised standards across decentralised organisations, demarcating the scope of the audit, driving internal communication and change management, and measuring actual outcomes. The case study presented in this article contributes to the existing literature by providing a detailed description of the organisational context in which EBA procedures must be integrated to be feasible and effective…(More)”.

What makes administrative data research-ready?


Paper by Louise Mc Grath-Lone et al: “Administrative data are a valuable research resource, but are under-utilised in the UK due to governance, technical and other barriers (e.g., the time and effort taken to gain secure data access). In recent years, there has been considerable government investment in making administrative data “research-ready”, but there is no definition of what this term means. A common understanding of what constitutes research-ready administrative data is needed to establish clear principles and frameworks for their development and the realisation of their full research potential…Overall, we screened 2,375 records and identified 38 relevant studies published between 2012 and 2021. Most related to administrative data from the UK and US and particularly to health data. The term research-ready was used inconsistently in the literature and there was some conflation with the concept of data being ready for statistical analysis. From the thematic analysis, we identified five defining characteristics of research-ready administrative data: (a) accessible, (b) broad, (c) curated, (d) documented and (e) enhanced for research purposes…
Our proposed characteristics of research-ready administrative data could act as a starting point to help data owners and researchers develop common principles and standards. In the more immediate term, the proposed characteristics are a useful framework for cataloguing existing research-ready administrative databases and relevant resources that can support their development…(More)”.

A Theory of Visionary Disruption


Paper by Joshua S. Gans: “Exploitation of disruptive technologies often requires resource deployment that creates conflict if there are divergent beliefs regarding the efficacy of a new technology. This arises when a visionary agent has more optimistic beliefs about a technological opportunity. Exploration in the form of experiments can be persuasive when beliefs differ by mitigating disagreement and its costs. This paper examines experimental choice when experiments need to persuade as well as inform. It is shown that, due to resource constraints, persuasion factors more highly for entrepreneurial than incumbent firms. However, incumbent firms, despite being able to redeploy resources using authority, are constrained in adoption as exploration cannot mitigate the costs of disagreement…(More)”.

The Labor Market Impacts of Technological Change: From Unbridled Enthusiasm to Qualified Optimism to Vast Uncertainty


NBER Working Paper by David Autor: “This review considers the evolution of economic thinking on the relationship between digital technology and inequality across four decades, encompassing four related but intellectually distinct paradigms, which I refer to as the education race, the task polarization model, the automation-reinstatement race, and the era of Artificial Intelligence uncertainty. The nuance of economic understanding has improved across these epochs. Yet, traditional economic optimism about the beneficent effects of technology for productivity and welfare has eroded as understanding has advanced. Given this intellectual trajectory, it would be natural to forecast an even darker horizon ahead. I refrain from doing so because forecasting the “consequences” of technological change treats the future as a fate to be divined rather than an expedition to be undertaken. I conclude by discussing opportunities and challenges that we collectively face in shaping this future….(More)”.

Open data: The building block of 21st century (open) science


Paper by Corina Pascu and Jean-Claude Burgelman: “Given the irreversibility of data-driven and reproducible science and the role machines will play in it, it is foreseeable that the production of scientific knowledge will become more like a constant flow of updated, data-driven outputs than a unique publication/article of some sort. Indeed, the future of scholarly publishing will be based more on the publication of data/insights, with the article as a narrative.

For open data to be valuable, reproducibility is a sine qua non (King 2011; Piwowar, Vision and Whitlock 2011) and, as most societal grand challenges require several sciences to work together, equally essential for interdisciplinarity.

This trend correlates with the already observed epistemic shift in the rationale of science: from demonstrating the absolute truth via a unique narrative (article or publication), to the best possible understanding of what is needed at that moment to move forward in the production of knowledge to address problem “X” (de Regt 2017).

Science in the 21st century will thus be more “liquid,” enabled by open science and data practices and supported or even co-produced by artificial intelligence (AI) tools and services: a continuous flow of knowledge produced and used by (mainly) machines and people. In this paradigm, an article will be the “atomic” entity and often the least important output of the knowledge stream and scholarship production. Publishing will offer in the first place a platform where all parts of the knowledge stream will be made available as such via peer review.

The new frontier in open science, and where most future revenue will be made, will be in value-added data services (such as mining, intelligence, and networking) for people and machines. The use of AI is on the rise in society, but also in all aspects of research and science: what can be put in an algorithm will be put; the machines and deep learning add factor “X.”

AI services for science are already being developed along the research process: data discovery and analysis and knowledge extraction out of research artefacts are accelerated with the use of AI. AI technologies also help to maximize the efficiency of the publishing process and make peer review more objective (Table 1).

Table 1. Examples of AI services for science already being developed. Abbreviation: AI, artificial intelligence. Source: Authors’ research based on public sources, 2021.

Ultimately, actionable knowledge and translation of its benefits to society will be handled by humans in the “machine era” for decades to come. But as computers are indispensable research assistants, we need to make what we publish understandable to them.

The availability of data that are “FAIR by design” and shared Application Programming Interfaces (APIs) will allow new ways of collaboration between scientists and machines to make the best use of research digital objects of any kind. The more findable, accessible, interoperable, and reusable (FAIR) data resources become available, the more it will be possible to use AI to extract and analyze new, valuable information. The main challenge is to master the interoperability and quality of research data…(More)”.
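To make the machine-readability point concrete, here is a minimal Python sketch, assuming the `requests` library, of how a machine can retrieve structured metadata for a published digital object via standard DOI content negotiation (a mechanism supported by Crossref and DataCite); the DOI used here is that of the FAIR Guiding Principles paper (Wilkinson et al., 2016).

```python
import requests

# Resolve a DOI to structured, machine-readable metadata via standard
# content negotiation. The DOI below is the FAIR Guiding Principles
# paper (Wilkinson et al., 2016, Scientific Data).
doi = "10.1038/sdata.2016.18"
resp = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
    timeout=30,
)
resp.raise_for_status()
meta = resp.json()

# A machine can now work with the record directly: title, authors, etc.
print(meta["title"])
print([author.get("family") for author in meta.get("author", [])])
```

The same request pattern works for any DOI-registered digital object, which is one small illustration of how shared APIs and interoperable metadata let machines act as research assistants.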

The Impact of Public Transparency Infrastructure on Data Journalism: A Comparative Analysis between Information-Rich and Information-Poor Countries


Paper by Lindita Camaj, Jason Martin & Gerry Lanosga: “This study surveyed data journalists from 71 countries to compare how public transparency infrastructure influences data journalism practices around the world. Emphasizing cross-national differences in data access, the results suggest that technical and economic inequalities affecting the implementation of open data infrastructures can produce unequal data access and widen the gap in data journalism practices between information-rich and information-poor countries. Further, while journalists operating in open data infrastructures are more likely to exhibit a dependency on pre-processed public data, journalists operating in closed data infrastructures are more likely to use Access to Information legislation. We discuss the implications of our results for understanding the development of data journalism models in cross-national contexts…(More)”.

“Co-construction” in Deliberative Democracy: Lessons from the French Citizens’ Convention for Climate


Paper by L.G. Giraudet et al: “Launched in 2019, the French Citizens’ Convention for Climate (CCC) tasked 150 randomly-chosen citizens with proposing fair and effective measures to fight climate change. This was to be fulfilled through an “innovative co-construction procedure,” involving some unspecified external input alongside that from the citizens. Did inputs from the steering bodies undermine the citizens’ accountability for the output? Did co-construction help the output resonate with the general public, as is expected from a citizens’ assembly? To answer these questions, we build on our unique experience in observing the CCC proceedings and documenting them with qualitative and quantitative data. We find that the steering bodies’ input, albeit significant, did not impair the citizens’ agency, creativity and freedom of choice. While this co-constructive approach succeeded in creating consensus among the citizens involved, it failed to generate significant support among the broader public. These results call for a strengthening of the commitment structure that determines how follow-up on the proposals from a citizens’ assembly should be conducted…(More)”.