Responsible Artificial Intelligence


Book by Virginia Dignum: “In this book, the author examines the ethical implications of Artificial Intelligence systems as they integrate and replace traditional social structures in new sociocognitive-technological environments. She discusses issues related to the integrity of researchers, technologists, and manufacturers as they design, construct, use, and manage artificially intelligent systems; formalisms for reasoning about moral decisions as part of the behavior of artificial autonomous systems such as agents and robots; and design methodologies for social agents based on societal, moral, and legal values. 


Throughout the book the author discusses related work, attentive both to classical philosophical treatments of ethical issues and to their implications for modern, algorithmic systems, and she combines regular references and footnotes with suggestions for further reading. This short overview is suitable for undergraduate students in both technical and non-technical courses, and for interested and concerned researchers, practitioners, and citizens….(More)”.

Causal Inference: What If


Book by Miguel A. Hernán, James M. Robins: “Causal Inference is an admittedly pretentious title for a book. Causal inference is a complex scientific task that relies on triangulating evidence from multiple sources and on the application of a variety of methodological approaches. No book can possibly provide a comprehensive description of methodologies for causal inference across the sciences. The authors of any Causal Inference book will have to choose which aspects of causal inference methodology they want to emphasize.

The title of this book reflects our own choices: a book that helps scientists–especially health and social scientists–generate and analyze data to make causal inferences that are explicit about both the causal question and the assumptions underlying the data analysis. Unfortunately, the scientific literature is plagued by studies in which the causal question is not explicitly stated and the investigators’ unverifiable assumptions are not declared. This casual attitude towards causal inference has led to a great deal of confusion. For example, it is not uncommon to find studies in which the effect estimates are hard to interpret because the data analysis methods cannot appropriately answer the causal question (were it explicitly stated) under the investigators’ assumptions (were they declared).

In this book, we stress the need to take the causal question seriously enough to articulate it, and to delineate the separate roles of data and assumptions for causal inference. Once these foundations are in place, causal inferences become necessarily less casual, which helps prevent confusion. The book describes various data analysis approaches that can be used to estimate the causal effect of interest under a particular set of assumptions when data are collected on each individual in a population. A key message of the book is that causal inference cannot be reduced to a collection of recipes for data analysis.

The book is divided into three parts of increasing difficulty: Part I is about causal inference without models (i.e., nonparametric identification of causal effects), Part II is about causal inference with models (i.e., estimation of causal effects with parametric models), and Part III is about causal inference from complex longitudinal data (i.e., estimation of causal effects of time-varying treatments)….(More) (Additional Material)”.
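
To give a concrete feel for what Part II's "estimation of causal effects with parametric models" looks like in practice, the sketch below illustrates one standard approach of that kind: inverse probability weighting with a parametric (logistic) propensity-score model. It is a minimal illustration on synthetic data, not code from the book or its additional material; the variable names A (treatment), Y (outcome), and L (confounder), the simulated effect size, and the use of the statsmodels library are assumptions made for this example.

```python
# Minimal illustrative sketch (not from the book): estimating the average causal
# effect of a binary treatment A on an outcome Y via inverse probability (IP)
# weighting, with a parametric logistic model for the propensity score.
# Assumes exchangeability given the single confounder L, positivity, and consistency.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000

# Simulate confounded data: L affects both treatment assignment A and outcome Y.
L = rng.binomial(1, 0.5, n)                      # confounder
A = rng.binomial(1, 0.2 + 0.5 * L)               # treatment, more likely when L = 1
Y = 2.0 * A + 1.5 * L + rng.normal(0, 1, n)      # outcome; simulated causal effect of A is 2.0

# Step 1: fit a parametric propensity-score model for Pr[A = 1 | L].
ps_model = sm.Logit(A, sm.add_constant(L)).fit(disp=0)
ps = ps_model.predict(sm.add_constant(L))

# Step 2: IP weights W = 1 / Pr[A = a | L] for the treatment actually received.
w = np.where(A == 1, 1.0 / ps, 1.0 / (1.0 - ps))

# Step 3: contrast the weighted outcome means to estimate E[Y^(a=1)] - E[Y^(a=0)].
mean_treated = np.average(Y[A == 1], weights=w[A == 1])
mean_untreated = np.average(Y[A == 0], weights=w[A == 0])
print(f"IP-weighted estimate of the average causal effect: {mean_treated - mean_untreated:.2f}")

# Unadjusted (confounded) comparison, for contrast.
print(f"Unadjusted difference in means: {Y[A == 1].mean() - Y[A == 0].mean():.2f}")
```

Under the stated assumptions, the weighted contrast should recover the simulated effect of roughly 2.0, while the unadjusted comparison remains biased by confounding through L; the methods for time-varying treatments in Part III extend this logic to longitudinal settings.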

The Challenges of Sharing Data in an Era of Politicized Science


Editorial by Howard Bauchner in JAMA: “The goal of making science more transparent—sharing data, posting results on trial registries, use of preprint servers, and open access publishing—may enhance scientific discovery and improve individual and population health, but it also comes with substantial challenges in an era of politicized science, enhanced skepticism, and the ubiquitous world of social media. The recent announcement by the Trump administration of plans to proceed with an updated version of the proposed rule “Strengthening Transparency in Regulatory Science,” stipulating that all underlying data from studies that underpin public health regulations from the US Environmental Protection Agency (EPA) must be made publicly available so that those data can be independently validated, epitomizes some of these challenges. According to EPA Administrator Andrew Wheeler: “Good science is science that can be replicated and independently validated, science that can hold up to scrutiny. That is why we’re moving forward to ensure that the science supporting agency decisions is transparent and available for evaluation by the public and stakeholders.”

Virtually every time JAMA publishes an article on the effects of pollution or climate change on health, the journal immediately receives demands from critics to retract the article for various reasons. Some individuals and groups simply do not believe that pollution or climate change affects human health. Research on climate change, and on the effects of climate change on the health of the planet and human beings, if made available to anyone for reanalysis, could be manipulated to find a different outcome than initially reported. In an age of skepticism about many issues, including science, with the ability to use social media to disseminate unfounded and at times potentially harmful ideas, it is challenging to balance the potential benefits of sharing data with the harms that could be done by reanalysis.

Can the experience of sharing data derived from randomized clinical trials (RCTs)—either as mandated by some funders and journals or as supported by individual investigators—serve as an example of a way to safeguard “truth” in science?…

Although the sharing of data may have numerous benefits, it also comes with substantial challenges, particularly in highly contentious and politicized areas, such as the effects of climate change and pollution on health, in which the public dialogue appears to be based on as much fiction as fact. The sharing of data, whether mandated by funders, including foundations and government, or volunteered by scientists who believe in the principle of data transparency, is a complicated issue in the evolving world of science, analysis, skepticism, and communication. Above all, the scientific process—including original research and reanalysis of shared data—must prevail, and the inherent search for evidence, facts, and truth must not be compromised by special interests, coercive influences, or politicized perspectives. There are no simple answers, just words of caution and concern….(More)”.

Netnography: The Essential Guide to Qualitative Social Media Research


Book by Robert Kozinets: “Netnography is an adaptation of ethnography for the online world, pioneered by Robert Kozinets, and is concerned with the study of online cultures and communities as distinct social phenomena, rather than isolated content. In this landmark third edition, Netnography: The Essential Guide provides the theoretical and methodological groundwork as well as the practical applications, helping students both understand and do netnographic research projects of their own.

Packed with enhanced learning features throughout, linking concepts to structured activities in a step-by-step way, the book is also now accompanied by a striking new visual design and further case studies, offering the essential student resource for conducting online ethnographic research. Real-world examples demonstrate netnography in practice across the social sciences, in media and cultural studies, anthropology, education, nursing, travel and tourism, and others….(More)”.

Uses and Reuses of Scientific Data: The Data Creators’ Advantage


Paper by Irene V. Pasquetto, Christine L. Borgman, and Morgan F. Wofford: “Open access to data, as a core principle of open science, is predicated on assumptions that scientific data can be reused by other researchers. We test those assumptions by asking where scientists find reusable data, how they reuse those data, and how they interpret data they did not collect themselves. By conducting a qualitative meta-analysis of evidence on two long-term, distributed, interdisciplinary consortia, we found that scientists frequently sought data from public collections and from other researchers for comparative purposes such as “ground-truthing” and calibration. When they sought others’ data for reanalysis or for combining with their own data, which was relatively rare, most preferred to collaborate with the data creators.

We propose a typology of data reuses ranging from comparative to integrative. Comparative data reuse requires interactional expertise, which involves knowing enough about the data to assess their quality and value for a specific comparison such as calibrating an instrument in a lab experiment. Integrative reuse requires contributory expertise, which involves the ability to perform the action, such as reusing data in a new experiment. Data integration requires more specialized scientific knowledge and deeper levels of epistemic trust in the knowledge products. Metadata, ontologies, and other forms of curation benefit interpretation for any kind of data reuse. Based on these findings, we theorize the data creators’ advantage, that those who create data have intimate and tacit knowledge that can be used as barter to form collaborations for mutual advantage. Data reuse is a process that occurs within knowledge infrastructures that evolve over time, encompassing expertise, trust, communities, technologies, policies, resources, and institutions….(More)”.

The Psychological Basis of Motivation to Take Part in Online Citizen Science


Paper by Liz Dowthwaite et al.: “Increasing motivation to contribute to online citizen science projects can improve user experience and is critical in retaining and attracting users. Drawing on previous studies of motivation, this paper suggests self-determination theory as a framework for explaining the psychological constructs behind participation in citizen science. By examining existing studies of motivation for six Zooniverse projects through this lens, the paper suggests how appealing to basic psychological needs could increase participation in online citizen science, considering current practices and directions for future developments and research….(More)”.

Ten ways to optimise evidence-based policy


Paper by Peter Bragge: “Applying knowledge to problems has occupied the minds of great philosophers, scientists and other thinkers for centuries. In more modern times, the challenge of connecting knowledge to practice has been addressed through fields such as evidence-based medicine which have conceptualised optimal healthcare as integration of best available research evidence, clinical experience and patients’ values. Similar principles apply to evidence-based public policy, and literature in this field has been growing since the turn of the century.

The exponential rise in knowledge availability has greatly enhanced the ‘supply’ side of the evidence-into-practice equation; however, substantial gaps between evidence and practice remain. Policymakers are therefore increasingly looking to academia to optimise evidence-informed policy. This article presents ten considerations for optimising evidence-based policy, drawn from experience in delivering applied behaviour change research to government….(More)”.

Retrofitting Social Science for the Practical & Moral


Kenneth Prewitt at Issues: “…We cannot reach this fresh thinking without first challenging two formulations that today’s social science considers settled. First, social science should not assume that the “usefulness of useless knowledge” works as our narrative. Yes, it works for natural sciences. But the logic doesn’t translate. Second, we should back off from exaggerated promises about “evidence-based policy,” perhaps terming it “evidence-influenced politics,” a framing that is more accurate descriptively (what happens) and prescriptively (what should happen). The prominence given to these two formulations gets in the way of an alternative positioning of social science as an agent of improvement. I discuss this alternative below, under the label of the Fourth Purpose….

…the “Fourth Purpose.” This joins the three purposes traditionally associated with American universities and colleges: Education, Research, and Public Service. The latter is best described as being “a good citizen,” engaged in volunteer work; it is an attractive feature of higher education, but not in any substantial manner present in the other two core purposes.

The Fourth Purpose is an altogether different vision. It institutionalizes what Ross characterized as a social science being in the “broadest sense practical and moral.” It succeeds only by being fully present in education and research, for instance, including experiential learning in the curriculum and expanding processes that convert research findings into social benefits. This involves more than scattered centers across the university working on particular social problems. As Bollinger puts it, the university itself becomes a hybrid actor, at once academic and practical. “A university,” he says, “is more than simply an infrastructure supporting schools, departments, and faculty in their academic pursuits. As research universities enter into the realm or realms of the outside world, the ‘university’ (i.e., the sum of its parts/constituents) is going to have capacities far beyond those of any segment, as well as effects (hopefully generally positive) radiating back into the institution.”

To oversimplify a bit, the Fourth Purpose has three steps. The first occurs in the lab, library, or field—resulting in fundamental findings. The second ventures into settings where nonacademic players and judgment come into play, actions are taken, and ethical choices confronted, that is, practices of the kind mentioned earlier: translation research, knowledge brokers, boundary organizations, coproduction. Academic and nonacademic players should both come away from these settings with enriched understanding and capabilities. For academics, the skills required for this step differ from, but complement, the more familiar skills of teacher and researcher. The new skills will have to be built into the fabric of the university if the Fourth Purpose is to succeed.

The third step cycles back to the campus. It involves scholarly understandings not previously available. It requires learning something new about the original research findings as a result of how they are interpreted, used, rejected, modified, or ignored in settings that, in fact, are controlling whether the research findings will be implemented as hoped. This itself is new knowledge. If this is paid attention to, and the cycle is repeated endlessly, a new form of scholarship is added to our tool kit….(More)”.

OMB rethinks ‘protected’ or ‘open’ data binary with upcoming Evidence Act guidance


Jory Heckman at Federal News Network: “The Foundations for Evidence-Based Policymaking Act has ordered agencies to share their datasets internally and with other government partners — unless, of course, doing so would break the law.

Nearly a year after President Donald Trump signed the bill into law, agencies still have only a murky idea of what data they can share, and with whom. But soon they’ll have more nuanced options for ranking the sensitivity of their datasets before sharing them with others.

Chief Statistician Nancy Potok said the Office of Management and Budget will soon release proposed guidelines for agencies to provide “tiered” access to their data, based on the sensitivity of that information….

OMB, as part of its Evidence Act rollout, will also rethink how agencies ensure protected access to data for research. Potok said agency officials expect to pilot a single application governmentwide for people seeking access to sensitive data not available to the public.

The pilot resembles plans for a National Secure Data Service envisioned by the Commission on Evidence-Based Policymaking, an advisory group whose recommendations laid the groundwork for the Evidence Act.

“As a state-of-the-art resource for improving government’s capacity to use the data it already collects, the National Secure Data Service will be able to temporarily link existing data and provide secure access to those data for exclusively statistical purposes in connection with approved projects,” the commission wrote in its 2017 final report.

In an effort to strike a balance between access and privacy, Potok said OMB has also asked agencies to provide a list of the statutes that prohibit them from sharing data amongst themselves….(More)”.

To What Extent Does the EU General Data Protection Regulation (GDPR) Apply to Citizen Scientist-led Health Research with Mobile Devices?


Article by Edward Dove and Jiahong Chen: “In this article, we consider the possible application of the European General Data Protection Regulation (GDPR) to “citizen scientist”-led health research with mobile devices. We argue that the GDPR likely does cover this activity, depending on the specific context and the territorial scope. Remaining open questions that result from our analysis lead us to call for a lex specialis that would provide greater clarity and certainty regarding the processing of health data for research purposes, including by these non-traditional researchers…(More)”.