Tool for Surveillance or Spotlight on Inequality? Big Data and the Law


Paper by Rebecca A. Johnson and Tanina Rostain: “The rise of big data and machine learning is a polarizing force among those studying inequality and the law. Big data and tools like predictive modeling may amplify inequalities in the law, subjecting vulnerable individuals to enhanced surveillance. But these data and tools may also serve an opposite function, shining a spotlight on inequality and subjecting powerful institutions to enhanced oversight. We begin with a typology of the role of big data in inequality and the law. The typology asks questions—Which type of individual or institutional actor holds the data? What problem is the actor trying to use the data to solve?—that help situate the use of big data within existing scholarship on law and inequality. We then highlight the dual uses of big data and computational methods—data for surveillance and data as a spotlight—in three areas of law: rental housing, child welfare, and opioid prescribing. Our review highlights asymmetries where the lack of data infrastructure to measure basic facts about inequality within the law has impeded the spotlight function….(More)”.

The Principle of Self-Selection in Crowdsourcing Contests – Theory and Evidence


Paper by Nikolaus Franke, Kathrin Reinsberger and Philipp Topic: “Self-selection has been portrayed as one of the core reasons for the stunning success of crowdsourcing. It is widely believed that, among the mass of potential problem solvers, it is particularly those individuals with the best problem-solving capabilities for the problem in question who decide to participate. Extant research assumes that this self-selection effect is beneficial, based on the premise that self-selecting individuals know more about their capabilities and knowledge than the publisher of the task – which frees the organization from costly and error-prone active search.

However, the effectiveness of this core principle has hardly been analyzed, probably because it is extremely difficult to investigate characteristics of those individuals who self-select out. In a unique research design in which we overcome these difficulties by combining behavioral data from a real crowdsourcing contest with data from a survey and archival data, we find that self-selection is actually working in the right direction. Those with particularly strong problem-solving capabilities tend to self-select into the contest and those with low capabilities tend to self-select out. However, this self-selection effect is much weaker than assumed, and thus much potential is being lost. This suggests that much more attention needs to be paid to the early stages of crowdsourcing contests, and particularly to those hitherto almost completely overlooked individuals who could provide great solutions but self-select out….(More)”.
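As a rough illustration of the mechanism described above (not drawn from the paper's data), the following sketch simulates a pool of potential solvers whose probability of entering a contest rises only weakly with their capability; the logistic entry model and the parameter values are assumptions made for the example.

```python
# Illustrative only: a toy simulation (not from the paper) of how partial
# self-selection shifts the capability distribution of contest entrants.
import numpy as np

rng = np.random.default_rng(seed=7)

n_potential_solvers = 100_000
capability = rng.normal(loc=0.0, scale=1.0, size=n_potential_solvers)

# Hypothetical entry model: the probability of entering rises with capability,
# but beta controls how strongly self-selection actually tracks capability.
def entry_probability(capability, beta):
    return 1.0 / (1.0 + np.exp(-beta * capability))

for beta in (0.0, 0.5, 2.0):  # no, weak, and strong self-selection
    enters = rng.random(n_potential_solvers) < entry_probability(capability, beta)
    print(
        f"beta={beta:3.1f}  entrants={enters.sum():6d}  "
        f"mean capability of entrants={capability[enters].mean():+.2f}"
    )
```

Even a weak positive relationship shifts the mean capability of entrants upward, which is the direction the authors report, while still leaving many of the most capable solvers outside the contest.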

Public perceptions on data sharing: key insights from the UK and the USA


Paper by Saira Ghafur, Jackie Van Dael, Melanie Leis, Ara Darzi and Aziz Sheikh: “Data science and artificial intelligence (AI) have the potential to transform the delivery of health care. Health care as a sector, with all of the longitudinal data it holds on patients across their lifetimes, is well positioned to take advantage of what data science and AI have to offer. The current COVID-19 pandemic has shown the benefits of sharing data globally to permit a data-driven response through rapid data collection, analysis, modelling, and timely reporting.

Despite its obvious advantages, data sharing is a controversial subject, with researchers and members of the public justifiably concerned about how and why health data are shared. The most common concern is privacy; even when data are (pseudo-)anonymised, there remains a risk that a malicious hacker could, using only a few datapoints, re-identify individuals. For many, it is often unclear whether the risks of data sharing outweigh the benefits.

A series of surveys over recent years indicate that the public holds a range of views about data sharing. Over the past few years, there have been several important data breaches and cyberattacks. This has resulted in patients and the public questioning the safety of their data, including the prospect or risk of their health data being shared with unauthorised third parties.

We surveyed people across the UK and the USA to examine public attitudes towards data sharing, data access, and the use of AI in health care. These two countries were chosen as comparators because both are high-income countries that have made substantial national investments in health information technology (IT) and have established track records of using data to support health-care planning, delivery, and research. The UK and USA, however, have sharply contrasting models of health-care delivery, making it interesting to observe whether these differences affect public attitudes.

Willingness to share anonymised personal health information varied across receiving bodies (figure). The more commercial the purpose of the receiving institution (eg, for an insurance or tech company), the less often respondents were willing to share their anonymised personal health information in both the UK and the USA. Older respondents (≥35 years) in both countries were generally less likely to trust any organisation with their anonymised personal health information than younger respondents (<35 years)…

Despite the benefits of big data and technology in health care, our findings suggest that the rapid development of novel technologies has been received with concern. Growing commodification of patient data has increased awareness of the risks involved in data sharing. There is a need for public standards that secure regulation and transparency of data use and sharing and support patient understanding of how data are used and for what purposes….(More)”.
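To make the re-identification concern mentioned above concrete, here is a minimal, purely illustrative sketch (the field names and records are hypothetical, not taken from the paper) of how a handful of quasi-identifiers can link a "de-identified" health table back to a public register.

```python
# Illustrative only: a minimal linkage sketch showing how a few quasi-identifiers
# (here: postcode area, birth year, sex -- all hypothetical fields) can re-identify
# records in an anonymised health dataset by joining it with publicly available data.
import pandas as pd

deidentified_health = pd.DataFrame({
    "postcode_area": ["SW1A", "M1", "EH1"],
    "birth_year": [1948, 1985, 1972],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

public_register = pd.DataFrame({
    "name": ["A. Example", "B. Example", "C. Example"],
    "postcode_area": ["SW1A", "M1", "EH1"],
    "birth_year": [1948, 1985, 1972],
    "sex": ["F", "M", "F"],
})

# If a combination of quasi-identifiers is unique, the join reattaches names to diagnoses.
reidentified = deidentified_health.merge(
    public_register, on=["postcode_area", "birth_year", "sex"], how="inner"
)
print(reidentified[["name", "diagnosis"]])
```

Real attacks are more sophisticated, but the basic point stands: when a combination of attributes is unique, removing names alone does not protect identity.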

The Shortcomings of Transparency for Democracy


Paper by Michael Schudson: “Transparency” has become a widely recognized, even taken-for-granted, value in contemporary democracies, but this has been true only since the 1970s. For all of the obvious virtues of transparency for democracy, those virtues have not always been recognized, or have been recognized only with significant qualifications, as in the U.S. Freedom of Information Act of 1966. This essay catalogs important shortcomings of transparency for democracy, as when it clashes with national security, personal privacy, and the importance of maintaining the capacity of government officials to talk frankly with one another without fear that half-formulated ideas, thoughts, and proposals will become public. And when government information becomes public, that does not make it equally available to all—publicity is not in itself democratic, as public information (as in open legislative committee hearings) is more readily accessed by empowered groups with lobbyists able to attend and monitor the provision of the information. Transparency is an element in democratic government, but it is by no means a perfect emblem of democracy….(More)”.

The Open Innovation in Science research field: a collaborative conceptualisation approach


Paper by Susanne Beck et al: “Openness and collaboration in scientific research are attracting increasing attention from scholars and practitioners alike. However, a common understanding of these phenomena is hindered by disciplinary boundaries and disconnected research streams. We link dispersed knowledge on Open Innovation, Open Science, and related concepts such as Responsible Research and Innovation by proposing a unifying Open Innovation in Science (OIS) Research Framework. This framework captures the antecedents, contingencies, and consequences of open and collaborative practices along the entire process of generating and disseminating scientific insights and translating them into innovation. Moreover, it elucidates individual-, team-, organisation-, field-, and society‐level factors shaping OIS practices. To conceptualise the framework, we employed a collaborative approach involving 47 scholars from multiple disciplines, highlighting both tensions and commonalities between existing approaches. The OIS Research Framework thus serves as a basis for future research, informs policy discussions, and provides guidance to scientists and practitioners….(More)”.

Morphing Intelligence: From IQ Measurement to Artificial Brains


Book by Catherine Malabou: “What is intelligence? The concept crosses and blurs the boundaries between natural and artificial, bridging the human brain and the cybernetic world of AI. In this book, the acclaimed philosopher Catherine Malabou ventures a new approach that emphasizes the intertwined, networked relationships among the biological, the technological, and the symbolic.

Malabou traces the modern metamorphoses of intelligence, seeking to understand how neurobiological and neurotechnological advances have transformed our view. She considers three crucial developments: the notion of intelligence as an empirical, genetically based quality measurable by standardized tests; the shift to the epigenetic paradigm, with its emphasis on neural plasticity; and the dawn of artificial intelligence, with its potential to simulate, replicate, and ultimately surpass the workings of the brain. Malabou concludes that a dialogue between human and cybernetic intelligence offers the best, if not the only, means to build a democratic future. A strikingly original exploration of our changing notions of intelligence and the human and their far-reaching philosophical and political implications, Morphing Intelligence is an essential analysis of the porous border between symbolic and biological life at a time when once-clear distinctions between mind and machine have become uncertain….(More)”.

Calling Bullshit: The Art of Scepticism in a Data-Driven World


Book by Carl Bergstrom and Jevin West: “Politicians are unconstrained by facts. Science is conducted by press release. Higher education rewards bullshit over analytic thought. Startup culture elevates bullshit to high art. Advertisers wink conspiratorially and invite us to join them in seeing through all the bullshit — and take advantage of our lowered guard to bombard us with bullshit of the second order. The majority of administrative activity, whether in private business or the public sphere, seems to be little more than a sophisticated exercise in the combinatorial reassembly of bullshit.

We’re sick of it. It’s time to do something, and as educators, one constructive thing we know how to do is to teach people. So, the aim of this course is to help students navigate the bullshit-rich modern environment by identifying bullshit, seeing through it, and combating it with effective analysis and argument.

What do we mean, exactly, by bullshit and calling bullshit? As a first approximation:

Bullshit involves language, statistical figures, data graphics, and other forms of presentation intended to persuade by impressing and overwhelming a reader or listener, with a blatant disregard for truth and logical coherence.

Calling bullshit is a performative utterance, a speech act in which one publicly repudiates something objectionable. The scope of targets is broader than bullshit alone. You can call bullshit on bullshit, but you can also call bullshit on lies, treachery, trickery, or injustice.

In this course we will teach you how to spot the former and effectively perform the latter.

While bullshit may reach its apogee in the political domain, this is not a course on political bullshit. Instead, we will focus on bullshit that comes clad in the trappings of scholarly discourse. Traditionally, such highbrow nonsense has come couched in big words and fancy rhetoric, but more and more we see it presented instead in the guise of big data and fancy algorithms — and these quantitative, statistical, and computational forms of bullshit are those that we will be addressing in the present course.

Of course, an advertisement is trying to sell you something, but do you know whether the TED talk you watched last night is also bullshit — and if so, can you explain why? Can you see the problem with the latest New York Times or Washington Post article fawning over some startup’s big data analytics? Can you tell when a clinical trial reported in the New England Journal of Medicine or JAMA is trustworthy, and when it is just a veiled press release for some big pharma company?…(More)”.

20’s the limit: How to encourage speed reductions


Report by The Wales Centre for Public Policy: “This report has been prepared to support the Welsh Government’s plan to introduce a 20mph national default speed limit in 2022. It aims to address two main questions: 1) what specific behavioural interventions might be implemented to promote driver compliance with 20mph speed limits in residential areas; and 2) are there particular demographics, community characteristics or other features that should form the basis of a segmentation approach?

The reasons for speeding are complex, but many behaviour change techniques have been successfully applied to road safety, including some which use behavioural insights or “nudges”.

Drivers can be segmented into three types: defiers (a small minority), conformers (the majority) and champions (a minority). Conformers are law abiding citizens who respect social norms – getting this group to comply can achieve a tipping point.

Other sectors have shown that providing information is only effective if part of a wider package of measures and that people are most open to change at times of disruption or learning (e.g. learner drivers)….(More)”.

Project Patient Voice


Press Release: “The U.S. Food and Drug Administration today launched Project Patient Voice, an initiative of the FDA’s Oncology Center of Excellence (OCE). Through a new website, Project Patient Voice creates a consistent source of publicly available information describing patient-reported symptoms from cancer trials for marketed treatments. While this patient-reported data has historically been analyzed by the FDA during the drug approval process, it is rarely included in product labeling and, therefore, is largely inaccessible to the public.

“Project Patient Voice has been initiated by the Oncology Center of Excellence to give patients and health care professionals unique information on symptomatic side effects to better inform their treatment choices,” said FDA Principal Deputy Commissioner Amy Abernethy, M.D., Ph.D. “The Project Patient Voice pilot is a significant step in advancing a patient-centered approach to oncology drug development. Where patient-reported symptom information is collected rigorously, this information should be readily available to patients.” 

Patient-reported outcome (PRO) data is collected using questionnaires that patients complete during clinical trials. These questionnaires are designed to capture important information about disease- or treatment-related symptoms. This includes how severe or how often a symptom or side effect occurs.

Patient-reported data can provide additional, complementary information for health care professionals to discuss with patients, specifically when discussing the potential side effects of a particular cancer treatment. In contrast to the clinician-reported safety data in product labeling, the data in Project Patient Voice is obtained directly from patients and can show symptoms before treatment starts and at multiple time points while receiving cancer treatment. 

The Project Patient Voice website will include a list of cancer clinical trials that have available patient-reported symptom data. Each trial will include a table of the patient-reported symptoms collected. Each patient-reported symptom can be selected to display a series of bar and pie charts describing the patient-reported symptom at baseline (before treatment starts) and over the first 6 months of treatment. This information provides insights into side effects not currently available in standard FDA safety tables, including existing symptoms before the start of treatment, symptoms over time, and the subset of patients who did not have a particular symptom prior to starting treatment….(More)”.
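As a rough sketch of what such a summary could look like in code (the records, field names, and grading scheme below are hypothetical and not taken from the FDA website), the proportions that a baseline-versus-on-treatment bar chart would display can be computed like this:

```python
# Illustrative only: a toy summary of patient-reported symptom data over time,
# in the spirit of the charts described above (all values are made up).
import pandas as pd

pro_records = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 3, 3],
    "month": [0, 3, 0, 3, 0, 3],          # 0 = baseline, then months on treatment
    "fatigue_grade": ["none", "moderate", "mild", "mild", "none", "severe"],
})

# Proportion of patients reporting each fatigue grade at each time point,
# roughly the quantity a baseline-vs-on-treatment bar chart would show.
grade_shares = (
    pro_records
    .groupby("month")["fatigue_grade"]
    .value_counts(normalize=True)
    .rename("proportion")
    .reset_index()
)
print(grade_shares)
```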

How behavioural sciences can promote truth, autonomy and democratic discourse online


Philipp Lorenz-Spreen, Stephan Lewandowsky, Cass R. Sunstein & Ralph Hertwig in Nature: “Public opinion is shaped in significant part by online content, spread via social media and curated algorithmically. The current online ecosystem has been designed predominantly to capture user attention rather than to promote deliberate cognition and autonomous choice; information overload, finely tuned personalization and distorted social cues, in turn, pave the way for manipulation and the spread of false information. How can transparency and autonomy be promoted instead, thus fostering the positive potential of the web? Effective web governance informed by behavioural research is critically needed to empower individuals online. We identify technologically available yet largely untapped cues that can be harnessed to indicate the epistemic quality of online content, the factors underlying algorithmic decisions and the degree of consensus in online debates. We then map out two classes of behavioural interventions—nudging and boosting—that enlist these cues to redesign online environments for informed and autonomous choice….(More)”.
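One schematic example of such a cue (the thresholds, labels and scoring rule are invented for illustration and are not proposed in the paper) is a simple indicator of how contested the reactions to a post are:

```python
# Illustrative only: a hypothetical "degree of consensus" cue for a post,
# computed from counts of agreeing and disagreeing reactions.
def consensus_cue(agree: int, disagree: int) -> str:
    total = agree + disagree
    if total == 0:
        return "no signal yet"
    share_agree = agree / total
    if share_agree >= 0.8 or share_agree <= 0.2:
        return "strong consensus"
    if 0.4 <= share_agree <= 0.6:
        return "contested"
    return "leaning one way"

print(consensus_cue(agree=180, disagree=20))   # strong consensus
print(consensus_cue(agree=55, disagree=45))    # contested
```

Surfacing this kind of indicator alongside content is one way the untapped cues the authors describe could be made visible to users without restricting their choices.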