The Truth Is Paywalled But The Lies Are Free


Essay by Nathan J. Robinson: “…This means that a lot of the most vital information will end up locked behind the paywall. And while I am not much of a New Yorker fan either, it’s concerning that the Hoover Institution will freely give you Richard Epstein’s infamous article downplaying the threat of coronavirus, but Isaac Chotiner’s interview demolishing Epstein requires a monthly subscription, meaning that the lie is more accessible than its refutation. Eric Levitz of New York is one of the best and most prolific left political commentators we have. But unless you’re a subscriber to New York, you won’t get to hear much of what he has to say each month.

Possibly even worse is the fact that so much academic writing is kept behind vastly more costly paywalls. A white supremacist on YouTube will tell you all about race and IQ, but if you want to read a careful scholarly refutation, obtaining a legal PDF from the journal publisher would cost you $14.95, a price nobody in their right mind would pay for one article if they can’t get institutional access. (I recently gave up on trying to access a scholarly article because I could not find a way to get it for less than $39.95, though in that case the article was garbage rather than gold.) Academic publishing is a nightmarish patchwork, with lots of articles advertised at exorbitant fees on one site, and then for free on another, or accessible only through certain databases, which your university or public library may or may not have access to. (Libraries have to budget carefully because subscription prices are often nuts. A library subscription to the Journal of Coordination Chemistry, for instance, costs $11,367 annually.)

Of course, people can find their way around paywalls. SciHub is a completely illegal but extremely convenient means of obtaining academic research for free. (I am purely describing it, not advocating it.) You can find a free version of the article debunking race and IQ myths on ResearchGate, a site that has engaged in mass copyright infringement in order to make research accessible. Often, because journal publishers tightly control access to their copyrighted work in order to charge those exorbitant fees for PDFs, the versions of articles that you can get for free are drafts that have not yet gone through peer review, and have thus been subjected to less scrutiny. This means that the more reliable an article is, the less accessible it is. On the other hand, pseudo-scholarship is easy to find. Right-wing think tanks like the Cato Institute, the Foundation for Economic Education, the Hoover Institution, the Mackinac Center, the American Enterprise Institute, and the Heritage Foundation pump out slickly-produced policy documents on every subject under the sun. They are utterly untrustworthy—the conclusion is always going to be “let the free market handle the problem,” no matter what the problem or what the facts of the case. But it is often dressed up to look sober-minded and non-ideological.

It’s not easy or cheap to be an “independent researcher.” When I was writing my first book, Superpredator, I wanted to look through newspaper, magazine, and journal archives to find everything I could about Bill Clinton’s record on race. I was lucky I had a university affiliation, because this gave me access to databases like LexisNexis. If I hadn’t, the cost of finding out what I wanted to find out would likely have run into the thousands of dollars.  

A problem beyond cost, though, is convenience. I find that even when I am doing research through databases and my university library, it is often an absolute mess: the sites are clunky and constantly demanding login credentials. The amount of time wasted in figuring out how to obtain a piece of research material is a massive cost on top of the actual pricing. The federal court document database, PACER, for instance, charges 10 cents a page for access to records, which adds up quickly since legal research often involves looking through thousands of pages (at that rate, a 5,000-page review costs $500 before you have read a word). They offer an exemption if you are a researcher or can’t afford it, but to get the exemption you have to fill out a three-page form and provide an explanation of both why you need each document and why you deserve the exemption. This is a waste of time that inhibits people’s productivity and limits their access to knowledge.

In fact, to see just how much human potential is being squandered by having knowledge dispensed by the “free market,” let us briefly picture what “totally democratic and accessible knowledge” would look like…(More)”.

Resetting the state for the post-covid digital age


Blog by Carlos Santiso: “The COVID-19 crisis is putting our global digital resilience to the test. It has revealed the importance of a country’s digital infrastructure as the backbone of the economy, not just as an enabler of the tech economy. Digitally advanced governments, such as Estonia’s, have been able to put their entire bureaucracies in remote mode in a matter of days, without major disruption. And some early evidence even suggests that their productivity increased during lockdown.

With the crisis, the costs of not going digital have largely surpassed the risks of doing so. Countries and cities lagging behind have realised the necessity to boost their digital resilience and accelerate their digital transformation. Spain, for example, adopted an ambitious plan to inject 70 billion euros into its digital transformation over the next five years, with a Digital Spain 2025 agenda comprising 10 priorities and 48 measures. In the case of Brazil, the country was already taking steps towards the digital transformation of its public sector before the COVID-19 crisis hit. The crisis is accelerating this transformation.

The great accelerator

Long before the crisis hit, the data-driven digital revolution had been challenging governments to modernise and become more agile, open and responsive. Progress has nevertheless been uneven, hindered by a variety of factors, from political resistance to budget constraints. Going digital requires the sort of whole-of-government reforms that need political muscle and long-term vision to break up traditional data silos within bureaucracies that jealously guard their power. In bureaucracies, information is power. Now information has become ubiquitous, and governing data has become a critical challenge.

Cutting red tape will be central to the recovery. Many governments are fast-tracking regulatory simplification and administrative streamlining to reboot hard-hit economic sectors. Digitalisation is resetting the relationship between states and citizens, a Copernican revolution for our rule-based bureaucracies….(More)“.

Public perceptions on data sharing: key insights from the UK and the USA


Paper by Saira Ghafur, Jackie Van Dael, Melanie Leis, Ara Darzi, and Aziz Sheikh: “Data science and artificial intelligence (AI) have the potential to transform the delivery of health care. Health care as a sector, with all of the longitudinal data it holds on patients across their lifetimes, is positioned to take advantage of what data science and AI have to offer. The current COVID-19 pandemic has shown the benefits of sharing data globally to permit a data-driven response through rapid data collection, analysis, modelling, and timely reporting.

Despite its obvious advantages, data sharing is a controversial subject, with researchers and members of the public justifiably concerned about how and why health data are shared. The most common concern is privacy; even when data are (pseudo-)anonymised, there remains a risk that a malicious hacker could, using only a few datapoints, re-identify individuals. For many, it is often unclear whether the risks of data sharing outweigh the benefits.
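
To see why re-identification worries researchers, consider a minimal sketch of a linkage attack, in Python. This example is not from the paper; the dataset, field names, and quasi-identifiers are invented for illustration, but the mechanism is the classic one: strip the names, and a few innocuous attributes can still single a person out.

```python
# Illustrative only: a toy "linkage attack" on a pseudo-anonymised table.
# Direct identifiers are removed, but quasi-identifiers (ZIP code, birth
# year, sex) remain. All data and field names here are hypothetical.
from collections import Counter

records = [
    {"zip": "02139", "birth_year": 1984, "sex": "F", "diagnosis": "asthma"},
    {"zip": "02139", "birth_year": 1984, "sex": "M", "diagnosis": "diabetes"},
    {"zip": "02139", "birth_year": 1991, "sex": "F", "diagnosis": "flu"},
    {"zip": "10001", "birth_year": 1984, "sex": "F", "diagnosis": "migraine"},
]

# Count how many records share each quasi-identifier combination.
combos = Counter((r["zip"], r["birth_year"], r["sex"]) for r in records)

# Any combination that appears exactly once pins down a single person: an
# attacker who knows those three facts from an outside source (say, a voter
# roll) can re-identify the record and learn the diagnosis.
unique = [c for c, n in combos.items() if n == 1]
print(f"{len(unique)} of {len(records)} records are unique on (zip, birth_year, sex)")
```

Latanya Sweeney’s well-known work estimated that ZIP code, birth date, and sex alone can uniquely identify most Americans, which is why “anonymised” rarely means unidentifiable.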

A series of surveys over recent years indicate that the public holds a range of views about data sharing. Over the past few years, there have been several important data breaches and cyberattacks. This has resulted in patients and the public questioning the safety of their data, including the prospect or risk of their health data being shared with unauthorised third parties.

We surveyed people across the UK and the USA to examine public attitudes towards data sharing, data access, and the use of AI in health care. These two countries were chosen as comparators because both are high-income countries that have made substantial national investments in health information technology (IT), with established track records of using data to support health-care planning, delivery, and research. The UK and USA, however, have sharply contrasting models of health-care delivery, making it interesting to observe whether these differences affect public attitudes.

Willingness to share anonymised personal health information varied across receiving bodies (figure). The more commercial the purpose of the receiving institution (eg, for an insurance or tech company), the less often respondents were willing to share their anonymised personal health information in both the UK and the USA. Older respondents (≥35 years) in both countries were generally less likely to trust any organisation with their anonymised personal health information than younger respondents (<35 years)…

Despite the benefits of big data and technology in health care, our findings suggest that the rapid development of novel technologies has been received with concern. Growing commodification of patient data has increased awareness of the risks involved in data sharing. There is a need for public standards that ensure the regulation and transparency of data use and sharing, and that support patient understanding of how data are used and for what purposes….(More)”.

The Shortcomings of Transparency for Democracy


Paper by Michael Schudson: “Transparency” has become a widely recognized, even taken-for-granted, value in contemporary democracies, but this has been true only since the 1970s. For all the obvious virtues of transparency for democracy, they have not always been recognized, or they have been recognized only with significant qualifications, as in the U.S. Freedom of Information Act of 1966. This essay catalogs important shortcomings of transparency for democracy, as when it clashes with national security, personal privacy, and the importance of maintaining the capacity of government officials to talk frankly with one another without fear that half-formulated ideas, thoughts, and proposals will become public. And when government information becomes public, it is not thereby equally available to all: publicity is not in itself democratic, since public information (as in open legislative committee hearings) is most readily accessed by empowered groups whose lobbyists can attend and monitor its provision. Transparency is an element in democratic government, but it is by no means a perfect emblem of democracy….(More)”.

Project Patient Voice


Press Release: “The U.S. Food and Drug Administration today launched Project Patient Voice, an initiative of the FDA’s Oncology Center of Excellence (OCE). Through a new website, Project Patient Voice creates a consistent source of publicly available information describing patient-reported symptoms from cancer trials for marketed treatments. While this patient-reported data has historically been analyzed by the FDA during the drug approval process, it is rarely included in product labeling and, therefore, is largely inaccessible to the public.

“Project Patient Voice has been initiated by the Oncology Center of Excellence to give patients and health care professionals unique information on symptomatic side effects to better inform their treatment choices,” said FDA Principal Deputy Commissioner Amy Abernethy, M.D., Ph.D. “The Project Patient Voice pilot is a significant step in advancing a patient-centered approach to oncology drug development. Where patient-reported symptom information is collected rigorously, this information should be readily available to patients.” 

Patient-reported outcome (PRO) data is collected using questionnaires that patients complete during clinical trials. These questionnaires are designed to capture important information about disease- or treatment-related symptoms. This includes how severe a symptom or side effect is and how often it occurs.

Patient-reported data can provide additional, complementary information for health care professionals to discuss with patients, specifically when discussing the potential side effects of a particular cancer treatment. In contrast to the clinician-reported safety data in product labeling, the data in Project Patient Voice is obtained directly from patients and can show symptoms before treatment starts and at multiple time points while receiving cancer treatment. 

The Project Patient Voice website will include a list of cancer clinical trials that have available patient-reported symptom data. Each trial will include a table of the patient-reported symptoms collected. Each patient-reported symptom can be selected to display a series of bar and pie charts describing the patient-reported symptom at baseline (before treatment starts) and over the first 6 months of treatment. This information provides insights into side effects not currently available in standard FDA safety tables, including existing symptoms before the start of treatment, symptoms over time, and the subset of patients who did not have a particular symptom prior to starting treatment….(More)”.

Measuring Movement and Social Contact with Smartphone Data: A Real-Time Application to Covid-19


Paper by Victor Couture et al: “Tracking human activity in real time and at fine spatial scale is particularly valuable during episodes such as the COVID-19 pandemic. In this paper, we discuss the suitability of smartphone data for quantifying movement and social contact. We show that these data cover broad sections of the US population and exhibit movement patterns similar to conventional survey data. We develop and make publicly available a location exposure index that summarizes county-to-county movements and a device exposure index that quantifies social contact within venues. We use these indices to document how pandemic-induced reductions in activity vary across people and places….(More)”.
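
As a rough illustration of what the first of these indices involves, here is a simplified sketch in Python of a county-to-county location exposure computation from device pings. This is not the authors’ code, and their exact definitions (sampling, weighting, look-back window) differ; the field names and data below are hypothetical.

```python
# Simplified sketch of a location exposure index: among devices observed in
# county j today, the share that also pinged in county i during the previous
# `window` days. Not the authors' implementation; data are invented.
from collections import defaultdict
from datetime import date, timedelta

# ping records: (device_id, county_fips, day)
pings = [
    ("d1", "36061", date(2020, 4, 1)),
    ("d1", "34017", date(2020, 4, 10)),
    ("d2", "36061", date(2020, 4, 9)),
    ("d2", "36061", date(2020, 4, 10)),
    ("d3", "34017", date(2020, 4, 10)),
]

def location_exposure(pings, today, window=14):
    # counties each device visited during the look-back window (incl. today)
    history = defaultdict(set)
    for dev, county, day in pings:
        if timedelta(0) <= today - day <= timedelta(days=window):
            history[dev].add(county)
    # devices observed in each county today
    here_today = defaultdict(set)
    for dev, county, day in pings:
        if day == today:
            here_today[county].add(dev)
    # lex[j][i] = share of county j's devices today that pinged in county i
    lex = {}
    for j, devices in here_today.items():
        shares = defaultdict(float)
        for dev in devices:
            for i in history[dev]:
                shares[i] += 1 / len(devices)
        lex[j] = dict(shares)
    return lex

print(location_exposure(pings, today=date(2020, 4, 10)))
```

A matrix like this makes it easy to see, for any county, where its current devices have recently been, which is what makes such an index useful for tracking pandemic-era movement in near real time.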

The Atlas of Surveillance


Electronic Frontier Foundation: “Law enforcement surveillance isn’t always secret. These technologies can be discovered in news articles and government meeting agendas, in company press releases and social media posts. It just hasn’t been aggregated before.

That’s the starting point for the Atlas of Surveillance, a collaborative effort between the Electronic Frontier Foundation and the University of Nevada, Reno Reynolds School of Journalism. Through a combination of crowdsourcing and data journalism, we are creating the largest-ever repository of information on which law enforcement agencies are using what surveillance technologies. The aim is to generate a resource for journalists, academics, and, most importantly, members of the public to check what’s been purchased locally and how technologies are spreading across the country.

We specifically focused on the most pervasive technologies, including drones, body-worn cameras, face recognition, cell-site simulators, automated license plate readers, predictive policing, camera registries, and gunshot detection. Although we have amassed more than 5,000 datapoints in 3,000 jurisdictions, our research only reveals the tip of the iceberg and underlines the need for journalists and members of the public to continue demanding transparency from criminal justice agencies….(More)”.

Tackling the misinformation epidemic with “In Event of Moon Disaster”


MIT Open Learning: “Can you recognize a digitally manipulated video when you see one? It’s harder than most people realize. As the technology to produce realistic “deepfakes” becomes more easily available, distinguishing fact from fiction will only get more challenging. A new digital storytelling project from MIT’s Center for Advanced Virtuality aims to educate the public about the world of deepfakes with “In Event of Moon Disaster.”

This provocative website showcases a “complete” deepfake (manipulated audio and video) of U.S. President Richard M. Nixon delivering the real contingency speech written in 1969 for a scenario in which the Apollo 11 crew were unable to return from the moon. The team worked with a voice actor and a company called Respeecher to produce the synthetic speech using deep learning techniques. They also worked with the company Canny AI to use video dialogue replacement techniques to study and replicate the movement of Nixon’s mouth and lips. Through these sophisticated AI and machine learning technologies, the seven-minute film shows how thoroughly convincing deepfakes can be….

Alongside the film, moondisaster.org features an array of interactive and educational resources on deepfakes. Led by Francesca Panetta and Halsey Burgund, a fellow at MIT Open Documentary Lab, an interdisciplinary team of artists, journalists, filmmakers, designers, and computer scientists has created a robust, interactive resource site where educators and media consumers can deepen their understanding of deepfakes: how they are made and how they work; their potential use and misuse; what is being done to combat deepfakes; and teaching and learning resources….(More)”.

The Coronavirus and Innovation


Essay by Scott E. Page: “The total impact of the coronavirus pandemic—the loss of life and the economic, social, and psychological costs arising from both the pandemic itself and the policies implemented to prevent its spread—defies any characterization. Though the pandemic continues to unsettle, disrupt, and challenge communities, we might take a moment to appreciate and applaud the diversity, breadth, and scope of our responses—from individual actions to national policies—and even more important, to reflect on how they will produce a post–Covid-19 world far better than the world that preceded it.

In this brief essay, I describe how our adaptive responses to the coronavirus will lead to beneficial policy innovations. I do so from the perspective of a many-model thinker. By that I mean that I will use several formal models to theoretically elucidate the potential pathways to creating a better world. I offer this with the intent that it instills optimism that our current efforts to confront this tragic and difficult challenge will do more than combat the virus now and teach us how to combat future viruses. They will, in the long run, result in an enormous number of innovations in policy, business practices, and our daily lives….(More)”.

Why Hundreds of Mathematicians Are Boycotting Predictive Policing


Courtney Linder at Popular Mechanics: “Several prominent academic mathematicians want to sever ties with police departments across the U.S., according to a letter submitted to Notices of the American Mathematical Society on June 15. The letter arrived weeks after widespread protests against police brutality, and has inspired over 1,500 other researchers to join the boycott.

These mathematicians are urging fellow researchers to stop all work related to predictive policing software, which broadly includes any data analytics tools that use historical data to help forecast future crime, potential offenders, and victims. The technology is supposed to use probability to help police departments tailor their neighborhood coverage so it puts officers in the right place at the right time….

[Flow chart showing how predictive policing works. Source: RAND]

According to a 2013 research briefing from the RAND Corporation, a nonprofit think tank in Santa Monica, California, predictive policing is made up of a four-part cycle (shown above). In the first two steps, researchers collect and analyze data on crimes, incidents, and offenders to come up with predictions. From there, police intervene based on the predictions, usually taking the form of an increase in resources at certain sites at certain times. The fourth step is, ideally, reducing crime.

“Law enforcement agencies should assess the immediate effects of the intervention to ensure that there are no immediately visible problems,” the authors note. “Agencies should also track longer-term changes by examining collected data, performing additional analysis, and modifying operations as needed.”
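
To make the “analyze and predict” step of that cycle concrete, here is a toy sketch in Python of a hotspot-style forecast: score map grid cells by a decaying average of past incident counts and flag the top cells for extra patrols. This is purely illustrative, not any vendor’s actual algorithm; the incident data and cell labels are invented.

```python
# Toy hotspot forecast: weight recent incidents more heavily than old ones
# and rank grid cells. Illustrative only; not a real predictive-policing
# product. Incident data and grid-cell labels are hypothetical.
from collections import defaultdict

# historical incidents: (week_number, grid_cell)
incidents = [
    (1, "A1"), (1, "A1"), (1, "B2"),
    (2, "A1"), (2, "C3"),
    (3, "B2"), (3, "B2"), (3, "A1"),
]

def hotspot_scores(incidents, current_week, decay=0.5):
    """Score each cell by an exponentially decaying count of past incidents."""
    scores = defaultdict(float)
    for week, cell in incidents:
        scores[cell] += decay ** (current_week - week)
    return scores

scores = hotspot_scores(incidents, current_week=4)
top = sorted(scores, key=scores.get, reverse=True)[:2]
print("predicted hotspots:", top)  # cells flagged for extra patrols
```

Even this toy exposes the critics’ core objection: the “predictions” simply echo where incidents were previously recorded, so biased historical data reproduces biased deployments.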

In many cases, predictive policing software was meant to be a tool to augment police departments facing budget crises, with fewer officers to cover a region. If cops can target certain geographical areas at certain times, then they can get ahead of the 911 calls and maybe even reduce the rate of crime.

But in practice, the accuracy of the technology has been contested—and it’s even been called racist….(More)”.