Governance responses to disinformation: How open government principles can inform policy options


OECD paper by Craig Matasick, Carlotta Alfonsi and Alessandro Bellantoni: “This paper provides a holistic policy approach to the challenge of disinformation by exploring a range of governance responses that rest on the open government principles of transparency, integrity, accountability and stakeholder participation. It offers an analysis of the significant changes that are affecting media and information ecosystems, chief among them the growth of digital platforms. Drawing on the implications of this changing landscape, the paper focuses on four policy areas of intervention: public communication for a better dialogue between government and citizens; direct responses to identify and combat disinformation; legal and regulatory policy; and media and civic responses that support better information ecosystems. The paper concludes with proposed steps the OECD can take to build evidence and support policy in this space…(More)”.

Why Personal Data Is a National Security Issue


Article by Susan Ariel Aaronson: “…Concerns about the national security threat from personal data held by foreigners first emerged in 2013. Several U.S. entities, including Target, J.P. Morgan, and the U.S. Office of Personnel Management, were hacked. Many attributed the hacking to Chinese entities. Administration officials concluded that the Chinese government could cross-reference legally obtained and hacked data sets to reveal information about U.S. objectives and strategy.

Personal data troves can also be cross-referenced to identify individuals, putting both personal and national security at risk. Even U.S. firms pose direct and indirect security threats to individuals and the nation because of their failure to adequately protect personal data. For example, Facebook has a disturbing history of sharing personal data without consent and allowing its clients to use that data to manipulate users. Some app designers have enabled functionality unnecessary for their software’s operation, while others, like Anomaly 6, embedded their software in mobile apps without the permission of users or firms. Other companies use personal data without user permission to create new products. Clearview AI scraped billions of images from major web services such as Facebook, Google, and YouTube, and sold these images to law enforcement agencies around the world.

Firms can also inadvertently aggregate personal data and in so doing threaten national security. Strava, an athletes’ social network, released a heat map of its global users’ activities in 2018. Savvy analysts were able to use the heat map to reveal secret military bases and patrol routes. Chinese-owned data firms could be a threat to national security if they share data with the Chinese government. But the problem lies in the U.S.’s failure to adequately protect personal data and police the misuse of data collected without the permission of users….(More)”.

The EU is launching a market for personal data. Here’s what that means for privacy.


Anna Artyushina at MIT Tech Review: “The European Union has long been a trendsetter in privacy regulation. Its General Data Protection Regulation (GDPR) and stringent antitrust laws have inspired new legislation around the world. For decades, the EU has codified protections on personal data and fought against what it viewed as commercial exploitation of private information, proudly positioning its regulations in contrast to the light-touch privacy policies in the United States.

The new European data governance strategy (pdf) takes a fundamentally different approach. With it, the EU will become an active player in facilitating the use and monetization of its citizens’ personal data. Unveiled by the European Commission in February 2020, the strategy outlines policy measures and investments to be rolled out in the next five years.

This new strategy represents a radical shift in the EU’s focus, from protecting individual privacy to promoting data sharing as a civic duty. Specifically, it will create a pan-European market for personal data through a mechanism called a data trust. A data trust is a steward that manages people’s data on their behalf and has fiduciary duties toward its clients.

The EU’s new plan considers personal data to be a key asset for Europe. However, this approach raises some questions. First, the EU’s intent to profit from the personal data it collects puts European governments in a weak position to regulate the industry. Second, the improper use of data trusts can actually deprive citizens of their rights to their own data.

The Trusts Project, the first initiative put forth by the new EU policies, will be implemented by 2022. With a €7 million budget, it will set up a pan-European pool of personal and nonpersonal information that should become a one-stop shop for businesses and governments looking to access citizens’ information.

Global technology companies will not be allowed to store or move Europeans’ data. Instead, they will be required to access it via the trusts. Citizens will collect “data dividends,” which haven’t been clearly defined but could include monetary or nonmonetary payments from companies that use their personal data. With the EU’s roughly 500 million citizens poised to become data sources, the trusts will create the world’s largest data market.

For citizens, this means the data created by them and about them will be held in public servers and managed by data trusts. The European Commission envisions the trusts as a way to help European businesses and governments reuse and extract value from the massive amounts of data produced across the region, and to help European citizens benefit from their information. The project documentation, however, does not specify how individuals will be compensated.

Data trusts were first proposed by internet pioneer Sir Tim Berners-Lee in 2018, and the concept has drawn considerable interest since then. Just like the trusts used to manage one’s property, data trusts may serve different purposes: they can be for-profit enterprises, or they can be set up for data storage and protection, or to work for a charitable cause.

IBM and Mastercard have built a data trust to manage the financial information of their European clients in Ireland; the UK and Canada have employed data trusts to stimulate the growth of the AI industries there; and recently, India announced plans to establish its own public data trust to spur the growth of technology companies.

The new EU project is modeled on Austria’s digital system, which keeps track of information produced by and about its citizens by assigning them unique identifiers and storing the data in public repositories.

Unfortunately, data trusts do not guarantee more transparency. The trust is governed by a charter created by the trust’s settlor, and its rules can be made to prioritize someone’s interests. The trust is run by a board of directors, which means a party that has more seats gains significant control.

The Trusts Project is bound to face some governance issues of its own. Public and private actors often do not see eye to eye when it comes to running critical infrastructure or managing valuable assets. Technology companies tend to favor policies that create opportunity for their own products and services. Caught in a conflict of interest, Europe may overlook the question of privacy….(More)”.

From Desert Battlefields To Coral Reefs, Private Satellites Revolutionize The View


NPR Story: “As the U.S. military and its allies attacked the last Islamic State holdouts last year, it wasn’t clear how many civilians were still in the besieged desert town of Baghouz, Syria.

So Human Rights Watch asked a private satellite company, Planet, for its regular daily photos and also made a special request for video.

“That live video actually was instrumental in convincing us that there were thousands of civilians trapped in this pocket,” said Josh Lyons of Human Rights Watch. “Therefore, the coalition forces absolutely had an obligation to stop and to avoid bombardment of that pocket at that time.”

Which they did until the civilians fled.

Lyons, who’s based in Geneva, has a job title you wouldn’t expect at a human rights group: director of geospatial analysis. He says satellite imagery is increasingly a crucial component of human rights investigations, bolstering traditional eyewitness accounts, especially in areas where it’s too dangerous to send researchers.

“Then we have this magical sort of fusion of data between open-source, eyewitness testimony and data from space. And that becomes essentially a new gold standard for investigations,” he said.

‘A string of pearls’

Satellite photos used to be restricted to the U.S. government and a handful of other nations. Now such imagery is available to everyone, creating a new world of possibilities for human rights groups, environmentalists and researchers who monitor nuclear programs.

They get those images from a handful of private, commercial satellite companies, like Planet and Maxar….(More)”.

Building and maintaining trust in research


Daniel Nunan at the International Journal of Market Research: “One of the many indirect consequences of the COVID pandemic for the research sector may be the impact upon consumers’ willingness to share data. This is reflected in concerns that government-mandated “apps” designed to facilitate COVID testing and tracking schemes will undermine trust in the commercial collection of personal data (WARC, 2020). For example, uncertainty over the consequences of handing over data and the ways in which it might be used could reduce consumers’ willingness to share data with organizations, and reverse a trend that has seen growing levels of consumer confidence in Data Protection Regulations (Data & Direct Marketing Association [DMA], 2020). This highlights how central the role of trust has become in contemporary research practice, and how fragile the process of building trust can be due to the ever-competing demands of public and private data collectors.

For researchers, there are two sides to trust. One relates to building sufficient trust with research participants to facilitate data collection, and the second is building trust with the users of research. Trust has long been understood as a key factor in effective research relationships, with trust between researchers and users of research the key factor in determining the extent to which research is actually used (Moorman et al., 1993). In other words, a trusted messenger is just as important as the contents of the message. In recent years, there has been growing concern over declining trust in research from research participants and the general public, manifested in declining response rates and challenges in gaining participation. Understanding how to build consumer trust is more important than ever, as the shift of communication and commercial activity to digital platforms alters the mechanisms through which trust is built. Trust is therefore essential both for ensuring that accurate data can be collected, and that insights from research have the necessary legitimacy to be acted upon. The two research notes in this issue provide an insight into new areas where the issue of trust needs to be considered within research practice….(More)”.

Journalists’ guide to COVID data


Guide by RTDNA: “Watch a press conference, turn on a newscast, or overhear just about any phone conversation these days and you’ll hear mayors discussing R values, reporters announcing new fatalities and separated families comparing COVID case rolling averages in their counties. As coronavirus resurges across the country, medical data is no longer just the purview of epidemiologists (though a quick glance at any social media comments section shows an unlikely simultaneous surge in the number of virology experts and statisticians).

Journalists reporting on COVID, however, have a particular obligation to understand the data, to add context and to acknowledge uncertainty when reporting the numbers.

“Journalism requires more than merely reporting remarks, claims or comments. Journalism verifies, provides relevant context, tells the rest of the story and acknowledges the absence of important additional information.” – RTDNA Code of Ethics

This guide to common COVID metrics is designed to help journalists know how each data point is calculated, what it means and, importantly, what it doesn’t mean….(More)”.
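The guide’s excerpt stops short of showing a calculation. As an illustration not drawn from the RTDNA guide itself, the “rolling averages” it mentions are simply trailing means over a fixed window (seven days is the common choice), which smooth out day-of-week reporting artifacts in raw case counts:

```python
# Hypothetical illustration (not from the RTDNA guide): a 7-day trailing
# rolling average of daily new case counts, the smoothing most outlets report.
def rolling_average(daily_cases, window=7):
    """Return the trailing mean for each day that has a full window of data."""
    averages = []
    for i in range(window - 1, len(daily_cases)):
        window_slice = daily_cases[i - window + 1 : i + 1]
        averages.append(sum(window_slice) / window)
    return averages

# Nine days of made-up new case counts; only the last three days
# have a full seven-day window behind them.
cases = [10, 12, 9, 15, 20, 18, 21, 30, 25]
print(rolling_average(cases))  # first value: (10+12+9+15+20+18+21)/7 = 15.0
```

Note that the first six days produce no value at all; how a chart handles that gap, and whether a trailing or centered window is used, are exactly the kind of methodological details the guide urges journalists to understand and disclose.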

Strengthening Privacy Protections in COVID-19 Mobile Phone–Enhanced Surveillance Programs


Rand Report: “Dozens of countries, including the United States, have been using mobile phone tools and data sources for COVID-19 surveillance activities, such as tracking infections and community spread, identifying populated areas at risk, and enforcing quarantine orders. These tools can augment traditional epidemiological interventions, such as contact tracing with technology-based data collection (e.g., automated signaling and record-keeping on mobile phone apps). As the response progresses, other beneficial technologies could include tools that authenticate those with low risk of contagion or that build community trust as stay-at-home orders are lifted.

However, the potential benefits that COVID-19 mobile phone–enhanced public health (“mobile”) surveillance program tools could provide are also accompanied by potential for harm. There are significant risks to citizens from the collection of sensitive data, including personal health, location, and contact data. People whose personal information is being collected might worry about who will receive the data, how those recipients might use the data, how the data might be shared with other entities, and what measures will be taken to safeguard the data from theft or abuse.

The risk of privacy violations can also impact government accountability and public trust. The possibility that one’s privacy will be violated by government officials or technology companies might dissuade citizens from getting tested for COVID-19, downloading public health–oriented mobile phone apps, or sharing symptom or location data. More broadly, real or perceived privacy violations might discourage citizens from believing government messaging or complying with government orders regarding COVID-19.

As U.S. public health agencies consider COVID-19-related mobile surveillance programs, they will need to address privacy concerns to encourage broad uptake and protect against privacy harms. Otherwise, COVID-19 mobile surveillance programs likely will be ineffective and the data collected unrepresentative of the situation on the ground….(More)“.

Policies and Strategies to Promote Grassroots Innovation Workbook


UN-ESCAP: “Grassroots innovation is a modality of inclusive innovation that enables extremely affordable, niche-adapted solutions to local problems, often unaided by the public sector or outsiders.

In a context of rising income disparity between the haves and have-nots, every effort should be made to convert the ideas and innovations of knowledge-rich but economically poor individuals and communities into viable means of raising income, addressing social needs, and conserving the environment. While grassroots innovations are typically bottom-up initiatives, public policies can also support the emergence, recognition and diffusion of grassroots innovations. The journey of developing a grassroots idea or invention into a viable product or service for commercial or social diffusion requires support from many actors at different stages and levels.

The Honey Bee Network has been leading the grassroots innovation movement in India. In the past three decades, it has strengthened the inclusive innovation ecosystem of the country and has become a global benchmark of frugal, friendly and flexible solutions for men and women farmers, pastoral and artisan households, mechanics, forest dwellers, fishermen, etc. This workbook draws on the experience of the Honey Bee Network and discusses experiences, issues and strategies that could also be relevant for other countries….(More)”.

Democratic Innovation in Times of Crisis: Exploring Changes in Social and Political Trust


Paper by Martin Karlsson, Joachim Åström and Magnus Adenskog: “The Estonian Citizens’ Assembly (ECA) was initiated in late 2012 as a direct consequence of a legitimacy crisis of Estonian political parties and representative institutions. The spark igniting this crisis was the unraveling of a scheme of illegal party financing. The response from governmental institutions took the form of a democratic innovation involving public crowd‐sourcing and deliberative mini‐publics. This study reports on a survey among the participants in the online crowd‐sourcing process of the ECA (n = 847). The study examines how this democratic innovation influenced participants’ social and political trust as well as the impact of participants’ predispositions and level of satisfaction with the ECA on changes in trust. We find that participants who had positive predispositions and who were satisfied with the ECA were more likely to gain trust. Furthermore, we also find that the participants, in general, became more distrustful of political institutions, while their participation fostered increased social trust. This outcome differs from the intentions of the Estonian institutions that organized the ECA and sheds new light on the role of democratic innovations in the context of legitimacy crises. This is an important step forward in the scholarly understanding of the relationship between democratic innovation and trust….(More)”.

The Truth Is Paywalled But The Lies Are Free


Essay by Nathan J. Robinson: “…This means that a lot of the most vital information will end up locked behind the paywall. And while I am not much of a New Yorker fan either, it’s concerning that the Hoover Institution will freely give you Richard Epstein’s infamous article downplaying the threat of coronavirus, but Isaac Chotiner’s interview demolishing Epstein requires a monthly subscription, meaning that the lie is more accessible than its refutation. Eric Levitz of New York is one of the best and most prolific left political commentators we have. But unless you’re a subscriber of New York, you won’t get to hear much of what he has to say each month.

Possibly even worse is the fact that so much academic writing is kept behind vastly more costly paywalls. A white supremacist on YouTube will tell you all about race and IQ but if you want to read a careful scholarly refutation, obtaining a legal PDF from the journal publisher would cost you $14.95, a price nobody in their right mind would pay for one article if they can’t get institutional access. (I recently gave up on trying to access a scholarly article because I could not find a way to get it for less than $39.95, though in that case the article was garbage rather than gold.) Academic publishing is a nightmarish patchwork, with lots of articles advertised at exorbitant fees on one site, and then for free on another, or accessible only through certain databases, which your university or public library may or may not have access to. (Libraries have to budget carefully because subscription prices are often nuts. A library subscription to the Journal of Coordination Chemistry, for instance, costs $11,367 annually.)

Of course, people can find their ways around paywalls. SciHub is a completely illegal but extremely convenient means of obtaining academic research for free. (I am purely describing it, not advocating it.) You can find a free version of the article debunking race and IQ myths on ResearchGate, a site that has engaged in mass copyright infringement in order to make research accessible. Often, because journal publishers tightly control access to their copyrighted work in order to charge those exorbitant fees for PDFs, the versions of articles that you can get for free are drafts that have not yet gone through peer review, and have thus been subjected to less scrutiny. This means that the more reliable an article is, the less accessible it is. On the other hand, pseudo-scholarship is easy to find. Right-wing think tanks like the Cato Institute, the Foundation for Economic Education, the Hoover Institution, the Mackinac Center, the American Enterprise Institute, and the Heritage Foundation pump out slickly produced policy documents on every subject under the sun. They are utterly untrustworthy—the conclusion is always going to be “let the free market handle the problem,” no matter what the problem or what the facts of the case. But it is often dressed up to look sober-minded and non-ideological.

It’s not easy or cheap to be an “independent researcher.” When I was writing my first book, Superpredator, I wanted to look through newspaper, magazine, and journal archives to find everything I could about Bill Clinton’s record on race. I was lucky I had a university affiliation, because this gave me access to databases like LexisNexis. If I hadn’t, the cost of finding out what I wanted to find out would likely have run into the thousands of dollars.  

A problem beyond cost, though, is convenience. I find that even when I am doing research through databases and my university library, it is often an absolute mess: the sites are clunky and constantly demanding login credentials. The amount of time wasted in figuring out how to obtain a piece of research material is a massive cost on top of the actual pricing. The federal court document database, PACER, for instance, charges 10 cents a page for access to records, which adds up quickly since legal research often involves looking through thousands of pages. They offer an exemption if you are a researcher or can’t afford it, but to get the exemption you have to fill out a three-page form and provide an explanation of both why you need each document and why you deserve the exemption. This is a waste of time that inhibits people’s productivity and limits their access to knowledge.

In fact, to see just how much human potential is being squandered by having knowledge dispensed by the “free market,” let us briefly picture what “totally democratic and accessible knowledge” would look like…(More)”.