Netnography: The Essential Guide to Qualitative Social Media Research


Book by Robert Kozinets: “Netnography is an adaptation of ethnography for the online world, pioneered by Robert Kozinets, and is concerned with the study of online cultures and communities as distinct social phenomena, rather than isolated content. In this landmark third edition, Netnography: The Essential Guide provides the theoretical and methodological groundwork as well as the practical applications, helping students both understand and do netnographic research projects of their own.

Packed with enhanced learning features throughout, linking concepts to structured activities in a step-by-step way, the book is now accompanied by a striking new visual design and further case studies, offering the essential student resource for conducting online ethnographic research. Real-world examples demonstrate netnography in practice across the social sciences, in media and cultural studies, anthropology, education, nursing, travel and tourism, and other fields….(More)”.

National SDG Review: data challenges and opportunities


Press Release: “…the Partnership in Statistics for Development in the 21st Century (PARIS21) and Partners for Review launched a landmark new paper that identifies the factors preventing countries from fully exploiting their data ecosystems and proposes solutions for strengthening statistical capacities to achieve the 2030 Agenda for Sustainable Development.

Ninety percent of the data in the world has been created in the past two years, yet many countries with low statistical capacity struggle to produce, analyse and communicate the data necessary to advance sustainable development. At the same time, demand for more and better data and statistics is increasing massively, with international agreements like the 2030 Agenda placing unprecedented demands on countries to report on more than 230 indicators.

Using PARIS21’s Capacity Development 4.0 (CD 4.0) approach, the paper shows that leveraging data available in the data ecosystem for official reporting requires new capacity in terms of skills and knowledge, management, politics and power. The paper also shows that these capacities need to be developed at both the organisational and the systemic level, the latter involving the various channels and interactions that connect different organisations.

Aimed at national statistics offices, development professionals and others involved in the national data ecosystem, the paper provides a roadmap that can help national statistical systems develop and strengthen the capacities of traditional and new actors in the data ecosystem to improve both the follow-up and review process of the 2030 Agenda and the data architecture for sustainable development at the national level…(More)”.

Why Data Is Not the New Oil


Blogpost by Alec Stapp: “Data is the new oil,” said Jaron Lanier in a recent op-ed for The New York Times. Lanier’s use of this metaphor is only the latest instance of what has become the dumbest meme in tech policy. As the digital economy becomes more prominent in our lives, it is not unreasonable to seek to understand one of its most important inputs. But this analogy to the physical economy is fundamentally flawed. Worse, introducing regulations premised upon faulty assumptions like this will likely do far more harm than good. Here are seven reasons why “data is the new oil” misses the mark:

1. Oil is rivalrous; data is non-rivalrous

If someone uses a barrel of oil, it can’t be consumed again. But, as Alan McQuinn, a senior policy analyst at the Information Technology and Innovation Foundation, noted, “when consumers ‘pay with data’ to access a website, they still have the same amount of data after the transaction as before. As a result, users have an infinite resource available to them to access free online services.” Imposing restrictions on data collection makes this infinite resource finite. 

2. Oil is excludable; data is non-excludable

Oil is highly excludable because, as a physical commodity, it can be stored in ways that prevent use by non-authorized parties. However, as my colleagues pointed out in a recent comment to the FTC: “While databases may be proprietary, the underlying data usually is not.” They go on to argue that this can lead to under-investment in data collection:

[C]ompanies that have acquired a valuable piece of data will struggle both to prevent their rivals from obtaining the same data as well as to derive competitive advantage from the data. For these reasons, it also means that firms may well be more reluctant to invest in data generation than is socially optimal. In fact, to the extent this is true, there is arguably more risk of companies under-investing in data generation than of firms over-investing in order to create data troves with which to monopolize a market. This contrasts with oil, where complete excludability is the norm.

3. Oil is fungible; data is non-fungible

Oil is a commodity, so, by definition, one barrel of oil of a given grade is equivalent to any other barrel of that grade. Data, on the other hand, is heterogeneous. Each person’s data is unique and may consist of a practically unlimited number of different attributes that can be collected into a profile. This means that oil will follow the law of one price, while a dataset’s value will be highly contingent on its particular properties and commercialization potential.

4. Oil has positive marginal costs; data has zero marginal costs

There is a significant expense to producing and distributing an additional barrel of oil (as low as $5.49 per barrel in Saudi Arabia; as high as $21.66 in the U.K.). Data is merely encoded information (bits of 1s and 0s), so gathering, storing, and transferring it is nearly costless (though, to be clear, setting up systems for collecting and processing can be a large fixed cost). Under perfect competition, the market clearing price is equal to the marginal cost of production (hence why data is traded for free services and oil still requires cold, hard cash)….(More)”.
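To make the closing parenthetical concrete, here is a back-of-the-envelope sketch (our illustration, not part of the original post, using the per-barrel costs cited above) of the standard perfect-competition condition that the market-clearing price equals marginal cost:

\[
p^{*} = MC
\qquad\Longrightarrow\qquad
\begin{cases}
MC_{\text{oil}} \approx \$5.49\text{–}\$21.66 \text{ per barrel} & \Rightarrow\; p^{*}_{\text{oil}} > 0 \\[2pt]
MC_{\text{data}} \approx 0 & \Rightarrow\; p^{*}_{\text{data}} \approx 0
\end{cases}
\]

At a marginal cost of (nearly) zero, the competitive cash price of data is driven to (nearly) zero, which is why platforms barter free services for data rather than selling it by the unit.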

Principles alone cannot guarantee ethical AI


Paper by Brent Mittelstadt: “Artificial intelligence (AI) ethics is now a global topic of discussion in academic and policy circles. At least 84 public–private initiatives have produced statements describing high-level principles, values and other tenets to guide the ethical development, deployment and governance of AI. According to recent meta-analyses, AI ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach for the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement….(More)”.

Surveillance giants: how the business model of Google and Facebook threatens human rights


Report by Amnesty International: “Google and Facebook help connect the world and provide crucial services to billions. To participate meaningfully in today’s economy and society, and to realize their human rights, people rely on access to the internet—and to the tools Google and Facebook offer. But Google and Facebook’s platforms come at a systemic cost. The companies’ surveillance-based business model is inherently incompatible with the right to privacy and poses a threat to a range of other rights including freedom of opinion and expression, freedom of thought, and the right to equality and non-discrimination….(More)”.

Responsible Data for Children


New Site and Report by UNICEF and The GovLab: “RD4C seeks to build awareness regarding the need for special attention to data issues affecting children—especially in this age of changing technology and data linkage; and to engage with governments, communities, and development actors to put the best interests of children and a child rights approach at the center of our data activities. The right data in the right hands at the right time can significantly improve outcomes for children. The challenge is to understand the potential risks and ensure that the collection, analysis and use of data on children does not undermine these benefits.

Drawing upon field-based research and established good practice, RD4C aims to highlight and support best practice data responsibility; identify challenges and develop practical tools to assist practitioners in evaluating and addressing them; and encourage a broader discussion on actionable principles, insights, and approaches for responsible data management….(More)”.

Uses and Reuses of Scientific Data: The Data Creators’ Advantage


Paper by Irene V. Pasquetto, Christine L. Borgman, and Morgan F. Wofford: “Open access to data, as a core principle of open science, is predicated on assumptions that scientific data can be reused by other researchers. We test those assumptions by asking where scientists find reusable data, how they reuse those data, and how they interpret data they did not collect themselves. By conducting a qualitative meta-analysis of evidence on two long-term, distributed, interdisciplinary consortia, we found that scientists frequently sought data from public collections and from other researchers for comparative purposes such as “ground-truthing” and calibration. When they sought others’ data for reanalysis or for combining with their own data, which was relatively rare, most preferred to collaborate with the data creators.

We propose a typology of data reuses ranging from comparative to integrative. Comparative data reuse requires interactional expertise, which involves knowing enough about the data to assess their quality and value for a specific comparison such as calibrating an instrument in a lab experiment. Integrative reuse requires contributory expertise, which involves the ability to perform the action, such as reusing data in a new experiment. Data integration requires more specialized scientific knowledge and deeper levels of epistemic trust in the knowledge products. Metadata, ontologies, and other forms of curation benefit interpretation for any kind of data reuse. Based on these findings, we theorize the data creators’ advantage, that those who create data have intimate and tacit knowledge that can be used as barter to form collaborations for mutual advantage. Data reuse is a process that occurs within knowledge infrastructures that evolve over time, encompassing expertise, trust, communities, technologies, policies, resources, and institutions….(More)”.

The Impact of Open Data on Public Procurement


Paper by Raphael Duguay, Thomas Rauter and Delphine Samuels: “We examine how the increased accessibility of public purchasing data affects competition, prices, contract allocations, and contract performance in government procurement. The European Union recently made its already public but difficult-to-access information about the process and outcomes of procurement awards available for bulk download in a user-friendly format.

Comparing government contracts above EU publication thresholds with contracts that fall below them, we find that increasing the public accessibility of procurement data raises the likelihood of competitive bidding processes, increases the number of bids per contract, and facilitates market entry by new vendors. Following the open data initiative, procurement prices decrease and EU government agencies are more likely to award contracts to the lowest bidder.

However, the increased competition comes at a cost: firms execute government contracts with more delays and ex-post price renegotiations. These effects are stronger for new vendors, complex procurement projects, and contracts awarded solely based on price. Overall, our results suggest that open procurement data facilitates competition and lowers ex-ante procurement prices but does not necessarily increase allocative efficiency in government contracting….(More)”.

How We Became Our Data


Book by Colin Koopman: “We are now acutely aware, as if all of a sudden, that data matters enormously to how we live. How did information come to be so integral to what we can do? How did we become people who effortlessly present our lives in social media profiles and who are meticulously recorded in state surveillance dossiers and online marketing databases? What is the story behind data coming to matter so much to who we are?

In How We Became Our Data, Colin Koopman excavates early moments of our rapidly accelerating data-tracking technologies and their consequences for how we think of and express our selfhood today. Koopman explores the emergence of mass-scale record-keeping systems like birth certificates and social security numbers, as well as new data techniques for categorizing personality traits, measuring intelligence, and even racializing subjects. This all culminates in what Koopman calls the “informational person” and the “informational power” we are now subject to. The recent explosion of digital technologies that are turning us into a series of algorithmic data points is shown to have a deeper and more turbulent past than we commonly think. Blending philosophy, history, political theory, and media theory in conversation with thinkers like Michel Foucault, Jürgen Habermas, and Friedrich Kittler, Koopman presents an illuminating perspective on how we have come to think of our personhood—and how we can resist its erosion….(More)”.

An Open Letter to Law School Deans about Privacy Law Education in Law Schools


Daniel Solove: “Recently, a group of legal academics and practitioners in the field of privacy law sent a letter to the deans of all U.S. law schools about privacy law education. My own brief intro to this endeavor appears here in italics, followed by the letter. The signatories have signed onto the letter itself, not this italicized intro.

Although the field of privacy law has grown dramatically in the past two decades, education in law schools about privacy law has significantly lagged behind. Most U.S. law schools lack a course on privacy law. Of those that have courses, many are small seminars, often taught by adjuncts. Of the law schools that do have a privacy course, most offer just one. Most schools lack a full-time faculty member who focuses substantially on privacy law.

This state of affairs is a great detriment to students. I am constantly approached by students and graduates from law schools across the country who are wondering how they can learn about privacy law and enter the field. Many express great disappointment at the lack of any courses, faculty, or activities at their schools.

After years of hoping that the legal academy would wake up and respond, I came to the realization that this wasn’t going to happen on its own. The following letter [click here for the PDF version] aims to make deans aware of the privacy law field. I hope that the letter is met with action….(More)”.