Notable Privacy and Security Books from 2016


Daniel J. Solove at Technology, Academics, Policy: “Here are some notable books on privacy and security from 2016….

Chris Jay Hoofnagle, Federal Trade Commission Privacy Law and Policy

From my blurb: “Chris Hoofnagle has written the definitive book about the FTC’s involvement in privacy and security. This is a deep, thorough, erudite, clear, and insightful work – one of the very best books on privacy and security.”

My interview with Hoofnagle about his book: The 5 Things Every Privacy Lawyer Needs to Know about the FTC: An Interview with Chris Hoofnagle

My further thoughts on the book in my interview post above: “This is a book that all privacy and cybersecurity lawyers should have on their shelves. The book is the most comprehensive scholarly discussion of the FTC’s activities in these areas, and it also delves deep into the FTC’s history and activities in other areas to provide much-needed context to understand how it functions and reasons in privacy and security cases. There is simply no better resource on the FTC and privacy. This is a great book and a must-read. It is filled with countless fascinating things that will surprise you about the FTC, which has quite a rich and storied history. And it is an accessible and lively read too – Chris really makes the issues come alive.”

Gary T. Marx, Windows into the Soul: Surveillance and Society in an Age of High Technology

From Peter Grabosky: “The first word that came to mind while reading this book was cornucopia. After decades of research on surveillance, Gary Marx has delivered an abundant harvest indeed. The book is much more than a straightforward treatise. It borders on the encyclopedic, and is literally overflowing with ideas, observations, and analyses. Windows into the Soul commands the attention of anyone interested in surveillance, past, present, and future. The book’s website contains a rich abundance of complementary material. An additional chapter consists of an intellectual autobiography discussing the author’s interest in, and personal experience with, surveillance over the course of his career. Because of its extraordinary breadth, the book should appeal to a wide readership…. it will be of interest to scholars of deviance and social control, cultural studies, criminal justice and criminology. But the book should be read well beyond the towers of academe. The security industry, broadly defined to include private security and intelligence companies as well as state law enforcement and intelligence agencies, would benefit from the book’s insights. So too should it be read by those in the information technology industries, including the manufacturers of the devices and applications which are central to contemporary surveillance, and which are shaping our future.”

Susan C. Lawrence, Privacy and the Past: Research, Law, Archives, Ethics

From the book blurb: “When the new HIPAA privacy rules regarding the release of health information took effect, medical historians suddenly faced a raft of new ethical and legal challenges—even in cases where their subjects had died years, or even a century, earlier. In Privacy and the Past, medical historian Susan C. Lawrence explores the impact of these new privacy rules, offering insight into what historians should do when they research, write about, and name real people in their work.”

Ronald J. Krotoszynski, Privacy Revisited: A Global Perspective on the Right to Be Left Alone

From Mark Tushnet: “Professor Krotoszynski provides a valuable overview of how several constitutional systems accommodate competing interests in privacy, speech, and democracy. He shows how scholarship in comparative law can help one think about one’s own legal system while remaining sensitive to the different cultural and institutional settings of each nation’s law. A very useful contribution.”

Laura K. Donohue, The Future of Foreign Intelligence: Privacy and Surveillance in a Digital Age

Gordon Corera, Cyberspies: The Secret History of Surveillance, Hacking, and Digital Espionage

J. Macgregor Wise, Surveillance and Film…(More; See also Nonfiction Privacy + Security Books).

Beyond IRBs: Designing Ethical Review Processes for Big Data Research


Conference Proceedings by Future of Privacy Forum: “The ethical framework applying to human subject research in the biomedical and behavioral research fields dates back to the Belmont Report. Drafted in 1976 and adopted by the United States government in 1991 as the Common Rule, the Belmont principles were geared towards a paradigmatic controlled scientific experiment with a limited population of human subjects interacting directly with researchers and manifesting their informed consent. These days, researchers in academic institutions, as well as private-sector businesses not subject to the Common Rule, conduct analysis of a wide array of data sources, from massive commercial or government databases to individual tweets or Facebook postings publicly available online, with little or no opportunity to directly engage human subjects to obtain their consent or even inform them of research activities.

Data analysis is now used in multiple contexts, such as combatting fraud in the payment card industry, reducing the time commuters spend on the road, detecting harmful drug interactions, improving marketing mechanisms, personalizing the delivery of education in K-12 schools, encouraging exercise and weight loss, and much more. And companies deploy data research not only to maximize economic gain but also to test new products and services to ensure they are safe and effective. These data uses promise tremendous societal benefits but at the same time create new risks to privacy, fairness, due process and other civil liberties.

Increasingly, corporate officers find themselves struggling to navigate unsettled social norms and make ethical choices that are more befitting of philosophers than business managers or even lawyers. The ethical dilemmas arising from data analysis transcend privacy and trigger concerns about stigmatization, discrimination, human subject research, algorithmic decision making and filter bubbles.

The challenge of fitting the round peg of data-focused research into the square hole of existing ethical and legal frameworks will determine whether society can reap the tremendous opportunities hidden in the data exhaust of governments and cities, health care institutions and schools, social networks and search engines, while at the same time protecting privacy, fairness, equality and the integrity of the scientific process. One commentator called this “the biggest civil rights issue of our time.”…(More)”

Group Privacy: New Challenges of Data Technologies


Book edited by Linnet Taylor, Luciano Floridi, and Bart van der Sloot: “The goal of the book is to present the latest research on the new challenges of data technologies. It will offer an overview of the social, ethical and legal problems posed by group profiling, big data and predictive analysis and of the different approaches and methods that can be used to address them. In doing so, it will help the reader to gain a better grasp of the ethical and legal conundrums posed by group profiling. The volume first maps the current and emerging uses of new data technologies and clarifies the promises and dangers of group profiling in real life situations. It then balances this with an analysis of how far the current legal paradigm grants group rights to privacy and data protection, and discusses possible routes to addressing these problems. Finally, an afterword gathers the conclusions reached by the different authors and discusses future perspectives on regulating new data technologies….(More and Table of Contents)”

Group Privacy in Times of Big Data. A Literature Review


Paula Helm at Digital Culture & Society: “New technologies pose new challenges to the protection of privacy, and they stimulate new debates on the scope of privacy. Such debates usually concern the individuals’ right to control the flow of his or her personal information. The article, however, discusses new challenges posed by new technologies in terms of their impact on groups and their privacy. Two main challenges are identified in this regard, both having to do with the formation of groups through the involvement of algorithms and the lack of civil awareness regarding the consequences of this involvement. On the one hand, there is the phenomenon of groups being created on the basis of big data without the members of such groups being aware of having been assigned to, and being treated as part of, a certain group. Here, the challenge concerns the limits of personal law, manifesting in the inability of individuals to address possible violations of their right to privacy, since they are not aware of them. On the other hand, commercially driven websites influence the way in which groups form, grow and communicate online, and they do this in such a subtle way that members oftentimes do not take this influence into account. This is why one could speak of a kind of domination here, which calls for legal regulation. The article presents different approaches to addressing and dealing with those two challenges, discussing their strengths and weaknesses. Finally, a conclusion gathers the insights reached by the different approaches discussed and reflects on future challenges for further research on group privacy in times of big data….(More)”
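To make the first of these challenges concrete, here is a minimal, hypothetical sketch (ours, not Helm’s) of algorithmic group formation, assuming NumPy and scikit-learn are available; the behavioral features and group counts are invented for illustration:

```python
# Hypothetical sketch of algorithmic group formation: individuals are
# clustered on behavioral features and can then be treated as a group
# (e.g., for targeted pricing) without being told the group exists.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Made-up behavioral features: [purchases per month, avg. session minutes]
group_a = rng.normal([5.0, 10.0], 2.0, size=(100, 2))
group_b = rng.normal([20.0, 45.0], 2.0, size=(100, 2))
features = np.vstack([group_a, group_b])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# The data holder can now act on "group 0" vs. "group 1", a classification
# that no individual consented to or is even aware of belonging to.
for g in (0, 1):
    print(f"group {g}: {np.sum(labels == g)} people")
```

Any decision keyed to these cluster labels (pricing, targeting, eligibility) affects the group as a group, which is precisely the gap in individual-centered privacy law that the article identifies.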

Privacy of Public Data


Paper by Kirsten E. Martin and Helen Nissenbaum: “The construct of an information dichotomy has played a defining role in regulating privacy: information deemed private or sensitive typically earns high levels of protection, while lower levels of protection are accorded to information deemed public or non-sensitive. Challenging this dichotomy, the theory of contextual integrity associates privacy with complex typologies of information, each connected with respective social contexts. Moreover, it contends that information type is merely one among several variables that shape people’s privacy expectations and underpin privacy’s normative foundations. Other contextual variables include key actors – information subjects, senders, and recipients – as well as the principles under which information is transmitted, such as whether with subjects’ consent, as bought and sold, as required by law, and so forth. Prior work revealed the systematic impact of these other variables on privacy assessments, thereby debunking the defining effects of so-called private information.

In this paper, we shine a light on the opposite effect, challenging conventional assumptions about public information. The paper reports on a series of studies, which probe attitudes and expectations regarding information that has been deemed public. Public records established through the historical practice of federal, state, and local agencies, as a case in point, are afforded little privacy protection, or possibly none at all. Motivated by progressive digitization and the creation of online portals through which these records have been made publicly accessible, our work underscores the need for more concentrated and nuanced privacy assessments – a need made even more urgent in the face of vigorous open data initiatives, which call on federal, state, and local agencies to provide access to government records in both human- and machine-readable forms. Within a stream of research suggesting possible guard rails for open data initiatives, our work, guided by the theory of contextual integrity, provides insight into the factors systematically shaping individuals’ expectations and normative judgments concerning appropriate uses of and terms of access to information.

Using a factorial vignette survey, we asked respondents to rate the appropriateness of a series of scenarios in which contextual elements were systematically varied; these elements included the data recipient (e.g., bank, employer, friend), the data subject, and the source, or sender, of the information (e.g., individual, government, data broker). Because the object of this study was to highlight the complexity of people’s privacy expectations regarding so-called public information, information types were drawn from data fields frequently held in public government records (e.g., voter registration, marital status, criminal standing, and real property ownership).
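To illustrate how such a factorial design enumerates scenarios, here is a minimal sketch in Python; it is our illustration rather than the authors’ instrument, and the attribute values below are hypothetical stand-ins for the study’s actual survey items:

```python
# A sketch of factorial vignette generation: every combination of the
# contextual-integrity factors (recipient, sender, information type,
# transmission principle) yields one scenario for respondents to rate.
# All values are hypothetical examples, not the study's survey items.
from itertools import product

recipients = ["bank", "employer", "friend"]
senders = ["individual", "government", "data broker"]
info_types = ["voter registration", "marital status",
              "criminal record", "real property ownership"]
principles = ["with the subject's consent", "bought and sold",
              "as required by law"]

def vignette(recipient, sender, info, principle):
    # One scenario: who sends which information to whom, under what principle.
    return (f"sender={sender}; recipient={recipient}; "
            f"info={info}; principle={principle}")

scenarios = [vignette(r, s, i, p)
             for r, s, i, p in product(recipients, senders, info_types, principles)]

print(len(scenarios))  # 3 * 3 * 4 * 3 = 108 distinct vignettes
print(scenarios[0])
```

Regressing respondents’ appropriateness ratings on the varied factors then isolates the effect of each contextual element, which is how a design like this can show that information type alone does not determine privacy expectations.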

Our findings are noteworthy on both theoretical and practical grounds. In the first place, they reinforce key assertions of contextual integrity about the simultaneous relevance to privacy of other factors beyond information types. In the second place, they reveal discordance between truisms that have frequently shaped public policy relevant to privacy. …(More)”


neveragain.tech


neveragain.tech: “We, the undersigned, are employees of tech organizations and companies based in the United States. We are engineers, designers, business executives, and others whose jobs include managing or processing data about people. We are choosing to stand in solidarity with Muslim Americans, immigrants, and all people whose lives and livelihoods are threatened by the incoming administration’s proposed data collection policies. We refuse to build a database of people based on their Constitutionally-protected religious beliefs. We refuse to facilitate mass deportations of people the government believes to be undesirable…..

Today we stand together to say: not on our watch, and never again.

We commit to the following actions:

  • We refuse to participate in the creation of databases of identifying information for the United States government to target individuals based on race, religion, or national origin.
  • We will advocate within our organizations:
    • to minimize the collection and retention of data that would facilitate ethnic or religious targeting.
    • to scale back existing datasets with unnecessary racial, ethnic, and national origin data.
    • to responsibly destroy high-risk datasets and backups.
    • to implement security and privacy best practices, in particular, for end-to-end encryption to be the default wherever possible.
    • to demand appropriate legal process should the government request that we turn over user data collected by our organization, even in small amounts.
  • If we discover misuse of data that we consider illegal or unethical in our organizations:
    • We will work with our colleagues and leaders to correct it.
    • If we cannot stop these practices, we will exercise our rights and responsibilities to speak out publicly and engage in responsible whistleblowing without endangering users.
    • If we have the authority to do so, we will use all available legal defenses to stop these practices.
    • If we do not have such authority, and our organizations force us to engage in such misuse, we will resign from our positions rather than comply.
  • We will raise awareness and ask critical questions about the responsible and fair use of data and algorithms beyond our organization and our industry….(More)

What does Big Data mean to public affairs research?


Ines Mergel, R. Karl Rethemeyer, and Kimberley R. Isett at LSE’s The Impact Blog: “…Big Data promises access to vast amounts of real-time information from public and private sources that should allow insights into behavioral preferences, policy options, and methods for public service improvement. In the private sector, marketing preferences can be aligned with customer insights gleaned from Big Data. In the public sector however, government agencies are less responsive and agile in their real-time interactions by design – instead using time for deliberation to respond to broader public goods. The responsiveness Big Data promises is a virtue in the private sector but could be a vice in the public.

Moreover, we raise several important concerns with respect to relying on Big Data as a decision and policymaking tool. While in the abstract Big Data is comprehensive and complete, in practice today’s version of Big Data has several features that should give public sector practitioners and scholars pause. First, most of what we think of as Big Data is really ‘digital exhaust’ – that is, data collected for purposes other than public sector operations or research. Data sets that might be publicly available from social networking sites such as Facebook or Twitter were designed for purely technical reasons. The degree to which this data lines up conceptually and operationally with public sector questions is purely coincidental. Use of digital exhaust for purposes not previously envisioned can go awry. A good example is Google’s attempt to predict the flu based on search terms.

Second, we believe there are ethical issues that may arise when researchers use data that was created as a byproduct of citizens’ interactions with each other or with a government social media account. Citizens are not able to understand or control how their data is used and have not given consent for storage and re-use of their data. We believe that research institutions need to examine their institutional review board processes to help researchers and their subjects understand important privacy issues that may arise. Too often it is possible to infer individual-level insights about private citizens from a combination of data points and thus predict their behaviors or choices.
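The kind of inference the authors warn about can be shown with a minimal sketch; the records below are entirely made up, and pandas is an assumed dependency:

```python
# Sketch of a quasi-identifier linkage: neither dataset alone ties a name
# to a sensitive attribute, but joining on a few shared fields does.
import pandas as pd

# "Anonymized" research release: no names, but quasi-identifiers retained.
research = pd.DataFrame({
    "zip": ["13201", "13201", "10027"],
    "birth_date": ["1960-07-31", "1984-02-12", "1972-11-05"],
    "gender": ["F", "M", "F"],
    "condition": ["hypertension", "asthma", "diabetes"],
})

# Public record (e.g., a voter roll): names alongside the same fields.
voters = pd.DataFrame({
    "name": ["J. Smith", "K. Jones"],
    "zip": ["13201", "10027"],
    "birth_date": ["1960-07-31", "1972-11-05"],
    "gender": ["F", "F"],
})

# The join links names to conditions, an inference neither dataset permits alone.
linked = research.merge(voters, on=["zip", "birth_date", "gender"])
print(linked[["name", "condition"]])
```

The sketch mirrors the authors’ point that individual-level insights can be inferred from a combination of data points, even when each dataset looks innocuous on its own.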

Lastly, Big Data can only represent those that spend some part of their life online. Yet we know that certain segments of society opt in to life online (by using social media or network-connected devices), opt out (either knowingly or passively), or lack the resources to participate at all. The demography of the internet matters. For instance, researchers tend to use Twitter data because its API allows data collection for research purposes, but many forget that Twitter users are not representative of the overall population. Instead, as a recent Pew Social Media 2016 update shows, only 24% of all online adults use Twitter. Internet participation generally is biased in terms of age, educational attainment, and income – all of which correlate with gender, race, and ethnicity. We believe therefore that predictive insights are potentially biased toward certain parts of the population, making generalisations highly problematic at this time….(More)”

A Guide to Data Innovation for Development – From idea to proof-of-concept


Press Release: “UNDP and UN Global Pulse today released a comprehensive guide on how to integrate new sources of data into development and humanitarian work.

New and emerging data sources such as mobile phone data, social media, remote sensors and satellites have the potential to improve the work of governments and development organizations across the globe.

Entitled ‘A Guide to Data Innovation for Development – From idea to proof-of-concept,’ this publication was developed by practitioners for practitioners. It provides step-by-step guidance for working with new sources of data to staff of UN agencies and international Non-Governmental Organizations.

The guide is a result of a collaboration of UNDP and UN Global Pulse with support from UN Volunteers. Led by UNDP innovation teams in Europe and Central Asia and the Arab States, six UNDP offices in Armenia, Egypt, Kosovo, fYR Macedonia, Sudan and Tunisia each completed data innovation projects applicable to development challenges on the ground.

The publication builds on these successful case trials and on the expertise of data innovators from UNDP and UN Global Pulse who managed the design and development of those projects.

It provides practical guidance for jump-starting a data innovation project, from the design phase through the creation of a proof-of-concept.

The guide is structured into three sections – (I) Explore the Problem & System, (II) Assemble the Team, and (III) Create the Workplan. Each section comprises a series of tools for completing the steps needed to initiate and design a data innovation project, to engage the right partners, and to make sure that adequate privacy and protection mechanisms are applied.

…Download ‘A Guide to Data Innovation for Development – From idea to proof-of-concept’ here.”

Big data promise exponential change in healthcare


Gonzalo Viña in the Financial Times (Special Report): “When a top Formula One team is using pit stop data-gathering technology to help a drugmaker improve the way it makes ventilators for asthma sufferers, there can be few doubts that big data are transforming pharmaceutical and healthcare systems.

GlaxoSmithKline employs online technology and a data algorithm developed by F1’s elite McLaren Applied Technologies team to minimise the risk of leakage from its best-selling Ventolin (salbutamol) bronchodilator drug.

Using multiple sensors and hundreds of thousands of readings, the potential for leakage is coming down to “close to zero”, says Brian Neill, diagnostics director in GSK’s programme and risk management division.

This apparently unlikely venture for McLaren, known more as the team of such star drivers as Fernando Alonso and Jenson Button, extends beyond the work it does with GSK. It has partnered with Birmingham Children’s Hospital in a £1.8m project utilising McLaren’s expertise in analysing data during a motor race to collect such information from patients as their heart and breathing rates and oxygen levels. Imperial College London, meanwhile, is making use of F1 sensor technology to detect neurological dysfunction….

Big data analysis is already helping to reshape sales and marketing within the pharmaceuticals business. Great potential, however, lies in its ability to fine tune research and clinical trials, as well as providing new measurement capabilities for doctors, insurers and regulators and even patients themselves. Its applications seem infinite….

The OECD last year said governments needed better data governance rules given the “high variability” among OECD countries in protecting patient privacy. Recently, DeepMind, the artificial intelligence company owned by Google, signed a deal with a UK NHS trust to process, via a mobile app, medical data relating to 1.6m patients. Privacy advocates see this as “worrying”. Julia Powles, a University of Cambridge technology law expert, asks if the company is being given “a free pass” on the back of “unproven promises of efficiency and innovation”.

Brian Hengesbaugh, partner at law firm Baker & McKenzie in Chicago, says the process of solving such problems remains “under-developed”… (More)

Shareveillance: Subjectivity between open and closed data


Clare Birchall in Big Data and Society: “This article attempts to question modes of sharing and watching to rethink political subjectivity beyond that which is enabled and enforced by the current data regime. It identifies and examines a ‘shareveillant’ subjectivity: a form configured by the sharing and watching that subjects have to withstand and enact in the contemporary data assemblage. Looking at government open and closed data as case studies, this article demonstrates how ‘shareveillance’ produces an anti-political role for the public. In describing shareveillance as, after Jacques Rancière, a distribution of the (digital) sensible, this article posits a politico-ethical injunction to cut into the share and flow of data in order to arrange a more enabling assemblage of data and its affects. In order to interrupt shareveillance, this article borrows a concept from Édouard Glissant and his concern with raced otherness to imagine what a ‘right to opacity’ might mean in the digital context. To assert this right is not to endorse the individual subject in her sovereignty and solitude, but rather to imagine a collective political subjectivity and relationality according to the important question of what it means to ‘share well’ beyond the veillant expectations of the state.

Two questions dominate current debates at the intersection of privacy, governance, security, and transparency: How much, and what kind of data should citizens have to share with surveillant states? And: How much data from government departments should states share with citizens? Yet, these issues are rarely expressed in terms of ‘sharing’ in the way that I will be doing in this article. More often, when thought in tandem with the digital, ‘sharing’ is used in reference to either free trials of software (‘shareware’); the practice of peer-to-peer file sharing; platforms that facilitate the pooling, borrowing, swapping, renting, or selling of resources, skills, and assets that have come to be known as the ‘sharing economy’; or the business of linking and liking on social media, which invites us to share our feelings, preferences, thoughts, interests, photographs, articles, and web links. Sharing in the digital context has been framed as a form of exchange, then, but also communication and distribution (see John, 2013; Wittel, 2011).

In order to understand the politics of open and opaque government data practices, which either share with citizens or ask citizens to share, I will extend existing commentaries on the distributive qualities of sharing by drawing on Jacques Rancière’s notion of the ‘distribution of the sensible’ (2004a) – a settlement that determines what is visible, audible, sayable, knowable and what share or role we each have within it. In the process, I articulate ‘sharing’ with ‘veillance’ (veiller ‘to watch’ is from the Latin vigilare, from vigil, ‘watchful’) to turn the focus from prevalent ways of understanding digital sharing towards a form of contemporary subjectivity. What I call ‘shareveillance’ – a state in which we are always already sharing; indeed, in which any relationship with data is only made possible through a conditional idea of sharing – produces an anti-politicised public caught between different data practices.

I will argue that both open and opaque government data initiatives involve, albeit differently pitched, forms of sharing and veillance. Government practices that share data with citizens involve veillance because they call on citizens to monitor and act upon that data – we are envisioned (‘veiled’ and hailed) as auditing and entrepreneurial subjects. Citizens have to monitor the state’s data, that is, or they are expected to innovate with it and make it profitable. Data sharing therefore apportions responsibility without power. It watches citizens watching the state, delimiting the ways in which citizens can engage with that data and, therefore, the scope of the political per se….(More)”.