Protecting One’s Own Privacy in a Big Data Economy


Anita L. Allen in the Harvard Law Review Forum: “Big Data is the vast quantities of information amenable to large-scale collection, storage, and analysis. Using such data, companies and researchers can deploy complex algorithms and artificial intelligence technologies to reveal otherwise unascertained patterns, links, behaviors, trends, identities, and practical knowledge. The information that comprises Big Data arises from government and business practices, consumer transactions, and the digital applications sometimes referred to as the “Internet of Things.” Individuals invisibly contribute to Big Data whenever they live digital lifestyles or otherwise participate in the digital economy, such as when they shop with a credit card, get treated at a hospital, apply for a job online, research a topic on Google, or post on Facebook.

Privacy advocates and civil libertarians say Big Data amounts to digital surveillance that potentially results in unwanted personal disclosures, identity theft, and discrimination in contexts such as employment, housing, and financial services. These advocates and activists say typical consumers and internet users do not understand the extent to which their activities generate data that is being collected, analyzed, and put to use for varied governmental and business purposes.

I have argued elsewhere that individuals have a moral obligation to respect not only other people’s privacy but also their own. Here, I wish to comment first on whether the notion that individuals have a moral obligation to protect their own information privacy is rendered utterly implausible by current and likely future Big Data practices; and on whether a conception of an ethical duty to self-help in the Big Data context may be more pragmatically framed as a duty to be part of collective actions encouraging business and government to adopt more robust privacy protections and data security measures….(More)”

The Signal Code


The Signal Code: “Humanitarian action adheres to the core humanitarian principles of impartiality, neutrality, independence, and humanity, as well as respect for international humanitarian and human rights law. These foundational principles are enshrined within core humanitarian doctrine, particularly the Red Cross/NGO Code of Conduct and the Humanitarian Charter. Together, these principles establish a duty of care for populations affected by the actions of humanitarian actors and impose adherence to a standard of reasonable care for those engaged in humanitarian action.

Engagement in HIAs, including the use of data and ICTs, must be consistent with these foundational principles and respect the human rights of crisis-affected people to be considered “humanitarian.” In addition to offering potential benefits to those affected by crisis, HIAs, including the use of ICTs, can cause harm to the safety, wellbeing, and the realization of the human rights of crisis-affected people. Absent a clear understanding of which rights apply to this context, the utilization of new technologies, and in particular experimental applications of these technologies, may be more likely to harm communities and violate the fundamental human rights of individuals.

The Signal Code is based on the application of the UDHR, the Nuremberg Code, the Geneva Convention, and other instruments of customary international law related to HIAs and the use of ICTs by crisis-affected populations and by humanitarians on their behalf. The fundamental human rights undergirding this Code are the rights to life, liberty, and security; the protection of privacy; freedom of expression; and the right to share in scientific advancement and its benefits as expressed in Articles 3, 12, 19, and 27 of the UDHR.

The Signal Code asserts that all people have fundamental rights to access, transmit, and benefit from information as a basic humanitarian need; to be protected from harms that may result from the provision of information during crisis; to have a reasonable expectation of privacy and data security; to have agency over how their data is collected and used; and to seek redress and rectification when data pertaining to them causes harm or is inaccurate.

These rights are found to apply specifically to the access, collection, generation, processing, use, treatment, and transmission of information, including data, during humanitarian crises. These rights are also found herein to be interrelated and interdependent. To realize any of these rights individually requires realization of all of these rights in concert.

These rights are found to apply to all phases of the data lifecycle—before, during, and after the collection, processing, transmission, storage, or release of data. These rights are also found to be elastic, meaning that they apply to new technologies and scenarios that have not yet been identified or encountered by current practice and theory.

Data is, formally, a collection of symbols which function as a representation of information or knowledge. The term raw data is often used with two different meanings: uncleaned data, that is, data collected in an uncontrolled environment; and unprocessed data, that is, collected data that has not been processed in such a way as to make it suitable for decision making. Colloquially, and in the humanitarian context, data is usually thought of solely in the machine-readable or digital sense. For the purposes of the Signal Code, we use the term data to encompass information in both its analog and digital representations. Where it is necessary to address data solely in its digital representation, we refer to it as digital data.

No right herein may be used to abridge any other right. Nothing in this code may be interpreted as giving any state, group, or person the right to engage in any activity or perform any act that destroys the rights described herein.

The five human rights that exist specific to information and HIAs during humanitarian crises are the following:

The Right to Information
The Right to Protection
The Right to Data Security and Privacy
The Right to Data Agency
The Right to Redress and Rectification…(More)”

Rule by the lowest common denominator? It’s baked into democracy’s design


 in The Conversation: “The Trump victory, and the general disaster for Democrats this year, was the victory of ignorance, critics moan.

Writing in Foreign Policy, Georgetown’s Jason Brennan called it “the dance of the dunces” and wrote that “Trump owes his victory to the uninformed.”…

For liberals, Trump’s victory was the triumph of prejudice, bigotry and forces allied against truth and expertise in politics, science and culture at large. Trump brandishes unconcern for traditional political wisdom and protocol – much less facts – like a badge of honor, and his admirers roar with glee. His now famous rallies, the chastened media report, are often scary, sometimes giving way to violence, sometimes threatening to spark broader recriminations and social mayhem. This is a glimpse of how tyrants rise to power, some political minds worry; this is how tyrants enlist the support of rabid masses, and get them to do their bidding.

For the contemporary French philosopher Jacques Rancière, however, the Trump victory provides a useful reminder of the essential nature of democracy – a reminder of what precisely makes it vibrant. And liable to lapse into tyranny at once….

Democracy is rule by the rabble, in Plato’s view. It is the rule by the lowest common denominator. In a democracy, passions are inflamed and proliferate. Certain individuals may take advantage of and channel the storm of ignorance, Plato feared, and consolidate power out of a desire to serve their own interests.

As Rancière explains, there is a “scandal of democracy” for Plato: The best and the high born “must bow before the law of chance” and submit to the rule of the inexpert, the commoner, who knows little about politics or much else.

Merit ought to decide who rules, in Plato’s account. But democracy consigns such logic to the dustbin. The rabble may decide they want to be ruled by one of their own – and electoral conditions may favor them. Democracy makes it possible that someone who has no business ruling lands at the top. His rule may prove treacherous, and risk dooming the state. But, Rancière argues, this is a risk democracies must take. Without it, they lack legitimacy….

Rancière maintains people more happily suffer authority ascribed by chance than authority consigned by birth, merit or expertise. Liberals may be surprised about this last point. According to Rancière, expertise is no reliable, lasting or secure basis for authority. In fact, expertise soon loses authority, and with it, the legitimacy of the state. Why?…(More)”

Notable Privacy and Security Books from 2016


Daniel J. Solove at Technology, Academics, Policy: “Here are some notable books on privacy and security from 2016….

Chris Jay Hoofnagle, Federal Trade Commission Privacy Law and Policy

From my blurb: “Chris Hoofnagle has written the definitive book about the FTC’s involvement in privacy and security. This is a deep, thorough, erudite, clear, and insightful work – one of the very best books on privacy and security.”

My interview with Hoofnagle about his book: The 5 Things Every Privacy Lawyer Needs to Know about the FTC: An Interview with Chris Hoofnagle

My further thoughts on the book in my interview post above: “This is a book that all privacy and cybersecurity lawyers should have on their shelves. The book is the most comprehensive scholarly discussion of the FTC’s activities in these areas, and it also delves deep into the FTC’s history and activities in other areas to provide much-needed context to understand how it functions and reasons in privacy and security cases. There is simply no better resource on the FTC and privacy. This is a great book and a must-read. It is filled with countless fascinating things that will surprise you about the FTC, which has quite a rich and storied history. And it is an accessible and lively read too – Chris really makes the issues come alive.”

Gary T. Marx, Windows into the Soul: Surveillance and Society in an Age of High Technology

From Peter Grabosky: “The first word that came to mind while reading this book was cornucopia. After decades of research on surveillance, Gary Marx has delivered an abundant harvest indeed. The book is much more than a straightforward treatise. It borders on the encyclopedic, and is literally overflowing with ideas, observations, and analyses. Windows into the Soul commands the attention of anyone interested in surveillance, past, present, and future. The book’s website contains a rich abundance of complementary material. An additional chapter consists of an intellectual autobiography discussing the author’s interest in, and personal experience with, surveillance over the course of his career. Because of its extraordinary breadth, the book should appeal to a wide readership…. it will be of interest to scholars of deviance and social control, cultural studies, criminal justice and criminology. But the book should be read well beyond the towers of academe. The security industry, broadly defined to include private security and intelligence companies as well as state law enforcement and intelligence agencies, would benefit from the book’s insights. So too should it be read by those in the information technology industries, including the manufacturers of the devices and applications which are central to contemporary surveillance, and which are shaping our future.”

Susan C. Lawrence, Privacy and the Past: Research, Law, Archives, Ethics

From the book blurb: “When the new HIPAA privacy rules regarding the release of health information took effect, medical historians suddenly faced a raft of new ethical and legal challenges—even in cases where their subjects had died years, or even a century, earlier. In Privacy and the Past, medical historian Susan C. Lawrence explores the impact of these new privacy rules, offering insight into what historians should do when they research, write about, and name real people in their work.”

Ronald J. Krotoszynski, Privacy Revisited: A Global Perspective on the Right to Be Left Alone

From Mark Tushnet: “Professor Krotoszynski provides a valuable overview of how several constitutional systems accommodate competing interests in privacy, speech, and democracy. He shows how scholarship in comparative law can help one think about one’s own legal system while remaining sensitive to the different cultural and institutional settings of each nation’s law. A very useful contribution.”

Laura K. Donohue, The Future of Foreign Intelligence: Privacy and Surveillance in a Digital Age

Gordon Corera, Cyberspies: The Secret History of Surveillance, Hacking, and Digital Espionage

J. Macgregor Wise, Surveillance and Film…(More; See also Nonfiction Privacy + Security Books).

The legal macroscope: Experimenting with visual legal analytics


Nicola Lettieri, Antonio Altamura and Delfina Malandrino at InfoVis: “This work presents Knowlex, a web application designed for the visualization, exploration, and analysis of legal documents coming from different sources. Understanding the legal framework relating to a given issue often requires the analysis of complex legal corpora. When a legal professional or a citizen tries to understand how a given phenomenon is regulated, their attention cannot be limited to a single source of law but has to be directed to the bigger picture resulting from all the legal sources related to the theme under investigation. Knowlex exploits data visualization to support this activity by means of interactive maps that make sense of heterogeneous documents (norms, case law, legal literature, etc.).

Starting from a legislative measure (what we define as Root) given as input by the user, the application implements two visual analytics functionalities aiming to offer new insights into the legal corpus under investigation. The first is an interactive node graph depicting relations and properties of the documents. The second is a zoomable treemap showing the topics, the evolution, and the dimension of the legal literature settled over the years around the norm of interest. The article gives an overview of the research conducted so far, presenting the results of a preliminary evaluation study assessing the effectiveness of visualization in supporting legal activities, the effectiveness of Knowlex, the usability of the proposed system, and overall user satisfaction when interacting with its applications…(More)”.

Group Privacy in Times of Big Data. A Literature Review


Paula Helm at Digital Culture & Society: “New technologies pose new challenges to the protection of privacy and stimulate new debates on its scope. Such debates usually concern an individual’s right to control the flow of his or her personal information. This article, however, discusses the new challenges posed by new technologies in terms of their impact on groups and group privacy. Two main challenges are identified, both having to do with the role algorithms play in the formation of groups and the lack of civic awareness of the consequences of that involvement. On the one hand, groups are being created on the basis of big data without their members being aware of having been assigned to, and being treated as part of, a certain group. Here, the challenge concerns the limits of personal law, manifesting in the inability of individuals to address possible violations of their right to privacy, since they are unaware of them. On the other hand, commercially driven websites influence the way groups form, grow, and communicate online, and they do so in such a subtle way that members often fail to take this influence into account. This is why one could speak of a kind of domination here, which calls for legal regulation. The article presents different approaches to addressing these two challenges, discussing their strengths and weaknesses. Finally, a conclusion gathers the insights reached by the different approaches and reflects on future challenges for research on group privacy in times of big data….(More)”

The Algorithm as a Human Artifact: Implications for Legal [Re]Search


Paper by Susan Nevelow Mart: “When legal researchers search in online databases for the information they need to solve a legal problem, they need to remember that the algorithms that are returning results to them were designed by humans. The world of legal research is a human-constructed world, and the biases and assumptions the teams of humans that construct the online world bring to the task are imported into the systems we use for research. This article takes a look at what happens when six different teams of humans set out to solve the same problem: how to return results relevant to a searcher’s query in a case database. When comparing the top ten results for the same search entered into the same jurisdictional case database in Casetext, Fastcase, Google Scholar, Lexis Advance, Ravel, and Westlaw, the results are a remarkable testament to the variability of human problem solving. There is hardly any overlap in the cases that appear in the top ten results returned by each database. An average of forty percent of the cases were unique to one database, and only about 7% of the cases were returned in search results in all six databases. It is fair to say that each different set of engineers brought very different biases and assumptions to the creation of each search algorithm. One of the most surprising results was the clustering among the databases in terms of the percentage of relevant results. The oldest database providers, Westlaw and Lexis, had the highest percentages of relevant results, at 67% and 57%, respectively. The newer legal database providers, Fastcase, Google Scholar, Casetext, and Ravel, were also clustered together at a lower relevance rate, returning approximately 40% relevant results.

Legal research has always been an endeavor that required redundancy in searching; one resource does not usually provide a full answer, just as one search will not provide every necessary result. The study clearly demonstrates that the need for redundancy in searches and resources has not faded with the rise of the algorithm. From the law professor seeking to set up a corpus of cases to study, to the trial lawyer seeking that one elusive case, to the legal research professor showing students the limitations of algorithms, researchers who want full results will need to mine multiple resources with multiple searches. And more accountability about the nature of the algorithms being deployed would allow all researchers to craft searches that would be optimally successful….(More)”.
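The study’s overlap figures are straightforward to compute from raw result lists. A minimal sketch in Python, using invented case names rather than the study’s actual data:

```python
from collections import Counter

def overlap_stats(results):
    """Return (fraction of cases unique to one database,
    fraction of cases returned by every database)."""
    counts = Counter(case for cases in results.values() for case in set(cases))
    total = len(counts)  # number of distinct cases across all databases
    unique = sum(1 for n in counts.values() if n == 1)
    everywhere = sum(1 for n in counts.values() if n == len(results))
    return unique / total, everywhere / total

# Invented miniature example: three databases, top-3 results each.
toy = {
    "DB_A": {"Smith v. Jones", "Doe v. Roe", "A v. B"},
    "DB_B": {"Doe v. Roe", "C v. D", "A v. B"},
    "DB_C": {"Doe v. Roe", "E v. F", "G v. H"},
}
unique_frac, everywhere_frac = overlap_stats(toy)
```

Applied to the six databases in the study, the same counting would reproduce the reported figures of roughly 40% of cases unique to one database and about 7% appearing in all six.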

Privacy of Public Data


Paper by Kirsten E. Martin and Helen Nissenbaum: “The construct of an information dichotomy has played a defining role in regulating privacy: information deemed private or sensitive typically earns high levels of protection, while lower levels of protection are accorded to information deemed public or non-sensitive. Challenging this dichotomy, the theory of contextual integrity associates privacy with complex typologies of information, each connected with respective social contexts. Moreover, it contends that information type is merely one among several variables that shape people’s privacy expectations and underpin privacy’s normative foundations. Other contextual variables include key actors – information subjects, senders, and recipients – as well as the principles under which information is transmitted, such as whether with subjects’ consent, as bought and sold, as required by law, and so forth. Prior work revealed the systematic impact of these other variables on privacy assessments, thereby debunking the defining effects of so-called private information.

In this paper, we shine a light on the opposite effect, challenging conventional assumptions about public information. The paper reports on a series of studies, which probe attitudes and expectations regarding information that has been deemed public. Public records established through the historical practice of federal, state, and local agencies, as a case in point, are afforded little privacy protection, or possibly none at all. Motivated by progressive digitization and the creation of online portals through which these records have been made publicly accessible, our work underscores the need for more concentrated and nuanced privacy assessments, even more urgent in the face of vigorous open data initiatives, which call on federal, state, and local agencies to provide access to government records in both human- and machine-readable forms. Within a stream of research suggesting possible guard rails for open data initiatives, our work, guided by the theory of contextual integrity, provides insight into the factors systematically shaping individuals’ expectations and normative judgments concerning appropriate uses of and terms of access to information.

Using a factorial vignette survey, we asked respondents to rate the appropriateness of a series of scenarios in which contextual elements were systematically varied; these elements included the data recipient (e.g., bank, employer, friend), the data subject, and the source, or sender, of the information (e.g., individual, government, data broker). Because the object of this study was to highlight the complexity of people’s privacy expectations regarding so-called public information, information types were drawn from data fields frequently held in public government records (e.g., voter registration, marital status, criminal standing, and real property ownership).
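A factorial vignette design crosses every level of each contextual factor, so each respondent rates scenarios drawn from the full set of combinations. A sketch of how such scenarios could be generated; the factor levels below are only the examples named in the excerpt, not the study’s full design:

```python
from itertools import product

# Factor levels drawn from the examples in the text; the study's actual
# design presumably includes more levels and additional factors.
recipients = ["bank", "employer", "friend"]
senders = ["individual", "government", "data broker"]
info_types = ["voter registration", "marital status",
              "criminal standing", "real property ownership"]

# Full factorial crossing: one vignette per combination of factor levels.
vignettes = list(product(recipients, senders, info_types))

# Respondents rate rendered scenarios such as:
example = "Your {2} is passed by the {1} to your {0}.".format(*vignettes[0])
```

The size of the design is simply the product of the level counts (here 3 × 3 × 4 = 36), which is why vignette studies typically show each respondent only a random subset of the full crossing.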

Our findings are noteworthy on both theoretical and practical grounds. In the first place, they reinforce key assertions of contextual integrity about the simultaneous relevance to privacy of other factors beyond information types. In the second place, they reveal discordance between truisms that have frequently shaped public policy relevant to privacy. …(More)”

‘Everyone sees everything’: Overhauling Ukraine’s corrupt contracting sector


Open Contracting Stories: “When Yuriy Bugay, a Maidan revolutionary, showed up for work at Kiev’s public procurement office for the first time, it wasn’t the most uplifting sight. The 27-year-old had left his job in the private sector after joining a group of activists during the protests in Kiev’s main square, with dreams of reforming Ukraine’s dysfunctional public institutions. They chose one of the country’s most broken sectors, public procurement, as their starting point, and within a year, their project had been adopted by Ukraine’s economy ministry, Bugay’s new employer.

…The initial team behind the reform was an eclectic group of several hundred volunteers that included NGO workers, tech experts, businesspeople and civil servants. They decided the best way to make government deals more open was to create an e-procurement system, which they called ProZorro (meaning “transparent” in Ukrainian). Built on open source software, the system has been designed to make it possible for government bodies to conduct procurement deals electronically, in a transparent manner, while also making the state’s information about public contracts easily accessible online for anyone to see. Although it was initially conceived as a tool for fighting corruption, the potential benefits of the system are much broader — increasing competition, reducing the time and money spent on contracting processes, helping buyers make better decisions and making procurement fairer for suppliers….

In its pilot phase, ProZorro saved over UAH 1.5 billion (US$55 million) for more than 3,900 government agencies and state-owned enterprises across Ukraine. This pilot, which won a prestigious World Procurement Award in 2016, was so successful that Ukraine’s parliament passed a new public procurement law requiring all government contracting to be carried out via ProZorro from 1 August 2016. Since then, potential savings to the procurement budget have snowballed. As of November 2016, they stand at an estimated UAH 5.97 billion (US$233 million), with more than 15,000 buyers and 47,000 commercial suppliers using the new system.

At the same time, the team behind the project has evolved and professionalized….(More)”

How Should a Society Be?


Brian Christian: “This is another example where AI—in this case, machine-learning methods—intersects with these ethical and civic questions in an ultimately promising and potentially productive way. As a society we have these values in maxim form, like equal opportunity, justice, and fairness, and in many ways they’re deliberately vague. This deliberate flexibility and ambiguity are what allow them to remain a living document that stays relevant. But here we are in this world where we have to ask of some machine-learning model, is this racially fair? We have to define these terms, computationally or numerically.

It’s problematic in the short term because we have no idea what we’re doing; we don’t have a way to approach that problem yet. In the slightly longer term—five or ten years—there’s a profound opportunity to come together as a polis and get precise about what we mean by justice or fairness with respect to certain protected classes. Does that mean it’s got an equal false positive rate? Does that mean it has an equal false negative rate? What is the tradeoff that we’re willing to make? What are the constraints that we want to put on this model-building process? That’s a profound question, and we haven’t needed to address it until now. There’s going to be a civic conversation in the next few years about how to make these concepts explicit….(More) (Video)”
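The competing fairness criteria Christian mentions (equal false positive rates versus equal false negative rates across protected classes) can be made numerically explicit. A minimal sketch with invented labels and predictions, not a definitive fairness audit:

```python
from collections import defaultdict

def group_error_rates(y_true, y_pred, group):
    """Per-group false positive rate FP/(FP+TN) and false negative rate FN/(FN+TP)."""
    tallies = defaultdict(lambda: {"TP": 0, "FP": 0, "TN": 0, "FN": 0})
    for t, p, g in zip(y_true, y_pred, group):
        # Classify each prediction as TP/FN (positive truth) or FP/TN (negative truth).
        outcome = ("TP" if p else "FN") if t else ("FP" if p else "TN")
        tallies[g][outcome] += 1
    rates = {}
    for g, c in tallies.items():
        rates[g] = {
            "fpr": c["FP"] / (c["FP"] + c["TN"]) if c["FP"] + c["TN"] else 0.0,
            "fnr": c["FN"] / (c["FN"] + c["TP"]) if c["FN"] + c["TP"] else 0.0,
        }
    return rates

# Invented toy data: two groups, six cases.
rates = group_error_rates(
    y_true=[1, 1, 0, 0, 1, 0],
    y_pred=[1, 0, 1, 0, 1, 1],
    group=["a", "a", "a", "b", "b", "b"],
)
```

When the groups’ underlying base rates differ, equalizing false positive rates and equalizing false negative rates are in general mutually incompatible, which is precisely the tradeoff the passage says the polis will have to negotiate.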