Access My Info (AMI)


About: “What do companies know about you? How do they handle your data? And who do they share it with?

Access My Info (AMI) is a project that can help answer these questions by assisting you in making data access requests to companies. AMI includes a web application that helps users send companies data access requests, and a research methodology designed to understand the responses companies give to these requests. Past AMI projects have shed light on how companies treat user data and have contributed to digital privacy reforms around the world.

What are data access requests?

A data access request is a letter you can send to any company whose products or services you use. The request asks that the company disclose all the information it has about you and whether or not it has shared your data with any third parties. If the place where you live has data protection laws that include the right to data access, then companies may be legally obligated to respond…

AMI has made personal data requests in jurisdictions around the world and found common patterns:

  1. There are significant gaps between data access laws on paper and the law in practice;
  2. People have consistently encountered barriers to accessing their data.

Together with our partners in each jurisdiction, we have used Access My Info to spark a dialogue between users, civil society, regulators, and companies…(More)”
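
To illustrate the mechanics of such a request, here is a minimal sketch of how a tool like the AMI web application might assemble a data access request letter. The template wording, field names, and company details below are hypothetical placeholders, not AMI’s actual templates, and would need to be adapted to the data protection law of your own jurisdiction.

```python
from datetime import date
from textwrap import dedent


def build_access_request(company: str, full_name: str, account_email: str,
                         jurisdiction_clause: str) -> str:
    """Assemble a plain-text data access request letter.

    The wording here is an illustrative placeholder, not AMI's template.
    """
    return dedent(f"""\
        {date.today().isoformat()}

        To the Privacy Officer of {company}:

        I am a user of your services (account: {account_email}).
        {jurisdiction_clause}

        Please disclose:
          1. All personal information you hold about me; and
          2. Whether you have shared my data with any third parties,
             and if so, with whom and for what purpose.

        Sincerely,
        {full_name}
        """)


if __name__ == "__main__":
    # Hypothetical example values; the legal citation is a placeholder.
    print(build_access_request(
        company="Example Telecom Inc.",
        full_name="Jane Doe",
        account_email="jane@example.com",
        jurisdiction_clause=("Under the data access provisions of my local "
                             "data protection law, I request a copy of my "
                             "personal data."),
    ))
```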

Seeing Like a Finite State Machine


Henry Farrell at Crooked Timber: “…So what might a similar analysis say about the marriage of authoritarianism and machine learning? Something like the following, I think. There are two notable problems with machine learning. One is that, while it can do many extraordinary things, it is not nearly as universally effective as the mythology suggests. The other is that it can serve as a magnifier for already existing biases in the data. The patterns that it identifies may be the product of the problematic data that goes in, which is (to the extent that it is accurate) often the product of biased social processes. When this data is then used to make decisions that may plausibly reinforce those processes (for example, by singling out particular groups that are regarded as problematic for special police attention, leading them to be more liable to be arrested, and so on), the bias may feed upon itself.

This is a substantial problem in democratic societies, but it is a problem where there are at least some counteracting tendencies. The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This opens up democracy to a specific set of destabilizing attacks, but it also means that there are countervailing tendencies to self-reinforcing biases. When there are groups that are victimized by such biases, they may mobilize against them (although they will find it harder to mobilize against algorithms than against overt discrimination). When there are obvious inefficiencies or social, political or economic problems that result from biases, there will be ways for people to point out these inefficiencies or problems.

These corrective tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely exist at all. Groups that are discriminated against will have no obvious recourse. Major mistakes may go uncorrected: they may be nearly invisible to a state whose data is polluted both by the means employed to observe and classify it and by the policies implemented on the basis of this data. A plausible feedback loop would see bias leading to error leading to further bias, with no ready way to correct it. This, of course, is likely to be reinforced by the ordinary politics of authoritarianism and the typical reluctance to correct leaders, even when their policies are leading to disaster. The flawed ideology of the leader (We must all study Comrade Xi thought to discover the truth!) and of the algorithm (machine learning is magic!) may reinforce each other in highly unfortunate ways.

In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency toward bad decision making and further reducing the possibility of negative feedback that could help correct errors. This disaster would unfold in two ways. The first involves enormous human costs: self-reinforcing bias will likely increase discrimination against out-groups, of the sort that we are seeing against the Uighurs today. The second involves more ordinary self-ramifying errors that may lead to widespread planning disasters, which will differ from those described in Scott’s account of High Modernism in that they are not as immediately visible, but which may also be more pernicious, and more damaging to the political health and viability of the regime, for just that reason….(More)”
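
One way to see the self-reinforcing dynamic Farrell sketches is with a toy simulation. This is an editorial sketch, not anything from the post: two groups offend at exactly the same underlying rate, but attention is allocated in proportion to past recorded arrests, so each year’s data simply ratifies the initial disparity and contains no signal that the allocation is wrong. The group names, numbers, and allocation rule are illustrative assumptions.

```python
import random

random.seed(0)

TRUE_OFFENSE_RATE = 0.05                # identical for both groups
POPULATION = {"A": 10_000, "B": 10_000}
recorded_arrests = {"A": 60, "B": 40}   # small disparity in the historical data

for year in range(1, 11):
    total_recorded = sum(recorded_arrests.values())
    new_arrests = {}
    for group, pop in POPULATION.items():
        # Attention is allocated in proportion to past recorded arrests,
        # i.e. the decision is driven by the biased data, not the true rates.
        patrol_share = recorded_arrests[group] / total_recorded
        # An offense only enters the data if police happen to be watching.
        offenses = sum(random.random() < TRUE_OFFENSE_RATE for _ in range(pop))
        new_arrests[group] = sum(
            random.random() < patrol_share for _ in range(offenses)
        )
    for group, n in new_arrests.items():
        recorded_arrests[group] += n
    share_a = recorded_arrests["A"] / sum(recorded_arrests.values())
    print(f"year {year:2d}: group A's share of recorded arrests = {share_a:.2f}")

# Both groups offend at the same underlying rate, yet group A's recorded share
# stays near 60%: each year's data, generated under the skewed allocation,
# "confirms" the skew and offers no signal that the allocation is wrong.
```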

“Mind the Five”: Guidelines for Data Privacy and Security in Humanitarian Work With Undocumented Migrants and Other Vulnerable Populations


Paper by Sara Vannini, Ricardo Gomez and Bryce Clayton Newell: “The forced displacement and transnational migration of millions of people around the world is a growing phenomenon that has been met with increased surveillance and datafication by a variety of actors. Small humanitarian organizations that help irregular migrants in the United States frequently do not have the resources or expertise to fully address the implications of collecting, storing, and using data about the vulnerable populations they serve. As a result, there is a risk that their work could exacerbate the vulnerabilities of the very same migrants they are trying to help. In this study, we propose a conceptual framework for protecting privacy in the context of humanitarian information activities (HIA) with irregular migrants. We draw from a review of the academic literature as well as interviews with individuals affiliated with several US‐based humanitarian organizations, higher education institutions, and nonprofit organizations that provide support to undocumented migrants. We discuss 3 primary issues: (i) HIA present both technological and human risks; (ii) the expectation of privacy self‐management by vulnerable populations is problematic; and (iii) there is a need for robust, actionable, privacy‐related guidelines for HIA. We suggest 5 recommendations to strengthen the privacy protection offered to undocumented migrants and other vulnerable populations….(More)”.

Principles alone cannot guarantee ethical AI


Paper by Brent Mittelstadt: “Artificial intelligence (AI) ethics is now a global topic of discussion in academic and policy circles. At least 84 public–private initiatives have produced statements describing high-level principles, values and other tenets to guide the ethical development, deployment and governance of AI. According to recent meta-analyses, AI ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach for the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement….(More)”.

Surveillance giants: how the business model of Google and Facebook threatens human rights


Report by Amnesty International: “Google and Facebook help connect the world and provide crucial services to billions. To participate meaningfully in today’s economy and society, and to realize their human rights, people rely on access to the internet—and to the tools Google and Facebook offer. But Google and Facebook’s platforms come at a systemic cost. The companies’ surveillance-based business model is inherently incompatible with the right to privacy and poses a threat to a range of other rights including freedom of opinion and expression, freedom of thought, and the right to equality and non-discrimination….(More)”.

Responsible Data for Children


New Site and Report by UNICEF and The GovLab: “RD4C seeks to build awareness regarding the need for special attention to data issues affecting children—especially in this age of changing technology and data linkage; and to engage with governments, communities, and development actors to put the best interests of children and a child rights approach at the center of our data activities. The right data in the right hands at the right time can significantly improve outcomes for children. The challenge is to understand the potential risks and ensure that the collection, analysis and use of data on children does not undermine these benefits.

Drawing upon field-based research and established good practice, RD4C aims to highlight and support best-practice data responsibility; identify challenges and develop practical tools to assist practitioners in evaluating and addressing them; and encourage a broader discussion on actionable principles, insights, and approaches for responsible data management….(More)”.

How We Became Our Data


Book by Colin Koopman: “We are now acutely aware, as if all of a sudden, that data matters enormously to how we live. How did information come to be so integral to what we can do? How did we become people who effortlessly present our lives in social media profiles and who are meticulously recorded in state surveillance dossiers and online marketing databases? What is the story behind data coming to matter so much to who we are?

In How We Became Our Data, Colin Koopman excavates early moments of our rapidly accelerating data-tracking technologies and their consequences for how we think of and express our selfhood today. Koopman explores the emergence of mass-scale record-keeping systems like birth certificates and social security numbers, as well as new data techniques for categorizing personality traits, measuring intelligence, and even racializing subjects. This all culminates in what Koopman calls the “informational person” and the “informational power” we are now subject to. The recent explosion of digital technologies that are turning us into a series of algorithmic data points is shown to have a deeper and more turbulent past than we commonly think. Blending philosophy, history, political theory, and media theory in conversation with thinkers like Michel Foucault, Jürgen Habermas, and Friedrich Kittler, Koopman presents an illuminating perspective on how we have come to think of our personhood—and how we can resist its erosion….(More)”.

An Open Letter to Law School Deans about Privacy Law Education in Law Schools


Daniel Solove: “Recently a group of legal academics and practitioners in the field of privacy law sent a letter to the deans of all U.S. law schools about privacy law education. My own brief intro about this endeavor is here in italics, followed by the letter. The signatories have signed onto the letter, not this italicized intro.

Although the field of privacy law has grown dramatically in the past two decades, education in law schools about privacy law has significantly lagged behind. Most U.S. law schools lack a course on privacy law. Of those that have courses, many are small seminars, often taught by adjuncts, and schools that do offer a privacy course most often have just one. Most schools lack a full-time faculty member who focuses substantially on privacy law.

This state of affairs is a great detriment to students. I am constantly approached by students and graduates from law schools across the country who are wondering how they can learn about privacy law and enter the field. Many express great disappointment at the lack of any courses, faculty, or activities at their schools.

After years of hoping that the legal academy would wake up and respond, I came to the realization that this wasn’t going to happen on its own. The following letter [click here for the PDF version] aims to make deans aware of the privacy law field. I hope that the letter is met with action….(More)”.

Americans’ views about privacy, surveillance and data-sharing


Pew Research Center: “In key ways, today’s digitally networked society runs on quid pro quos: People exchange details about themselves and their activities for services and products on the web or apps. Many are willing to accept the deals they are offered in return for sharing insight about their purchases, behaviors and social lives. At times, their personal information is collected by government on the grounds that there are benefits to public safety and security.

A majority of Americans are concerned about this collection and use of their data, according to a new report from Pew Research Center….

Americans vary in their attitudes toward data-sharing in the pursuit of public good. Though many Americans don’t think they benefit much from the collection of their data and find that the potential risks of this practice outweigh the benefits, there are some scenarios in which the public is more likely to accept the idea of data-sharing. In line with findings in a 2015 Center survey showing that some Americans are comfortable with trade-offs in sharing data, about half of U.S. adults (49%) say it is acceptable for the government to collect data about all Americans in order to assess potential terrorist threats. That compares with 31% who feel it is unacceptable to collect data about all Americans for that purpose. By contrast, just one-quarter say it is acceptable for smart speaker makers to share users’ audio recordings with law enforcement to help with criminal investigations, versus 49% who find that unacceptable….(More)”.

Google’s ‘Project Nightingale’ Gathers Personal Health Data on Millions of Americans


Rob Copeland at Wall Street Journal: “Google is engaged with one of the U.S.’s largest health-care systems on a project to collect and crunch the detailed personal-health information of millions of people across 21 states.

The initiative, code-named “Project Nightingale,” appears to be the biggest effort yet by a Silicon Valley giant to gain a toehold in the health-care industry through the handling of patients’ medical data. Amazon.com Inc., Apple Inc. and Microsoft Corp. are also aggressively pushing into health care, though they haven’t yet struck deals of this scope.

Google began Project Nightingale in secret last year with St. Louis-based Ascension, a Catholic chain of 2,600 hospitals, doctors’ offices and other facilities, with the data sharing accelerating since summer, according to internal documents.

The data involved in the initiative encompasses lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a complete health history, including patient names and dates of birth….

Neither patients nor doctors have been notified. At least 150 Google employees already have access to much of the data on tens of millions of patients, according to a person familiar with the matter and the documents.

In a news release issued after The Wall Street Journal reported on Project Nightingale on Monday, the companies said the initiative is compliant with federal health law and includes robust protections for patient data….(More)”.