Access My Info (AMI)


About: “What do companies know about you? How do they handle your data? And who do they share it with?

Access My Info (AMI) is a project that can help answer these questions by assisting you in making data access requests to companies. AMI includes a web application that helps users send companies data access requests, and a research methodology designed to understand the responses companies make to these requests. Past AMI projects have shed light on how companies treat user data and contribute to digital privacy reforms around the world.

What are data access requests?

A data access request is a letter you can send to any company whose products or services you use. The request asks the company to disclose all of the information it holds about you and whether it has shared your data with any third parties. If the place where you live has data protection laws that include a right to data access, then companies may be legally obligated to respond…

AMI has made personal data requests in jurisdictions around the world and found two common patterns:

  1. There are significant gaps between data access laws on paper and the law in practice;
  2. People have consistently encountered barriers to accessing their data.

Together with our partners in each jurisdiction, we have used Access My Info to open a dialogue among users, civil society, regulators, and companies…(More)”
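
As a rough illustration of what sending such a request involves, the sketch below assembles a generic data access request letter from a template. It is a minimal sketch: the template wording, field names, and the build_access_request helper are hypothetical and do not reproduce AMI's actual request text, which varies by jurisdiction and statute.

```python
from datetime import date

# Hypothetical template; real AMI letters differ by jurisdiction and applicable law.
TEMPLATE = """\
To: {company} (Data Protection Officer)
Date: {today}

Dear Sir or Madam,

Under the data protection law applicable in {jurisdiction}, I request access to
all personal data you hold about me, including:
  - the categories of data collected and the purposes of processing;
  - any third parties with whom the data has been shared;
  - the retention period for each category of data.

Account identifier: {account_id}

Please respond within the statutory deadline.

Sincerely,
{name}
"""

def build_access_request(company, jurisdiction, account_id, name):
    """Fill the request template with the requester's details."""
    return TEMPLATE.format(
        company=company,
        jurisdiction=jurisdiction,
        account_id=account_id,
        name=name,
        today=date.today().isoformat(),
    )

if __name__ == "__main__":
    # Illustrative values only.
    print(build_access_request("ExampleCo", "Ontario, Canada", "user-12345", "A. Requester"))
```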

Technology & the Law of Corporate Responsibility – The Impact of Blockchain


Blogpost by Elizabeth Boomer: “Blockchain, a technology regularly associated with digital currency, is increasingly being utilized as a corporate social responsibility tool in major international corporations. This intersection of law, technology, and corporate responsibility was addressed earlier this month at the World Bank Law, Justice, and Development Week 2019, where the theme was Rights, Technology and Development. The law related to corporate responsibility for sustainable development is increasingly visible due in part to several lawsuits against large international corporations, alleging the use of child and forced labor. In addition, the United Nations has been working for some time on a treaty on business and human rights to encourage corporations to avoid “causing or contributing to adverse human rights impacts through their own activities and [to] address such impacts when they occur.”

De Beers, Volvo, and Coca-Cola, among other industry leaders, are using blockchain, a technology that allows digital information to be distributed and analyzed but not copied or manipulated, to trace the source of materials and better manage their supply chains. These initiatives have come as welcome news in industries where child or forced labor in the supply chain can be hard to detect, e.g., conflict minerals, sugar, tobacco, and cacao. The issue is especially difficult when tracing the mining of cobalt for lithium-ion batteries, increasingly used in electric cars, because the final product is not directly traceable to a single source.

While non-governmental organizations (NGOs) have for years advocated for improved corporate performance on labor and environmental standards in supply chains, blockchain may be a technological tool that can reliably trace information about products, from food to minerals, that pass through several layers of suppliers before being certified as slave- or child-labor-free.

Child labor and forced labor are still common in some countries. The majority of countries worldwide have ratified International Labour Organization (ILO) Convention No. 182 prohibiting the worst forms of child labor (186 ratifications), as well as the ILO conventions prohibiting forced labor (No. 29, with 178 ratifications) and requiring its abolition (No. 105, with 175 ratifications). However, the ILO estimates that approximately 40 million men and women are engaged in modern-day slavery and 152 million children are subject to child labor, 38% of whom work in hazardous conditions. The enduring existence of forced labor and child labor raises difficult ethical questions, because in many contexts the victim does not have a viable alternative livelihood….(More)”.
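
As a rough sketch of the traceability property these initiatives rely on, the toy example below hash-chains a sequence of custody records so that any later alteration of an earlier record is detectable. It is a minimal illustration under simplified assumptions about the record format; it is not the design used by De Beers, Volvo, Coca-Cola, or any real supply-chain platform.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a custody record together with the hash of the previous record."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records: list[dict]) -> list[dict]:
    """Link custody records into a tamper-evident chain."""
    chain, prev = [], "0" * 64  # genesis hash
    for rec in records:
        h = record_hash(rec, prev)
        chain.append({"record": rec, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        if entry["prev_hash"] != prev or record_hash(entry["record"], prev) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

# Example: a cobalt shipment passing through three custodians (hypothetical data).
chain = build_chain([
    {"material": "cobalt", "custodian": "mine A", "certified": True},
    {"material": "cobalt", "custodian": "refiner B", "certified": True},
    {"material": "cobalt", "custodian": "battery plant C", "certified": True},
])
print(verify_chain(chain))               # True
chain[1]["record"]["certified"] = False  # tamper with the middle record
print(verify_chain(chain))               # False
```

Changing any record alters its hash and breaks the link to every later record, which is the property that makes a distributed supply-chain ledger tamper-evident even when no single party is trusted.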

Seeing Like a Finite State Machine


Henry Farrell at Crooked Timber: “…So what might a similar analysis say about the marriage of authoritarianism and machine learning? Something like the following, I think. There are two notable problems with machine learning. One is that, while it can do many extraordinary things, it is not nearly as universally effective as the mythology suggests. The other is that it can serve as a magnifier for already existing biases in the data. The patterns it identifies may be the product of the problematic data that goes in, which is (to the extent that it is accurate) often the product of biased social processes. When this data is then used to make decisions that may plausibly reinforce those processes (for example, by singling out particular groups regarded as problematic for extra police attention, making them more likely to be arrested, and so on), the bias may feed upon itself.

This is a substantial problem in democratic societies, but it is a problem where there are at least some counteracting tendencies. The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This openness exposes democracy to a specific set of destabilizing attacks, but it also means that there are countervailing tendencies to self-reinforcing biases. When groups are victimized by such biases, they may mobilize against them (although they will find it harder to mobilize against algorithms than against overt discrimination). When biases produce obvious inefficiencies or social, political, or economic problems, there will be ways for people to point out these inefficiencies or problems.

These corrective tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely exist at all. Groups that are discriminated against will have no obvious recourse. Major mistakes may go uncorrected: they may be nearly invisible to a state whose data is polluted both by the means employed to observe and classify it and by the policies implemented on the basis of this data. A plausible feedback loop would see bias leading to error leading to further bias, with no ready way to correct it. This, of course, is likely to be reinforced by the ordinary politics of authoritarianism and the typical reluctance to correct leaders, even when their policies are leading to disaster. The flawed ideology of the leader (We must all study Comrade Xi thought to discover the truth!) and of the algorithm (machine learning is magic!) may reinforce each other in highly unfortunate ways.

In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency toward bad decision-making and further reducing the possibility of the negative feedback that could help correct errors. This disaster would unfold in two ways. The first involves enormous human costs: self-reinforcing bias will likely increase discrimination against out-groups, of the sort we are seeing against the Uighurs today. The second involves more ordinary self-ramifying errors that may lead to widespread planning disasters, which will differ from those described in Scott’s account of High Modernism in that they are not as immediately visible, but which may also be more pernicious, and more damaging to the political health and viability of the regime, for just that reason….(More)”
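
The self-reinforcing loop Farrell describes (bias leading to error leading to further bias) can be made concrete with a toy simulation. In the sketch below, two groups have identical underlying behaviour, but enforcement effort is concentrated where past records point and new records feed back into the data, so a small initial disparity compounds. The allocation rule and all numbers are illustrative assumptions, not taken from the essay.

```python
# Toy simulation of a self-reinforcing bias loop (illustrative numbers only).
# Groups A and B have identical true offence rates, but B starts with slightly
# more recorded arrests. Enforcement effort is concentrated (more than
# proportionally) where the data points, and new arrests are added back into
# the data, so the initial gap compounds over time.

true_rate = {"A": 0.05, "B": 0.05}   # identical underlying behaviour
recorded = {"A": 100.0, "B": 110.0}  # small initial bias in the records
population = 10_000

for year in range(1, 11):
    # Super-proportional allocation: effort scales with (recorded arrests)^2.
    weights = {g: recorded[g] ** 2 for g in recorded}
    total_weight = sum(weights.values())
    for g in recorded:
        attention = weights[g] / total_weight
        recorded[g] += population * true_rate[g] * attention  # data fed back in
    share_b = recorded["B"] / sum(recorded.values())
    print(f"year {year}: share of recorded arrests for group B = {share_b:.1%}")
```

Running the loop shows group B's share of the recorded data climbing year after year even though the two groups behave identically, which is the kind of feedback a state with no corrective channels would struggle to notice.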

How Data Can Help in the Fight Against the Opioid Epidemic in the United States


Report by Joshua New: “The United States is in the midst of an opioid epidemic 20 years in the making….

One of the most pernicious obstacles in the fight against the opioid epidemic is that, until relatively recently, it was difficult to measure the epidemic in any comprehensive way beyond high-level statistics. A lack of granular data and authorities’ inability to use data to inform response efforts allowed the epidemic to grow to devastating proportions. The maxim “you can’t manage what you can’t measure” has never been more relevant: this failure to effectively leverage data has undoubtedly cost many lives and caused severe social and economic damage to communities ravaged by opioid addiction, with authorities limited in their ability to fight back.

Many factors contributed to the opioid epidemic, including healthcare providers not fully understanding the potential ramifications of prescribing opioids, socioeconomic conditions that make addiction more likely, and drug distributors turning a blind eye to likely criminal behavior, such as pharmacy workers illegally selling opioids on the black market. Data will not be able to solve these problems, but it can make public health officials and other stakeholders more effective at responding to them. Fortunately, recent efforts to better leverage data in the fight against the opioid epidemic have demonstrated the potential for data to be an invaluable and effective tool to inform decision-making and guide response efforts. Policymakers should aggressively pursue more data-driven strategies to combat the opioid epidemic while learning from past mistakes that helped contribute to the epidemic to prevent similar situations in the future.

The scope of this paper is limited to opportunities to better leverage data to help address problems primarily related to the abuse of prescription opioids, rather than the abuse of illicitly manufactured opioids such as heroin and fentanyl. While these issues may overlap, such as when a person develops an opioid use disorder from prescribed opioids and then seeks heroin when they are unable to obtain more from their doctor, the opportunities to address the abuse of prescription opioids are more clear-cut….(More)”.

Manual of Digital Earth


Book by Huadong Guo, Michael F. Goodchild and Alessandro Annoni: “This open access book offers a summary of the development of Digital Earth over the past twenty years. By reviewing the initial vision of Digital Earth, the evolution of that vision, the relevant key technologies, and the role of Digital Earth in helping people respond to global challenges, this publication reveals how and why Digital Earth is becoming vital for acquiring, processing, analysing and mining the rapidly growing volume of global data sets about the Earth.

The main aspects of Digital Earth covered here include: Digital Earth platforms, remote sensing and navigation satellites, processing and visualizing geospatial information, geospatial information infrastructures, big data and cloud computing, transformation and zooming, artificial intelligence, the Internet of Things, and social media. Moreover, the book covers in detail the multi-layered, multi-faceted roles of Digital Earth in responding to the Sustainable Development Goals, climate change, and disaster mitigation; the applications of Digital Earth (such as the digital city and digital heritage); citizen science in support of Digital Earth; the economic value of Digital Earth; and more. The book also reviews regional and national Digital Earth developments around the world and discusses the role and effect of education and ethics. Lastly, it concludes with a summary of the challenges and a forecast of future trends for Digital Earth. By sharing case studies and a broad range of general and scientific insights into the science and technology of Digital Earth, this book offers an essential introduction for an ever-growing international audience….(More)”.

The Right to Be Seen


Anne-Marie Slaughter and Yuliya Panfil at Project Syndicate: “While much of the developed world is properly worried about myriad privacy outrages at the hands of Big Tech and demanding – and securing – for individuals a “right to be forgotten,” many around the world are posing a very different question: What about the right to be seen?

Just ask the billion people who are locked out of services we take for granted – things like a bank account, a deed to a house, or even a mobile phone account – because they lack identity documents and thus can’t prove who they are. They are effectively invisible as a result of poor data.

The ability to exercise many of our most basic rights and privileges – such as the right to vote, drive, own property, and travel internationally – is determined by large administrative agencies that rely on standardized information to determine who is eligible for what. For example, to obtain a passport it is typically necessary to present a birth certificate. But what if you do not have a birth certificate? To open a bank account requires proof of address. But what if your house doesn’t have an address?

The inability to provide such basic information is a barrier to stability, prosperity, and opportunity. Invisible people are locked out of the formal economy, unable to vote, travel, or access medical and education benefits. It’s not that they are undeserving or unqualified; it’s that they are data poor.

In this context, the rich digital record provided by our smartphones and other sensors could become a powerful tool for good, so long as the risks are acknowledged. These gadgets, which have become central to our social and economic lives, leave a data trail that for many of us is the raw material that fuels what Harvard’s Shoshana Zuboff calls “surveillance capitalism.” Our Google location history shows exactly where we live and work. Our email activity reveals our social networks. Even the way we hold our smartphone can give away early signs of Parkinson’s.

But what if citizens could harness the power of these data for themselves, to become visible to administrative gatekeepers and access the rights and privileges to which they are entitled? Their virtual trail could then be converted into proof of physical facts.

That is beginning to happen. In India, slum dwellers are using smartphone location data to put themselves on city maps for the first time and obtain addresses that they can then use to receive mail and register for government IDs. In Tanzania, citizens are using their mobile payment histories to build credit scores and access more traditional financial services. And in Europe and the United States, Uber drivers are fighting for their rideshare data to advocate for employment benefits….(More)”.
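
As a deliberately simplified example of converting a “virtual trail” into proof of a physical fact, the sketch below estimates a likely home cell from timestamped GPS fixes by looking at where a phone most often rests overnight, roughly the kind of inference behind the address registration described above. The data format, coordinates, and thresholds are hypothetical.

```python
from collections import Counter
from datetime import datetime

# Hypothetical location history: (ISO timestamp, latitude, longitude).
history = [
    ("2019-11-01T23:30:00", 28.6139, 77.2090),
    ("2019-11-02T02:10:00", 28.6140, 77.2091),
    ("2019-11-02T13:00:00", 28.5355, 77.3910),  # daytime, elsewhere
    ("2019-11-02T23:45:00", 28.6139, 77.2089),
    ("2019-11-03T01:20:00", 28.6138, 77.2090),
]

def estimate_home(points, night_hours=range(22, 24), early_hours=range(0, 6), precision=3):
    """Return the grid cell where the device is most often seen overnight."""
    cells = Counter()
    for ts, lat, lon in points:
        hour = datetime.fromisoformat(ts).hour
        if hour in night_hours or hour in early_hours:
            # Round coordinates to ~100 m cells so nearby fixes group together.
            cells[(round(lat, precision), round(lon, precision))] += 1
    cell, count = cells.most_common(1)[0]
    return cell, count

print(estimate_home(history))  # ((28.614, 77.209), 4)
```

The same idea of aggregating routine traces into an administratively legible fact underlies the mobile-payment credit histories and rideshare records mentioned above, with all the privacy trade-offs the article flags.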

“Mind the Five”: Guidelines for Data Privacy and Security in Humanitarian Work With Undocumented Migrants and Other Vulnerable Populations


Paper by Sara Vannini, Ricardo Gomez and Bryce Clayton Newell: “The forced displacement and transnational migration of millions of people around the world is a growing phenomenon that has been met with increased surveillance and datafication by a variety of actors. Small humanitarian organizations that help irregular migrants in the United States frequently do not have the resources or expertise to fully address the implications of collecting, storing, and using data about the vulnerable populations they serve. As a result, there is a risk that their work could exacerbate the vulnerabilities of the very same migrants they are trying to help. In this study, we propose a conceptual framework for protecting privacy in the context of humanitarian information activities (HIA) with irregular migrants. We draw from a review of the academic literature as well as interviews with individuals affiliated with several US‐based humanitarian organizations, higher education institutions, and nonprofit organizations that provide support to undocumented migrants. We discuss 3 primary issues: (i) HIA present both technological and human risks; (ii) the expectation of privacy self‐management by vulnerable populations is problematic; and (iii) there is a need for robust, actionable, privacy‐related guidelines for HIA. We suggest 5 recommendations to strengthen the privacy protection offered to undocumented migrants and other vulnerable populations….(More)”.

Netnography: The Essential Guide to Qualitative Social Media Research


Book by Robert Kozinets: “Netnography is an adaptation of ethnography for the online world, pioneered by Robert Kozinets, and is concerned with the study of online cultures and communities as distinct social phenomena, rather than isolated content. In this landmark third edition, Netnography: The Essential Guide provides the theoretical and methodological groundwork as well as the practical applications, helping students both understand and do netnographic research projects of their own.

Packed with enhanced learning features throughout that link concepts to structured activities in a step-by-step way, the book is now also accompanied by a striking new visual design and further case studies, offering an essential student resource for conducting online ethnographic research. Real-world examples demonstrate netnography in practice across the social sciences, in media and cultural studies, anthropology, education, nursing, travel and tourism, and other fields….(More)”.

National SDG Review: data challenges and opportunities


Press Release: “…the Partnership in Statistics for Development in the 21st Century (PARIS21) and Partners for Review launched a landmark new paper that identifies the factors preventing countries from fully exploiting their data ecosystem and proposes solutions to strengthening statistical capacities to achieve the 2030 Agenda for Sustainable Development.

Ninety percent of the data in the world has been created in the past two years, yet many countries with low statistical capacity struggle to produce, analyse and communicate the data necessary to advance sustainable development. At the same time, demand for more and better data and statistics is increasing massively, with international agreements like the 2030 Agenda placing unprecedented demands on countries to report on more than 230 indicators.

Using PARIS21’s Capacity Development 4.0 (CD 4.0) approach, the paper shows that leveraging data available in the data ecosystem for official reporting requires new capacity in terms of skills and knowledge, management, politics and power. The paper also shows that these capacities need to be developed at both the organisational and systemic level, which involves the various channels and interactions that connect different organisations.

Aimed at national statistics offices, development professionals and others involved in the national data ecosystem, the paper provides a roadmap that can help national statistical systems develop and strengthen the capacities of traditional and new actors in the data ecosystem to improve both the follow-up and review process of the 2030 Agenda as well as the data architecture for sustainable development at the national level…(More)”.

Why Data Is Not the New Oil


Blogpost by Alec Stapp: “Data is the new oil,” said Jaron Lanier in a recent op-ed for The New York Times. Lanier’s use of this metaphor is only the latest instance of what has become the dumbest meme in tech policy. As the digital economy becomes more prominent in our lives, it is not unreasonable to seek to understand one of its most important inputs. But this analogy to the physical economy is fundamentally flawed. Worse, introducing regulations premised upon faulty assumptions like this will likely do far more harm than good. Here are seven reasons why “data is the new oil” misses the mark:

1. Oil is rivalrous; data is non-rivalrous

If someone uses a barrel of oil, it can’t be consumed again. But, as Alan McQuinn, a senior policy analyst at the Information Technology and Innovation Foundation, noted, “when consumers ‘pay with data’ to access a website, they still have the same amount of data after the transaction as before. As a result, users have an infinite resource available to them to access free online services.” Imposing restrictions on data collection makes this infinite resource finite. 

2. Oil is excludable; data is non-excludable

Oil is highly excludable because, as a physical commodity, it can be stored in ways that prevent use by non-authorized parties. However, as my colleagues pointed out in a recent comment to the FTC: “While databases may be proprietary, the underlying data usually is not.” They go on to argue that this can lead to under-investment in data collection:

[C]ompanies that have acquired a valuable piece of data will struggle both to prevent their rivals from obtaining the same data as well as to derive competitive advantage from the data. For these reasons, it also means that firms may well be more reluctant to invest in data generation than is socially optimal. In fact, to the extent this is true there is arguably more risk of companies under-investing in data generation than of firms over-investing in order to create data troves with which to monopolize a market. This contrasts with oil, where complete excludability is the norm.

3. Oil is fungible; data is non-fungible

Oil is a commodity, so, by definition, one barrel of oil of a given grade is equivalent to any other barrel of that grade. Data, on the other hand, is heterogeneous. Each person’s data is unique and may consist of a practically unlimited number of different attributes that can be collected into a profile. This means that oil will follow the law of one price, while a dataset’s value will be highly contingent on its particular properties and commercialization potential.

4. Oil has positive marginal costs; data has zero marginal costs

There is a significant expense to producing and distributing an additional barrel of oil (as low as $5.49 per barrel in Saudi Arabia; as high as $21.66 in the U.K.). Data is merely encoded information (bits of 1s and 0s), so gathering, storing, and transferring it is nearly costless (though, to be clear, setting up systems for collecting and processing can be a large fixed cost). Under perfect competition, the market clearing price is equal to the marginal cost of production (hence why data is traded for free services and oil still requires cold, hard cash)….(More)”.