Privacy and/or Trade


Paper by Anupam Chander and Paul M. Schwartz: “International privacy and trade law developed together, but now are engaged in significant conflict. Current efforts to reconcile the two are likely to fail, and the result for globalization favors the largest international companies able to navigate the regulatory thicket. In a landmark finding, this Article shows that more than sixty countries outside the European Union are now evaluating whether foreign countries have privacy laws that are adequate to receive personal data. This core test for deciding on the permissibility of global data exchanges is currently applied in a nonuniform fashion with ominous results for the data flows that power trade today.

The promise of a global internet, with access for all, including companies from the Global South, is increasingly remote. This Article uncovers the forgotten and fateful history of the international regulation of privacy and trade that led to our current crisis and evaluates possible solutions to the current conflict. It proposes a Global Agreement on Privacy enforced within the trade order, but with external data privacy experts developing the treaty’s substantive norms….(More)”.

How privacy’s past may shape its future


Essay by Alessandro Acquisti, Laura Brandimarte and Jeff Hancock: “Continued expansion of human activities into digital realms gives rise to concerns about digital privacy and its invasions, often expressed in terms of data rights and internet surveillance. It may thus be tempting to construe privacy as a modern phenomenon—something our ancestors lacked and technological innovation and urban growth made possible. Research from history, anthropology, and ethnography suggests otherwise. The evidence for peoples seeking to manage the boundaries of private and public spans time and space, social class, and degree of technological sophistication. Privacy—not merely hiding of data, but the selective opening and closing of the self to others—appears to be both culturally specific and culturally universal. But what could explain the simultaneous universality and diversity of a human drive for privacy? An account of the evolutionary roots of privacy may offer an answer and teach us about privacy’s digital future and how to manage it….(More)”.

Oversight Board publishes policy advisory opinion on the sharing of private residential information


Press Release by Oversight Board: “Last year, Meta requested a policy advisory opinion from the Board on the sharing of private residential addresses and images, and the contexts in which this information may be published on Facebook and Instagram. Meta considers this to be a difficult question because, while access to such information can be relevant to journalism and civic activism, “exposing this information without consent can create a risk to residents’ safety and infringe on an individual’s privacy.”

Meta’s request noted several potential harms linked to releasing personal information, including residential addresses and images. These include “doxing” (a term derived from “dox,” an abbreviation of “documents”), where information that can identify someone is revealed online. Meta noted that doxing can have negative real-world consequences, such as harassment or stalking…

The Board understands that the sharing of private residential addresses and images represents a potentially serious violation of the right to privacy both for people who use Facebook and Instagram, and those who do not.

Once this information is shared, the harms that can result, such as doxing, are difficult to remedy. Harms resulting from doxing disproportionately affect groups such as women, children and LGBTQIA+ people, and can include emotional distress, loss of employment and even physical harm or death.

As the potential for harm is particularly context specific, it is challenging to develop objective and universal indicators that would allow content reviewers to distinguish the sharing of content that would be harmful from shares that would not be. That is why the Board believes that the Privacy Violations policy should be more protective of privacy.

International human rights standards permit necessary and proportionate restrictions on expression to protect people’s right to privacy. As such, the Board favors narrowing the exceptions to the Privacy Violations policy to help Meta better protect the private residential information of people both on and off its platforms.

In exchanges with the Board, Meta stressed that “ensuring that the ‘publicly available’ definition does not exempt content from removal that poses a risk of offline harm” is a “persistent concern.” Public records and other sources of what could be considered “publicly available” information still require resources and effort to be accessed by the general public. On social media, however, such information may be shared and accessed more quickly, and on a much bigger scale, which significantly increases the risk of harm. As such, the Board proposes removing the “publicly available” exception for the sharing of both private residential addresses and images that meet certain criteria….(More)”.

Suicide hotline shares data with for-profit spinoff, raising ethical questions


Alexandra Levine at Politico: “Crisis Text Line is one of the world’s most prominent mental health support lines, a tech-driven nonprofit that uses big data and artificial intelligence to help people cope with traumas such as self-harm, emotional abuse and thoughts of suicide.

But the data the charity collects from its online text conversations with people in their darkest moments does not end there: The organization’s for-profit spinoff uses a sliced and repackaged version of that information to create and market customer service software.

Crisis Text Line says any data it shares with that company, Loris.ai, has been wholly “anonymized,” stripped of any details that could be used to identify people who contacted the helpline in distress. Both entities say their goal is to improve the world — in Loris’ case, by making “customer support more human, empathetic, and scalable.”

In turn, Loris has pledged to share some of its revenue with Crisis Text Line. The nonprofit also holds an ownership stake in the company, and the two entities shared the same CEO for at least a year and a half. The two call their relationship a model for how commercial enterprises can help charitable endeavors thrive…(More)”.

The UN is testing technology that processes data confidentially


The Economist: “Reasons of confidentiality mean that many medical, financial, educational and other personal records, from the analysis of which much public good could be derived, are in practice unavailable. A lot of commercial data are similarly sequestered. For example, firms have more granular and timely information on the economy than governments can obtain from surveys. But such intelligence would be useful to rivals. If companies could be certain it would remain secret, they might be more willing to make it available to officialdom.

A range of novel data-processing techniques might make such sharing possible. These so-called privacy-enhancing technologies (PETs) are still in the early stages of development. But they are about to get a boost from a project launched by the United Nations’ statistics division. The UN PETs Lab, which opened for business officially on January 25th, enables national statistics offices, academic researchers and companies to collaborate to carry out projects which will test various PETs, permitting technical and administrative hiccups to be identified and overcome.

The first such effort, which actually began last summer, before the PETs Lab’s formal inauguration, analysed import and export data from national statistical offices in America, Britain, Canada, Italy and the Netherlands, to look for anomalies. Those could be a result of fraud, of faulty record keeping or of innocuous re-exporting.

For the pilot scheme, the researchers used categories already in the public domain—in this case international trade in things such as wood pulp and clocks. They thus hoped to show that the system would work, before applying it to information where confidentiality matters.

They put several kinds of PETs through their paces. In one trial, OpenMined, a charity based in Oxford, tested a technique called secure multiparty computation (SMPC). This approach involves the data to be analysed being encrypted by their keeper and staying on the premises. The organisation running the analysis (in this case OpenMined) sends its algorithm to the keeper, who runs it on the encrypted data. That is mathematically complex, but possible. The findings are then sent back to the original inquirer…(More)”.
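
The secure multiparty computation approach described above can be made concrete with a small, hypothetical sketch. The example below illustrates one common SMPC building block, additive secret sharing, in which several data holders jointly compute an aggregate without any of them revealing its own figure; it is not the OpenMined or UN PETs Lab implementation, and the party names and values are invented for illustration.

```python
# Toy sketch of additive secret sharing, one building block of secure
# multiparty computation (SMPC). Each "statistics office" splits its private
# value into random shares that sum (mod a large prime) to the true value.
# No single share reveals anything; only the joint aggregate is reconstructed.
import random

PRIME = 2**61 - 1  # field modulus; shares are uniform in [0, PRIME)

def make_shares(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into n_parties additive shares modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recover a value (here, the aggregate) from the sum of shares."""
    return sum(shares) % PRIME

# Hypothetical confidential trade figures that no office will disclose directly.
private_values = {"Office A": 120, "Office B": 340, "Office C": 95}
n = len(private_values)

# Each office splits its figure into shares; share j is sent to party j.
all_shares = [make_shares(v, n) for v in private_values.values()]

# Each party locally sums the shares it received; each partial sum alone is
# indistinguishable from random noise.
partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]

# Only the aggregate is ever reconstructed; individual inputs stay hidden.
total = reconstruct(partial_sums)
assert total == sum(private_values.values())
print("Joint total computed without exposing any single office's figure:", total)
```

In a real deployment each share would travel to a separate party over an encrypted channel and the protocol would support richer computations than a simple sum, but the core guarantee is the same: no participant ever sees another's raw data.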

Why Privacy Matters


Book by Neil Richards: “Many people tell us that privacy is dead, or that it is dying, but such talk is a dangerous fallacy. This book explains what privacy is, what privacy isn’t, and why privacy matters. Privacy is the extent to which human information is known or used, and it is fundamentally about the social power that human information provides over other people. The best way to ensure that power is checked and channeled in ways that benefit humans and their society is through rules—rules about human information. And because human information rules of some sort are inevitable, we should craft our privacy rules to promote human values. The book suggests three such values that our human information rules should promote: identity, freedom, and protection. Identity allows us to be thinking, self-defining humans; freedom lets us be citizens; while protection safeguards our roles as situated consumers and workers, allowing us, as members of society, to trust and rely on other people so that we can live our lives and hopefully build a better future together…(More)”.

Octagon Measurement: Public Attitudes toward AI Ethics


Paper by Yuko Ikkatai, Tilman Hartwig, Naohiro Takanashi & Hiromi M. Yokoyama: “Artificial intelligence (AI) is rapidly permeating our lives, but public attitudes toward AI ethics have only partially been investigated quantitatively. In this study, we focused on eight themes commonly shared in AI guidelines: “privacy,” “accountability,” “safety and security,” “transparency and explainability,” “fairness and non-discrimination,” “human control of technology,” “professional responsibility,” and “promotion of human values.” We investigated public attitudes toward AI ethics using four scenarios in Japan. Through an online questionnaire, we found that public disagreement/agreement with using AI varied depending on the scenario. For instance, anxiety over AI ethics was high for the scenario where AI was used with weaponry. Age was significantly related to the themes across the scenarios, but gender and understanding of AI were related differently depending on the themes and scenarios. While the eight themes need to be carefully explained to the participants, our Octagon measurement may be useful for understanding how people feel about the risks of the technologies, especially AI, that are rapidly permeating society and what the problems might be…(More)”.

Privacy Is Power: How Tech Policy Can Bolster Democracy


Essay by Andrew Imbrie, Daniel Baer, Andrew Trask, Anna Puglisi, Erik Brattberg, and Helen Toner: “…History is rarely forgiving, but as we adopt the next phase of digital tools, policymakers can avoid the errors of the past. Privacy-enhancing technologies, or PETs, are a collection of technologies with applications ranging from improved medical diagnostics to secure voting systems and messaging platforms. PETs allow researchers to harness big data to solve problems affecting billions of people while also protecting privacy. …

PETs are ripe for coordination among democratic allies and partners, offering a way for them to jointly develop standards and practical applications that benefit the public good. At an AI summit last July, U.S. Secretary of State Antony Blinken noted the United States’ interest in “increasing access to shared public data sets for AI training and testing, while still preserving privacy,” and National Security Adviser Jake Sullivan pointed to PETs as a promising area “to overcome data privacy challenges while still delivering the value of big data.” Given China’s advantages in scale, the United States and like-minded partners should foster emerging technologies that play to their strengths in medical research and discovery, energy innovation, trade facilitation, and reform around money laundering. Driving innovation and collaboration within and across democracies is important not only because it will help ensure those societies’ success but also because there will be a first-mover advantage in the adoption of PETs for governing the world’s private data–sharing networks.

Accelerating the development of PETs for the public good will require an international approach. Democratic governments will not be the trendsetters on PETs; instead, policymakers for these governments should focus on nurturing the ecosystems these technologies need to flourish. The role for policymakers is not to decide the fate of specific protocols or techniques but rather to foster a conducive environment for researchers to experiment widely and innovate responsibly.    

Democracies should identify shared priorities and promote basic research to mature the technological foundations of PETs. The underlying technologies require greater investment in algorithmic development and hardware to optimize the chips and mitigate the costs of network overhead. To support the computational requirements for PETs, for example, the National Science Foundation could create an interface through CloudBank and provide cloud compute credits to researchers without access to these resources. The United States could also help incubate an international network of research universities collaborating on these technologies.

Second, science-funding agencies in democracies should host competitions to incentivize new PETs protocols and standards—the collaboration between the United States and the United Kingdom announced in early December is a good example. The goal should be to create free, open-source protocols and avoid the fragmentation of the market and the proliferation of proprietary standards. The National Institute of Standards and Technology and other similar bodies should develop standards and measurement tools for PETs; governments and companies should form public-private partnerships to fund open-source protocols over the long term. Open-source protocols are especially important in the early days of PET development, because closed-source PET implementations by profit-seeking actors can be leveraged to build data monopolies. For example, imagine a scenario in which all U.S. cancer data could be controlled by a single company because all the hospitals are running its proprietary software, and you have to become a customer to join the network…(More)”.

Data trust and data privacy in the COVID-19 period


Paper by Nicholas Biddle et al: “In this article, we focus on data trust and data privacy, and how attitudes may be changing during the COVID-19 period. On balance, it appears that Australians are more trusting of organizations with regard to data privacy and less concerned about their own personal information and data than they were prior to the spread of COVID-19. The major determinant of this change in trust with regard to data was a change in general confidence in government institutions. Despite this improvement in trust with regard to data privacy, trust levels are still low….(More)”.

Surveillance Publishing


Working paper by Jefferson D. Pooley: “…This essay lingers on a prediction too: Clarivate’s business model is coming for scholarly publishing. Google is one peer, but the company’s real competitors are Elsevier, Springer Nature, Wiley, Taylor & Francis, and SAGE. Elsevier, in particular, has been moving into predictive analytics for years now. Of course the publishing giants have long profited off of academics and our university employers—by packaging scholars’ unpaid writing-and-editing labor only to sell it back to us as usuriously priced subscriptions or APCs. That’s a lucrative business that Elsevier and the others won’t give up. But they’re layering another business on top of their legacy publishing operations, in the Clarivate mold. The data trove that publishers are sitting on is, if anything, far richer than the citation graph alone. Why worry about surveillance publishing? One reason is the balance-sheet, since the companies’ trading in academic futures will further pad profits at the expense of taxpayers and students. The bigger reason is that our behavior—once alienated from us and abstracted into predictive metrics—will double back onto our work lives. Existing biases, like male academics’ propensity for self-citation, will receive a fresh coat of algorithmic legitimacy. More broadly, the academic reward system is already distorted by metrics. To the extent that publishers’ tallies and indices get folded into grant-making, tenure-and-promotion, and other evaluative decisions, the metric tide will gain power. The biggest risk is that scholars will internalize an analytics mindset, one already encouraged by citation counts and impact factors….(More)”.