The Privacy Project


The New York Times: “Companies and governments are gaining new powers to follow people across the internet and around the world, and even to peer into their genomes. The benefits of such advances have been apparent for years; the costs — in anonymity, even autonomy — are now becoming clearer. The boundaries of privacy are in dispute, and its future is in doubt. Citizens, politicians and business leaders are asking if societies are making the wisest tradeoffs. The Times is embarking on this months-long project to explore the technology and where it’s taking us, and to convene debate about how it can best help realize human potential….(More)”

Does Privacy Matter?

What Do They Know, and How Do They Know It?

What Should Be Done About This?

What Can I Do?


Empathy and cooperation go hand in hand


PennToday: “It’s a big part of what makes us human: we cooperate. But humans aren’t saints. Most of us are more likely to help someone we consider good than someone we consider a jerk. 

How we form these moral assessments of others has a lot to do with cultural and social norms, as well as our capacity for empathy, the extent to which we can take on the perspective of another person.

In a new analysis, researchers from the University of Pennsylvania investigate cooperation with an evolutionary approach. Using game-theory-driven models, they show that a capacity for empathy fosters cooperation, according to senior author Joshua Plotkin, an evolutionary biologist. The models also show that the extent to which empathy promotes cooperation depends on a given society’s system for moral evaluation. 

“Having not just the capacity but the willingness to take into account someone else’s perspective when forming moral judgments tends to promote cooperation,” says Plotkin. 

What’s more, the group’s analysis points to a heartening conclusion. All else being equal, empathy tends to spread throughout a population under most scenarios. 

“We asked, ‘can empathy evolve?’” explains Arunas Radzvilavicius, the study’s lead author and a postdoctoral researcher who works with Plotkin. “What if individuals start copying the empathetic way of observing each other’s interactions? And we saw that empathy soared through the population.”

Plotkin and Radzvilavicius coauthored the study, published today in eLife, with Alexander Stewart, an assistant professor at the University of Houston.

Plenty of scientists have probed the question of why individuals cooperate through indirect reciprocity, a scenario in which one person helps another not because of a direct quid pro quo but because they know that person to be “good.” But the Penn group gave the study a nuance that others had not explored. Whereas other studies have assumed that reputations are universally known, Plotkin, Radzvilavicius, and Stewart realized this did not realistically describe human society, where individuals may differ in their opinion of others’ reputations.

“In large, modern societies, people disagree a lot about each other’s moral reputations,” Plotkin says. 

The researchers incorporated this variation in opinions into their models, which imagine someone choosing either to donate or not to donate to a second person based on that individual’s reputation. The researchers found that cooperation was less likely to be sustained when people disagree about each other’s reputations.

That’s when they decided to incorporate empathy, or theory of mind, which, in the context of the study, entails the ability to understand the perspective of another person.

Doing so allowed cooperation to win out over more selfish strategies….(More)”.
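
To make the mechanism concrete, here is a minimal, illustrative sketch of an indirect-reciprocity donation game with private reputations and an empathy parameter. The specific moral norm used (“stern judging”), the fixed discriminator strategy, the error rate, and all other parameters are assumptions chosen for illustration; this is not the published model.

```python
import random

# Toy indirect-reciprocity donation game with private reputations and
# empathetic moral evaluation. The "stern judging" norm, the fixed
# discriminator strategy, and all parameters are illustrative assumptions.

N = 50            # population size (assumed)
ROUNDS = 20000    # interactions to simulate (assumed)
EMPATHY = 0.9     # probability an observer judges from the donor's perspective
ERROR = 0.05      # occasional assessment errors seed disagreement in opinions

# views[i][j] == True means individual i currently considers j "good"
views = [[True] * N for _ in range(N)]

cooperation_events = 0
for _ in range(ROUNDS):
    donor, recipient, observer = random.sample(range(N), 3)

    # Donor follows a discriminating strategy: help only recipients
    # the donor privately considers good.
    donates = views[donor][recipient]
    cooperation_events += donates

    # With probability EMPATHY the observer evaluates the action against
    # the *donor's* view of the recipient (empathetic evaluation);
    # otherwise against the observer's own view (egocentric evaluation).
    recipient_seen_as_good = (
        views[donor][recipient] if random.random() < EMPATHY
        else views[observer][recipient]
    )
    # "Stern judging" norm: an action is judged good iff it matches what
    # the norm prescribes toward a recipient of that standing.
    judgment = (donates == recipient_seen_as_good)
    if random.random() < ERROR:
        judgment = not judgment
    views[observer][donor] = judgment

print(f"cooperation rate: {cooperation_events / ROUNDS:.2f}")
```

In this toy setup, pushing EMPATHY toward 1 makes observers judge donors by the donors’ own view of the recipient, which keeps private opinions from drifting apart and helps sustain the discriminating, cooperative behaviour – a rough analogue of the finding that empathetic evaluation promotes cooperation.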

Open Justice: Public Entrepreneurs Learn to Use New Technology to Increase the Efficiency, Legitimacy, and Effectiveness of the Judiciary


The GovLab: “Open justice is a growing movement to leverage new technologies – including big data, digital platforms, blockchain and more – to improve legal systems by making the workings of courts easier to understand, scrutinize and improve. Through the use of new technology, open justice innovators are enabling greater efficiency, fairness, accountability and a reduction in corruption in the third branch of government. For example, the Lithuanian open data portal ‘Atviras Teismas’ (‘open court’) is a platform for monitoring courts and judges through performance metrics. The portal makes Lithuania’s courts transparent and benefits both courts and citizens by presenting comparative data on the Lithuanian judiciary.

To promote more Open Justice projects, the GovLab, in partnership with the Electoral Tribunal of the Federal Judiciary (TEPJF) of Mexico, launched a historic, first-of-its-kind online course on Open Justice. Designed primarily for lawyers, judges, and public officials – but also intended to appeal to technologists and members of the public – the Spanish-language course consists of 10 modules.

Each of the ten modules comprises:

  1. A short video-based lecture
  2. An original Open Justice reader
  3. Associated additional readings
  4. A self-assessment quiz
  5. A demonstration of a platform or tool
  6. An interview with a global practitioner

Among those featured in the interviews are Felipe Moreno of Jusbrasil, Justin Erlich of OpenJustice California, Liam Hayes of Aurecon, UK, Steve Ghiassi of Legaler, Australia, and Sara Castillo of Poder Judicial, Chile….(More)”.

Building Trust in Human-Centric Artificial Intelligence


Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: “Artificial intelligence (AI) has the potential to transform our world for the better: it can improve healthcare, reduce energy consumption, make cars safer, and enable farmers to use water and natural resources more efficiently. AI can be used to predict environmental and climate change, improve financial risk management and provide the tools to manufacture, with less waste, products tailored to our needs. AI can also help to detect fraud and cybersecurity threats, and enable law enforcement agencies to fight crime more efficiently.

AI can benefit the whole of society and the economy. It is a strategic technology that is now being developed and used at a rapid pace across the world. Nevertheless, AI also brings with it new challenges for the future of work, and raises legal and ethical questions.

To address these challenges and make the most of the opportunities which AI offers, the Commission published a European strategy in April 2018. The strategy places people at the centre of the development of AI — human-centric AI. It is a three-pronged approach to boost the EU’s technological and industrial capacity and AI uptake across the economy, prepare for socio-economic changes, and ensure an appropriate ethical and legal framework.

To deliver on the AI strategy, the Commission developed together with Member States a coordinated plan on AI, which it presented in December 2018, to create synergies, pool data — the raw material for many AI applications — and increase joint investments. The aim is to foster cross-border cooperation and mobilise all players to increase public and private investments to at least EUR 20 billion annually over the next decade.

The Commission doubled its investments in AI in Horizon 2020 and plans to invest EUR 1 billion annually from Horizon Europe and the Digital Europe Programme, in support notably of common data spaces in health, transport and manufacturing, and large experimentation facilities such as smart hospitals and infrastructures for automated vehicles and a strategic research agenda.

To implement such a common strategic research, innovation and deployment agenda, the Commission has intensified its dialogue with all relevant stakeholders from industry, research institutes and public authorities. The new Digital Europe programme will also be crucial in helping to make AI available to small and medium-sized enterprises across all Member States through digital innovation hubs, strengthened testing and experimentation facilities, data spaces and training programmes.

Building on its reputation for safe and high-quality products, Europe’s ethical approach to AI strengthens citizens’ trust in the digital development and aims at building a competitive advantage for European AI companies. The purpose of this Communication is to launch a comprehensive piloting phase involving stakeholders on the widest scale in order to test the practical implementation of ethical guidance for AI development and use…(More)”.

Unblocking the Bottlenecks and Making the Global Supply Chain Transparent: How Blockchain Technology Can Update Global Trade


Paper by Hanna C Norberg: “Blockchain technology is still in its infancy, but already it has begun to revolutionize global trade. Its lure is irresistible because of the simplicity with which it can replace the standard methods of documentation, smooth out logistics, increase transparency, speed up transactions, and ameliorate the planning and tracking of trade.

Blockchain essentially provides the supply chain with an unalterable ledger of verified transactions, and thus enables trust at every step of the trade process. Every stakeholder involved in that process – from producer to warehouse worker to shipper to financial institution to recipient at the final destination – can trust that the information contained in that indelible ledger is accurate. Fraud will no longer be an issue, middlemen can be eliminated, shipments can be tracked, quality control can be maintained to the highest standards, and consumers can make decisions based on more than the price. Blockchain dramatically reduces the amount of paperwork involved, along with the myriad agents typically involved in the process, all of which results in soaring efficiencies. Making the most of this new technology, however, requires solid policy. Most people have only a vague idea of what blockchain is. There needs to be a basic understanding of what blockchain can and can’t do, and how it works in the economy and in trade. Once they become familiar with the technology, policy-makers must move on to thinking about which technological issues could be mitigated, solved or improved.

Governments need to explore blockchain’s potential through its use in public-sector projects that demonstrate its workings, its potential and its inevitable limitations. Although blockchain is not nearly as evolved now as the internet was in 2005, co-operation among all stakeholders on issues like taxonomy or policy guides on basic principles is crucial. Those stakeholders include government, industry, academia and civil society. All this must be done while keeping in mind the global nature of blockchain and that blockchain regulations need to be made in sync with regulations on other issues adjacent to the technology, such as electronic signatures. However, work can be done in the global arena through international initiatives and organizations such as the ISO….(More)”.
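
As an editorial illustration of the “unalterable ledger” idea invoked above, the sketch below shows a toy hash-linked chain of supply-chain events. The field names and the single-writer design are assumptions made for clarity; a real trade deployment would add distributed consensus, digital signatures and permissioning.

```python
import hashlib
import json
import time

# Toy single-writer ledger: each block commits to the previous block's hash,
# so altering any recorded shipment event invalidates every later block.

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_event(chain: list, event: dict) -> None:
    """Append a supply-chain event (e.g. a shipment hand-off) as a new block."""
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "event": event,
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
    }
    chain.append(block)

def verify(chain: list) -> bool:
    """Recompute the hash links; any tampering breaks the chain."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger: list = []
append_event(ledger, {"item": "coffee lot 42", "step": "harvested", "actor": "producer"})
append_event(ledger, {"item": "coffee lot 42", "step": "shipped", "actor": "freight carrier"})
append_event(ledger, {"item": "coffee lot 42", "step": "received", "actor": "warehouse"})

print("valid:", verify(ledger))                      # True
ledger[1]["event"]["actor"] = "someone else"         # tamper with a past record
print("valid after tampering:", verify(ledger))      # False
```

Because each block commits to the hash of the one before it, editing any recorded hand-off silently invalidates every later link; that detectability is the property underpinning the trust claims made in the paper.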

Filling a gap: the clandestine gang fixing Rome illegally


Giorgio Ghiglione in The Guardian: “It is 6am on a Sunday and the streets of the Ostiense neighbourhood in southern Rome are empty. The metro has just opened and nearby cafes still await their first customers.

Seven men and women are working hard, their faces obscured by scarves and hoodies as they unload bags of cement and sand from a car near the Basilica of St Paul Outside the Walls.

They are not criminals. Members of the secret Gap organisation, they hide their identities because what they are doing – fixing a broken pavement without official permission – is technically illegal.

City maintenance – or the lack of it – has long been a hot-button issue in Italy’s capital. There are an estimated 10,000 potholes in the city – a source of frustration for the many Romans who travel by scooter. Garbage collection has also become a major problem since the city’s landfill was closed in 2013, with periodic “waste crises” where trash piles up in the streets. Cases of exploding buses and the collapse of a metro escalator made international headlines.

The seven clandestine pavement-fixers are part of a network of about 20 activists quietly doing the work that the city authorities have failed to do. Gap stands for Gruppi Artigiani Pronto Intervento (“groups of artisan emergency services”) but is also a tribute to the partisans of Gruppi di Azione Patriottica, who fought the fascists during the second world war.

“We chose this name because many of our parents or grandparents were partisans and we liked the idea of honouring their memory,” says one of the activists, a fiftysomething architect who goes by the pseudonym Renato. While the modern-day Gap aren’t risking their lives, their modus operandi is inspired by resistance saboteurs: they identify a target, strike and disappear unseen into the city streets.

Gap have been busy over the past few months. In December they repaired the fountain, built in the 1940s, of the Principe di Piemonte primary school. In January they painted a pedestrian crossing on a dangerous major road. Their latest work, the pavement fixing in Ostiense, involved filling a deep hole that regularly filled with water when it rained….(More)”.

The Wrong Kind of AI? Artificial Intelligence and the Future of Labor Demand


NBER Paper by Daron Acemoglu and Pascual Restrepo: “Artificial Intelligence is set to influence every aspect of our lives, not least the way production is organized. AI, as a technology platform, can automate tasks previously performed by labor or create new tasks and activities in which humans can be productively employed. Recent technological change has been biased towards automation, with insufficient focus on creating new tasks where labor can be productively employed. The consequences of this choice have been stagnating labor demand, declining labor share in national income, rising inequality and lower productivity growth. The current tendency is to develop AI in the direction of further automation, but this might mean missing out on the promise of the “right” kind of AI with better economic and social outcomes….(More)”.

The Automated Administrative State


Paper by Danielle Citron and Ryan Calo: “The administrative state has undergone radical change in recent decades. In the twentieth century, agencies in the United States generally relied on computers to assist human decision-makers. In the twenty-first century, computers are making agency decisions themselves. Automated systems are increasingly taking human beings out of the loop. Computers terminate Medicaid to cancer patients and deny food stamps to individuals. They identify parents believed to owe child support and initiate collection proceedings against them. Computers purge voters from the rolls and deem small businesses ineligible for federal contracts [1].

Automated systems built in the early 2000s eroded procedural safeguards at the heart of the administrative state. When government makes important decisions that affect our lives, liberty, and property, it owes us “due process”— understood as notice of, and a chance to object to, those decisions. Automated systems, however, frustrate these guarantees. Some systems like the “no-fly” list were designed and deployed in secret; others lacked record-keeping audit trails, making review of the law and facts supporting a system’s decisions impossible. Because programmers working at private contractors lacked training in the law, they distorted policy when translating it into code [2].

Some of us in the academy sounded the alarm as early as the 1990s, offering an array of mechanisms to ensure the accountability and transparency of the automated administrative state [3]. Yet the same pathologies continue to plague government decision-making systems today. In some cases, these pathologies have deepened and extended. Agencies lean upon algorithms that turn our personal data into predictions, professing to reflect who we are and what we will do. The algorithms themselves increasingly rely upon techniques, such as deep learning, that are even less amenable to scrutiny than purely statistical models. Ideals of what the administrative law theorist Jerry Mashaw has called “bureaucratic justice” in the form of efficiency with a “human face” feel impossibly distant [4].

The trend toward more prevalent and less transparent automation in agency decision-making is deeply concerning. For a start, we have yet to address in any meaningful way the widening gap between the commitments of due process and the actual practices of contemporary agencies [5]. Nonetheless, agencies rush to automate (surely due to the influence and illusory promises of companies seeking lucrative contracts), trusting algorithms to tell us if criminals should receive probation, if public school teachers should be fired, or if severely disabled individuals should receive less than the maximum of state-funded nursing care [6]. Child welfare agencies conduct intrusive home inspections because some system, which no party to the interaction understands, has rated a poor mother as having a propensity for violence. The challenge of preserving due process in light of algorithmic decision-making is an area of renewed and active attention within academia, civil society, and even the courts [7].

Second, and routinely overlooked, we are applying the new affordances of artificial intelligence in precisely the wrong contexts…(More)”.

Is the Singularity the New Wild West? On Social Entrepreneurship in Extended Reality


Paper by Abigail Devereaux: “Augmented and virtual reality, whose ubiquitous convergence is known as extended reality (XR), are technologies that imbue a user’s apparent surroundings with some degree of virtuality. In this article, we are interested in how social entrepreneurs might utilize innovative technological methods in XR to solve social problems presented by XR. Social entrepreneurship in XR presents novel challenges and opportunities not present in traditional regulatory spaces, as XR changes the environment in which choices are made.

Furthermore, the challenges presented by rapidly advancing XR may require much more agile forms of governance than are available from public institutions, even under widespread algorithmic governance. Social entrepreneurship in blockchain solutions may very well be able to meet some of these challenges, as we show. Thus, we expect a new infrastructure, built by social entrepreneurs in XR, to arise to address the challenges XR presents, one that may eventually be used as an alternative to public instantiations of governance. Our central thesis is that the dynamic, immersive, and agile nature of XR both provides an unusually fertile ground for the development of alternative forms of governance and essentially necessitates this development, by contrast with the relatively less agile institutions of public governance….(More)”.

The Market for Data Privacy


Paper by Tarun Ramadorai, Antoine Uettwiller and Ansgar Walther: “We scrape a comprehensive set of US firms’ privacy policies to facilitate research on the supply of data privacy. We analyze these data with the help of expert legal evaluations, and also acquire data on firms’ web tracking activities. We find considerable and systematic variation in privacy policies along multiple dimensions including ease of access, length, readability, and quality, both within and between industries. Motivated by a simple theory of big data acquisition and usage, we analyze the relationship between firm size, knowledge capital intensity, and privacy supply. We find that large firms with intermediate data intensity have longer, legally watertight policies, but are more likely to share user data with third parties….(More)”.
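
As a loose illustration of the kind of length and readability measurement mentioned in the abstract, the toy sketch below computes a word count and a Flesch reading-ease score for a snippet of policy text. The paper itself relies on expert legal evaluations and its own metrics; the Flesch formula and the crude syllable heuristic here are assumptions chosen purely for illustration.

```python
import re

# Toy length and readability metrics for a privacy-policy snippet.
# The syllable heuristic is deliberately crude and for illustration only.

def count_syllables(word: str) -> int:
    """Crude syllable estimate: runs of vowels, minimum one per word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def policy_metrics(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Flesch reading ease: higher scores indicate easier text.
    flesch = (206.835
              - 1.015 * (len(words) / max(1, len(sentences)))
              - 84.6 * (syllables / max(1, len(words))))
    return {"words": len(words), "sentences": len(sentences),
            "flesch_reading_ease": round(flesch, 1)}

sample = ("We may share your personal information with affiliates, "
          "service providers, and other third parties as permitted by law. "
          "You can opt out of certain disclosures at any time.")
print(policy_metrics(sample))
```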