Democracy as Failure


Paper by Aziz Z. Huq: “The theory and the practice of democracy alike are entangled with the prospect of failure. This is so in the sense that a failure of one kind or another is almost always to be found at democracy’s inception. Further, different kinds of shortfalls dog its implementation. No escape is found in theory, which precipitates internal contradictions that can only be resolved by compromising important democratic values. A stable democratic equilibrium proves elusive because of the tendency of discrete lapses to catalyze wider, systemic disruption. Worse, the very pervasiveness of local failure also obscures the tipping point at which systemic change occurs. Social coordination in defense of democracy is therefore very difficult, and its failure correspondingly more likely. This thicket of intimate entanglements has implications for both the proper description and normative analysis of democracy. At a minimum, the nexus of democracy and failure elucidates the difficulty of dichotomizing democracies into the healthy and the ailing. It illuminates the sound design of democratic institutions by gesturing toward resources usefully deployed to mitigate the costs of inevitable failure. Finally, it casts light on the public psychology best adapted to persisting democracy. To grasp the proximity of democracy’s entanglements with failure is thus to temper the aspiration for popular self-government as a steady-state equilibrium, to open new questions about the appropriate political psychology for a sound democracy, and to limn new questions about democracy’s optimal institutional specification….(More)”.

Data Trusts, Health Data, and the Professionalization of Data Management


Paper by Keith Porcaro: “This paper explores how trusts can provide a legal model for professionalizing health data management. Data is potential. Over time, data collected for one purpose can support others. Clinical records at a hospital, created to manage a patient’s care, can be internally analyzed to identify opportunities for process and safety improvements at a hospital, or externally analyzed with other records to identify optimal treatment patterns. Data also carries the potential for harm. Personal data can be leaked or exposed. Proprietary models can be used to discriminate against patients, or price them out of care.

As novel uses of data proliferate, an individual data holder may be ill-equipped to manage complex new data relationships in a way that maximizes value and minimizes harm. A single organization may be limited by management capacity or risk tolerance. Organizations across sectors have digitized unevenly or late, and may not have mature data controls and policies. Collaborations that involve multiple organizations may face coordination problems, or disputes over ownership.

Data management is still a relatively young field. Most models of external data-sharing are based on literally transferring data—copying data between organizations, or pooling large datasets together under the control of a third party—rather than facilitating external queries of a closely held dataset.

Few models to date have focused on the professional management of data on behalf of a data holder, where the data holder retains control over not only their data, but the inferences derived from their data. Trusts can help facilitate the professionalization of data management. Inspired by the popularity of trusts for managing financial investments, this paper argues that data trusts are well-suited as a vehicle for open-ended professional management of data, where a manager’s discretion is constrained by fiduciary duties and a trust document that defines the data holder’s goals…(More)”.

The Pathologies of Digital Consent


Paper by Neil M. Richards and Woodrow Hartzog: “Consent permeates both our law and our lives — especially in the digital context. Consent is the foundation of the relationships we have with search engines, social networks, commercial web sites, and any one of the dozens of other digitally mediated businesses we interact with regularly. We are frequently asked to consent to terms of service, privacy notices, the use of cookies, and so many other commercial practices. Consent is important, but it’s possible to have too much of a good thing. As a number of scholars have documented, while consent models permeate the digital consumer landscape, the practical conditions of these agreements fall far short of the gold standard of knowing and voluntary consent. Yet as scholars, advocates, and consumers, we lack a common vocabulary for talking about the different ways in which digital consents can be flawed.

This article offers four contributions to improve our understanding of consent in the digital world. First, we offer a conceptual vocabulary of “the pathologies of consent” — a framework for talking about different kinds of defects that consent models can suffer, such as unwitting consent, coerced consent, and incapacitated consent. Second, we offer three conditions for when consent will be most valid in the digital context: when choice is infrequent, when the potential harms resulting from that choice are vivid and easy to imagine, and where we have the correct incentives to choose consciously and seriously. The further we fall from these conditions, the more a particular consent will be pathological and thus suspect. Third, we argue that our theory of consent pathologies sheds light on the so-called “privacy paradox” — the notion that there is a gap between what consumers say about wanting privacy and what they actually do in practice. Understanding the “privacy paradox” in terms of consent pathologies shows how consumers are not hypocrites who say one thing but do another. On the contrary, the pathologies of consent reveal how consumers can be nudged and manipulated by powerful companies against their actual interests, and that this process is easier when consumer protection law falls far from the gold standard. In light of these findings, we offer a fourth contribution — the theory of consumer trust we have suggested in prior work and which we further elaborate here as an alternative to our over-reliance on consent and its many pathologies….(More)”.

Echo Chambers May Not Be as Dangerous as You Think, New Study Finds


News Release: “In the wake of the 2016 American presidential election, western media outlets have become almost obsessed with echo chambers. With headlines like “Echo Chambers are Dangerous” and “Are You in a Social Media Echo Chamber?,” news media consumers have been inundated by articles discussing the problems with spending most of one’s time around likeminded people.

But are social bubbles really all that bad? Perhaps not.

A new study from the Annenberg School for Communication at the University of Pennsylvania and the School of Media and Public Affairs at George Washington University, published today in the Proceedings of the National Academy of Sciences, shows that collective intelligence — peer learning within social networks — can increase belief accuracy even in politically homogenous groups.

“Previous research showed that social information processing could work in mixed groups,” says lead author and Annenberg alum Joshua Becker (Ph.D. ’18), who is currently a postdoctoral fellow at Northwestern University’s Kellogg School of Management. “But theories of political polarization argued that social influence within homogenous groups should only amplify existing biases.”

It’s easy to imagine that networked collective intelligence would work when you’re asking people neutral questions, such as how many jelly beans are in a jar. But what about probing hot button political topics? Because people are more likely to adjust the facts of the world to match their beliefs than vice versa, prior theories claimed that a group of people who agree politically would be unable to use collective reasoning to arrive at a factual answer if it challenges their beliefs.

“Earlier this year, we showed that when Democrats and Republicans interact with each other within properly designed social media networks, it can eliminate polarization and improve both groups’ understanding of contentious issues such as climate change,” says senior author Damon Centola, Associate Professor of Communication at the Annenberg School. “Remarkably, our new findings show that properly designed social media networks can even lead to improved understanding of contentious topics within echo chambers.”

Becker and colleagues devised an experiment in which participants answered fact-based questions that stir up political leanings, like “How much did unemployment change during Barack Obama’s presidential administration?” or “How much has the number of undocumented immigrants changed in the last 10 years?” Participants were placed in groups of only Republicans or only Democrats and given the opportunity to change their responses based on the other group members’ answers.

The results show that individual beliefs in homogenous groups became 35% more accurate after participants exchanged information with one another. And although people’s beliefs became more similar to those of their own party members, they also became more similar to those of members of the other political party, even without any between-group exchange. This means that even in homogenous groups — or echo chambers — social influence increases factual accuracy and decreases polarization.
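The peer-learning dynamic described above can be sketched with a DeGroot-style averaging model — an illustrative toy, not the authors’ experimental design. Agents hold noisy estimates that all share the same directional bias (the “echo chamber”), then repeatedly move toward the group mean. Disagreement shrinks, and averaging cannot make the group’s average individual error worse; it typically improves it, because idiosyncratic noise washes out even when the shared bias does not:

```python
import random

def consensus_rounds(estimates, rounds=5, alpha=0.5):
    """Each round, every agent moves a fraction alpha toward the group mean."""
    est = list(estimates)
    for _ in range(rounds):
        mean = sum(est) / len(est)
        est = [e + alpha * (mean - e) for e in est]
    return est

def mean_abs_error(estimates, truth):
    return sum(abs(e - truth) for e in estimates) / len(estimates)

def spread(estimates):
    mean = sum(estimates) / len(estimates)
    return max(abs(e - mean) for e in estimates)

random.seed(42)
truth = 100.0
# A politically homogeneous group: every estimate shares the same
# directional bias (+20) plus individual noise.
group = [truth + 20 + random.gauss(0, 15) for _ in range(40)]
pooled = consensus_rounds(group)

# Disagreement shrinks, and pooling cannot increase average individual error.
print(spread(pooled) < spread(group))                                 # True
print(mean_abs_error(pooled, truth) <= mean_abs_error(group, truth))  # True
```

Note what the toy does and does not show: convergence removes individual noise but cannot remove a bias everyone shares, which is why the empirical finding that accuracy improved inside homogeneous groups is the surprising part of the study.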

“Our results cast doubt on some of the gravest concerns about the role of echo chambers in contemporary democracy,” says co-author Ethan Porter, Assistant Professor of Media and Public Affairs at George Washington University. “When it comes to factual matters, political echo chambers need not necessarily reduce accuracy or increase polarization. Indeed, we find them doing the opposite.”…(More)… (Full Paper: “The Wisdom of Partisan Crowds”)

Opportunities and Challenges of Emerging Technologies for the Refugee System


Research Paper by Roya Pakzad: “Efforts are being made to use information and communications technologies (ICTs) to improve accountability in providing refugee aid. However, there remains a pressing need for increased accountability and transparency when designing and deploying humanitarian technologies. This paper outlines the challenges and opportunities of emerging technologies, such as machine learning and blockchain, in the refugee system.

The paper concludes by recommending the creation of quantifiable metrics for sharing information across both public and private initiatives; the creation of the equivalent of a “Hippocratic oath” for technologists working in the humanitarian field; the development of predictive early-warning systems for human rights abuses; and greater accountability among funders and technologists to ensure the sustainability and real-world value of humanitarian apps and other digital platforms….(More)”

The Voluntariness of Voluntary Consent: Consent Searches and the Psychology of Compliance


Paper by Roseanna Sommers and Vanessa K. Bohns: “Consent-based searches are by far the most ubiquitous form of search undertaken by police. A key legal inquiry in these cases is whether consent was granted voluntarily. This Essay suggests that fact finders’ assessments of voluntariness are likely to be impaired by a systematic bias in social perception. Fact finders are likely to underappreciate the degree to which suspects feel pressure to comply with police officers’ requests to perform searches.

In two preregistered laboratory studies, we approached a total of 209 participants (“Experiencers”) with a highly intrusive request: to unlock their password-protected smartphones and hand them over to an experimenter to search through while they waited in another room. A separate 194 participants (“Forecasters”) were brought into the lab and asked whether a reasonable person would agree to the same request if hypothetically approached by the same researcher. Both groups then reported how free they felt, or would feel, to refuse the request.

Study 1 found that whereas most Forecasters believed a reasonable person would refuse the experimenter’s request, most Experiencers—100 out of 103 people—promptly unlocked their phones and handed them over. Moreover, Experiencers reported feeling significantly less free to refuse than did Forecasters contemplating the same situation hypothetically.

Study 2 tested an intervention modeled after a commonly proposed reform of consent searches, in which the experimenter explicitly advises participants that they have the right to withhold consent. We found that this advisory did not significantly reduce compliance rates or make Experiencers feel more free to say no. At the same time, the gap between Experiencers and Forecasters remained significant.

These findings suggest that decision makers judging the voluntariness of consent consistently underestimate the pressure to comply with intrusive requests. This is problematic because it indicates that a key justification for suspicionless consent searches—that they are voluntary—relies on an assessment that is subject to bias. The results thus provide support to critics who would like to see consent searches banned or curtailed, as they have been in several states.

The results also suggest that a popular reform proposal—requiring police to advise citizens of their right to refuse consent—may have little effect. This corroborates previous observational studies, which find negligible effects of Miranda warnings on confession rates among interrogees, and little change in rates of consent once police start notifying motorists of their right to refuse vehicle searches. We suggest that these warnings are ineffective because they fail to address the psychology of compliance. The reason people comply with police, we contend, is social, not informational. The social demands of police-citizen interactions persist even when people are informed of their rights. It is time to abandon the myth that notifying people of their rights makes them feel empowered to exercise those rights…(More)”.

Ethics of identity in the time of big data


Paper by James Brusseau in First Monday: “Compartmentalizing our distinct personal identities is increasingly difficult in big data reality. Pictures of the person we were on past vacations resurface in employers’ Google searches; LinkedIn, which exhibits our income level, is increasingly used as a dating web site. Whether on vacation, at work, or seeking romance, our digital selves stream together.

One result is that a perennial ethical question about personal identity has spilled out of philosophy departments and into the real world. Ought we possess one, unified identity that coherently integrates the various aspects of our lives, or, incarnate deeply distinct selves suited to different occasions and contexts? At bottom, are we one, or many?

The question is not only palpable today, but also urgent because if a decision is not made by us, the forces of big data and surveillance capitalism will make it for us by compelling unity. Speaking in favor of the big data tendency, Facebook’s Mark Zuckerberg promotes the ethics of an integrated identity, a single version of selfhood maintained across diverse contexts and human relationships.

This essay goes in the other direction by sketching two ethical frameworks arranged to defend our compartmentalized identities, which amounts to promoting the dis-integration of our selves. One framework connects with natural law, the other with language, and both aim to create a sense of selfhood that breaks away from its own past, and from the unifying powers of big data technology….(More)”.

Digital inequalities in the age of artificial intelligence and big data


Paper by Christoph Lutz: “In this literature review, I summarize key concepts and findings from the rich academic literature on digital inequalities. I propose that digital inequalities research should look more into labor‐ and big data‐related questions such as inequalities in online labor markets and the negative effects of algorithmic decision‐making for vulnerable population groups.

The article engages with the sociological literature on digital inequalities and explains the general approach to digital inequalities, based on the distinction of first‐, second‐, and third‐level digital divides. First, inequalities in access to digital technologies are discussed. This discussion is extended to emerging technologies, including the Internet‐of‐things and artificial intelligence‐powered systems such as smart speakers. Second, inequalities in digital skills and technology use are reviewed and connected to the discourse on new forms of work such as the sharing economy or gig economy. Third and finally, the discourse on the outcomes, in the form of benefits or harms, from digital technology use is taken up.

Here, I propose to integrate the digital inequalities literature more strongly with critical algorithm studies and recent discussions about datafication, digital footprints, and information privacy….(More)”.

Revisiting the causal effect of democracy on long-run development


Blog post by Markus Eberhardt: “In a recent paper, Acemoglu et al. (2019), henceforth “ANRR”, demonstrated a significant and large causal effect of democracy on long-run growth. By adopting a simple binary indicator for democracy, and accounting for the dynamics of development, these authors found that a shift to democracy leads to a 20% higher level of development in the long run.
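The “long-run” magnitude in a dynamic specification of this kind comes from compounding a modest short-run effect through the persistence of output. A minimal sketch, with invented numbers rather than ANRR’s estimates: if democratization adds β to log GDP each year and persistence is ρ, the long-run level effect is β/(1 − ρ).

```python
def long_run_effect(beta, rho):
    """Cumulative level effect of a permanent shift in y_t = rho*y_{t-1} + beta."""
    return beta / (1 - rho)

# Illustrative numbers only: a 0.8% annual short-run boost with
# persistence 0.96 compounds to a ~20% long-run level effect (log points).
beta, rho = 0.008, 0.96

# Simulate the transition path after democratization at t = 0.
y = 0.0
for _ in range(500):
    y = rho * y + beta

print(round(long_run_effect(beta, rho), 3))            # 0.2
print(abs(y - long_run_effect(beta, rho)) < 1e-3)      # True
```

The sketch also shows why estimating the dynamics matters: with high persistence, a small and statistically subtle annual effect translates into a large level difference decades later.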

The findings are remarkable in three ways: 

  1. Previous research often emphasised that a simple binary measure for democracy was perhaps “too blunt a concept” (Persson and Tabellini 2006) to provide robust empirical evidence.
  2. Positive effects of democracy on growth were typically only a “short-run boost” (Rodrik and Wacziarg 2005).
  3. The empirical findings are robust across a host of empirical estimators with different assumptions about the data generating process, including one adopting a novel instrumentation strategy (regional waves of democratisation).

ANRR’s findings are important because, as they highlight in a column on Vox, “a belief that democracy is bad for economic growth is common in both academic political economy as well as the popular press.” For example, Posner (2010) wrote that “[d]ictatorship will often be optimal for very poor countries”.

The simplicity of ANRR’s empirical setup, the large sample of countries, the long time horizon (1960 to 2010), and the robust positive – and remarkably stable – results across the many empirical methods they employ send a very powerful message against such doubts: democracy does cause growth.

I agree with their conclusion, but with qualifications. …(More)”.

Nowcasting the Local Economy: Using Yelp Data to Measure Economic Activity


Paper by Edward L. Glaeser, Hyunjin Kim and Michael Luca: “Can new data sources from online platforms help to measure local economic activity? Government datasets from agencies such as the U.S. Census Bureau provide the standard measures of economic activity at the local level. However, these statistics typically appear only after multi-year lags, and the public-facing versions are aggregated to the county or ZIP code level. In contrast, crowdsourced data from online platforms such as Yelp are often contemporaneous and geographically finer than official government statistics. Glaeser, Kim, and Luca present evidence that Yelp data can complement government surveys by measuring economic activity in close to real time, at a granular level, and at almost any geographic scale. Changes in the number of businesses and restaurants reviewed on Yelp can predict changes in the number of overall establishments and restaurants in County Business Patterns. An algorithm using contemporaneous and lagged Yelp data can explain 29.2 percent of the residual variance after accounting for lagged CBP data, in a testing sample not used to generate the algorithm. The algorithm is more accurate for denser, wealthier, and more educated ZIP codes….(More)”.
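The headline number — the share of residual variance explained after conditioning on lagged official data — can be illustrated with a two-stage regression sketch on simulated data. Everything here (variable names, coefficients, sample sizes) is invented for illustration; it mirrors the structure of the exercise, not the paper’s actual algorithm:

```python
import random

def ols_fit(x, y):
    """Intercept and slope of simple OLS y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def variance(v):
    m = sum(v) / len(v)
    return sum((vi - m) ** 2 for vi in v) / len(v)

random.seed(1)
n = 400
# Simulated local areas: growth depends on lagged official (CBP-like)
# counts plus a contemporaneous Yelp-like signal.
lag_cbp = [random.gauss(0, 1) for _ in range(n)]
yelp = [random.gauss(0, 1) for _ in range(n)]
growth = [0.6 * c + 0.3 * s + random.gauss(0, 0.5)
          for c, s in zip(lag_cbp, yelp)]

train, test = slice(0, n // 2), slice(n // 2, n)

# Stage 1: predict growth from lagged official data alone.
a1, b1 = ols_fit(lag_cbp[train], growth[train])
resid_test = [g - (a1 + b1 * c) for g, c in zip(growth[test], lag_cbp[test])]

# Stage 2: how much of the leftover (residual) variance does the
# platform signal explain, out of sample?
a2, b2 = ols_fit(yelp[train],
                 [g - (a1 + b1 * c)
                  for g, c in zip(growth[train], lag_cbp[train])])
unexplained = [r - (a2 + b2 * s) for r, s in zip(resid_test, yelp[test])]
share = 1 - variance(unexplained) / variance(resid_test)
print(0 < share < 1)  # True
```

The train/test split matters: evaluating the residual-variance share only on areas not used to fit the model is what makes a figure like 29.2 percent an out-of-sample claim rather than in-sample curve fitting.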

See all papers presented at the NBER Conference on Big Data for 21st Century Economic Statistics here.