Human migration: the big data perspective


Alina Sîrbu et al. in the International Journal of Data Science and Analytics: “How can big data help to understand the migration phenomenon? In this paper, we try to answer this question through an analysis of various phases of migration, comparing traditional and novel data sources and models at each phase. We concentrate on three phases of migration, at each phase describing the state of the art and recent developments and ideas. The first phase includes the journey, and we study migration flows and stocks, providing examples where big data can have an impact. The second phase discusses the stay, i.e. migrant integration in the destination country. We explore various data sets and models that can be used to quantify and understand migrant integration, with the final aim of providing the basis for the construction of a novel multi-level integration index. The last phase is related to the effects of migration on the source countries and the return of migrants….(More)”.

The Law and Economics of Online Republication


Paper by Ronen Perry: “Jerry publishes unlawful content about Newman on Facebook, Elaine shares Jerry’s post, the share automatically turns into a tweet because her Facebook and Twitter accounts are linked, and George immediately retweets it. Should Elaine and George be liable for these republications? The question is neither theoretical nor idiosyncratic. On occasion, it reaches the headlines, as when Jennifer Lawrence’s representatives announced she would sue every person involved in the dissemination, through various online platforms, of her illegally obtained nude pictures. Yet this is only the tip of the iceberg. Numerous potentially offensive items are reposted daily, their exposure expands in widening circles, and they sometimes “go viral.”

This Article is the first to provide a law and economics analysis of the question of liability for online republication. Its main thesis is that liability for republication generates a specter of multiple defendants which might dilute the originator’s liability and undermine its deterrent effect. The Article concludes that, subject to several exceptions and methodological caveats, only the originator should be liable. This seems to be the American rule, as enunciated in Batzel v. Smith and Barrett v. Rosenthal. It stands in stark contrast to the prevalent rules in other Western jurisdictions and has been challenged by scholars on various grounds since its very inception.

The Article unfolds in three Parts. Part I presents the legal framework. It first discusses the rules applicable to republication of self-created content, focusing on the emergence of the single publication rule and its natural extension to online republication. It then turns to republication of third-party content. American law makes a clear-cut distinction between offline republication which gives rise to a new cause of action against the republisher (subject to a few limited exceptions), and online republication which enjoys an almost absolute immunity under § 230 of the Communications Decency Act. Other Western jurisdictions employ more generous republisher liability regimes, which usually require endorsement, a knowing expansion of exposure or repetition.

Part II offers an economic justification for the American model. Law and economics literature has shown that attributing liability for constant indivisible harm to multiple injurers, where each could have single-handedly prevented that harm (“alternative care” settings), leads to dilution of liability. Online republication scenarios often involve multiple tortfeasors. However, they differ from previously analyzed phenomena because they are not alternative care situations, and because the harm—increased by the conduct of each tortfeasor—is not constant and indivisible. Part II argues that neither feature precludes the dilution argument. It explains that the impact of the multiplicity of injurers in the online republication context on liability and deterrence provides a general justification for the American rule. This rule’s relatively low administrative costs afford additional support.

Part III considers the possible limits of the theoretical argument. It maintains that exceptions to the exclusive originator liability rule should be recognized when the originator is unidentifiable or judgment-proof, and when either the republisher’s identity or the republication’s audience was unforeseeable. It also explains that the rule does not preclude liability for positive endorsement with a substantial addition, which constitutes a new original publication, or for the dissemination of illegally obtained content, which is an independent wrong. Lastly, Part III addresses possible challenges to the main argument’s underlying assumptions, namely that liability dilution is a real risk and that it is undesirable….(More)”.

A controlled trial for reproducibility


Marc P. Raphael, Paul E. Sheehan & Gary J. Vora at Nature: “In 2016, the US Defense Advanced Research Projects Agency (DARPA) told eight research groups that their proposals had made it through the review gauntlet and would soon get a few million dollars from its Biological Technologies Office (BTO). Along with congratulations, the teams received a reminder that their award came with an unusual requirement — an independent shadow team of scientists tasked with reproducing their results.

Thus began an intense, multi-year controlled trial in reproducibility. Each shadow team consists of three to five researchers, who visit the ‘performer’ team’s laboratory and often host visits themselves. Between 3% and 8% of the programme’s total funds go to this independent validation and verification (IV&V) work. But DARPA has the flexibility and resources for such herculean efforts to assess essential techniques. In one unusual instance, an IV&V laboratory needed a sophisticated US$200,000 microscopy and microfluidic set-up to make an accurate assessment.

These costs are high, but we think they are an essential investment to avoid wasting taxpayers’ money and to advance fundamental research towards beneficial applications. Here, we outline what we’ve learnt from implementing this programme, and how it could be applied more broadly….(More)”.

Big Data and Democracy


Paper by Freek van Gils, Wieland Müller and Jens Prufer: “Recent technological developments have raised concerns about threats to democracy because of their potential to distort election outcomes: (a) data-driven voter research enabling political microtargeting, and (b) growing news consumption via social media and news aggregators that obfuscate the origin of news items, leading to voters’ unawareness of a news sender’s identity. We provide a theoretical framework in which we can analyze the effects that microtargeting by political interest groups and unawareness have on election outcomes, in comparison to ‘conventional’ news reporting. We show which voter groups suffer from which technological development, (a) or (b). While both microtargeting and unawareness have negative effects on voter welfare, we show that only unawareness can flip an election. Our model framework allows the theory-based discussion of policy proposals, such as to ban microtargeting or to require news platforms to signal the political orientation of a news item’s originator…(More)”.

Mediated Democracy – Linking Digital Technology to Political Agency


Paper by Jeanette Hofmann: “Although the relationship between digitalisation and democracy is the subject of growing public attention, the nature of this relationship is rarely addressed in a systematic manner. The common understanding is that digital media are the driver of the political change we are facing today. This paper argues against such a causal approach and proposes a co-evolutionary perspective instead. Inspired by Benedict Anderson’s “Imagined Communities” and recent research on mediatisation, it introduces the concept of mediated democracy. This concept reflects the simple idea that representative democracy requires technical mediation, and that the rise of modern democracy and of communication media are therefore closely intertwined. Hence, mediated democracy denotes a research perspective, not a type of democracy. It explores the changing interplay of democratic organisation and communication media as a contingent constellation, which could have evolved differently. Specific forms of communication media emerge in tandem with larger societal formations and mutually enable each other. Following this argument, the current constellation reflects a transformation of representative democracy and the spread of digital media. The latter is interpreted as a “training ground” for experimenting with new forms of democratic agency….(More)”.

Using Technology to ‘Co-Create’ EU Policies


Paper by Gianluca Sgueo: “What will European Union (EU) decision-making look like in the next decade and beyond? Is technological progress promoting more transparent, inclusive and participatory decision-making at EU level?

Technology has dramatically changed both the number and quality of connections between citizens and public administrations. With technological progress, citizens have gained improved access to public authorities through new digital communication channels. Innovative, tech-based approaches to policy-making have become the subject of a growing debate between academics and politicians. Theoretical approaches such as ‘CrowdLaw’, ‘Policy-Making 3.0’, ‘liquid’, ‘do-it-yourself’ or ‘technical’ democracy and ‘democratic innovations’ share the positive outlook towards technology; and technology is seen as the medium through which policies can be ‘co-created’ by decision-makers and stakeholders. Co-creation is mutually beneficial. Decision-makers gain legitimacy by incorporating the skills, knowledge and expertise of citizens, who in turn have the opportunity to shape new policies according to their needs and expectations.

EU institutions are at the forefront of experimentation with technologically innovative approaches to make decision-making more transparent and accessible to stakeholders. Efforts to modernise EU participatory channels through technology have evolved over time: from redressing criticism of democratic deficits, through fostering digital interactions with stakeholders, to current attempts at designing policy-making in a friendly and participative manner.

While technological innovation holds the promise of making EU policy-making even more participatory, it is not without challenges. To begin with, technology is resource-intensive. There are legal challenges associated with both over- and under-regulation of the use of technology in policy-making. Furthermore, technological innovation raises ethical concerns. It may increase inequality, for instance, or infringe on personal privacy… (More)”.

Beyond a Human Rights-based approach to AI Governance: Promise, Pitfalls, Plea


Paper by Nathalie A. Smuha: “This paper discusses the establishment of a governance framework to secure the development and deployment of “good AI”, and describes the quest for a morally objective compass to steer it. Asserting that human rights can provide such compass, this paper first examines what a human rights-based approach to AI governance entails, and sets out the promise it propagates. Subsequently, it examines the pitfalls associated with human rights, particularly focusing on the criticism that these rights may be too Western, too individualistic, too narrow in scope and too abstract to form the basis of sound AI governance. After rebutting these reproaches, a plea is made to move beyond the calls for a human rights-based approach, and start taking the necessary steps to attain its realisation. It is argued that, without elucidating the applicability and enforceability of human rights in the context of AI; adopting legal rules that concretise those rights where appropriate; enhancing existing enforcement mechanisms; and securing an underlying societal infrastructure that enables human rights in the first place, any human rights-based governance framework for AI risks falling short of its purpose….(More)”.

Crowdsourcing hypothesis tests: making transparent how design choices shape research results


Paper by J.F. Landy and Leonid Tiokhin: “To what extent are research results influenced by subjective decisions that scientists make as they design studies?

Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from two separate large samples (total N > 15,000) were then randomly assigned to complete one version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: materials from different teams rendered statistically significant effects in opposite directions for four out of five hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for two hypotheses, and a lack of support for three hypotheses.

Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, while considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim….(More)”.
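The quantities behind the abstract's headline numbers (standardized effect sizes such as d = -0.37 to +0.26, pooled across teams by meta-analysis) can be made concrete with a short sketch. The figures below are hypothetical stand-ins, not the study's data; the sketch shows how Cohen's d is computed and how a DerSimonian–Laird random-effects meta-analysis pools team-level estimates while quantifying between-design heterogeneity (tau²).

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis (DerSimonian-Laird method).

    Returns the pooled effect and tau^2, the between-study variance,
    which captures heterogeneity across study designs."""
    k = len(effects)
    w = [1.0 / v for v in variances]                    # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                  # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

# Hypothetical effect sizes from five teams testing the same hypothesis,
# spanning opposite directions as in the study's narrowest reported range:
effects = [-0.37, -0.10, 0.02, 0.15, 0.26]
variances = [0.01] * 5   # assume equal sampling variance per study
pooled, tau2 = dersimonian_laird(effects, variances)
print(round(pooled, 3), round(tau2, 3))
```

With five equally precise but divergent estimates, the pooled effect sits near zero while tau² comes out several times larger than the sampling variance: heterogeneity across materials, not sampling noise, dominates, which is the pattern the paper reports.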

Testing Transparency


Paper by Brigham Daniels, Mark Buntaine & Tanner Bangerter: “In modern democracies, governmental transparency is thought to have great value. When it comes to addressing administrative corruption and mismanagement, many would agree with Justice Brandeis’s observation that sunlight is the best disinfectant. Beyond this, many credit transparency with enabling meaningful citizen participation.

But even though transparency appears highly correlated with successful governance in developed democracies, assumptions about administrative transparency have remained empirically untested. Testing effects of transparency would prove particularly helpful in developing democracies where transparency norms have not taken hold or only have done so slowly.

In these contexts, does administrative transparency really create the sorts of benefits attributed to it? Transparency might grease the gears of developed democracies, but what good is grease when many of the gears seem to be broken or missing entirely?

This Article presents empirical results from a first-of-its-kind field study that tested two major promises of administrative transparency in a developing democracy: that transparency increases public participation in government affairs and that it increases government accountability. To test these hypotheses, we used two randomized controlled trials.

Surprisingly, we found transparency had no significant effect in almost any of our quantitative measurements, although our qualitative results suggested that when transparency interventions exposed corruption, some limited oversight could result. Our findings are particularly significant for developing democracies and show, at least in this context, that Justice Brandeis may have oversold the cleansing effects of transparency.

A few rays of transparency shining light on government action do not disinfect the system and cure government corruption and mismanagement. Once corruption and mismanagement are identified, it takes effective government institutions and action from civil society to successfully act as a disinfectant…(More)”.
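The two randomized controlled trials behind these findings rest on a simple inferential template, which can be sketched in a few lines. The outcome data below are made up for illustration, not taken from the study; the sketch compares treated and control units and uses randomization inference, reshuffling treatment labels under the null of no effect, to obtain a p-value.

```python
import random

def difference_in_means(treatment, control):
    """Estimated average treatment effect in a two-arm RCT."""
    return sum(treatment) / len(treatment) - sum(control) / len(control)

def permutation_p_value(treatment, control, n_perm=10000, seed=0):
    """Randomization-inference p-value: under the null of no effect,
    treatment labels are exchangeable, so we reshuffle them and count
    how often the reshuffled difference is at least as extreme."""
    rng = random.Random(seed)
    observed = abs(difference_in_means(treatment, control))
    pooled = treatment + control        # copy; originals are untouched
    n_t = len(treatment)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(difference_in_means(pooled[:n_t], pooled[n_t:])) >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical binary outcomes: 1 = citizen participated in a public forum.
treated = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]    # transparency intervention
untreated = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # control group
print(difference_in_means(treated, untreated),
      permutation_p_value(treated, untreated))
```

On this toy data the estimated effect is small and the permutation p-value is far from significance, echoing the null result reported above.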

Ex ante knowledge for infectious disease outbreaks: Introducing the organizational network governance approach


Chapter by Jörg Raab et al: “The core question addressed is to what extent ex ante knowledge can be made available from a network governance perspective to deal with a crisis such as an infectious disease outbreak. Such outbreaks are often characterized by a lack of information and knowledge, changing and unforeseen conditions, and a myriad of organizations becoming involved on the one hand, but also, on the other, organizations that do not become adequately involved. We introduce the organizational network governance approach as an exploratory approach to produce useful ex ante information for limiting the transmission of a virus and its impact. We illustrate the usefulness of our approach by introducing two fictitious but realistic outbreak scenarios: the West Nile Virus (WNV), which is transmitted via mosquitoes, and the outbreak of a New Asian Coronavirus (NAC), which is characterized by human-to-human transmission. Both viruses can lead to serious illnesses or even death as well as large health care and economic costs.

Our organizational network governance approach turns out to be effective in generating information to produce recommendations for strengthening the organizational context in order to limit the transmission of a virus and its impact. We also suggest how the organizational network governance approach could be further developed…(More)”.