The ethical and legal landscape of brain data governance


Paper by Paschal Ochang, Bernd Carsten Stahl, and Damian Eke: “Neuroscience research is producing big brain data which both informs advancements in neuroscience research and drives the development of advanced datasets that can provide new medical solutions. These brain data are produced under different jurisdictions in different formats and are governed under different regulations. The governance of data has become essential and critical, resulting in the development of various governance structures to ensure that the quality, availability, findability, accessibility, usability, and utility of data is maintained. Furthermore, data governance is influenced by various ethical and legal principles. However, it is still not clear which ethical and legal principles should be used as a standard or baseline when managing brain data, due to varying practices and evolving concepts. Therefore, this study asks: what ethical and legal principles shape the current brain data governance landscape? A systematic scoping review and thematic analysis of articles focused on biomedical, neuro, and brain data governance was carried out to identify the ethical and legal principles which shape the current brain data governance landscape. The results revealed that there is currently wide variation in how the principles are presented, and discussion around the terms is highly multidimensional. Some of the principles are still in their infancy and are barely visible. A range of principles emerged during the thematic analysis, providing a potential list of principles which can provide a more comprehensive framework for brain data governance and a conceptual expansion of neuroethics…(More)”.

Liquid Democracy. Two Experiments on Delegation in Voting


Paper by Joseph Campbell, Alessandra Casella, Lucas de Lara, Victoria R. Mooers & Dilip Ravindran: “Under Liquid Democracy (LD), decisions are taken by referendum, but voters are allowed to delegate their votes to other voters. Theory shows that in common interest problems where experts are correctly identified, the outcome can be superior to simple majority voting. However, even when experts are correctly identified, delegation must be used sparingly because it reduces the variety of independent information sources. We report the results of two experiments, each studying two treatments: in one treatment, participants have the option of delegating to better-informed individuals; in the second, participants can choose to abstain. The first experiment follows a tightly controlled design planned for the lab; the second is a perceptual task run online where information about signals’ precision is ambiguous. The two designs are very different, but the experiments reach the same result: in both, delegation rates are unexpectedly high and higher than abstention rates, and LD underperforms relative to both universal voting and abstention…(More)”.
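
To build intuition for why delegation can backfire even when experts are correctly identified, a small Monte Carlo sketch helps. The simulation below is purely illustrative (the voter counts and signal precisions are assumptions, not the paper's experimental parameters): when weakly informed voters hand their ballots to modestly better-informed experts, the vote aggregates fewer independent signals, and accuracy can fall below universal voting.

```python
import random

def simulate(n_trials=100_000, n_experts=3, n_weak=12,
             p_expert=0.65, p_weak=0.60, delegated=0):
    """Estimate how often a weighted majority vote matches the true state.

    `delegated` weak voters give their ballot to a randomly chosen expert,
    so the electorate aggregates fewer independent signals.
    """
    correct = 0
    for _ in range(n_trials):
        # Each expert's voting weight is 1 plus any ballots delegated to her.
        weights = [1] * n_experts
        for _ in range(delegated):
            weights[random.randrange(n_experts)] += 1
        tally = 0
        for w in weights:  # experts vote their own (noisy) signal
            tally += w if random.random() < p_expert else -w
        for _ in range(n_weak - delegated):  # remaining weak voters
            tally += 1 if random.random() < p_weak else -1
        correct += tally > 0  # convention: the true state is +1
    return correct / n_trials

print("universal voting:      ", simulate(delegated=0))
print("six ballots delegated: ", simulate(delegated=6))
```

With experts only slightly more accurate than ordinary voters, the delegated regime typically scores a few percentage points below universal voting, which is exactly the loss-of-independent-information mechanism the abstract describes.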

The ethics of artificial intelligence, UNESCO and the African Ubuntu perspective


Paper by Dorine Eva van Norren: “This paper aims to demonstrate the relevance of worldviews of the global south to debates on artificial intelligence, enhancing the human rights debate on artificial intelligence (AI) and critically reviewing the paper of the UNESCO Commission on the Ethics of Scientific Knowledge and Technology (COMEST) that preceded the drafting of the UNESCO guidelines on AI. Different value systems may lead to different choices in programming and application of AI. Programming languages may exacerbate existing biases, as a people’s worldview is captured in its language. What are the implications for AI when seen from a collective ontology? Ubuntu (I am a person through other persons) starts from collective morals rather than individual ethics…

Metaphysically, Ubuntu and its conception of social personhood (attained during one’s life) largely rejects transhumanism. When confronted with economic choices, Ubuntu favors sharing above competition and thus an anticapitalist logic of equitable distribution of AI benefits, humaneness and nonexploitation. When confronted with issues of privacy, Ubuntu emphasizes transparency to group members, rather than individual privacy, yet it calls for stronger (group privacy) protection. In democratic terms, it promotes consensus decision-making over representative democracy. Certain applications of AI may be more controversial in Africa than in other parts of the world, such as care for the elderly, who deserve the utmost respect and attention and whose care builds moral personhood. At the same time, AI may be helpful, as care from the home and community is encouraged from an Ubuntu perspective. The UNESCO COMEST report on AI and ethics formulated principles as input, which are analyzed here from the African ontological point of view. COMEST starts from “universal” concepts of individual human rights, sustainability and good governance, which are not necessarily fully compatible with relatedness, including to future and past generations. Alongside rules-based approaches, which may hamper diversity, bottom-up approaches are needed with intercultural deep learning algorithms…(More)”.

The Strength of Knowledge Ties


Paper by Luca Maria Aiello: “Social relationships are probably the most important things we have in our life. They help us to get new jobs, live longer, and be happier. At the scale of cities, networks of diverse social connections determine the economic prospects of a population. The strength of social ties is believed to be one of the key factors that regulate these outcomes. According to Granovetter’s classic theory about tie strength, information flows through social ties of two strengths: weak ties, which are used infrequently but bridge distant groups that tend to possess diverse knowledge; and strong ties, which are used frequently, knit communities together, and provide dependable sources of support.

For decades, tie strength has been quantified using the frequency of interaction. Yet, frequency does not reflect Granovetter’s initial conception of strength, which in his view is multidimensional: a “combination of the amount of time, the emotional intensity, intimacy, and services which characterize the tie.” Frequency of interaction is traditionally used as a proxy for more complex social processes mostly because it is relatively easy to measure (e.g., the number of calls in phone records). But what if we had a way to measure these social processes directly?

We used advanced techniques in Natural Language Processing (NLP) to quantify whether the text of a message conveys knowledge (whether the message provides information about a specific domain) or support (expressions of emotional or practical help), and applied them to a large conversation network from Reddit composed of 630K users residing in the United States, linked by 12.8M ties. Our hypothesis was that the resulting knowledge and support networks would fare better in predicting social outcomes than a traditional social network weighted by interaction frequency. In particular, borrowing a classic experimental setup, we tested whether the diversity of social connections of Reddit users residing in a specific US state would correlate with the economic opportunities in that state (estimated with GDP per capita)…(More)”.
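
As a rough illustration of the pipeline described above, the sketch below builds knowledge-weighted ties from raw messages and computes a simple per-state diversity measure. Everything here is an assumption made to keep the example self-contained: the keyword lists stand in for the paper's trained NLP classifiers, Shannon entropy stands in for whatever diversity measure the authors use, and the messages, states, and GDP figures are invented.

```python
import math
from collections import defaultdict

import numpy as np

# Stand-in keyword cues: the paper uses trained NLP models, not keyword
# matching; these lists only keep the sketch runnable.
KNOWLEDGE_CUES = {"how", "because", "source", "data", "explained"}
SUPPORT_CUES = {"sorry", "congrats", "help", "proud", "hugs"}

def message_scores(text):
    """Crude per-message (knowledge, support) scores."""
    words = set(text.lower().split())
    return len(words & KNOWLEDGE_CUES), len(words & SUPPORT_CUES)

def knowledge_ties(messages):
    """Aggregate knowledge weight per undirected tie.

    `messages` is an iterable of (sender, receiver, text) triples; the
    support network would be built the same way from the second score.
    """
    ties = defaultdict(int)
    for sender, receiver, text in messages:
        k, _ = message_scores(text)
        ties[tuple(sorted((sender, receiver)))] += k
    return ties

def state_diversity(ties, home_state):
    """Shannon entropy of each state's tie-weight distribution -- one
    simple stand-in for 'diversity of social connections'."""
    by_state = defaultdict(list)
    for (u, v), w in ties.items():
        if w > 0:
            for state in {home_state[u], home_state[v]}:
                by_state[state].append(w)
    return {
        s: -sum(w / sum(ws) * math.log(w / sum(ws)) for w in ws)
        for s, ws in by_state.items()
    }

# Toy end-to-end run on invented messages and two invented states.
messages = [
    ("a", "b", "here is the source and the data"),
    ("a", "c", "because it works like this as explained"),
    ("b", "c", "the data source explained how"),
    ("d", "e", "the source"),
    ("d", "f", "how"),
]
home_state = {"a": "NY", "b": "NY", "c": "NY", "d": "WY", "e": "WY", "f": "WY"}
div = state_diversity(knowledge_ties(messages), home_state)
gdp = {"NY": 95_000, "WY": 70_000}  # illustrative numbers only
states = sorted(div)
print(np.corrcoef([div[s] for s in states], [gdp[s] for s in states])[0, 1])
```

With only two toy states the correlation is trivially ±1; the aim is just to show the shape of the computation the abstract describes, from message text to weighted ties to a state-level diversity score.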

The 15-Minute City Quantified Using Mobility Data


Paper by Timur Abbiasov et al: “Americans travel 7 to 9 miles on average for shopping and recreational activities, far beyond the 15-minute (walking) city advocated by ecologically-oriented urban planners. This paper provides a comprehensive analysis of local trip behavior in US cities using GPS data on individual trips from 40 million mobile devices. We define local usage as the share of trips made within a 15-minute walking distance from home, and find that the median US city resident makes only 12% of their daily trips within such a short distance. We find that differences in access to local services can explain 80 percent of the variation in 15-minute usage across metropolitan areas and 74 percent of the variation in usage within metropolitan areas. Differences in historic zoning permissiveness within New York suggest a causal link between access and usage, and that less restrictive zoning rules, such as permitting more mixed-use development, would lead to shorter travel times. Finally, we document a strong correlation between local usage and experienced segregation for poorer, but not richer, urbanites, which suggests that 15-minute cities may also exacerbate the social isolation of marginalized communities…(More)”.
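
The headline metric is easy to state precisely. Below is a minimal sketch of how one might compute 15-minute usage from trip records (hedged throughout: the roughly 1.25 km radius assumes a 5 km/h walking pace, straight-line distance stands in for actual walking routes, and the coordinates are invented):

```python
import math

WALK_SPEED_KMH = 5.0                  # assumed average walking speed
RADIUS_KM = WALK_SPEED_KMH * 15 / 60  # ~1.25 km covered in 15 minutes

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    rlat1, rlat2 = math.radians(lat1), math.radians(lat2)
    dlat = rlat2 - rlat1
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(rlat1) * math.cos(rlat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def local_usage(home, trips):
    """Share of trips ending within a 15-minute walk of `home`.

    `home` is a (lat, lon) pair; `trips` is a list of destination pairs.
    """
    local = sum(haversine_km(*home, *dest) <= RADIUS_KM for dest in trips)
    return local / len(trips)

# Toy example: one nearby errand plus two longer trips -> 1/3 local usage.
home = (40.7128, -74.0060)
trips = [(40.7180, -74.0005), (40.7769, -73.8740), (40.6413, -73.7781)]
print(f"local usage: {local_usage(home, trips):.0%}")
```

Averaging this share over the residents of a metropolitan area gives a metro-level usage figure of the kind the paper compares across and within cities.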

Smart City Technologies: A Political Economy Introduction to Their Governance Challenges


Paper by Beatriz Botero Arcila: “Smart cities and smart city technologies are terms used to refer to computational models of urbanism and to data-driven, algorithmically intermediated technologies. Smart city technologies are intended to deliver new efficiencies, insights, and conveniences in city services. At the same time, when these tools are involved in decision-making processes that don’t have right or wrong mathematical answers, they present important challenges related to cementing inequality, discrimination, and surveillance. This chapter is an introduction to the governance challenges smart city technologies pose. It offers an overview of the literature, focusing on the risks these technologies pose, and a case study of surveillance technologies as an example of the adoption and diffusion patterns of smart city technologies. This is a political economy approach to smart city technologies, one that emphasizes the adoption, development, and diffusion patterns of these technologies as a function of institutional, market, and ideological dynamics. Such an approach should allow scholars and policymakers to find points of intervention at the level of the institutions and infrastructures that sustain the current shape of these technologies, to address and prevent some of the risks and harms they create. This should help interested parties add nuance to binary analyses and identify the actors, institutions, and infrastructures that can serve as points of intervention to shape these technologies’ effects and create change. It should also help those working on developing these tools to imagine how institutions and infrastructures must be shaped to realize their benefits…(More)”.

Is bigger better? A study of the effect of group size on collective intelligence in online groups


Paper by Nada Hashmi, G. Shankaranarayanan and Thomas W. Malone: “What is the optimal size for online groups that use electronic communication and collaboration tools? Previous research typically suggested optimal group sizes of about 5 to 7 members, but this research predominantly examined in-person groups. Here we investigate online groups whose members communicate with each other using two electronic collaboration tools: text chat and shared editing. Unlike previous research that studied groups performing a single task, here we measure group performance using a test of collective intelligence (CI) that includes a combination of tasks specifically chosen to predict performance on a wide range of other tasks [72]. Our findings suggest that there is a curvilinear relationship between group size and performance and that the optimal group size in online groups is between 25 and 35. This, in turn, suggests that online groups may now allow more people to be productively involved in group decision-making than was possible with in-person groups in the past…(More)”.
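
A curvilinear (inverted-U) relationship of the kind reported is commonly characterized by fitting a quadratic and reading off its vertex. The sketch below uses invented CI scores purely to show that step, not the paper's data:

```python
import numpy as np

# Hypothetical (group size, CI score) observations -- invented numbers
# chosen only to illustrate the fitting step.
sizes = np.array([5, 10, 15, 20, 25, 30, 35, 40, 50])
ci = np.array([0.41, 0.48, 0.55, 0.60, 0.63, 0.64, 0.62, 0.58, 0.49])

# Fit CI = a*size^2 + b*size + c; an inverted U requires a < 0.
a, b, c = np.polyfit(sizes, ci, deg=2)
optimum = -b / (2 * a)  # vertex of the fitted parabola
print(f"a={a:.5f} (concave: {a < 0}), optimal size ≈ {optimum:.1f}")
```

For these made-up numbers the vertex lands near 30, inside the 25-to-35 range the abstract reports; the negative sign of the squared term is what certifies the inverted-U shape.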

All Eyes on Them: A Field Experiment on Citizen Oversight and Electoral Integrity


Paper by Natalia Garbiras-Díaz and Mateo Montenegro: “Can information and communication technologies help citizens monitor their elections? We analyze a large-scale field experiment designed to answer this question in Colombia. We leveraged Facebook advertisements sent to over 4 million potential voters to encourage citizen reporting of electoral irregularities. We also cross-randomized whether candidates were informed about the campaign in a subset of municipalities. Both total reports and evidence-backed reports increased substantially. Across a wide array of measures, electoral irregularities decreased. Finally, the reporting campaign reduced the vote share of candidates dependent on irregularities. This light-touch intervention is more cost-effective than monitoring efforts traditionally used by policymakers…(More)”.

A Landscape of Open Science Policies Research


Paper by Alejandra Manco: “This literature review aims to examine the approach given to open science policy in different studies. The main findings are that the approach given to open science has several aspects: policy framing and its geopolitical dimensions are described as a tool of asymmetry replication and epistemic governance. The main geopolitical aspects of open science policies described in the literature are the relations between international, regional, and national policies. Different components of open science are also covered in the literature: open data is much discussed in English-language works, while open access is the main component discussed in Portuguese- and Spanish-language papers. Finally, the relationship between open science policies and science policy in general is framed by highlighting the innovation and transparency that open science can bring to it…(More)”

When do “Nudges” Increase Welfare?


Paper by Hunt Allcott, Daniel Cohen, William Morrison & Dmitry Taubinsky: “Policymakers are increasingly interested in non-standard policy instruments (NPIs), or “nudges,” such as simplified information disclosure and warning labels. We characterize the welfare effects of NPIs using public finance sufficient statistic approaches, allowing for endogenous prices, market power, and optimal or suboptimal taxes. While many empirical evaluations have focused on whether NPIs increase ostensibly beneficial behaviors on average, we show that this can be a poor guide to welfare. Welfare also depends on whether the NPI reduces the variance of distortions from heterogeneous biases and externalities, and the average effect becomes irrelevant with zero pass-through or optimal taxes. We apply our framework to randomized experiments evaluating automotive fuel economy labels and sugary drink health labels. In both experiments, the labels increase ostensibly beneficial behaviors but may nonetheless decrease welfare in our model, because they increase the variance of distortions…(More)”.
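
To see why the average effect can be a poor welfare guide, a stylized first-order approximation helps (an illustration in the spirit of the abstract, not the paper's exact sufficient-statistic formulas). Suppose consumer $i$ carries a marginal distortion $\gamma_i$ (bias plus uninternalized externality, net of taxes) and the label shifts her behavior by $\Delta x_i$. Then the money-metric welfare change across $N$ consumers is approximately

$$\Delta W \;\approx\; \sum_i \gamma_i\,\Delta x_i \;=\; N\left(\bar{\gamma}\,\overline{\Delta x} \;+\; \operatorname{Cov}(\gamma_i, \Delta x_i)\right).$$

Under an optimal tax the average distortion $\bar{\gamma}$ is zero, so the first term, the one an average treatment effect speaks to, vanishes, and welfare hinges on the covariance: a label helps only if it moves behavior most where distortions are largest, i.e., if it narrows rather than widens the dispersion of distortions.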