Addressing the Global Data Divide through Digital Trade Law


Paper by Binit Agrawal and Neha Mishra: “The global data divide has emerged as a major policy challenge threatening equitable development, poverty alleviation, and access to information. Further, it has polarised countries on either side of the data schism, which have often reacted by implementing conflicting and sub-optimal measures. This paper surveys such policy measures, the politics behind them, and the footprints they have left on the digital trade or electronic commerce rules contained in free trade agreements (FTAs). First, the paper develops an understanding of what constitutes the global data divide, focusing on three components, namely access, regulation, and use. Second, the paper surveys electronic commerce or digital trade rules in FTAs to understand whether existing rules deal with the widening data divide in a comprehensive manner and, if so, how. Our primary argument is that the existing FTA disciplines are deficient in addressing the global data divide. Key problems include insufficient participation by developing countries in framing digital trade rules, non-recognition of the data divide affecting developing countries, and the lack of robust and implementable mechanisms to bridge the data divide. Finally, we present a proposal to reform digital trade rules in line with best practices emerging in FTA practice, identifying the main areas where gaps must be bridged. Our proposals include enhancing technical assistance and capacity-building support, developing a tailored special and differential treatment (SDT) mechanism, incentivising the removal of data-related barriers by designing appropriate bargains in negotiations, and boosting international regulatory cooperation through innovative and creative mechanisms…(More)”.

A Comparative Study of Citizen Crowdsourcing Platforms and the Use of Natural Language Processing (NLP) for Effective Participatory Democracy


Paper by Carina Antonia Hallin: “The use of crowdsourcing platforms to harness citizen insights for policymaking has gained increasing importance in regional and national policy planning. Participatory democracy using crowdsourcing platforms includes various initiatives, such as generating ideas for new law reforms (Aitamurto and Landemore 2015), economic development, and solving challenges related to creating inclusive social actions and interventions for better, healthier, and more prosperous local communities (Bentley and Pugalis, 2014). Such case observations, coupled with the increasing prevalence of internet-based communication, point to the real benefits of implementing participatory democracies on a mass scale in which citizens are invited to contribute their ideas, opinions, and deliberations (Salganik and Levy 2015). By adopting collective intelligence platforms, public authorities can harness local knowledge from citizens to find the right ‘policy mix’ and collaborate with citizens and relevant actors in the policymaking processes. This comparative study aims to validate the adoption of collective intelligence and artificial intelligence/natural language processing (NLP) on crowdsourcing platforms for effective participatory democracy and policymaking in local governments. The study compares 15 citizen crowdsourcing platforms, including their use of Natural Language Processing (NLP), for policymaking across Europe and the United States. The study offers a framework for working with citizen crowdsourcing platforms and exploring the usefulness of NLP on the platforms for effective participatory democracy…(More)”.
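A minimal sketch of the kind of NLP such platforms can apply, assuming a simple TF-IDF plus k-means pipeline (the submissions, cluster count, and pipeline below are illustrative and not drawn from the paper): grouping similar citizen proposals into themes lets officials review recurring ideas rather than thousands of raw posts.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy citizen submissions; a real platform would ingest thousands.
ideas = [
    "Add protected bike lanes on main street",
    "More bike parking near the train station",
    "Open a community health clinic downtown",
    "Extend clinic opening hours for shift workers",
]

# Vectorize the text and cluster it so similar proposals land together.
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(ideas)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for label, idea in sorted(zip(labels, ideas)):
    print(label, idea)
```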

The ethical and legal landscape of brain data governance


Paper by Paschal Ochang, Bernd Carsten Stahl, and Damian Eke: “Neuroscience research is producing big brain data that both informs advances in neuroscience research and drives the development of datasets for advanced medical solutions. These brain data are produced under different jurisdictions in different formats and are governed under different regulations. The governance of data has become essential and critical, resulting in the development of various governance structures to ensure that the quality, availability, findability, accessibility, usability, and utility of data are maintained. Furthermore, data governance is influenced by various ethical and legal principles. However, it is still not clear which ethical and legal principles should be used as a standard or baseline when managing brain data, owing to varying practices and evolving concepts. Therefore, this study asks: what ethical and legal principles shape the current brain data governance landscape? A systematic scoping review and thematic analysis of articles focused on biomedical, neuro, and brain data governance was carried out to identify the ethical and legal principles which shape the current brain data governance landscape. The results revealed that there is currently large variation in how the principles are presented, and discussions around the terms are very multidimensional. Some of the principles are still in their infancy and are barely visible. A range of principles emerged during the thematic analysis, yielding a potential list of principles that can provide a more comprehensive framework for brain data governance and a conceptual expansion of neuroethics…(More)”.

Liquid Democracy. Two Experiments on Delegation in Voting


Paper by Joseph Campbell, Alessandra Casella, Lucas de Lara, Victoria R. Mooers & Dilip Ravindran: “Under Liquid Democracy (LD), decisions are taken by referendum, but voters are allowed to delegate their votes to other voters. Theory shows that in common interest problems where experts are correctly identified, the outcome can be superior to simple majority voting. However, even when experts are correctly identified, delegation must be used sparingly because it reduces the variety of independent information sources. We report the results of two experiments, each studying two treatments: in one treatment, participants have the option of delegating to better-informed individuals; in the second, participants can choose to abstain. The first experiment follows a tightly controlled design planned for the lab; the second is a perceptual task run online where information about signals’ precision is ambiguous. The two designs are very different, but the experiments reach the same result: in both, delegation rates are unexpectedly high and higher than abstention rates, and LD underperforms relative to both universal voting and abstention…(More)”.
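The mechanism the authors point to — delegation concentrating voting weight on fewer independent signals — can be illustrated with a small Monte Carlo sketch. All parameters here (signal precisions, group size, and the rule that every novice delegates to a random expert) are hypothetical and do not reproduce either experiment's design.

```python
import random

def simulate(n_voters=25, n_experts=5, p_expert=0.8, p_novice=0.6,
             delegate=False, n_trials=10_000, seed=0):
    """Estimate how often (weighted) majority voting picks the correct
    binary outcome when each voter receives an independent noisy signal."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        # True = the voter's signal points to the correct outcome.
        experts = [rng.random() < p_expert for _ in range(n_experts)]
        novices = [rng.random() < p_novice
                   for _ in range(n_voters - n_experts)]
        if delegate:
            # Every novice hands their vote to a random expert, so only
            # n_experts independent signals remain, with unequal weights.
            weights = [1] * n_experts
            for _ in novices:
                weights[rng.randrange(n_experts)] += 1
            votes = sum(w for ok, w in zip(experts, weights) if ok)
            total = sum(weights)
        else:
            votes = sum(experts) + sum(novices)
            total = n_voters
        correct += votes * 2 > total  # strict weighted majority wins
    return correct / n_trials

print("universal voting:", simulate(delegate=False))
print("full delegation :", simulate(delegate=True))
```

Comparing the two printed rates under different precisions shows why over-delegation can make LD underperform universal voting: each expert signal gets amplified, but the many independent novice signals are discarded.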

The ethics of artificial intelligence, UNESCO and the African Ubuntu perspective


Paper by Dorine Eva van Norren: “This paper aims to demonstrate the relevance of worldviews of the global south to debates on artificial intelligence, enhancing the human rights debate on artificial intelligence (AI) and critically reviewing the paper of the UNESCO World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) that preceded the drafting of the UNESCO guidelines on AI. Different value systems may lead to different choices in the programming and application of AI. Programming languages may exacerbate existing biases, as a people’s worldview is captured in its language. What are the implications for AI when seen from a collective ontology? Ubuntu (I am a person through other persons) starts from collective morals rather than individual ethics…

Metaphysically, Ubuntu and its conception of social personhood (attained during one’s life) largely rejects transhumanism. When confronted with economic choices, Ubuntu favors sharing above competition and thus an anticapitalist logic of equitable distribution of AI benefits, humaneness, and nonexploitation. When confronted with issues of privacy, Ubuntu emphasizes transparency to group members rather than individual privacy, yet it calls for stronger (group privacy) protection. In democratic terms, it promotes consensus decision-making over representative democracy. Certain applications of AI may be more controversial in Africa than in other parts of the world, such as care for the elderly, who deserve the utmost respect and attention and whose care builds moral personhood. At the same time, AI may be helpful, as care from the home and community is encouraged from an Ubuntu perspective. The report on AI and ethics of UNESCO’s COMEST formulated principles as input, which are analyzed here from the African ontological point of view. COMEST takes as its point of departure “universal” concepts of individual human rights, sustainability, and good governance, which are not necessarily fully compatible with relatedness, which includes future and past generations. In addition to rules-based approaches, which may hamper diversity, bottom-up approaches are needed with intercultural deep learning algorithms…(More)”.

The Strength of Knowledge Ties


Paper by Luca Maria Aiello: “Social relationships are probably the most important things we have in our life. They help us to get new jobs, live longer, and be happier. At the scale of cities, networks of diverse social connections determine the economic prospects of a population. The strength of social ties is believed to be one of the key factors that regulate these outcomes. According to Granovetter’s classic theory about tie strength, information flows through social ties of two strengths: weak ties, which are used infrequently but bridge distant groups that tend to possess diverse knowledge; and strong ties, which are used frequently, knit communities together, and provide dependable sources of support.

For decades, tie strength has been quantified using the frequency of interaction. Yet frequency does not reflect Granovetter’s initial conception of strength, which in his view is a multidimensional concept: a “combination of the amount of time, the emotional intensity, intimacy, and services which characterize the tie.” Frequency of interaction is traditionally used as a proxy for more complex social processes mostly because it is relatively easy to measure (e.g., the number of calls in phone records). But what if we had a way to measure these social processes directly?

We used advanced techniques in Natural Language Processing (NLP) to quantify whether the text of a message conveys knowledge (whether the message provides information about a specific domain) or support (expressions of emotional or practical help), and applied them to a large conversation network from Reddit composed of 630K users residing in the United States, linked by 12.8M ties. Our hypothesis was that the resulting knowledge and support networks would fare better in predicting social outcomes than a traditional social network weighted by interaction frequency. In particular, borrowing a classic experimental setup, we tested whether the diversity of social connections of Reddit users residing in a specific US state would correlate with the economic opportunities in that state (estimated with GDP per capita)…(More)”.
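A minimal sketch of this measurement idea, assuming keyword lexicons as a stand-in for the study's NLP classifiers (the cue words, scoring rule, and diversity measure below are illustrative, not the authors' models): each message is scored for knowledge and support content, scores are aggregated into tie weights, and a node's diversity is the entropy of its weights across contacts.

```python
from collections import defaultdict
import math

# Illustrative cue words; the study used far more sophisticated NLP.
KNOWLEDGE_CUES = {"how", "because", "data", "source", "method", "explain"}
SUPPORT_CUES = {"sorry", "hope", "congrats", "thanks", "help", "luck"}

def score_message(text):
    """Return (knowledge, support) scores for one message."""
    tokens = set(text.lower().split())
    return len(tokens & KNOWLEDGE_CUES), len(tokens & SUPPORT_CUES)

def build_networks(messages):
    """messages: iterable of (sender, receiver, text) triples.
    Returns {edge: [knowledge_weight, support_weight, frequency]}."""
    edges = defaultdict(lambda: [0, 0, 0])
    for u, v, text in messages:
        k, s = score_message(text)
        edge = tuple(sorted((u, v)))
        edges[edge][0] += k
        edges[edge][1] += s
        edges[edge][2] += 1
    return edges

def knowledge_diversity(edges, node):
    """Shannon entropy of a node's knowledge weights across its ties:
    higher means knowledge exchange is spread over more contacts."""
    w = [k for e, (k, _, _) in edges.items() if node in e and k > 0]
    total = sum(w)
    return -sum(x / total * math.log(x / total) for x in w) if total else 0.0
```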

The 15-Minute City Quantified Using Mobility Data


Paper by Timur Abbiasov et al: “Americans travel 7 to 9 miles on average for shopping and recreational activities, far beyond the 15-minute (walking) city advocated by ecologically-oriented urban planners. This paper provides a comprehensive analysis of local trip behavior in US cities using GPS data on individual trips from 40 million mobile devices. We define local usage as the share of trips made within 15 minutes’ walking distance from home, and find that the median US city resident makes only 12% of their daily trips within such a short distance. We find that differences in access to local services can explain 80 percent of the variation in 15-minute usage across metropolitan areas and 74 percent of the variation in usage within metropolitan areas. Differences in historic zoning permissiveness within New York suggest a causal link between access and usage, and that less restrictive zoning rules, such as permitting more mixed-use development, would lead to shorter travel times. Finally, we document a strong correlation between local usage and experienced segregation for poorer, but not richer, urbanites, which suggests that 15-minute cities may also exacerbate the social isolation of marginalized communities…(More)”.
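The paper's headline metric is straightforward to sketch. The version below approximates "15 minutes' walk" as a straight-line (haversine) radius around home; the walking speed, function names, and the straight-line shortcut are assumptions, not the authors' exact method, which would rely on network walking distances.

```python
import math

WALK_SPEED_KM_H = 4.8                  # assumed average walking speed
RADIUS_KM = WALK_SPEED_KM_H * 15 / 60  # ~1.2 km reachable in 15 minutes

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def local_usage_share(home, destinations):
    """Share of a device's trips ending within a 15-minute walk of home.
    home: (lat, lon); destinations: list of (lat, lon) trip endpoints."""
    if not destinations:
        return 0.0
    local = sum(haversine_km(*home, *d) <= RADIUS_KM for d in destinations)
    return local / len(destinations)
```

Aggregating this share across devices in a city yields the kind of statistic the paper reports, such as the 12% median for US city residents.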

Smart City Technologies: A Political Economy Introduction to Their Governance Challenges


Paper by Beatriz Botero Arcila: “Smart cities and smart city technologies are terms used to refer to computational models of urbanism and to data-driven, algorithmically intermediated technologies. Smart city technologies are intended to plan for and deliver new efficiencies, insights, and conveniences in city services. At the same time, when these tools are involved in decision-making processes that don’t have right or wrong mathematical answers, they present important challenges related to cementing inequality, discrimination, and surveillance. This chapter is an introduction to the governance challenges smart city technologies pose. It includes an overview of the literature, focusing on the risks these technologies pose, and a case study of surveillance technologies as an example of the adoption and diffusion patterns of smart city technologies. The chapter takes a political economy approach to smart city technologies, emphasizing that their adoption, development, and diffusion patterns are a function of institutional, market, and ideological dynamics. Such an approach should allow scholars and policymakers to find points of intervention at the level of the institutions and infrastructures that sustain the current shape of these technologies, in order to address and prevent some of the risks and harms they create. It should help interested parties add nuance to binary analyses and identify the actors, institutions, and infrastructures where intervention can shape these technologies’ effects and create change. It should also help those working on developing these tools to imagine how institutions and infrastructures must be shaped to realize their benefits…(More)”.

Is bigger better? A study of the effect of group size on collective intelligence in online groups


Paper by Nada Hashmi, G. Shankaranarayanan and Thomas W. Malone: “What is the optimal size for online groups that use electronic communication and collaboration tools? Previous research typically suggested optimal group sizes of about 5 to 7 members, but this research predominantly examined in-person groups. Here we investigate online groups whose members communicate with each other using two electronic collaboration tools: text chat and shared editing. Unlike previous research that studied groups performing a single task, here we measure group performance using a test of collective intelligence (CI) that includes a combination of tasks specifically chosen to predict performance on a wide range of other tasks [72]. Our findings suggest that there is a curvilinear relationship between group size and performance and that the optimal group size in online groups is between 25 and 35. This, in turn, suggests that online groups may now allow more people to be productively involved in group decision-making than was possible with in-person groups in the past…(More)”.
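The curvilinear (inverted-U) relationship implies an interior optimum that can be estimated with a quadratic fit; the sketch below uses made-up observations, since the study's data are not reproduced here.

```python
import numpy as np

# Hypothetical (group size, CI score) observations for illustration.
sizes = np.array([5, 10, 15, 20, 25, 30, 35, 40, 50])
scores = np.array([0.42, 0.55, 0.63, 0.70, 0.74, 0.75, 0.72, 0.67, 0.55])

# Fit CI = a*size^2 + b*size + c; an inverted U shows up as a < 0,
# with the peak (optimal size) at -b / (2a).
a, b, c = np.polyfit(sizes, scores, deg=2)
print(f"curvature a = {a:.4f} (negative => inverted U)")
print(f"estimated optimal group size ≈ {-b / (2 * a):.1f}")
```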

All Eyes on Them: A Field Experiment on Citizen Oversight and Electoral Integrity


Paper by Natalia Garbiras-Díaz and Mateo Montenegro: “Can information and communication technologies help citizens monitor their elections? We analyze a large-scale field experiment designed to answer this question in Colombia. We leveraged Facebook advertisements sent to over 4 million potential voters to encourage citizen reporting of electoral irregularities. We also cross-randomized whether candidates were informed about the campaign in a subset of municipalities. Both total reports and evidence-backed reports increased substantially. Across a wide array of measures, electoral irregularities decreased. Finally, the reporting campaign reduced the vote share of candidates dependent on irregularities. This light-touch intervention is more cost-effective than monitoring efforts traditionally used by policymakers…(More)”.