Paper by Raphael Koster et al: “Building artificial intelligence (AI) that aligns with human values is an unsolved problem. Here we developed a human-in-the-loop research pipeline called Democratic AI, in which reinforcement learning is used to design a social mechanism that humans prefer by majority. A large group of humans played an online investment game that involved deciding whether to keep a monetary endowment or to share it with others for collective benefit. Shared revenue was returned to players under two different redistribution mechanisms, one designed by the AI and the other by humans. The AI discovered a mechanism that redressed initial wealth imbalance, sanctioned free riders and successfully won the majority vote. By optimizing for human preferences, Democratic AI offers a proof of concept for value-aligned policy innovation…(More)”.
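The voting step at the heart of this pipeline can be sketched in a few lines. The two redistribution rules below are illustrative stand-ins, not the mechanism the AI actually learned, and the endowments and contributions are invented numbers: players with unequal wealth contribute to a multiplied common pool, and each mechanism is judged by majority vote over the payoffs it yields.

```python
# Illustrative sketch of mechanism selection by majority vote.
# The redistribution rules and all numbers are hypothetical, not the paper's.

def egalitarian(endowments, contributions, multiplier=2.0):
    """Split the multiplied pool equally among all players."""
    pool = multiplier * sum(contributions)
    share = pool / len(contributions)
    return [e - c + share for e, c in zip(endowments, contributions)]

def proportional(endowments, contributions, multiplier=2.0):
    """Return shares of the pool in proportion to each player's contribution."""
    pool = multiplier * sum(contributions)
    total = sum(contributions) or 1.0
    return [e - c + pool * (c / total) for e, c in zip(endowments, contributions)]

def majority_vote(payoffs_a, payoffs_b):
    """Each player votes for the mechanism that pays them more; ties abstain."""
    votes_a = sum(pa > pb for pa, pb in zip(payoffs_a, payoffs_b))
    votes_b = sum(pb > pa for pa, pb in zip(payoffs_a, payoffs_b))
    return "A" if votes_a > votes_b else "B" if votes_b > votes_a else "tie"

endowments = [12.0, 8.0, 4.0, 2.0]    # unequal starting wealth
contributions = [1.0, 2.0, 4.0, 2.0]  # what each player shares with the pool
a = egalitarian(endowments, contributions)
b = proportional(endowments, contributions)
print(majority_vote(a, b))
```

With these numbers the egalitarian rule wins the vote three to one; the paper's learned mechanism is more sophisticated, but the same majority-preference criterion is what its pipeline optimizes.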
The People Versus The Algorithm: Stakeholders and AI Accountability
Paper by Jbid Arsenyan and Julia Roloff: “As artificial intelligence (AI) applications are used for a wide range of tasks, the question about who is responsible for detecting and remediating problems caused by AI applications remains disputed. We argue that responsibility attributions proposed by management scholars fail to enable a practical solution as two aspects are overlooked: the difficulty of designing a complex algorithm that does not produce adverse outcomes, and the conflict of interest inherent in some AI applications by design, as proprietors and users employ the application for different purposes. In this conceptual paper, we argue that effective accountability can only be delivered through solutions that enable stakeholders to employ their collective intelligence effectively in compiling problem reports and analyzing problem patterns. This allows stakeholders, including governments, to hold providers of AI applications accountable, and ensure that appropriate corrections are carried out in a timely manner…(More)”.
What Might Hannah Arendt Make of Big Data?: On Thinking, Natality, and Narrative with Big Data
Paper by Daniel Brennan: “…considers the phenomenon of Big Data through the work of Hannah Arendt on technology and on thinking. By exploring the nuance of Arendt’s critique of technology, and its relation to the social and political spheres of human activity, the paper presents a case for considering the richness of Arendt’s thought for approaching moral questions of Big Data. The paper argues that the nuances of Arendt’s writing contribute a sceptical, yet also hopeful lens on the moral potential of Big Data. The scepticism is due to the potential of Big Data to reduce humans to a calculable, and thus manipulatable, entity. Such warnings are rife throughout Arendt’s oeuvre. The hope is found in the unique way that Arendt conceives of thinking, as having a conversation with oneself, unencumbered by ideological or fixed accounts of how things are, in a manner which challenges preconceived notions of the self and world. If thinking can be aided by Big Data, then there is hope for Big Data to contribute to the project of natality that characterises Arendt’s understanding of social progress. Ultimately, the paper contends that Arendt’s definition of what constitutes thinking is the mediator needed to make sense of the moral ambivalence surrounding Big Data. By focussing on Arendt’s account of the moral value of thinking, the paper provides an evaluative framework for interrogating uses of Big Data…(More)”.
Legislating Data Loyalty
Paper by Woodrow Hartzog and Neil M. Richards: “Lawmakers looking to embolden privacy law have begun to consider imposing duties of loyalty on organizations trusted with people’s data and online experiences. The idea behind loyalty is simple: organizations should not process data or design technologies that conflict with the best interests of trusting parties. But the logistics and implementation of data loyalty need to be developed if the concept is going to be capable of moving privacy law beyond its “notice and consent” roots to confront people’s vulnerabilities in their relationship with powerful data collectors.
In this short Essay, we propose a model for legislating data loyalty. Our model takes advantage of loyalty’s strengths—it is well-established in our law, it is flexible, and it can accommodate conflicting values. Our Essay also explains how data loyalty can embolden our existing data privacy rules, address emergent dangers, solve privacy’s problems around consent and harm, and establish an antibetrayal ethos as America’s privacy identity.
We propose that lawmakers use a two-step process to (1) articulate a primary, general duty of loyalty, then (2) articulate “subsidiary” duties that are more specific and sensitive to context. Subsidiary duties regarding collection, personalization, gatekeeping, persuasion, and mediation would target the most opportunistic contexts for self-dealing and result in flexible open-ended duties combined with highly specific rules. In this way, a duty of data loyalty is not just appealing in theory—it can be effectively implemented in practice just like the other duties of loyalty our law has recognized for hundreds of years. Loyalty is thus not only flexible, but it is capable of breathing life into America’s historically tepid privacy frameworks…(More)”.
Efficient and stable data-sharing in a public transit oligopoly as a coopetitive game
Paper by Qi Liu and Joseph Y.J. Chow: “In this study, various forms of data sharing are axiomatized. A new way of studying coopetition, especially data-sharing coopetition, is proposed. The problem of the Bayesian game with signal dependence on actions is observed, and a method to handle such dependence is proposed. We focus on fixed-route transit service markets. A discrete model is first presented to analyze the data-sharing coopetition of an oligopolistic transit market when an externality effect exists. Given a fixed data-sharing structure, a Bayesian game is used to capture the competition under uncertainty, while a coalition formation model is used to determine the stable data-sharing decisions. A new method of composite coalition is proposed to study efficient markets. An alternative continuous model is proposed to handle large networks using simulation. We apply these models to various types of networks. Test results show that perfect information may lead to perfect selfishness. Sharing more data does not necessarily improve transit service for all groups, at least if transit operators remain non-cooperative. Service complementarity does not necessarily guarantee a grand data-sharing coalition. These results can provide insights for policy-making, such as whether city authorities should enforce compulsory data-sharing along with cooperation between operators or set up a voluntary data-sharing platform…(More)”.
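The coalition-stability question the paper studies can be illustrated with a toy example. The operators, coalition values, and equal-split payoff rule below are invented for illustration and are not the paper's transit model; the sketch only shows the kind of check a coalition formation model performs, namely whether any operator would rather keep its data to itself.

```python
# Illustrative sketch of a data-sharing coalition game among three transit
# operators. v[S] is the (made-up) value a coalition S of mutual data-sharers
# generates; a coalition is "individually stable" here if no member does
# better by sharing with no one.

v = {
    frozenset({"A"}): 3.0,
    frozenset({"B"}): 2.0,
    frozenset({"C"}): 1.0,
    frozenset({"A", "B"}): 6.0,
    frozenset({"A", "C"}): 5.0,
    frozenset({"B", "C"}): 4.0,
    frozenset({"A", "B", "C"}): 9.0,
}

def equal_split(coalition):
    """Each member's payoff under an equal split of the coalition's value."""
    return v[frozenset(coalition)] / len(coalition)

def individually_stable(coalition):
    """True if no member prefers operating (and keeping its data) alone."""
    return all(equal_split(coalition) >= v[frozenset({i})] for i in coalition)

print(individually_stable({"A", "B", "C"}))  # grand coalition: 3.0 each
print(individually_stable({"A", "C"}))       # A gets 2.5 < 3.0 alone, so unstable
```

Even in this toy version the paper's point is visible: a pairwise sharing arrangement that looks complementary ({"A", "C"} is worth more than its parts combined's stand-alone values suggest) can still fail to form because one operator's equal share falls below its outside option.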
Moral Expansiveness Around the World: The Role of Societal Factors Across 36 Countries
Paper by Kelly Kirkland et al: “What are the things that we think matter morally, and how do societal factors influence this? To date, research has explored several individual-level and historical factors that influence the size of our ‘moral circles.’ There has, however, been less attention focused on which societal factors play a role. We present the first multi-national exploration of moral expansiveness—that is, the size of people’s moral circles across countries. We found that low generalized trust, greater perceptions of a breakdown in the social fabric of society, and greater perceived economic inequality were associated with smaller moral circles. Generalized trust also helped explain the effects of perceived inequality on lower levels of moral inclusiveness. Other inequality indicators (i.e., Gini coefficients) were, however, unrelated to moral expansiveness. These findings suggest that societal factors, especially those associated with generalized trust, may influence the size of our moral circles…(More)”.
The Truth in Fake News: How Disinformation Laws Are Reframing the Concepts of Truth and Accuracy on Digital Platforms
Paper by Paolo Cavaliere: “The European Union’s (EU) strategy to address the spread of disinformation, and most notably the Code of Practice on Disinformation and the forthcoming Digital Services Act, tasks digital platforms with a range of actions to minimise the distribution of issue-based and political adverts that are verifiably false or misleading. This article discusses the implications of the EU’s approach with a focus on its categorical approach, specifically what it means to conceptualise disinformation as a form of advertisement and by what standards digital platforms are expected to assess the truthful or misleading nature of the content they distribute because of this categorisation. The analysis will show how the emerging EU anti-disinformation framework marks a departure from the European Court of Human Rights’ consolidated standards of review for public interest and commercial speech and the tests utilised to assess their accuracy….(More)”.
Non-human humanitarianism: when ‘AI for good’ can be harmful
Paper by Mirca Madianou: “Artificial intelligence (AI) applications have been introduced in humanitarian operations in order to help with the significant challenges the sector is facing. This article focuses on chatbots, which have been proposed as an efficient method to improve communication with, and accountability to, affected communities. Chatbots, together with other humanitarian AI applications such as biometrics, satellite imaging, predictive modelling and data visualisations, are often understood as part of the wider phenomenon of ‘AI for social good’. The article develops a decolonial critique of humanitarianism and critical algorithm studies which focuses on the power asymmetries underpinning both humanitarianism and AI. The article asks whether chatbots, as exemplars of ‘AI for good’, reproduce inequalities in the global context. Drawing on a mixed-methods study that includes interviews with seven groups of stakeholders, the analysis observes that humanitarian chatbots do not fulfil claims such as ‘intelligence’. Yet AI applications still have powerful consequences. Apart from the risks associated with misinformation and data safeguarding, chatbots reduce communication to its barest instrumental forms, which creates disconnects between affected communities and aid agencies. This disconnect is compounded by the extraction of value from data and experimentation with untested technologies. By reflecting the values of their designers and by asserting Eurocentric values in their programmed interactions, chatbots reproduce the coloniality of power. The article concludes that ‘AI for good’ is an ‘enchantment of technology’ that reworks the colonial legacies of humanitarianism whilst also occluding the power dynamics at play…(More)”.
Opportunities and challenges of using social media big data to assess mental health consequences of the COVID-19 crisis and future major events
Paper by Martin Tušl et al: “The present commentary discusses how social media big data could be used in mental health research to assess the impact of major global crises such as the COVID-19 pandemic. We first provide a brief overview of the COVID-19 situation and the challenges associated with the assessment of its global impact on mental health using conventional methods. We then propose social media big data as a possible unconventional data source, provide illustrative examples of previous studies, and discuss the advantages and challenges associated with their use for mental health research. We conclude that social media big data represent a valuable resource for mental health research; however, several methodological limitations and ethical concerns need to be addressed to ensure safe use…(More)”.
Parallel Worlds: Revealing the Inequity of Access to Urban Spaces in Mexico City Through Mobility Data
Paper by Emmanuel Letouzé et al: “The near-ubiquitous use of mobile devices generates mobility data that can paint pictures of urban behavior at unprecedented levels of granularity and complexity. In the current period of intense sociopolitical polarization, mobility data can help reveal which urban spaces serve to attenuate or accentuate socioeconomic divides. If urban spaces served to bridge class divides, people from different socioeconomic groups would be prone to mingle in areas further removed from their homes, creating opportunities for sharing experiences in the physical world. In an opposing scenario, people would remain among neighbors and peers, creating “local urban bubbles” that reflect and reinforce social inequities and their adverse effects on social mixity, cohesion, and trust. These questions are especially salient in cities with high levels of socioeconomic inequality, such as Mexico City.
Building on a joint research project between Data-Pop Alliance and Oxfam Mexico titled “Mundos Paralelos” [Parallel Worlds], this paper leverages privacy-preserving mobility data to unveil the unequal use and appropriation of urban spaces by the inhabitants of Mexico City. This joint research harnesses a year (2018–2019) of anonymized mobility data to perform mobility and behavioral analysis of specific groups at high spatial resolution. Its main findings suggest that Mexico City is a spatially fragmented, even segregated city: although distinct socioeconomic groups do meet in certain spaces, a pattern emerges where certain points of interest are exclusive to the high- and low-income groups analyzed in this paper. The results demonstrate that spatial inequality in Mexico City is marked by unequal access to government services and cultural sites, which translates into unequal experiences of urban life and biased access to the city. The paper concludes with a series of public policy recommendations to foster a more equitable and inclusive appropriation of public space…(More)”.