Paper by Igor Calzada: “In an era of rapid technological advancement, decisions about the ownership and governance of emerging technologies like Artificial Intelligence (AI) will shape the future of both urban and rural environments in the Global North and South. This article explores how AI can move beyond the noise of algorithms by adopting a technological humanistic approach to enable Social Innovation (SI), focusing on global inequalities and digital justice. Using a fieldwork Action Research methodology, based on the Smart Rural Communities project in Colombia and Mozambique, the study develops a framework for integrating AI with SI. Drawing on insights from the AI4SI International Summer School held in Donostia-San Sebastián in 2024, the article examines the role of decentralized Web3 technologies—such as Blockchain, Decentralized Autonomous Organizations, and Data Cooperatives—in enhancing data sovereignty and fostering inclusive and participatory governance. The results demonstrate how decentralization can empower marginalized communities in the Global South by promoting digital justice and addressing the imbalance of power in digital ecosystems. The conclusion emphasizes the potential for AI and decentralized technologies to bridge the digital divide, offering practical recommendations for scaling these innovations to support equitable, community-driven governance and address systemic inequalities across the Global North and South…(More)”.
The ABC’s of Who Benefits from Working with AI: Ability, Beliefs, and Calibration
Paper by Andrew Caplin: “We use a controlled experiment to show that ability and belief calibration jointly determine the benefits of working with Artificial Intelligence (AI). AI improves performance more for people with low baseline ability. However, holding ability constant, AI assistance is more valuable for people who are calibrated, meaning they have accurate beliefs about their own ability. People who know they have low ability gain the most from working with AI. In a counterfactual analysis, we show that eliminating miscalibration would cause AI to reduce performance inequality nearly twice as much as it already does…(More)”.
Orphan Articles: The Dark Matter of Wikipedia
Paper by Akhil Arora, Robert West, Martin Gerlach: “With 60M articles in more than 300 language versions, Wikipedia is the largest platform for open and freely accessible knowledge. While the available content has been growing continuously at a rate of around 200K new articles each month, very little attention has been paid to the accessibility of the content. One crucial aspect of accessibility is the integration of hyperlinks into the network so that articles are visible to readers navigating Wikipedia. In order to understand this phenomenon, we conduct the first systematic study of orphan articles, which are articles without any incoming links from other Wikipedia articles, across 319 different language versions of Wikipedia. We find that a surprisingly large share of content, roughly 15% (8.8M) of all articles, is de facto invisible to readers navigating Wikipedia, and thus, rightfully term orphan articles as the dark matter of Wikipedia. We also provide causal evidence through a quasi-experiment that adding new incoming links to orphans (de-orphanization) leads to a statistically significant increase in their visibility in terms of the number of pageviews. We further highlight the challenges faced by editors in de-orphanizing articles, demonstrate the need to support them in addressing this issue, and provide potential solutions for developing automated tools based on cross-lingual approaches. Overall, our work not only unravels a key limitation in the link structure of Wikipedia and quantitatively assesses its impact, but also provides a new perspective on the challenges of maintenance associated with content creation at scale in Wikipedia…(More)”.
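The definition of an orphan article above is purely structural: an article with zero incoming links from other articles. A minimal sketch of how one might detect orphans from a link table, using invented article titles and links rather than real Wikipedia data:

```python
# Minimal sketch: find "orphan" articles, i.e. articles with no incoming
# links from other articles. The titles and links are illustrative only.

def find_orphans(articles, links):
    """Return the set of articles with zero incoming links.

    articles: iterable of article titles
    links: iterable of (source, target) pairs, one per hyperlink;
           self-links are ignored, matching the structural definition.
    """
    has_incoming = {target for source, target in links if source != target}
    return {a for a in articles if a not in has_incoming}

articles = ["A", "B", "C", "D"]
links = [("A", "B"), ("B", "C"), ("C", "B")]  # nothing links to A or D
print(sorted(find_orphans(articles, links)))  # → ['A', 'D']
```

At Wikipedia scale the same computation would run over the full pagelinks table of each language version rather than an in-memory list, but the structural criterion is unchanged.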
Who Owns AI?
Paper by Amy Whitaker: “While artificial intelligence (AI) stands to transform artistic practice and creative industries, little has been theorized about who owns AI for creative work. Lawsuits brought against AI companies such as OpenAI and Meta under copyright law invite novel reconsideration of the value of creative work. This paper synthesizes across copyright, hybrid practice, and cooperative governance to work toward collective ownership and decision-making. This paper adds to research in arts entrepreneurship because copyright and shared value are so vital to the livelihood of working artists, including writers, filmmakers, and others in the creative industries. Sarah Silverman’s lawsuit against OpenAI is used as the main case study. The conceptual framework of material and machine, one and many, offers a lens onto value creation and shared ownership of AI. The framework includes a reinterpretation of the fourth factor of fair use under U.S. copyright law to refocus on the doctrinal language of value. AI uses the entirety of creative work in a way that is overlooked because of the small scale of one whole work relative to the overall size of the AI model. Yet a theory of value for creative work gives it dignity in its smallness, the way that one vote still has dignity in a national election of millions. As we navigate these frontiers of AI, experimental models pioneered by artists may be instructive far outside the arts…(More)”.
AI-enhanced collective intelligence
Paper by Hao Cui and Taha Yasseri: “Current societal challenges exceed the capacity of humans operating either alone or collectively. As AI evolves, its role within human collectives will vary from an assistive tool to a participatory member. Humans and AI possess complementary capabilities that, together, can surpass the collective intelligence of either humans or AI in isolation. However, the interactions in human-AI systems are inherently complex, involving intricate processes and interdependencies. This review incorporates perspectives from complex network science to conceptualize a multilayer representation of human-AI collective intelligence, comprising cognition, physical, and information layers. Within this multilayer network, humans and AI agents exhibit varying characteristics; humans differ in diversity from surface-level to deep-level attributes, while AI agents range in degrees of functionality and anthropomorphism. We explore how agents’ diversity and interactions influence the system’s collective intelligence and analyze real-world instances of AI-enhanced collective intelligence. We conclude by considering potential challenges and future developments in this field…(More)” See also: Where and When AI and CI Meet: Exploring the Intersection of Artificial and Collective Intelligence
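The multilayer idea in the abstract can be made concrete with a toy representation: the same human and AI agents appear in every layer, while the edges differ per layer. The layer names follow the abstract; the agents and edges below are invented examples, not data from the review:

```python
# Toy sketch of a multilayer human-AI network: shared node set, per-layer edges.
agents = {"h1": "human", "h2": "human", "ai1": "AI"}
layers = {
    "cognition":   {("h1", "h2"), ("h1", "ai1")},  # e.g. shared mental models
    "physical":    {("h1", "h2")},                 # e.g. co-located work
    "information": {("h1", "ai1"), ("h2", "ai1")}, # e.g. data exchange
}

def degree(node, edges):
    """Count edges in one layer that touch the given node."""
    return sum(node in edge for edge in edges)

# An agent's connectivity can differ sharply across layers, one way the
# pattern of interactions shapes the system's collective intelligence.
for name, edges in sorted(layers.items()):
    print(name, degree("ai1", edges))
```

Here the AI agent is well connected in the information layer but absent from the physical one, the kind of layer-dependent role the review's framework is built to capture.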
The Deletion Remedy
Paper by Daniel Wilf-Townsend: “A new remedy has emerged in the world of technology governance. Where someone has wrongfully obtained or used data, this remedy requires them not only to delete that data, but also to delete tools such as machine learning models that they have created using the data. Model deletion, also called algorithmic disgorgement or algorithmic destruction, has been increasingly sought in both private litigation and public enforcement actions. As its proponents note, model deletion can improve the regulation of privacy, intellectual property, and artificial intelligence by providing more effective deterrence and better management of ongoing harms.
But, this article argues, model deletion has a serious flaw. In its current form, it risks being a grossly disproportionate penalty. Model deletion requires the destruction of models whose training included illicit data in any degree, with no consideration of how much (or even whether) that data contributed to any wrongful gains or ongoing harms. Model deletion could thereby cause unjust losses in litigation and chill useful technologies.
This article works toward a well-balanced doctrine of model deletion by building on the remedy’s equitable origins. It identifies how traditional considerations in equity—such as a defendant’s knowledge and culpability, the balance of the hardships, and the availability of more tailored alternatives—can be applied in model deletion cases to mitigate problems of disproportionality. By accounting for proportionality, courts and agencies can develop a doctrine of model deletion that takes advantage of its benefits while limiting its potential excesses…(More)”.
China: Autocracy 2.0
Paper by David Y. Yang: “Autocracy 2.0, exemplified by modern China, is economically robust, technologically advanced, globally engaged, and controlled through subtle and sophisticated methods. What defines China’s political economy, and what drives Autocracy 2.0? What is its future direction? I start by discussing two key challenges autocracies face: incentives and information. I then describe Autocracy 1.0’s reliance on fear and repression to address these issues. It makes no credible promises, using coercion for compliance, resulting in a low-information environment. Next, I introduce Autocracy 2.0, highlighting its significant shift in handling commitment and information challenges. China uses economic incentives to align interests with regime survival, fostering support. It employs advanced bureaucratic structures and technology to manage incentives and information, enabling success in a high-information environment. Finally, I explore Autocracy 3.0’s potential. In China, forces might revert to Autocracy 1.0, using technology for state control as growth slows but aspirations stay high. Globally, modern autocracies, led by China, are becoming major geopolitical forces, challenging the liberal democratic order…(More)”.
Harnessing digital footprint data for population health: a discussion on collaboration, challenges and opportunities in the UK
Paper by Romana Burgess et al: “Digital footprint data are inspiring a new era in population health and well-being research. Linking these novel data with other datasets is critical for future research wishing to use these data for the public good. For such efforts to succeed, collaboration among industry, academia and policy-makers is vital. Therefore, we discuss the benefits and obstacles for these stakeholder groups in using digital footprint data for research in the UK. We advocate for policy-makers’ inclusion in research efforts, stress the exceptional potential of digital footprint research to impact policy-making and explore the role of industry as data providers, with a focus on shared value, commercial sensitivity, resource requirements and streamlined processes. We underscore the importance of multidisciplinary approaches, consumer trust and ethical considerations in navigating methodological challenges and further call for increased public engagement to enhance societal acceptability. Finally, we discuss how to overcome methodological challenges, such as reproducibility and sharing of learnings, in future collaborations. By adopting a multiperspective approach to outlining the challenges of working with digital footprint data, our contribution helps to ensure that future research can navigate these challenges effectively while remaining reproducible, ethical and impactful…(More)”.
What roles can democracy labs play in co-creating democratic innovations for sustainability?
Article by Inês Campos et al: “This perspective essay proposes Democracy Labs as new processes for developing democratic innovations that help tackle complex socio-ecological challenges within an increasingly unequal and polarised society, against the backdrop of democratic backsliding. Next to the current socio-ecological crisis, rapid technological innovations present both opportunities and challenges for democracy and call for democratic innovations. These innovations (e.g., mini-publics, collaborative governance and e-participation) offer alternative mechanisms for democratic participation and new forms of active citizenship, as well as new feedback mechanisms between citizens and traditional institutions of representative democracy. This essay thus introduces Democracy Labs as citizen-centred processes for co-creating democratic innovations to inspire future transdisciplinary research and practice for a more inclusive and sustainable democracy. The approach is illustrated with examples from a Democracy Lab in Lisbon, reflecting on requirements for recruiting participants, the relevance of combining sensitising, reflection and ideation stages, and the importance of careful communication and facilitation processes guiding participants through co-creation activities…(More)”.
Synthetic Data and Social Science Research
Paper by Jordan C. Stanley & Evan S. Totty: “Synthetic microdata – data retaining the structure of original microdata while replacing original values with modeled values for the sake of privacy – presents an opportunity to increase access to useful microdata for data users while meeting the privacy and confidentiality requirements for data providers. Synthetic data could be sufficient for many purposes, but lingering accuracy concerns could be addressed with a validation system through which the data providers run the external researcher’s code on the internal data and share cleared output with the researcher. The U.S. Census Bureau has experience running such systems. In this chapter, we first describe the role of synthetic data within a tiered data access system and the importance of synthetic data accuracy in achieving a viable synthetic data product. Next, we review results from a recent set of empirical analyses we conducted to assess accuracy in the Survey of Income & Program Participation (SIPP) Synthetic Beta (SSB), a Census Bureau product that made linked survey-administrative data publicly available. Given this analysis and our experience working on the SSB project, we conclude with thoughts and questions regarding future implementations of synthetic data with validation…(More)”.
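The core mechanism in the abstract, keeping the structure of microdata while replacing original values with modeled ones, can be illustrated with a deliberately simple sketch. A per-column Gaussian is a toy stand-in for the far richer synthesis models agencies actually use, and the column names are invented, not taken from the SIPP Synthetic Beta:

```python
# Toy sketch of synthetic microdata: same record structure, modeled values.
# Assumes numeric columns and an independent Gaussian per column; real
# synthesizers model joint distributions and much more complex data.
import random
import statistics

def synthesize(records, columns, seed=0):
    rng = random.Random(seed)
    # Fit a per-column Gaussian to the original microdata.
    params = {}
    for col in columns:
        values = [r[col] for r in records]
        params[col] = (statistics.mean(values), statistics.pstdev(values))
    # Emit synthetic records: identical schema and size, modeled values.
    return [{col: rng.gauss(*params[col]) for col in columns}
            for _ in records]

original = [{"income": 40_000 + 1_000 * i, "hours": 30 + i} for i in range(50)]
synthetic = synthesize(original, ["income", "hours"])
print(len(synthetic), sorted(synthetic[0]))  # same size and schema, new values
```

The accuracy question the chapter focuses on is exactly whether analyses run on `synthetic` reproduce results from `original`; the validation system it describes lets researchers check cleared output against the internal data when they do not.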