Paper by Longchao Da: “The integration of generative artificial intelligence (GenAI) into transportation planning has the potential to revolutionize tasks such as demand forecasting, infrastructure design, policy evaluation, and traffic simulation. However, there is a critical need for a systematic framework to guide the adoption of GenAI in this interdisciplinary domain. In this survey, we, a multidisciplinary team of researchers spanning computer science and transportation engineering, present the first comprehensive framework for leveraging GenAI in transportation planning. Specifically, we introduce a new taxonomy that categorizes existing applications and methodologies into two perspectives: transportation planning tasks and computational techniques. From the transportation planning perspective, we examine the role of GenAI in automating descriptive, predictive, generative simulation, and explainable tasks to enhance mobility systems. From the computational perspective, we detail advancements in data preparation, domain-specific fine-tuning, and inference strategies such as retrieval-augmented generation and zero-shot learning tailored to transportation applications. Additionally, we address critical challenges, including data scarcity, explainability, bias mitigation, and the development of domain-specific evaluation frameworks that align with transportation goals like sustainability, equity, and system efficiency. This survey aims to bridge the gap between traditional transportation planning methodologies and modern AI techniques, fostering collaboration and innovation. By addressing these challenges and opportunities, we seek to inspire future research that ensures ethical, equitable, and impactful use of generative AI in transportation planning…(More)”.
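The inference strategies the survey highlights, such as retrieval-augmented generation, pair a retriever over planning documents with a generative model. Below is a minimal, hypothetical sketch of the retrieval step, using TF-IDF over an invented corpus of planning documents; the generation step is left as a stub rather than a call to any particular model.

```python
# Minimal sketch of the retrieval step in a retrieval-augmented generation
# (RAG) pipeline for transportation planning documents. The corpus and the
# generate() stub are hypothetical; a deployed system would call an LLM here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Zoning update: mixed-use density along the Route 9 bus corridor.",
    "2023 household travel survey: mode share and trip generation rates.",
    "Capital plan: protected bike lanes and signal retiming downtown.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by TF-IDF cosine similarity to the query."""
    vec = TfidfVectorizer().fit(docs + [query])
    doc_mat, q_vec = vec.transform(docs), vec.transform([query])
    scores = cosine_similarity(q_vec, doc_mat)[0]
    return [docs[i] for i in scores.argsort()[::-1][:k]]

def generate(query: str, context: list[str]) -> str:
    # Stub: a real system would send this prompt to a generative model.
    return f"Q: {query}\nContext:\n" + "\n".join(f"- {c}" for c in context)

question = "What bus corridor improvements are planned?"
print(generate(question, retrieve(question, corpus)))
```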
AI-Facilitated Collective Judgements
Article by Manon Revel and Théophile Pénigaud: “This article unpacks the design choices behind longstanding and newly proposed computational frameworks aimed at finding common ground across collective preferences and examines their potential future impacts, both technically and normatively. It begins by situating AI-assisted preference elicitation within the historical role of opinion polls, emphasizing that preferences are shaped by the decision-making context and are seldom objectively captured. With that caveat in mind, we explore AI-facilitated collective judgment as a discovery tool for fostering reasonable representations of a collective will, sense-making, and agreement-seeking. At the same time, we caution against dangerously misguided uses, such as enabling binding decisions, fostering gradual disempowerment, or post-rationalizing political outcomes…(More)”.
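As a rough illustration of the agreement-seeking frameworks the article examines, the toy sketch below scores statements by their lowest approval rate across opinion groups, so that only statements endorsed across groups rank highly. Both the votes and the scoring rule are invented for illustration and are not taken from the article.

```python
# Stylized sketch of one agreement-seeking design choice: score candidate
# statements by their *minimum* approval rate across opinion groups, so a
# statement ranks highly only with cross-group support. Votes are invented.
votes = {  # statement -> {group: [1 approve / 0 reject, per member]}
    "Expand weekend service": {"A": [1, 1, 1, 0], "B": [1, 1, 0, 0]},
    "Cut all fares":          {"A": [1, 1, 1, 1], "B": [0, 0, 0, 1]},
}

def bridging_score(by_group: dict[str, list[int]]) -> float:
    """Lowest per-group approval rate: high only with cross-group support."""
    return min(sum(v) / len(v) for v in by_group.values())

for stmt in sorted(votes, key=lambda s: -bridging_score(votes[s])):
    print(f"{bridging_score(votes[stmt]):.2f}  {stmt}")
```

The design choice matters: averaging across all voters would reward a statement one group loves and the other rejects, which is exactly what the article's agreement-seeking framing tries to avoid.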
Artificial intelligence for digital citizen participation: Design principles for a collective intelligence architecture
Paper by Nicolas Bono Rossello, Anthony Simonofski, and Annick Castiaux: “The challenges posed by digital citizen participation and the amount of data generated by Digital Participation Platforms (DPPs) create an ideal context for the implementation of Artificial Intelligence (AI) solutions. However, current AI solutions in DPPs focus mainly on technical challenges, often neglecting their social impact and not fully exploiting AI’s potential to empower citizens. The goal of this paper is thus to investigate how to design digital participation platforms that integrate technical AI solutions while considering the social context in which they are implemented. Using Collective Intelligence as kernel theory, and through a literature review and a focus group, we generate design principles for the development of a socio-technically aware AI architecture. These principles are then validated by experts from the field of AI and citizen participation. The principles suggest optimizing the alignment of AI solutions with project goals, ensuring their structured integration across multiple levels, enhancing transparency, monitoring AI-driven impacts, dynamically allocating AI actions, empowering users, and balancing cognitive disparities. These principles provide a theoretical basis for future AI-driven artifacts and theories in digital citizen participation…(More)”.
Bridging the Data Provenance Gap Across Text, Speech and Video
Paper by Shayne Longpre et al: “Progress in AI is driven largely by the scale and quality of training data. Despite this, there is a deficit of empirical analysis examining the attributes of well-established datasets beyond text. In this work we conduct the largest and first-of-its-kind longitudinal audit across modalities (popular text, speech, and video datasets), from their detailed sourcing trends and use restrictions to their geographical and linguistic representation. Our manual analysis covers nearly 4000 public datasets from 1990 to 2024, spanning 608 languages, 798 sources, 659 organizations, and 67 countries. We find that multimodal machine learning applications have overwhelmingly turned to web-crawled, synthetic, and social media platforms, such as YouTube, for their training sets, eclipsing all other sources since 2019. Secondly, tracing the chain of dataset derivations, we find that while less than 33% of datasets are restrictively licensed, over 80% of the source content in widely used text, speech, and video datasets carries non-commercial restrictions. Finally, counter to the rising number of languages and geographies represented in public AI training datasets, our audit demonstrates that measures of relative geographical and multilingual representation have failed to significantly improve their coverage since 2013. We believe the breadth of our audit enables us to empirically examine trends in data sourcing, restrictions, and Western-centricity at an ecosystem level, and that visibility into these questions is essential to progress in responsible AI. As a contribution to ongoing improvements in dataset transparency and responsible use, we release our entire multimodal audit, allowing practitioners to trace data provenance across text, speech, and video…(More)”.
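The paper's headline divergence, under 33% of datasets restrictively licensed yet over 80% of source content under non-commercial terms, can be reproduced in miniature: dataset-level and content-weighted shares answer different questions. The records below are invented; only the shape of the computation is meant to carry over.

```python
# Toy reproduction of the audit's two headline measures, on invented records:
# the share of *datasets* that are restrictively licensed versus the share of
# *source content* (weighted by volume) carrying non-commercial restrictions.
datasets = [
    # (name, license_restrictive, source_noncommercial, content_volume)
    ("corpus-a", False, True,  900),
    ("speech-b", False, True,  400),
    ("video-c",  True,  True,  300),
    ("corpus-d", False, False, 100),
]

restrictive_share = sum(d[1] for d in datasets) / len(datasets)
total_volume = sum(d[3] for d in datasets)
nc_content_share = sum(d[3] for d in datasets if d[2]) / total_volume

print(f"Restrictively licensed datasets: {restrictive_share:.0%}")      # 25%
print(f"Source content with non-commercial terms: {nc_content_share:.0%}")  # 94%
```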
Reconciling open science with technological sovereignty
Paper by C. Huang & L. Soete: “Historically, open science has been effective in facilitating knowledge sharing and in promoting and diffusing innovations. However, as a result of geopolitical tensions, technological sovereignty has recently been increasingly emphasized in various countries’ science and technology policymaking, posing a challenge to open science policy. In this paper, we argue that the European Union significantly benefits from and contributes to open science and should continue to support it. Similarly, China embraced foreign technologies and engaged in open science as its economy developed rapidly over the last 40 years. Today both economies could learn from each other in finding the right balance between open science and technological sovereignty, particularly given their very different policy experiences and the urgency of deploying new technologies to address grand challenges facing humankind, such as climate change…(More)”.
Nurturing innovation through intelligent failure: The art of failing on purpose
Paper by Alessandro Narduzzo and Valentina Forrer: “Failure, even in the context of innovation, is primarily conceived and experienced as an inevitable (e.g., innovation funnel) or unintended (e.g., unexpected drawbacks) outcome. This paper aims to provide a more systematic understanding of innovation failure by considering and problematizing the case of “intelligent failures”, namely experiments that are intentionally designed and implemented to explore technological and market uncertainty. We conceptualize intelligent failure through an epistemic perspective that recognizes its contribution to challenging and revising the organizational knowledge system. We also outline an original process model of intelligent failure that fully reveals its potential and distinctiveness in the context of learning from failure (i.e., failure as an outcome vs failure of expectations and initial beliefs), analyzing and comparing intended and unintended innovation failures. By positioning intelligent failure in the context of innovation and explaining its critical role in enhancing the ability of innovative firms to achieve breakthroughs, we identify important landmarks for practitioners in designing an intelligent failure approach to innovation…(More)”.
Artificial intelligence for modelling infectious disease epidemics
Paper by Moritz U. G. Kraemer et al: “Infectious disease threats to individual and public health are numerous, varied and frequently unexpected. Artificial intelligence (AI) and related technologies, which are already supporting human decision making in economics, medicine and social science, have the potential to transform the scope and power of infectious disease epidemiology. Here we consider the application to infectious disease modelling of AI systems that combine machine learning, computational statistics, information retrieval and data science. We first outline how recent advances in AI can accelerate breakthroughs in answering key epidemiological questions and we discuss specific AI methods that can be applied to routinely collected infectious disease surveillance data. Second, we elaborate on the social context of AI for infectious disease epidemiology, including issues such as explainability, safety, accountability and ethics. Finally, we summarize some limitations of AI applications in this field and provide recommendations for how infectious disease epidemiology can most effectively harness current and future developments in AI…(More)”.
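As a concrete example of the mechanistic models such AI systems would calibrate against surveillance data, here is a minimal SIR (susceptible-infected-recovered) simulation; the parameters are illustrative rather than fitted to any real outbreak.

```python
# Minimal SIR epidemic simulation via forward Euler integration: the kind of
# mechanistic model that AI systems can help calibrate to surveillance data.
# beta = transmission rate, gamma = recovery rate; values are illustrative.
def sir(beta: float, gamma: float, s0: float, i0: float,
        days: int, dt: float = 0.1) -> list[tuple[float, float, float]]:
    s, i, r = s0, i0, 1.0 - s0 - i0  # population fractions, summing to 1
    daily = []
    for step in range(int(days / dt)):
        ds = -beta * s * i            # new infections leave S
        di = beta * s * i - gamma * i  # infections enter I, recoveries leave
        s, i, r = s + ds * dt, i + di * dt, r + gamma * i * dt
        if step % int(1 / dt) == 0:   # record one sample per simulated day
            daily.append((s, i, r))
    return daily

trajectory = sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160)
peak_day, (s, i, r) = max(enumerate(trajectory), key=lambda t: t[1][1])
print(f"Peak prevalence {i:.1%} of the population around day {peak_day}")
```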
Moving Toward the FAIR-R principles: Advancing AI-Ready Data
Paper by Stefaan Verhulst, Andrew Zahuranec and Hannah Chafetz: “In today’s rapidly evolving AI ecosystem, making data ready for AI (optimized for training, fine-tuning, and augmentation) is more critical than ever. While the FAIR principles (Findability, Accessibility, Interoperability, and Reusability) have guided data management and open science, they do not inherently address AI-specific needs. Expanding FAIR to FAIR-R, incorporating Readiness for AI, could accelerate the responsible use of open data in AI applications that serve the public interest. This paper introduces the FAIR-R framework and identifies current efforts for enhancing AI-ready data through improved data labeling, provenance tracking, and new data standards. However, key challenges remain: How can data be structured for AI without compromising ethics? What governance models ensure equitable access? How can AI itself be leveraged to improve data quality? Answering these questions is essential for unlocking the full potential of AI-driven innovation while ensuring responsible and transparent data use…(More)”.
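To make "AI-ready" concrete, the sketch below imagines a metadata record carrying the familiar FAIR fields plus readiness signals such as labeling status, provenance lineage, and explicit training permissions. The field names are assumptions for illustration, not a schema published by the paper.

```python
# Hypothetical sketch of what a FAIR-R metadata record might capture: the
# familiar FAIR fields plus "-R" readiness signals. Field names are
# illustrative assumptions, not a standardized or published schema.
from dataclasses import dataclass, field

@dataclass
class FairRRecord:
    identifier: str                 # Findable: persistent identifier
    access_url: str                 # Accessible: retrieval endpoint
    schema: str                     # Interoperable: shared vocabulary/format
    license: str                    # Reusable: clear terms of use
    # Assumed "-R" readiness extensions:
    labeled: bool = False           # annotations suitable for training?
    provenance: list[str] = field(default_factory=list)  # source lineage
    training_permitted: bool = False  # license explicitly covers AI training

record = FairRRecord(
    identifier="doi:10.0000/example",
    access_url="https://data.example.org/transit",
    schema="schema.org/Dataset",
    license="CC-BY-4.0",
    labeled=True,
    provenance=["city open data portal", "manual geocoding pass"],
    training_permitted=True,
)
print(record)
```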
Presenting the StanDat database on international standards: improving data accessibility on marginal topics
Article by Solveig Bjørkholt: “This article presents an original database on international standards, constructed using modern data gathering methods. StanDat facilitates studies into the role of standards in the global political economy by (1) being a source for descriptive statistics, (2) enabling researchers to assess scope conditions of previous findings, and (3) providing data for new analyses, for example the exploration of the relationship between standardization and trade, as demonstrated in this article. The creation of StanDat aims to stimulate further research into the domain of standards. Moreover, by exemplifying data collection and dissemination techniques applicable to investigating less-explored subjects in the social sciences, it serves as a model for gathering, systematizing, and sharing data in areas where information is plentiful yet not readily accessible for research…(More)”.
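A minimal example of the standardization-and-trade analysis StanDat enables might look like the following; the yearly counts, trade index, and column layout are invented for illustration and are not drawn from the database itself.

```python
# Illustrative sketch of the kind of analysis StanDat supports: relate yearly
# counts of newly published international standards to a trade volume index.
# All numbers and the column layout are invented; StanDat's actual schema
# and coverage may differ. Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

years     = [2015, 2016, 2017, 2018, 2019]
standards = [1210, 1275, 1340, 1390, 1460]       # new standards per year (toy)
trade_idx = [100.0, 103.5, 108.2, 111.0, 115.9]  # trade volume index (toy)

r = correlation(standards, trade_idx)
print(f"Pearson r(standards, trade) = {r:.3f}")
```

A real analysis would of course control for confounders and lag structure; the point is only that a systematized database turns such questions into a few lines of code.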
Citizen participation and technology: lessons from the fields of deliberative democracy and science and technology studies
Paper by Julian “Iñaki” Goñi: “Calls for democratising technology are pervasive in current technological discourse. Indeed, participating publics have been mobilised as a core normative aspiration in Science and Technology Studies (STS), driven by a critical examination of “expertise”. In a sense, democratic deliberation became the answer to the question of responsible technological governance, and science and technology communication. On the other hand, calls for technifying democracy are ever more pervasive in deliberative democracy’s discourse. Many new digital tools (“civic technologies”) are shaping democratic practice while navigating a complex political economy. Moreover, Natural Language Processing and AI are providing novel alternatives for systematising large-scale participation, automating moderation, and setting up participation processes. In a sense, emerging digital technologies became the answer to the question of how to augment collective intelligence and reconnect deliberation to mass politics. In this paper, I explore the mutual shaping of (deliberative) democracy and technology (studies), highlighting that without careful consideration, both disciplines risk being reduced to superficial symbols in discourses inclined towards quick solutionism. This analysis highlights the current disconnect between Deliberative Democracy and STS, exploring the potential benefits of fostering closer links between the two fields. Drawing on STS insights, the paper argues that deliberative democracy could be enriched by a deeper engagement with the material aspects of democratic processes, the evolving nature of civic technologies through use, and a more critical approach to expertise. It also suggests that STS scholars would benefit from engaging more closely with democratic theory, which could enhance their analysis of public participation, bridge the gap between descriptive richness and normative relevance, and offer a more nuanced understanding of the inner functioning of political systems and politics in contemporary democracies…(More)”.
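The NLP alternatives the paper mentions often start by grouping large volumes of citizen input into themes so moderators and analysts can work at scale. Below is a minimal sketch on invented comments, using TF-IDF features and k-means; real deployments would use richer embeddings and keep humans in the loop, as the paper's cautions about solutionism suggest.

```python
# Minimal sketch of clustering large-scale participation input into themes.
# Comments are invented; TF-IDF + k-means is one simple choice among many,
# not the method of any particular civic technology platform.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

comments = [
    "We need safer bike lanes near schools",
    "Bike lanes on Main Street feel dangerous",
    "Please extend library opening hours",
    "Longer evening hours at the library would help workers",
]

X = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for cluster in range(2):
    members = [c for c, l in zip(comments, labels) if l == cluster]
    print(f"Theme {cluster}: {members}")
```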