Stefaan Verhulst
Article by Stefaan G. Verhulst, Hannah Chafetz, and Alex Fischer: “Asking questions in new and participatory ways can complement advancements in data science and AI while enabling more inclusive and more adaptive democracies…(More)”.
Yet a crisis, as the saying goes, always contains kernels of opportunity. Buried within our current dilemma—indeed, within one of the underlying causes of it—is a potential solution. Democracies are resilient and adaptive, not static. And importantly, data and artificial intelligence (AI), if implemented responsibly, can contribute to making them more resilient. Technologies such as AI-supported digital public squares and crowd-sourcing are examples of how generative AI and large language models can improve community connectivity, societal health, and public services. Communities can leverage these tools for democratic participation and democratizing information. Through this period of technological transition, policy makers and communities are imagining how digital technologies can better engage our collective intelligence…
Achieving this requires new tools and approaches, specifically the collective process of asking better questions.
Formulated inclusively, questions help establish shared priorities and impart focus, efficiency, and equity to public policy. For instance, school systems can identify indicators and patterns of experiences, such as low attendance rates, that signal a student is at risk of not completing school. However, they rarely ask the positive outlier question of what enables some at-risk students to overcome challenges and finish school. Is it a good relationship with a teacher, an after-school program, the support of a family member, or a combination of these and other factors? Asking outlier and orphan (i.e., overlooked and neglected) questions can help refocus programs and guide policies toward areas with the highest potential for impact.
Not asking the right questions can also have adverse effects. For example, many city governments have not asked whether and how people of different genders, in different age groups, or with different physical mobility needs experience local public transportation systems. Creating the necessary infrastructure for people with a variety of needs to travel safely and efficiently improves health and well-being. Questions like whether sidewalks are wide enough for strollers and whether there is sufficient public transport near schools can help spotlight areas for improvement, and show where age- or gender-disaggregated data is needed most…(More)”.
Toolkit by OECD: “…a comprehensive guide designed to help policymakers and public sector leaders translate principles for safe, secure, and trustworthy Artificial Intelligence (AI) into actionable policies. AI can help improve the efficiency of internal operations, the effectiveness of policymaking, the responsiveness of public services, and overall transparency and accountability. Recognising both the opportunities and risks posed by AI, this toolkit provides practical insights, shares good practices for the use of AI in and by the public sector, integrates ethical considerations, and provides an overview of G7 trends. It further showcases public sector AI use cases, detailing their benefits, as well as the implementation challenges faced by G7 members, together with the emerging policy responses to guide and coordinate the development, deployment, and use of AI in the public sector. The toolkit finally highlights key stages and factors characterising the journey of public sector AI solutions…(More)”
Article by Blake Ellis, et al: “The 80-year-old communications engineer from Texas had saved for decades, driving around in an old car and buying clothes from thrift stores so he’d have enough money to enjoy his retirement years.
But as dementia robbed him of his reasoning abilities, he began making online political donations over and over again — eventually telling his son he believed he was part of a network of political operatives communicating with key Republican leaders. In less than two years, the man became one of the country’s largest grassroots supporters of the Republican Party, ultimately giving away nearly half a million dollars to former President Donald Trump and other candidates. Now, the savings account he spent his whole life building is practically empty.
The story of this unlikely political benefactor is one of many playing out across the country.
More than 1,000 reports filed with government agencies and consumer advocacy groups reviewed by CNN, along with an analysis of campaign finance data and interviews with dozens of contributors and their family members, show how deceptive political fundraisers have victimized hundreds of elderly Americans and misled those battling dementia or other cognitive impairments into giving away millions of dollars — far more than they ever intended. Some unintentionally joined the ranks of the top grassroots political donors in the country as they tapped into retirement savings and went into debt, contributing six-figure sums through thousands of transactions…(More)”.
Paper by Steffen Knoblauch et al: “Urban mobility analysis using Twitter as a proxy has gained significant attention in various application fields; however, long-term validation studies are scarce. This paper addresses this gap by assessing the reliability of Twitter data for modeling inner-urban mobility dynamics over a 27-month period in the metropolitan area of Rio de Janeiro, Brazil. The evaluation involves the validation of Twitter-derived mobility estimates at both temporal and spatial scales, employing over 1.6 × 10¹¹ mobile phone records of around three million users during the non-stationary mobility period from April 2020 to June 2022, which coincided with the COVID-19 pandemic. The results highlight the need for caution when using Twitter for short-term modeling of urban mobility flows. Short-term inference can be influenced by Twitter policy changes and the availability of publicly accessible tweets. On the other hand, this long-term study demonstrates that employing multiple mobility metrics simultaneously, analyzing dynamic and static mobility changes concurrently, and employing robust preprocessing techniques such as rolling window downsampling can enhance the inference capabilities of Twitter data. These novel insights gained from a long-term perspective are vital, as Twitter – rebranded to X in 2023 – is extensively used by researchers worldwide to infer human movement patterns. Since conclusions drawn from studies using Twitter could be used to inform public policy, emergency response, and urban planning, evaluating the reliability of this data is of utmost importance…(More)”.
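The rolling-window downsampling the authors mention can be illustrated with a minimal sketch: smooth a noisy daily activity series with a rolling mean before aggregating it to a coarser temporal scale, so short-term platform fluctuations do not dominate the mobility signal. The series below is synthetic and the window sizes are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np
import pandas as pd

# Hypothetical daily tweet-count series for one city zone over the
# study period (synthetic data, not the paper's dataset).
rng = pd.date_range("2020-04-01", "2022-06-30", freq="D")
np.random.seed(0)
counts = pd.Series(100 + 20 * np.random.randn(len(rng)), index=rng)

# Rolling-window downsampling: a centred rolling mean dampens
# short-term noise (e.g. platform policy changes), then resampling
# keeps one value per week for longer-term trend analysis.
smoothed = counts.rolling(window=28, center=True, min_periods=14).mean()
weekly = smoothed.resample("W").mean()

print(weekly.head())
```

The window length trades responsiveness for robustness; a longer window suppresses more platform noise but blurs genuine short-term mobility changes, which is consistent with the paper's caution about short-term inference.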
Book by Bin Yu and Rebecca L. Barter: “Most textbooks present data science as a linear analytic process involving a set of statistical and computational techniques without accounting for the challenges intrinsic to real-world applications. Veridical Data Science, by contrast, embraces the reality that most projects begin with an ambiguous domain question and messy data; it acknowledges that datasets are mere approximations of reality while analyses are mental constructs.
Bin Yu and Rebecca Barter employ the innovative Predictability, Computability, and Stability (PCS) framework to assess the trustworthiness and relevance of data-driven results relative to the sources of uncertainty that arise throughout the data science life cycle, including the human decisions and judgment calls made during data collection, cleaning, and modeling. By providing real-world data case studies, intuitive explanations of common statistical and machine learning techniques, and supplementary R and Python code, Veridical Data Science offers a clear and actionable guide for conducting responsible data science. Requiring little background knowledge, this lucid, self-contained textbook provides a solid foundation and principled framework for future study of advanced methods in machine learning, statistics, and data science…(More)”.
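The stability principle at the heart of PCS can be sketched in a few lines: re-fit an analysis under perturbations of the data and check how much the conclusion moves. The toy regression below uses bootstrap resampling as the perturbation; the data, model, and interval are illustrative assumptions, not an example from the book.

```python
import numpy as np

# A minimal sketch of a PCS-style stability check: perturb the data
# via bootstrap resampling, re-fit a simple linear model each time,
# and inspect the spread of the resulting slope estimates.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=200)  # true slope = 2

slopes = []
for _ in range(500):
    idx = rng.integers(0, len(x), size=len(x))       # bootstrap resample
    slope, intercept = np.polyfit(x[idx], y[idx], deg=1)
    slopes.append(slope)

lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"slope 95% perturbation interval: [{lo:.2f}, {hi:.2f}]")
```

If the interval were wide, or flipped sign across perturbations, a PCS analysis would flag the slope estimate as an unstable, and hence untrustworthy, result.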
Paper by Thomas Margoni and Alain M. Strowel: “This chapter analyzes the evolving landscape of EU data-sharing agreements, particularly focusing on the balance between contractual freedom and fairness in the context of non-personal data. The discussion highlights the complexities introduced by recent EU legislation, such as the Data Act, Data Governance Act, and Open Data Directive, which collectively aim to regulate data markets and enhance data sharing. The chapter emphasizes how these laws impose obligations that limit contractual freedom to ensure fairness, particularly in business-to-business (B2B) and Internet of Things (IoT) data transactions. It also explores the tension between private ordering and public governance, suggesting that the EU’s approach marks a shift from property-based models to governance-based models in data regulation. This chapter underscores the significant impact these regulations will have on data contracts and the broader EU data economy…(More)”.
Paper by Michael Henry Tessler et al: “We asked whether an AI system based on large language models (LLMs) could successfully capture the underlying shared perspectives of a group of human discussants by writing a “group statement” that the discussants would collectively endorse. Inspired by Jürgen Habermas’s theory of communicative action, we designed the “Habermas Machine” to iteratively generate group statements that were based on the personal opinions and critiques from individual users, with the goal of maximizing group approval ratings. Through successive rounds of human data collection, we used supervised fine-tuning and reward modeling to progressively enhance the Habermas Machine’s ability to capture shared perspectives. To evaluate the efficacy of AI-mediated deliberation, we conducted a series of experiments with over 5000 participants from the United Kingdom. These experiments investigated the impact of AI mediation on finding common ground, how the views of discussants changed across the process, the balance between minority and majority perspectives in group statements, and potential biases present in those statements. Lastly, we used the Habermas Machine for a virtual citizens’ assembly, assessing its ability to support deliberation on controversial issues within a demographically representative sample of UK residents…(More)”.
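The selection step described above, generating candidate group statements and keeping the one predicted to win the most group approval, can be sketched with a toy scorer. The word-overlap "reward" below is a stand-in for the paper's trained reward model, and all names and example texts are hypothetical.

```python
# Toy sketch of reward-model-guided statement selection: score each
# candidate group statement by predicted approval across participants'
# opinions, then keep the highest-scoring one. The scoring function is
# a simple word-overlap proxy, not the Habermas Machine's trained model.
def approval_score(statement: str, opinions: list[str]) -> float:
    s_words = set(statement.lower().split())
    overlaps = [len(s_words & set(o.lower().split())) / max(len(s_words), 1)
                for o in opinions]
    return sum(overlaps) / len(opinions)   # mean predicted approval

def select_statement(candidates: list[str], opinions: list[str]) -> str:
    return max(candidates, key=lambda c: approval_score(c, opinions))

opinions = ["cities need more buses", "buses should be cheaper"]
candidates = ["we agree cities need cheaper buses", "trains are best"]
print(select_statement(candidates, opinions))
```

In the paper this loop runs iteratively, with participants' critiques of each round's statement feeding back into the next round of candidate generation.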
OECD Discussion Paper: “… starts from the premise that democracies are endowed with valuable assets and that putting citizens at the heart of policy making offers an opportunity to strengthen democratic resilience. It draws on data, evidence and insights generated through a wide range of work underway at the OECD to identify systemic challenges and propose lines of action for the future. It calls for greater attention to, and investments in, citizen participation in policy making as one of the core functions of the state and the ‘life force’ of democratic governance. In keeping with the OECD’s strong commitment to providing a platform for diverse perspectives on challenging policy issues, it also offers a collection of thought-provoking opinion pieces by leading practitioners whose position as elected officials, academics and civil society leaders provides them with a unique vantage point from which to scan the horizon. As a contribution to an evolving field, this Discussion Paper offers neither a prescriptive framework nor a roadmap for governments but represents a step towards reaching a shared understanding of the very real challenges that lie ahead. It is also a timely invitation to all interested actors to join forces and take concerted action to embed meaningful citizen participation in policy making…(More)”.
Paper by Melody Musoni, Poorva Karkare and Chloe Teevan: “Africa must prioritise data usage and cross-border data sharing to realise the goals of the African Continental Free Trade Area and to drive innovation and AI development. Accessible and shareable data is essential for the growth and success of the digital economy, enabling innovations and economic opportunities, especially in a rapidly evolving landscape.
African countries, through the African Union (AU), have a common vision of sharing data across borders to boost economic growth. However, the adopted continental digital policies are often inconsistently applied at the national level, where some member states implement restrictive measures like data localisation that limit the free flow of data.
The paper looks at national policies that often prioritise domestic interests and how those conflict with continental goals. This is due to differences in political ideologies, socio-economic conditions, security concerns and economic priorities. This misalignment between national agendas and the broader AU strategy is shaped by each country’s unique context, as seen in the examples of Senegal, Nigeria and Mozambique, which face distinct challenges in implementing the continental vision.
The paper concludes with actionable recommendations for the AU, member states and the partnership with the European Union. It suggests that the AU enhances support for data-sharing initiatives and urges member states to focus on policy alignment, address data deficiencies, build data infrastructure and find new ways to use data. It also highlights how the EU can strengthen its support for Africa’s data-sharing goals…(More)”.
Paper by Joseph Donia et al: “Process-oriented approaches to the responsible development, implementation, and oversight of artificial intelligence (AI) systems have proliferated in recent years. Variously referred to as lifecycles, pipelines, or value chains, these approaches demonstrate a common focus on systematically mapping key activities and normative considerations throughout the development and use of AI systems. At the same time, these approaches risk focusing on proximal activities of development and use at the expense of a focus on the events and value conflicts that shape how key decisions are made in practice. In this article we report on the results of an ‘embedded’ ethics research study focused on SPOTT – a ‘Smart Physiotherapy Tracking Technology’ employing AI and undergoing development and commercialization at an academic health sciences centre. Through interviews and focus groups with the development and commercialization team, patients, and policy and ethics experts, we suggest that a more expansive design and development lifecycle shaped by key events offers a more robust approach to normative analysis of digital health technologies, especially where those technologies’ actual uses are underspecified or in flux. We introduce five of these key events, outlining their implications for responsible design and governance of AI for health, and present a set of critical questions intended for others doing applied ethics and policy work. We briefly conclude with a reflection on the value of this approach for engaging with health AI ecosystems more broadly…(More)”.