How elderly dementia patients are unwittingly fueling political campaigns


Article by Blake Ellis et al: “The 80-year-old communications engineer from Texas had saved for decades, driving around in an old car and buying clothes from thrift stores so he’d have enough money to enjoy his retirement years.

But as dementia robbed him of his reasoning abilities, he began making online political donations over and over again — eventually telling his son he believed he was part of a network of political operatives communicating with key Republican leaders. In less than two years, the man became one of the country’s largest grassroots supporters of the Republican Party, ultimately giving away nearly half a million dollars to former President Donald Trump and other candidates. Now, the savings account he spent his whole life building is practically empty.

The story of this unlikely political benefactor is one of many playing out across the country.

More than 1,000 reports filed with government agencies and consumer advocacy groups reviewed by CNN, along with an analysis of campaign finance data and interviews with dozens of contributors and their family members, show how deceptive political fundraisers have victimized hundreds of elderly Americans and misled those battling dementia or other cognitive impairments into giving away millions of dollars — far more than they ever intended. Some unintentionally joined the ranks of the top grassroots political donors in the country as they tapped into retirement savings and went into debt, contributing six-figure sums through thousands of transactions…(More)”.

Long-term validation of inner-urban mobility metrics derived from Twitter/X


Paper by Steffen Knoblauch et al: “Urban mobility analysis using Twitter as a proxy has gained significant attention in various application fields; however, long-term validation studies are scarce. This paper addresses this gap by assessing the reliability of Twitter data for modeling inner-urban mobility dynamics over a 27-month period in the metropolitan area of Rio de Janeiro, Brazil. The evaluation involves the validation of Twitter-derived mobility estimates at both temporal and spatial scales, employing over 1.6 × 10¹¹ mobile phone records of around three million users during the non-stationary mobility period from April 2020 to June 2022, which coincided with the COVID-19 pandemic. The results highlight the need for caution when using Twitter for short-term modeling of urban mobility flows. Short-term inference can be influenced by Twitter policy changes and the availability of publicly accessible tweets. On the other hand, this long-term study demonstrates that employing multiple mobility metrics simultaneously, analyzing dynamic and static mobility changes concurrently, and employing robust preprocessing techniques such as rolling window downsampling can enhance the inference capabilities of Twitter data. These novel insights gained from a long-term perspective are vital, as Twitter – rebranded to X in 2023 – is extensively used by researchers worldwide to infer human movement patterns. Since conclusions drawn from studies using Twitter could be used to inform public policy, emergency response, and urban planning, evaluating the reliability of this data is of utmost importance…(More)”.
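To make the preprocessing idea concrete, here is a minimal sketch of rolling-window downsampling on a synthetic daily series of Twitter-derived trip counts (an illustration of the general technique, assuming pandas; not the paper's actual pipeline):

```python
import numpy as np
import pandas as pd

# Hypothetical daily counts of Twitter-derived trips for one zone pair,
# covering the paper's study window.
days = pd.date_range("2020-04-01", "2022-06-30", freq="D")
trips = pd.Series(np.random.poisson(120, len(days)), index=days)

# Rolling-window downsampling: smooth daily noise with a centred
# 28-day rolling mean, then keep one value per week. This damps
# short-term artefacts (e.g. platform policy changes) before the
# series is compared against mobile-phone-derived flows.
smoothed = trips.rolling(window=28, center=True, min_periods=14).mean()
weekly = smoothed.resample("W").mean()

print(weekly.head())
```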

Veridical Data Science


Book by Bin Yu and Rebecca L. Barter: “Most textbooks present data science as a linear analytic process involving a set of statistical and computational techniques without accounting for the challenges intrinsic to real-world applications. Veridical Data Science, by contrast, embraces the reality that most projects begin with an ambiguous domain question and messy data; it acknowledges that datasets are mere approximations of reality while analyses are mental constructs.
Bin Yu and Rebecca Barter employ the innovative Predictability, Computability, and Stability (PCS) framework to assess the trustworthiness and relevance of data-driven results relative to three sources of uncertainty that arise throughout the data science life cycle: the human decisions and judgment calls made during data collection, cleaning, and modeling. By providing real-world data case studies, intuitive explanations of common statistical and machine learning techniques, and supplementary R and Python code, Veridical Data Science offers a clear and actionable guide for conducting responsible data science. Requiring little background knowledge, this lucid, self-contained textbook provides a solid foundation and principled framework for future study of advanced methods in machine learning, statistics, and data science…(More)”.
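The stability principle at the heart of PCS (do the conclusions survive reasonable perturbations of the data and of modeling choices?) can be illustrated in a few lines of Python; this is a sketch of the general idea, not code from the book:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.utils import resample

# Toy data: a single predictor with a true coefficient of 2.0.
gen = np.random.default_rng(0)
X = gen.normal(size=(200, 1))
y = 2.0 * X[:, 0] + gen.normal(scale=0.5, size=200)

# Stability check in the spirit of PCS: refit the model under
# bootstrap perturbations of the data and inspect the spread of
# the estimate. A conclusion that flips across perturbations
# should not be trusted.
coefs = [
    LinearRegression().fit(*resample(X, y, random_state=seed)).coef_[0]
    for seed in range(100)
]

print(f"coefficient across perturbations: "
      f"mean={np.mean(coefs):.2f}, sd={np.std(coefs):.2f}")
```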

Contractual Freedom and Fairness in EU Data Sharing Agreements


Paper by Thomas Margoni and Alain M. Strowel: “This chapter analyzes the evolving landscape of EU data-sharing agreements, particularly focusing on the balance between contractual freedom and fairness in the context of non-personal data. The discussion highlights the complexities introduced by recent EU legislation, such as the Data Act, Data Governance Act, and Open Data Directive, which collectively aim to regulate data markets and enhance data sharing. The chapter emphasizes how these laws impose obligations that limit contractual freedom to ensure fairness, particularly in business-to-business (B2B) and Internet of Things (IoT) data transactions. It also explores the tension between private ordering and public governance, suggesting that the EU’s approach marks a shift from property-based models to governance-based models in data regulation. This chapter underscores the significant impact these regulations will have on data contracts and the broader EU data economy…(More)”.

AI can help humans find common ground in democratic deliberation


Paper by Michael Henry Tessler et al: “We asked whether an AI system based on large language models (LLMs) could successfully capture the underlying shared perspectives of a group of human discussants by writing a “group statement” that the discussants would collectively endorse. Inspired by Jürgen Habermas’s theory of communicative action, we designed the “Habermas Machine” to iteratively generate group statements that were based on the personal opinions and critiques from individual users, with the goal of maximizing group approval ratings. Through successive rounds of human data collection, we used supervised fine-tuning and reward modeling to progressively enhance the Habermas Machine’s ability to capture shared perspectives. To evaluate the efficacy of AI-mediated deliberation, we conducted a series of experiments with over 5000 participants from the United Kingdom. These experiments investigated the impact of AI mediation on finding common ground, how the views of discussants changed across the process, the balance between minority and majority perspectives in group statements, and potential biases present in those statements. Lastly, we used the Habermas Machine for a virtual citizens’ assembly, assessing its ability to support deliberation on controversial issues within a demographically representative sample of UK residents…(More)”.
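Schematically, the mediation loop described here (draft candidate group statements from individual opinions, then rank them by a reward model trained on participants' approval ratings) might look like the sketch below. All names and function bodies are hypothetical stubs; the paper's actual components are fine-tuned LLMs:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    score: float

def generate_candidates(opinions, n=4):
    # Stand-in for the fine-tuned generative model that drafts
    # group statements from individual opinions (hypothetical stub).
    return [f"Draft #{i} summarising {len(opinions)} opinions" for i in range(n)]

def reward_model(statement, opinions):
    # Stand-in for the learned model that predicts how highly the
    # group would rate this statement (hypothetical stub).
    return float(len(statement))

def mediate(opinions):
    # One round of AI mediation: draft candidates, score each by
    # predicted group approval, and return the top-ranked statement.
    ranked = [Candidate(s, reward_model(s, opinions))
              for s in generate_candidates(opinions)]
    return max(ranked, key=lambda c: c.score)

opinions = ["Lower the voting age.", "Keep it at 18.", "Trial it locally first."]
print(mediate(opinions).text)
```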

Exploring New Frontiers of Citizen Participation in the Policy Cycle


OECD Discussion Paper: “… starts from the premise that democracies are endowed with valuable assets and that putting citizens at the heart of policy making offers an opportunity to strengthen democratic resilience. It draws on data, evidence and insights generated through a wide range of work underway at the OECD to identify systemic challenges and propose lines of action for the future. It calls for greater attention to, and investments in, citizen participation in policy making as one of the core functions of the state and the ‘life force’ of democratic governance. In keeping with the OECD’s strong commitment to providing a platform for diverse perspectives on challenging policy issues, it also offers a collection of thought-provoking opinion pieces by leading practitioners whose position as elected officials, academics and civil society leaders provides them with a unique vantage point from which to scan the horizon. As a contribution to an evolving field, this Discussion Paper offers neither a prescriptive framework nor a roadmap for governments but represents a step towards reaching a shared understanding of the very real challenges that lie ahead. It is also a timely invitation to all interested actors to join forces and take concerted action to embed meaningful citizen participation in policy making…(More)”.

Cross-border data flows in Africa: Continental ambitions and political realities


Paper by Melody Musoni, Poorva Karkare and Chloe Teevan: “Africa must prioritise data usage and cross-border data sharing to realise the goals of the African Continental Free Trade Area and to drive innovation and AI development. Accessible and shareable data is essential for the growth and success of the digital economy, enabling innovations and economic opportunities, especially in a rapidly evolving landscape.

African countries, through the African Union (AU), have a common vision of sharing data across borders to boost economic growth. However, the adopted continental digital policies are often inconsistently applied at the national level, where some member states implement restrictive measures like data localisation that limit the free flow of data.

The paper examines how national policies, which often prioritise domestic interests, conflict with continental goals, owing to differences in political ideologies, socio-economic conditions, security concerns and economic priorities. This misalignment between national agendas and the broader AU strategy is shaped by each country’s unique context, as seen in the examples of Senegal, Nigeria and Mozambique, which face distinct challenges in implementing the continental vision.

The paper concludes with actionable recommendations for the AU, member states and the partnership with the European Union. It suggests that the AU enhance support for data-sharing initiatives and urges member states to focus on policy alignment, address data deficiencies, build data infrastructure and find new ways to use data. It also highlights how the EU can strengthen its support for Africa’s data-sharing goals…(More)”.

Lifecycles, pipelines, and value chains: toward a focus on events in responsible artificial intelligence for health


Paper by Joseph Donia et al: “Process-oriented approaches to the responsible development, implementation, and oversight of artificial intelligence (AI) systems have proliferated in recent years. Variously referred to as lifecycles, pipelines, or value chains, these approaches demonstrate a common focus on systematically mapping key activities and normative considerations throughout the development and use of AI systems. At the same time, these approaches risk focusing on proximal activities of development and use at the expense of a focus on the events and value conflicts that shape how key decisions are made in practice. In this article we report on the results of an ‘embedded’ ethics research study focused on SPOTT – a ‘Smart Physiotherapy Tracking Technology’ employing AI and undergoing development and commercialization at an academic health sciences centre. Through interviews and focus groups with the development and commercialization team, patients, and policy and ethics experts, we suggest that a more expansive design and development lifecycle shaped by key events offers a more robust approach to normative analysis of digital health technologies, especially where those technologies’ actual uses are underspecified or in flux. We introduce five of these key events, outlining their implications for responsible design and governance of AI for health, and present a set of critical questions intended for others doing applied ethics and policy work. We briefly conclude with a reflection on the value of this approach for engaging with health AI ecosystems more broadly…(More)”.

A shared destiny for public sector data


Blog post by Shona Nicol: “As a data professional, it can sometimes feel hard to get others interested in data. Perhaps like many in this profession, I can often express the importance and value of data for good in an overly technical way. However, when our biggest challenges in Scotland include eradicating child poverty, growing the economy and tackling the climate emergency, I would argue that we should all take an interest in data because it’s going to be foundational in helping us solve these problems.

Data is already intrinsic to shaping our society and how services are delivered. And public sector data is a vital component in making sure that services for the people of Scotland are being delivered efficiently and effectively. Despite an ever-growing awareness of the transformative power of data to improve the design and delivery of services, feedback from public sector staff shows that they can face difficulties when trying to influence colleagues and senior leaders about the need to invest in data.

A vision gap

In the Scottish Government’s data maturity programme and more widely, we regularly hear about the challenges data professionals encounter when trying to enact change. This community tells us that a long-term vision for public sector data for Scotland could help them by providing the context for what they are trying to achieve locally.

Earlier this year we started to scope how we might do this. We recognised that organisations are already working to deliver local and national strategies and policies that relate to data, so any vision had to be able to sit alongside those, be meaningful in different settings, agnostic of technology and relevant to any public sector organisation. We wanted to offer opportunities for alignment, not enforce an instruction manual…(More)”.

AI in the Public Service: Here for Good


Special Issue of Ethos: “…For the public good, we want AI to help unlock and drive transformative impact, in areas where there is significant potential for breakthroughs, such as cancer research, material sciences or climate change. But we also want to raise the level of generalised adoption. For the user base in the public sector, we want to learn how best to use this new tool in ways that can allow us to not only do things better, but do better things.

This is not to suggest that AI is always the best solution: it is one of many tools in the digital toolkit. Sometimes, simpler computational methods will suffice. That said, AI represents new, untapped potential for the Public Service to enhance our daily work and deliver better outcomes that ultimately benefit Singapore and Singaporeans….

To promote general adoption, we made available AI tools, such as Pair, SmartCompose, and AIBots. They are useful to a wide range of public officers for many general tasks. Other common tools of this nature may include chatbots to support customer-facing and service delivery needs, translation, summarisation, and so on. Much of what public officers do involves words and language, which is an area that LLM-based AI technology can now help with.

Beyond improving the productivity of the Public Service, the real value lies in AI’s broader ability to transform our business and operating models to deliver greater impact. In driving adoption, we want to encourage public officers to experiment with different approaches to figure out where we can create new value by doing things differently, rather than just settle for incremental value from doing things the same old ways using new tools.

For example, we have seen how AI and automation have transformed language translation, software engineering, identity verification and border clearance. This is just the beginning and much more is possible in many other domains…(More)”.