Stefaan Verhulst
Essay by Dan Williams: “America’s epistemic challenges run deeper than social media.
Many people sense that the United States is undergoing an epistemic crisis, a breakdown in the country’s collective capacity to agree on basic facts, distinguish truth from falsehood, and adhere to norms of rational debate.
This crisis encompasses many things: rampant political lies, misinformation, and conspiracy theories; widespread beliefs in demonstrable falsehoods (“misperceptions”); intense polarization in preferred information sources; and collapsing trust in institutions meant to uphold basic standards of truth and evidence (such as science, universities, professional journalism, and public health agencies).
According to survey data, over 60% of Republicans believe Joe Biden’s presidency was illegitimate. 20% of Americans think vaccines are more dangerous than the diseases they prevent, and 36% think the specific risks of COVID-19 vaccines outweigh their benefits. Only 31% of Americans have at least a “fair amount” of confidence in mainstream media, while a record-high 36% have no trust at all.
What is driving these problems? One influential narrative blames social media platforms like Facebook, Twitter (now X), and YouTube. In the most extreme form of this narrative, such platforms are depicted as technological wrecking balls responsible for shattering the norms and institutions that kept citizens tethered to a shared reality, creating an informational Wild West dominated by viral falsehoods, bias-confirming echo chambers, and know-nothing punditry.
The timing is certainly suspicious. Facebook launched in 2004, YouTube in 2005, and Twitter in 2006. As they and other platforms acquired hundreds of millions of users over the next decade, the health of American democracy and its public sphere deteriorated. By 2016, when Donald Trump was first elected president, many experts were writing about a new “post-truth” or “misinformation” age.
Moreover, the fundamental architecture of social media platforms seems hostile to rational discourse. Algorithms that recommend content prioritize engagement over accuracy. This can amplify sensational and polarizing material or bias-confirming content, which can drag users into filter bubbles. Meanwhile, the absence of traditional gatekeepers means that influencers with no expertise or ethical scruples can reach vast audiences.
The dangerous consequences of these problems seem obvious to many casual observers of social media. And some scientific research corroborates this widespread impression. For example, a systematic review of nearly five hundred studies finds suggestive evidence for a link between digital media use and declining political trust, increasing populism, and growing polarization. Evidence also consistently shows an association between social media use and beliefs in conspiracy theories and misinformation…(More)”.
Paper by Art Alishani, Vincent Homburg, and Ott Velsberg: “Public service providers around the world are now offering chatbots to answer citizens’ questions and deliver digital services. Using these artificial intelligence-powered technologies, citizens can engage in conversations with governments through systems that mimic face-to-face interactions and adjust their use of natural language to citizens’ communication styles. This paper examines emerging experiences with chatbots in government interactions, with a focus on exploring what public administration practitioners and scholars should expect from chatbots in public encounters. Furthermore, it seeks to identify what gaps exist in the general understanding of digital public encounters…(More)”.
Article by Rahmin Sarabi: “Across the United States, democracy faces mounting challenges from polarization, public distrust, and increasingly complex societal problems. Traditional systems of civic participation—and the broader foundations of democratic governance—have struggled to adapt as media and electoral incentives increasingly reward outrage over understanding.
Despite these challenges, new possibilities are emerging. Artificial intelligence—specifically large language models (LLMs)—is beginning to serve as a transformative tool for public engagement and policymaking, led by innovative governments and civic institutions.
When used thoughtfully, LLMs can help unlock public wisdom, rebuild trust, and enable better decision-making—not by replacing human judgment, but by strengthening it. This promise doesn’t dismiss the serious concerns about AI’s impact on social cohesion, work, and democracy—which remain vital to address. Yet these emerging capabilities can enhance both institutional efficiency and, more importantly, core democratic values: inclusiveness, meaningful participation, and deliberative reasoning.
By strengthening these foundations, AI can enable the collaborative problem-solving today’s interconnected problems demand and help us renew democracy to meet the challenges of our time. This piece examines concrete applications where LLM-based AI is already enhancing democratic processes—from citizen engagement to survey and context analysis—and explores principles for scaling these innovations responsibly…(More)”.
Essay by Antón Barba-Kay: “…Consider the casual assumption that the scope and accessibility of choice is itself “democratic,” such that Reddit, Facebook, Instagram, and TikTok might be called democratic platforms because everyone gets a vote and anyone can become someone. In fact, there’s nothing intrinsically democratic about participation as such at any level. Countries do not become more democratic when the people are more frequently consulted. Expertise and state secrets are part of it, as is the fact that states need to make long-term, binding commitments. Digital consultations (such as have been used by the Five Star Movement in Italy) give undue power to those who control the software through which they take place. There is also the better reason that, when questions are placed before a mass electorate to an extent that exceeds our capacity for deliberation, there occurs a populist or oligarchic reversal. In a direct democracy such as that of ancient Athens, everything begins to turn on who poses the questions and when and why.
The point is that mass populations can be good judges of what locally touches us but are not able to collectively pay attention to what doesn’t—and that this is an entirely democratic, republican situation. I mean that the will of the people is not a fact but an artifice: It is not the input of democratic government but the output of its codified processes. Intermediate institutions such as parties, caucuses, primaries, debates, and the Electoral College were designed in theory to sublimate citizen passions and interests into more substantive formulations than their raw expression allows. The Federalist Papers explicitly speak of it as a machine—a Newtonian “system” of “bodies,” “springs,” “energies,” and “balances” operating under universally observable laws. The point of this machine is to produce the popular will by representing it. The Constitution is intended to be the context within which popular decision can find its most articulate expression through the medium of good government—which is why any one person’s claim to immediately embody the will of the people is anti-democratic.
For digital services as well as democratic processes, there is a point beyond which the intensification of involvement degrades the objects of preference themselves. Just as social media has made us impatient for headlines spicy or outrageous, the focus on and capture of choice as such can destroy the conditions of meaningful choice themselves…(More)”.
Paper by Lidis Garbovan et al: “A major driver of the transition of population data science into Trusted Research Environments (TREs) is concern regarding ‘Social Licence’, which holds that factors such as demonstrable public good are essential for socially acceptable data use. Yet, whilst public dialogue has helped identify the criteria that define what is ‘in the public good’, we currently lack the methodologies needed to bring diverse perspectives to bear on whether we are achieving it. The ‘Citizen Panel’ concept may be a mechanism for integrating a diverse public voice into decision-making, as a form of public audit of data access decisions.
The concept of a ‘Citizen Panel’ was developed by the organisation Understanding Patient Data as a learning data governance model for data access in Longitudinal Population Studies (LPS), one that shifts public involvement in data access from a one-way channel to a feedback cycle. To date, this model has not been deployed in the context of access to public data as recommended by Understanding Patient Data, so it is not known whether the model will work as intended or how a panel would be constituted and operate in practice.
LPS employ continuous or repeated measures to follow particular individuals over prolonged periods of time—often decades or lifetimes. They are usually observational in nature and include collections of quantitative and/or qualitative data on any combination of exposures and outcomes.
UK Longitudinal Linkage Collaboration (UK LLC) – the national TRE for data linkage in longitudinal research – has been funded by UK Research and Innovation (UKRI), the Economic and Social Research Council (ESRC) and Medical Research Council (MRC) to pilot the first operational Citizen Panel…(More)”.
JRC European Commission Report: “…explores how public administrations can foster more effective collaboration by intentionally shaping the behaviours, routines, and mindsets of public servants. Drawing on cutting-edge insights from cognitive, behavioural, and organisational sciences, it presents a rigorous review of the key drivers of collaboration and how to leverage them. It serves as an evidence-informed compass to help public administrations harness their collaborative potential to effectively address today’s complex policy challenges. The main findings are distilled into twelve actionable principles that outline a clear vision for working together more effectively…(More)”.
Paper by Eray Erturk: “Wearable devices record physiological and behavioral signals that can improve health predictions. While foundation models are increasingly used for such predictions, they have been primarily applied to low-level sensor data, despite behavioral data often being more informative due to their alignment with physiologically relevant timescales and quantities. We develop foundation models of such behavioral signals using over 2.5B hours of wearable data from 162K individuals, systematically optimizing architectures and tokenization strategies for this unique dataset. Evaluated on 57 health-related tasks, our model shows strong performance across diverse real-world applications including individual-level classification and time-varying health state prediction. The model excels in behavior-driven tasks like sleep prediction, and improves further when combined with representations of raw sensor data. These results underscore the importance of tailoring foundation model design to wearables and demonstrate the potential to enable new health applications…(More)”.
Article by ‘Gbenga Sesan: “The world is witnessing an unprecedented convergence of challenges that threaten digital democracy, social innovation, and international relations. At the heart of these threats are three fundamental shifts: the shrinking of civic space, the decline in funding for digital rights programs (or “digital funding”), and the erosion of legitimacy in global governance. These trends, while distinct, are interconnected in ways that reveal deep fractures in the global order. How high-level structural changes will impact the field of digital democracy should be continuously explored to help advocacy groups identify the necessary actions to ensure resilience, as well as help shape the role of all stakeholders in rebuilding trust in the digital democratic landscape.
Shrinking Civic Space: The Last Line Under Attack
The emergence of a post-truth era characterized by an increasing abuse of technology has significantly eroded civic space—a fundamental platform for free expression, activism, and advocacy. Governments, both authoritarian and ostensibly democratic, have weaponized technology to surveil, censor, and suppress dissent, effectively transforming many civic spaces from sites of resistance into zones of control. In a recent collection of essays from the Carnegie Endowment for International Peace, “New Digital Dilemmas: Resisting Autocrats, Navigating Geopolitics, Confronting Platforms,” various authors highlighted trends in digital repression. Jan Rydzak’s essay, “The Stalled Machines of Transparency Reporting,” for example, revealed a troubling pattern: as technology platforms are retreating from their transparency commitments and disbanding trust and safety teams, authoritarian governments are stepping in to define the limits of permissible speech. Other pieces described the intensification of identity-based repression in the Middle East and North Africa and explained how governments and digital mobs alike are using AI-driven profiling, facial recognition, and doxxing campaigns to target human rights defenders, LGBTQ activists, and dissidents.
What makes this moment particularly dangerous is a shift in both the tools and targets employed…(More)”.
Report by National Academies of Sciences, Engineering, and Medicine: “Over the last decade, advances in artificial intelligence (AI) technologies have created transformational opportunities for health, health care, and biomedical science. While new tools are available to improve effectiveness and efficiency in myriad applications in health and health care, challenges persist, including those related to increasing costs of care, staff burnout and shortages, and the growing disease burden of an aging population. The need for new approaches to address these long-standing challenges is evident and AI offers both new hope and new concerns.
An Artificial Intelligence Code of Conduct for Health and Medicine: Essential Guidance for Aligned Action presents a unifying AI Code of Conduct (AICC) framework developed to align the field around responsible development and application of AI and to catalyze collective action to ensure that the transformative potential of AI in health and medicine is realized. Designed to be applied at every level of decision making—from boardroom to bedside and from innovation labs to reimbursement policies—the publication serves as a blueprint for building trust, protecting patients, and ensuring that innovation benefits people…(More)”.
Paper by Oliver Escobar and Adrian Bua: “The world faces social, political, economic, and ecological crises, and there is doubt that democratic governance can cope. Democracies rely on a narrow set of institutions and processes anchored in dominant forms of political organisation and imagination. Power inequalities sustain the (re)production of current ills in democratic life. In this context, what does the field of democratic innovation offer to the task of sociopolitical reimagining and change? The field has advanced since the turn of the century, building foundations for democratic renewal. It draws from various traditions of democracy, including participatory and deliberative streams. But there is concern that a non-critical version of deliberative democracy is becoming hegemonic. Deliberative theory generated useful correctives to participatory democracy – that is, a deeper understanding of the communicative fabric of the public sphere as worthy of democratisation; public reasoning as a bridge-builder between streets and institutions and a key precursor to democratic collective action. However, we argue that democratic innovation now needs a participatory corrective to strengthen its potential to mobilise capacity for change. We review emerging critiques in conversation with participatory ideas and practices, illustrating our argument with four gaps in democratic innovation that can become field-expanding dimensions to deliver emancipatory change more effectively: pluriversality, policy, political economy, and empowerment…(More)”.