The Theory of Deliberative Wisdom


Book by Eric Racine: “Humanity faces a multitude of profound challenges at present: technological advances, environmental changes, rising inequality, and deep social and political pluralism. These transformations raise moral questions—questions about how we view ourselves and how we ought to engage with the world in the pursuit of human flourishing. In The Theory of Deliberative Wisdom, Eric Racine puts forward an original interdisciplinary ethics theory that offers both an explanation of the workings of human morality and a model for deliberation-based imaginative processes to tackle moral problems.

Drawing from a wide array of disciplines such as philosophy, psychology, sociology, political science, neuroscience, and economics, this book offers an engaging account of situated moral agency and of ethical life as the pursuit of human flourishing. Moral experience, Racine explains, is accounted for in the form of situational units—morally problematic situations. These units are, in turn, theorized as actionable and participatory building blocks of moral existence mapping to mechanisms of episodic memory and to the construction of personal identity. Such explanations pave the way for an understanding of the social and psychological mechanisms of the awareness and neglect of morally problematic situations as well as of the imaginative ethical deliberation needed to respond to these situations. Deliberative wisdom is explained as an engaged and ongoing learning process about human flourishing…(More)”

Activating citizens: the contribution of the Capability Approach to critical citizenship studies and to understanding the enablers of engaged citizenship


Paper by Anna Colom and Agnes Czajka: “The paper argues that the Capability Approach can make a significant contribution to understanding the enablers of engaged citizenship. Using insights from critical citizenship studies and original empirical research on young people’s civic and political involvement in western Kenya, we argue that it is useful to think of the process of engaged citizenship as comprised of two distinct yet interrelated parts: activation and performance. We suggest that the Capability Approach (CA) can help us understand what resources and processes are needed for people to not only become activated but to also effectively perform their citizenship. Although the CA is rarely brought into conversation with critical citizenship studies literatures, we argue that it can be useful in both operationalising the insights of critical citizenship studies on citizenship engagement and illustrating how activation and performance can be effectively supported or catalysed….(More)”

The Right to AI


Paper by Rashid Mushkani, Hugo Berard, Allison Cohen, Shin Koeski: “This paper proposes a Right to AI, which asserts that individuals and communities should meaningfully participate in the development and governance of the AI systems that shape their lives. Motivated by the increasing deployment of AI in critical domains and inspired by Henri Lefebvre’s concept of the Right to the City, we reconceptualize AI as a societal infrastructure, rather than merely a product of expert design. In this paper, we critically evaluate how generative agents, large-scale data extraction, and diverse cultural values bring new complexities to AI oversight. The paper proposes that grassroots participatory methodologies can mitigate biased outcomes and enhance social responsiveness. It asserts that data is socially produced and should be managed and owned collectively. Drawing on Sherry Arnstein’s Ladder of Citizen Participation and analyzing nine case studies, the paper develops a four-tier model for the Right to AI that situates the current paradigm and envisions an aspirational future. It proposes recommendations for inclusive data ownership, transparent design processes, and stakeholder-driven oversight. We also discuss market-led and state-centric alternatives and argue that participatory approaches offer a better balance between technical efficiency and democratic legitimacy…(More)”.

The world at our fingertips, just out of reach: the algorithmic age of AI


Article by Soumi Banerjee: “Artificial intelligence (AI) has made global movements, testimonies, and critiques seem just a swipe away. The digital realm, powered by machine learning and algorithmic recommendation systems, offers an abundance of visual, textual, and auditory information. With a few swipes or keystrokes, the unbounded world lies open before us. Yet this ‘openness’ conceals a fundamental paradox: the distinction between availability and accessibility.

What is technically available is not always epistemically accessible. What appears global is often algorithmically curated. And what is served to users under the guise of choice frequently reflects the imperatives of engagement, profit, and emotional resonance over critical understanding or cognitive expansion.

The transformative potential of AI in democratising access to information comes with risks. Algorithmic enclosure and content curation can deepen epistemic inequality, particularly for the youth, whose digital fluency often masks a lack of epistemic literacy. What we need is algorithmic transparency, civic education in media literacy, and inclusive knowledge formats…(More)”.

How Bad Is China’s Economy? The Data Needed to Answer Is Vanishing


Article by Rebecca Feng and Jason Douglas: “Not long ago, anyone could comb through a wide range of official data from China. Then it started to disappear. 

Land sales measures, foreign investment data and unemployment indicators have gone dark in recent years. Data on cremations and a business confidence index have been cut off. Even official soy sauce production reports are gone.

In all, Chinese officials have stopped publishing hundreds of data points once used by researchers and investors, according to a Wall Street Journal analysis. 

In most cases, Chinese authorities haven’t given any reason for ending or withholding data. But the missing numbers have come as the world’s second-biggest economy has stumbled under the weight of excessive debt, a crumbling real-estate market and other troubles—spurring heavy-handed efforts by authorities to control the narrative.

China’s National Bureau of Statistics stopped publishing some numbers related to unemployment in urban areas in recent years. After an anonymous user on the bureau’s website asked why one of those data points had disappeared, the bureau said only that the ministry that provided it stopped sharing the data.

The disappearing data have made it harder for people to know what’s going on in China at a pivotal time, with the trade war between Washington and Beijing expected to hit China hard and weaken global growth. Plunging trade with the U.S. has already led to production shutdowns and job cuts.

Getting a true read on China’s growth has always been tricky. Many economists have long questioned the reliability of China’s headline gross domestic product data, and concerns have intensified recently. Official figures put GDP growth at 5% last year and 5.2% in 2023, but some have estimated that Beijing overstated its numbers by as much as 2 to 3 percentage points. 

To get what they consider to be more realistic assessments of China’s growth, economists have turned to alternative sources such as movie box office revenues, satellite data on the intensity of nighttime lights, the operating rates of cement factories and electricity generation by major power companies. Some parse location data from mapping services run by private companies such as Chinese tech giant Baidu to gauge business activity. 

One economist said he has been assessing the health of China’s services sector by counting news stories about owners of gyms and beauty salons who abruptly close up and skip town with users’ membership fees…(More)”.

Governing in the Age of AI: Reimagining Local Government


Report by the Tony Blair Institute for Global Change: “…The limits of the existing operating model have been reached. Starved of resources by cuts inflicted by previous governments over the past 15 years, many councils are on the verge of bankruptcy even though local taxes are at their highest level. Residents wait too long for care, too long for planning applications and too long for benefits; many people never receive what they are entitled to. Public satisfaction with local services is sliding.

Today, however, there are new tools – enabled by artificial intelligence – that would allow councils to tackle these challenges. The day-to-day tasks of local government, whether related to the delivery of public services or planning for the local area, can all be performed faster, better and cheaper with the use of AI – a true transformation not unlike the one seen a century ago.

These tools would allow councils to overturn an operating model that is bureaucratic, labour-intensive and unresponsive to need. AI could release staff from repetitive tasks and relieve an overburdened and demotivated workforce. It could help citizens navigate the labyrinth of institutions, webpages and forms with greater ease and convenience. It could support councils to make better long-term decisions to drive economic growth, without which the resource pressure will only continue to build…(More)”.

Co-Designing AI Systems with Value-Sensitive Citizen Science


Paper by Sachit Mahajan and Dirk Helbing: “As artificial intelligence (AI) systems increasingly shape everyday life, integrating diverse community values into their development becomes both an ethical imperative and a practical necessity. This paper introduces Value Sensitive Citizen Science (VSCS), a systematic framework combining Value Sensitive Design (VSD) principles with citizen science methods to foster meaningful public participation in AI. Addressing critical gaps in existing approaches, VSCS integrates culturally grounded participatory methods and structured cognitive scaffolding through the Participatory Value-Cognition Taxonomy (PVCT). Through iterative value-sensitive participation cycles guided by an extended scenario logic (What-if, If-then, Then-what, What-now), community members act as genuine co-researchers, identifying, translating, and operationalizing local values into concrete technical requirements. The framework also institutionalizes governance structures for ongoing oversight, adaptability, and accountability across the AI lifecycle. By explicitly bridging participatory design with algorithmic accountability, VSCS ensures that AI systems reflect evolving community priorities rather than reinforcing top-down or monocultural perspectives. Critical discussions highlight VSCS’s practical implications, addressing challenges such as power dynamics, scalability, and epistemic justice. The paper concludes by outlining actionable strategies for policymakers and practitioners, alongside future research directions aimed at advancing participatory, value-driven AI development across diverse technical and sociocultural contexts…(More)”.

Balancing Data Sharing and Privacy to Enhance Integrity and Trust in Government Programs


Paper by National Academy of Public Administration: “Improper payments and fraud cost the federal government hundreds of billions of dollars each year, wasting taxpayer money and eroding public trust. At the same time, agencies are increasingly expected to do more with less. Finding better ways to share data, without compromising privacy, is critical for ensuring program integrity in a resource-constrained environment.

Key Takeaways

  • Data sharing strengthens program integrity and fraud prevention. Agencies and oversight bodies like GAO and OIGs have uncovered large-scale fraud by using shared data.
  • Opportunities exist to streamline and expedite the compliance processes required by privacy laws and reduce systemic barriers to sharing data across federal agencies.
  • Targeted reforms can address these barriers while protecting privacy:
    1. OMB could issue guidance to authorize fraud prevention as a routine use in System of Records Notices.
    2. Congress could enact special authorities or exemptions for data sharing that supports program integrity and fraud prevention.
    3. A centralized data platform could help to drive cultural change and support secure, responsible data sharing…(More)”

Glorious RAGs: A Safer Path to Using AI in the Social Sector


Blog by Jim Fruchterman: “Social sector leaders ask me all the time for advice on using AI. As someone who started for-profit machine learning (AI) companies in the 1980s, but then pivoted to running nonprofit social enterprises, I’m often the first person from Silicon Valley that many nonprofit leaders have met. I joke that my role is often that of “anti-consultant,” talking leaders out of doing an app, a blockchain (smile) or firing half their staff because of AI. Recently, much of my role has been tamping down the excessive expectations being bandied about for the impact of AI on organizations. However, two years into the latest AI fad wave created by ChatGPT and its LLM (large language model) peers, more and more of the leaders are describing eminently sensible applications of LLMs to their programs. The most frequent of these approaches can be described as variations on “Retrieval-Augmented Generation,” also known as RAG. I am quite enthusiastic about using RAG for social impact, because it addresses a real need and supplies guardrails for using LLMs effectively…(More)”
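The “retrieve, then ground the generation” pattern the post refers to can be illustrated with a toy sketch. Everything here is an assumption for illustration: the corpus is hypothetical, keyword-overlap scoring stands in for the embedding search real RAG systems use, and the assembled prompt would normally be passed to an LLM rather than printed:

```python
import re

def tokenize(text):
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return the top k.
    A real system would use embedding similarity instead."""
    q = tokenize(query)
    return sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)[:k]

def build_prompt(query, documents):
    """Assemble a grounded prompt -- the 'guardrail' RAG supplies:
    the model is told to answer only from the retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# Hypothetical nonprofit knowledge base.
docs = [
    "Our food bank serves 400 families every week in the county.",
    "Volunteer sign-ups open on the first Monday of each month.",
    "The annual report is published each spring on our website.",
]
print(build_prompt("When can volunteers sign up?", docs))
```

The guardrail is the prompt itself: by restricting the model to retrieved organizational documents, a RAG setup reduces the risk of the LLM inventing answers, which is why the pattern suits social-sector use.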

Understanding and Addressing Misinformation About Science


Report by National Academies of Sciences, Engineering, and Medicine: “Our current information ecosystem makes it easier for misinformation about science to spread and harder for people to figure out what is scientifically accurate. Proactive solutions are needed to address misinformation about science, an issue of public concern given its potential to cause harm at individual, community, and societal levels. Improving access to high-quality scientific information can fill information voids that exist for topics of interest to people, reducing the likelihood of exposure to and uptake of misinformation about science. Misinformation is commonly perceived as a matter of bad actors maliciously misleading the public, but misinformation about science arises both intentionally and inadvertently and from a wide range of sources…(More)”.