Artificial Intelligence for Participation


Policy Brief by the Brazil Centre of the University of Münster: “…provides an overview of current and potential applications of artificial intelligence (AI) technologies in the context of political participation and democratic governance processes in cities. Aimed primarily at public managers, the document also highlights critical issues to consider in the implementation of these technologies, and proposes an agenda for debate on the new state capabilities they require…(More)”.

The 2026 Aid Transparency Index is canceled. Here’s what it means


Article by Gary Forster: “As things stand, we will not be running the 2026 Aid Transparency Index. Not because it isn’t needed. Not because it isn’t effective. But because, in spite of our best efforts, we haven’t been able to secure the funding for it.

This is not a trivial loss. The Aid Transparency Index has been the single most powerful mechanism driving improvements in the quantity and quality of aid data that is published to the International Aid Transparency Initiative, or IATI, Standard. Since 2012, every two years, it has independently assessed and ranked the transparency of the world’s 50 largest aid agencies — organizations responsible for 92% of all spending published in IATI, amounting to $237 billion in 2023 alone.

The index works because it shapes agency behavior. It has encouraged reluctant agencies to start publishing data; motivated those already engaged to improve data quantity and quality; and provided a crucial, independent check on the state of global aid transparency…(More)”.

Will big data lift the veil of ignorance?


Blog by Lisa Herzog: “Imagine that you have a toothache, and a visit to the dentist reveals that a major operation is needed. You phone your health insurance. You listen to the voice of the chatbot, press the buttons to go through the menu. And then you hear: “We have evaluated your profile based on the data you have agreed to share with us. Your dental health behavior scores 6 out of 10. The suggested treatment plan therefore requires a co-payment of [insert some large sum of money here].”

This may sound like science fiction. But many other types of insurance, such as car insurance, already build on automated data sharing. Health insurers would certainly like to access our data as well – not only data from smart toothbrushes, but also credit card data, behavioral data (e.g. from step-counting apps), or genetic data. If they were allowed to use these data, they could move towards segmented insurance plans for specific target groups. As two commentators, to whose research I return below, recently wrote about health insurance: “Today, public plans and nondiscrimination clauses, not lack of information, are what stands between integration and segmentation.”

If, like me, you’re interested in the relationship between knowledge and institutional design, insurance is a fascinating topic. The basic idea of insurance is centuries old – here is a brief summary (skip a few paragraphs if you know this stuff). Because we cannot know what might happen to us in the future, but we can know that, at an aggregate level, things will happen to people, it can make sense to enter an insurance contract, creating a pool that a group jointly contributes to. Those for whom the risks in question materialize get support from the pool. Those for whom they do not materialize may go through life without receiving any money, but they still know that they could get support if something happened to them. As such, insurance combines solidarity within a group with individual precaution…(More)”.
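To make the pooling logic described above concrete, here is a minimal illustrative sketch in Python (my own, not from the blog), with made-up numbers: every member pays the same flat premium into a shared pool, and only those whose risk materializes draw on it.

```python
import random

# Illustrative toy example with made-up numbers; not a model of any
# real insurance product or of the blog's dental scenario.
random.seed(42)

MEMBERS = 1_000          # people in the pool
LOSS_PROBABILITY = 0.05  # chance the insured event hits a given member in a year
LOSS_COST = 10_000       # cost of the event when it happens

# Flat premium set at the expected loss: everyone pays the same,
# regardless of what later happens to them (the "solidarity" part).
premium = LOSS_PROBABILITY * LOSS_COST
pool = premium * MEMBERS

# Simulate one year: which members actually suffer the loss?
claims = [LOSS_COST for _ in range(MEMBERS) if random.random() < LOSS_PROBABILITY]

print(f"Premium per member: {premium:,.2f}")
print(f"Pool collected:     {pool:,.2f}")
print(f"Claims paid:        {sum(claims):,.2f} ({len(claims)} members)")
print(f"Pool balance:       {pool - sum(claims):,.2f}")
```

The move the blog describes – pricing by individual risk scores rather than a flat premium – is what turns a shared pool like this into segmented plans.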

Is This How Reddit Ends?


Article by Matteo Wong: “The internet is growing more hostile to humans. Google results are stuffed with search-optimized spam, unhelpful advertisements, and AI slop. Amazon has become littered with undifferentiated junk. The state of social media, meanwhile—fractured, disorienting, and prone to boosting all manner of misinformation—can be succinctly described as a cesspool.

It’s with some irony, then, that Reddit has become a reservoir of humanity. The platform has itself been called a cesspool, rife with hateful rhetoric and falsehoods. But it is also known for quirky discussions and impassioned debates on any topic among its users. Does charging your brother rent, telling your mom she’s an unwanted guest, or giving your wife a performance review make you an asshole? (Redditors voted no, yes, and “everyone sucks,” respectively.) The site is where fans hash out the best rap album ever and plumbers weigh in on how to unclog a drain. As Google has begun to offer more and more vacuous SEO sites and ads in response to queries, many people have started adding reddit to their searches to find thoughtful, human-written answers: “find mosquito in bedroom reddit,” “fix musty sponge reddit.”

But now even Reddit is becoming more artificial. The platform has quietly started beta-testing Reddit Answers, what it calls an “AI-powered conversational interface.” In function and design, the feature—which is so far available only for some users in the U.S.—is basically an AI chatbot. On a new search screen accessible from the homepage, Reddit Answers takes anyone’s queries, trawls the site for relevant discussions and debates, and composes them into a response. In other words, a site that sells itself as a home for “authentic human connection” is now giving humans the option to interact with an algorithm instead…(More)”.
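As a rough sketch of the retrieve-and-compose pattern the article describes (a query comes in, relevant discussions are pulled from the site, and a single response is assembled), here is a generic toy pipeline in Python; the function names, scoring, and sample posts are hypothetical and say nothing about Reddit’s actual implementation.

```python
import re

# Generic retrieve-then-compose sketch. Everything here (names, scoring,
# sample posts) is hypothetical; it is not Reddit's implementation.

def tokens(text: str) -> set[str]:
    """Lowercase word set with punctuation stripped (toy tokenizer)."""
    return set(re.findall(r"[a-z']+", text.lower()))

def score(query: str, post: str) -> int:
    """Toy relevance score: number of words shared by query and post."""
    return len(tokens(query) & tokens(post))

def retrieve(query: str, posts: list[str], k: int = 2) -> list[str]:
    """Return the k posts that best match the query."""
    return sorted(posts, key=lambda p: score(query, p), reverse=True)[:k]

def compose(query: str, snippets: list[str]) -> str:
    """Stitch retrieved snippets into one response. A production system
    would hand the snippets to a language model rather than concatenating."""
    bullets = "\n".join(f"- {s}" for s in snippets)
    return f"What the community says about '{query}':\n{bullets}"

posts = [
    "To unclog a slow drain, try baking soda and vinegar before calling a plumber.",
    "Turn off the lights and use a phone flashlight to find a mosquito in the bedroom.",
    "Best rap album ever? That debate never ends.",
]
print(compose("how to unclog a drain", retrieve("how to unclog a drain", posts)))
```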

Flipping data on its head: Differing conceptualisations of data and the implications for actioning Indigenous data sovereignty principles


Paper by Stephanie Cunningham-Reimann et al: “Indigenous data sovereignty is of global concern. The power of data through its multitude of uses can cause harm to Indigenous Peoples, communities, organisations and Nations in Canada and globally. Indigenous research principles play a vital role in guiding researchers, scholars and policy makers in their careers and roles. We define data, data sovereignty principles, ways of practicing Indigenous research principles, and recommendations for applying and actioning Indigenous data sovereignty through culturally safe self-reflection and through interpersonal and reciprocal relationships built upon respect, reciprocity, relevance, responsibility and accountability. Research should be co-developed, co-led, and co-disseminated in partnership with Indigenous Peoples, communities, organisations and/or nations to build capacity, support self-determination, and reduce harms produced through the analysis and dissemination of research findings. Applying OCAP® (Ownership, Control, Access & Possession), OCAS (Ownership, Control, Access & Stewardship), and Inuit Qaujimajatuqangit principles in conjunction with the 4Rs (respect, relevance, reciprocity & responsibility) and cultural competency – including self-examination of the 3Ps (power, privilege, and positionality) by researchers, scholars and policy makers – can be challenging, but will amplify the voices and understandings of Indigenous research by implementing Indigenous data sovereignty in Canada…(More)”

AI Commons: nourishing alternatives to Big Tech monoculture


Report by Joana Varon, Sasha Costanza-Chock, Mariana Tamari, Berhan Taye, and Vanessa Koetz: “‘Artificial Intelligence’ (AI) has become a buzzword all around the globe, with tech companies, research institutions, and governments all vying to define and shape its future. How can we escape the current context of AI development, in which certain power forces are pushing for models that, ultimately, automate inequalities and threaten socio-environmental diversities? What if we could redefine AI? What if we could shift its production from a capitalist model to a more disruptive, inclusive, and decentralized one? Can we imagine and foster an AI Commons ecosystem that challenges the current dominant neoliberal logic of an AI arms race? An ecosystem encompassing researchers, developers, and activists who are thinking about AI from decolonial, transfeminist, antiracist, indigenous, decentralized, post-capitalist and/or socio-environmental justice perspectives?

This fieldscan research, commissioned by One Project and conducted by Coding Rights, aims to understand the (possibly) emerging “AI Commons” ecosystem. Focusing on key entities (organizations, cooperatives and collectives, networks, companies, projects, and others) from Africa, the Americas, and Europe that are advancing alternative possible AI futures, the authors identify 234 entities advancing the AI Commons ecosystem. The report finds powerful communities of practice, groups, and organizations producing nuanced criticism of the Big Tech-driven AI development ecosystem and, most importantly, imagining, developing, and, at times, deploying alternative AI technology informed and guided by the principles of decoloniality, feminism, antiracism, and post-capitalism…(More)”.

Thousands of U.S. Government Web Pages Have Been Taken Down Since Friday


Article by Ethan Singer: “More than 8,000 web pages across more than a dozen U.S. government websites have been taken down since Friday afternoon, a New York Times analysis has found, as federal agencies rush to heed President Trump’s orders targeting diversity initiatives and “gender ideology.”

The purges have removed information about vaccines, veterans’ care, hate crimes and scientific research, among many other topics. Doctors, researchers and other professionals often rely on such government data and advisories. Some government agencies appear to have removed entire sections of their websites, while others are missing only a handful of pages.

Among the pages that have been taken down:

(The links are to archived versions.)

Developing a theory of robust democracy


Paper by Eva Sørensen and Mark E. Warren: “While many democratic theorists recognise the necessity of reforming liberal democracies to keep pace with social change, they rarely consider what enables such reform. In this conceptual article, we suggest that liberal democracies are politically robust when they are able to continuously adapt and innovate the way they operate whenever doing so is necessary to continue serving key democratic functions. These functions include securing the empowered inclusion of those affected, collective agenda setting and will formation, and the making of joint decisions. Three current challenges highlight the urgency of adapting and innovating liberal democracies to become more politically robust: an increasingly assertive political culture, the digitalisation of political communication, and increasing global interdependencies. A democratic theory of political robustness emphasises the need to strengthen the capacity of liberal democracies to adapt and innovate in response to changes, just as it helps to frame the necessary adaptations and innovations in times such as the present…(More)”

The Impact of Artificial Intelligence on Societies


Book edited by Christian Montag and Raian Ali: “This book presents a recent framework proposed to understand how attitudes towards artificial intelligence are formed. It describes how the interplay between different variables – such as the modality of AI interaction, the user’s personality and culture, the type of AI application (e.g. in education, medicine, or transportation), and the transparency and explainability of AI systems – contributes to understanding how users’ acceptance of, or negative attitudes towards, AI develop. Gathering chapters from leading researchers with different backgrounds, this book offers a timely snapshot of factors that will be influencing the impact of artificial intelligence on societies…(More)”.

Establish data collaboratives to foster meaningful public involvement


Article by Gwen Ottinger: “…Data Collaboratives would move public participation and community engagement upstream in the policy process by creating opportunities for community members to contribute their lived experience to the assessment of data and the framing of policy problems. This would in turn foster two-way communication and trusting relationships between government and the public. Data Collaboratives would also help ensure that data and their uses in federal government are equitable, by inviting a broader range of perspectives on how data analysis can promote equity and where relevant data are missing. Finally, Data Collaboratives would be one vehicle for enabling individuals to participate in science, technology, engineering, math, and medicine activities throughout their lives, increasing the quality of American science and the competitiveness of American industry…(More)”.