Problems of participatory processes in policymaking: a service design approach


Paper by Susana Díez-Calvo, Iván Lidón, Rubén Rebollar, Ignacio Gil-Pérez: “This study aims to identify and map the problems of participatory processes in policymaking through a Service Design approach… Fifteen problems of participatory processes in policymaking were identified, and some differences were observed in the perception of these problems between the stakeholders responsible for designing and implementing the participatory processes (backstage stakeholders) and those who are called upon to participate (frontstage stakeholders). The problems were found to occur at different stages of the service and to affect different stakeholders. A number of design actions were proposed to help mitigate these problems from a human-centred approach. These included process improvements, digital opportunities, new technologies and staff training, among others…(More)”.

The disparities and development trajectories of nations in achieving the sustainable development goals


Paper by Fengmei Ma, et al: “The Sustainable Development Goals (SDGs) provide a comprehensive framework for societal progress and planetary health. However, it remains unclear whether universal patterns exist in how nations pursue these goals and whether key development areas are being overlooked. Here, we apply the product space methodology, widely used in development economics, to construct an ‘SDG space of nations’. The SDG space models the relative performance and specialization patterns of 166 countries across 96 SDG indicators from 2000 to 2022. Our SDG space reveals a polarized global landscape, characterized by distinct groups of nations, each specializing in specific development indicators. Furthermore, we find that as countries improve their overall SDG scores, they tend to modify their sustainable development trajectories, pursuing different development objectives. Additionally, we identify orphaned SDG indicators — areas where certain country groups remain under-specialized. These patterns, and the SDG space more broadly, provide a high-resolution tool to understand and evaluate the progress and disparities of countries towards achieving the SDGs…(More)”
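The ‘SDG space’ described above follows the standard product-space recipe from development economics (Hidalgo et al., 2007): countries are marked as specialised in an indicator when their relative performance exceeds the world average, and indicators are linked by how often countries are co-specialised in them. A minimal sketch of that construction, using an illustrative random score matrix rather than the paper’s data or its exact normalisation, might look like this:

```python
import numpy as np

# Illustrative input only: rows are countries, columns are SDG indicators,
# and entries are non-negative performance scores. The paper's own
# normalisation of the 96 indicators may differ from this sketch.
rng = np.random.default_rng(0)
scores = rng.random((166, 96))  # 166 countries x 96 indicators

# Relative specialisation (the trade-economics RCA analogue): a country's
# share of an indicator divided by the world's share of that indicator.
country_share = scores / scores.sum(axis=1, keepdims=True)
world_share = scores.sum(axis=0) / scores.sum()
rca = country_share / world_share

# A country counts as "specialised" in an indicator when RCA >= 1.
M = (rca >= 1).astype(float)

# Proximity between two indicators: the number of countries specialised in
# both, divided by the larger of the two specialisation counts (equivalent
# to the minimum of the two conditional co-specialisation probabilities).
co_spec = M.T @ M                  # 96 x 96 co-specialisation counts
ubiquity = M.sum(axis=0)           # countries specialised per indicator
proximity = co_spec / np.maximum.outer(ubiquity, ubiquity)
np.fill_diagonal(proximity, 0.0)

# The "SDG space" is then the network whose nodes are indicators and whose
# weighted edges are these proximities (typically thresholded or reduced to
# a maximum spanning tree before visualisation).
print(proximity.shape)
```

The resulting proximity matrix defines a weighted indicator network on which country specialisation patterns, and how they shift over time, can then be mapped.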

Developing a Framework for Collective Data Rights


Report by Jeni Tennison: “Are collective data rights really necessary? Or, do people and communities already have sufficient rights to address harms through equality, public administration or consumer law? Might collective data rights even be harmful by undermining individual data rights or creating unjust collectivities? If we did have collective data rights, what should they look like? And how could they be introduced into legislation?

Data protection law and policy are founded on the notion of individual notice and consent, originating from the handling of personal data gathered for medical and scientific research. However, recent work on data governance has highlighted shortcomings with the notice-and-consent approach, especially in an age of big data and artificial intelligence. This special report considers the need for collective data rights by examining legal remedies currently available in the United Kingdom in three scenarios where the people affected by algorithmic decision making are not data subjects and therefore do not have individual data protection rights…(More)”.

Un-Plateauing Corruption Research? Perhaps less necessary, but more exciting than one might think


Article by Dieter Zinnbauer: “There is a sense in the anti-corruption research community that we may have reached some plateau (or less politely, hit a wall). This article argues – at least partly – against this claim.

We may have reached a plateau with regard to some recurring (staid?) scholarly and policy debates that resurface with eerie regularity, tend to suck all oxygen out of the room, yet remain essentially unsettled and irresolvable. Questions aimed at arriving at closure on what constitutes corruption, passing authoritative judgements on what works and what does not, and rather grand pronouncements on whether progress has or has not been made all fall into this category.

At the same time, there is exciting work, often in unexpected places outside the inner ward of the anti-corruption castle, contributing new approaches and fresh-ish insights, and there are promising leads for exciting research on the horizon. Such areas include the underappreciated idiosyncrasies of corruption in the form of inaction rather than action; the use of satellites and remote sensing techniques to better understand and measure corruption; the overlooked role of short-sellers in tackling complex forms of corporate corruption; and the growing phenomenon of integrity capture, the anti-corruption apparatus co-opted for sinister, corrupt purposes.

These are just four examples of the colourful opportunity tapestry for (anti)corruption research moving forward, not in the form of a great unified project and overarching new idea but as little stabs of potentiality here and there and somewhere else surprisingly unbeknownst…(More)”.

Reimagining data for Open Source AI: A call to action


Report by Open Source Initiative: “Artificial intelligence (AI) is changing the world at a remarkable pace, with Open Source AI playing a pivotal role in shaping its trajectory. Yet, as AI advances, a fundamental challenge emerges: How do we create a data ecosystem that is not only robust but also equitable and sustainable?

The Open Source Initiative (OSI) and Open Future have taken a significant step toward addressing this challenge by releasing a white paper: “Data Governance in Open Source AI: Enabling Responsible and Systematic Access.” This document is the culmination of a global co-design process, enriched by insights from a vibrant two-day workshop held in Paris in October 2024….

The white paper offers a blueprint for a data ecosystem rooted in fairness, inclusivity and sustainability. It calls for two transformative shifts:

  1. From Open Data to Data Commons: Moving beyond the notion of unrestricted data to a model that balances openness with the rights and needs of all stakeholders.
  2. Broadening the stakeholder universe: Creating collaborative frameworks that unite communities, stewards and creators in equitable data-sharing practices.

To bring these shifts to life, the white paper delves into six critical focus areas:

  • Data preparation
  • Preference signaling and licensing
  • Data stewards and custodians
  • Environmental sustainability
  • Reciprocity and compensation
  • Policy interventions…(More)”

Wikenigma – an Encyclopedia of Unknowns


About: “Wikenigma is a unique wiki-based resource specifically dedicated to documenting fundamental gaps in human knowledge.

Listing scientific and academic questions to which no-one, anywhere, has yet been able to provide a definitive answer. [ 1141 so far ]

That’s to say, a compendium of so-called ‘Known Unknowns’.

The idea is to inspire and promote interest in scientific and academic research by highlighting opportunities to investigate problems which no-one has yet been able to solve.

You can start browsing the content via the main menu on the left (or in the ‘Main Menu’ section if you’re using a small-screen device). Alternatively, the search box (above right) will find any articles with details that match your search terms…(More)”.

Overcoming challenges associated with broad sharing of human genomic data


Paper by Jonathan E. LoTempio Jr & Jonathan D. Moreno: “Since the Human Genome Project, the consensus position in genomics has been that data should be shared widely to achieve the greatest societal benefit. This position relies on imprecise definitions of the concept of ‘broad data sharing’. Accordingly, the implementation of data sharing varies among landmark genomic studies. In this Perspective, we identify definitions of broad that have been used interchangeably, despite their distinct implications. We further offer a framework with clarified concepts for genomic data sharing and probe six examples in genomics that produced public data. Finally, we articulate three challenges. First, we explore the need to reinterpret the limits of general research use data. Second, we consider the governance of public data deposition from extant samples. Third, we ask whether, in light of changing concepts of broad, participants should be encouraged to share their status as participants publicly or not. Each of these challenges is followed with recommendations…(More)”.

Superbloom: How Technologies of Connection Tear Us Apart


Book by Nicholas Carr: “From the telegraph and telephone in the 1800s to the internet and social media in our own day, the public has welcomed new communication systems. Whenever people gain more power to share information, the assumption goes, society prospers. Superbloom tells a startlingly different story. As communication becomes more mechanized and efficient, it breeds confusion more than understanding, strife more than harmony. Media technologies all too often bring out the worst in us.

A celebrated writer on the human consequences of technology, Nicholas Carr reorients the conversation around modern communication, challenging some of our most cherished beliefs about self-expression, free speech, and media democratization. He reveals how messaging apps strip nuance from conversation, how “digital crowding” erodes empathy and triggers aggression, how online political debates narrow our minds and distort our perceptions, and how advances in AI are further blurring the already hazy line between fantasy and reality.

Even as Carr shows how tech companies and their tools of connection have failed us, he forces us to confront inconvenient truths about our own nature. The human psyche, it turns out, is profoundly ill-suited to the “superbloom” of information that technology has unleashed.

With rich psychological insights and vivid examples drawn from history and science, Superbloom provides both a panoramic view of how media shapes society and an intimate examination of the fate of the self in a time of radical dislocation. It may be too late to change the system, Carr counsels, but it’s not too late to change ourselves…(More)”.

Towards Best Practices for Open Datasets for LLM Training


Paper by Stefan Baack et al: “Many AI companies are training their large language models (LLMs) on data without the permission of the copyright owners. The permissibility of doing so varies by jurisdiction: in countries like the EU and Japan, this is allowed under certain restrictions, while in the United States, the legal landscape is more ambiguous. Regardless of the legal status, concerns from creative producers have led to several high-profile copyright lawsuits, and the threat of litigation is commonly cited as a reason for the recent trend towards minimizing the information shared about training datasets by both corporate and public interest actors. This trend in limiting data information causes harm by hindering transparency, accountability, and innovation in the broader ecosystem by denying researchers, auditors, and impacted individuals access to the information needed to understand AI models.

While this could be mitigated by training language models on open access and public domain data, at the time of writing, there are no such models (trained at a meaningful scale) due to the substantial technical and sociological challenges in assembling the necessary corpus. These challenges include incomplete and unreliable metadata, the cost and complexity of digitizing physical records, and the diverse set of legal and technical skills required to ensure relevance and responsibility in a quickly changing landscape. Building towards a future where AI systems can be trained on openly licensed data that is responsibly curated and governed requires collaboration across legal, technical, and policy domains, along with investments in metadata standards, digitization, and fostering a culture of openness…(More)”.

Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models


Article by Yaqub Chaudhary and Jonnie Penn: “The rapid proliferation of large language models (LLMs) invites the possibility of a new marketplace for behavioral and psychological data that signals intent. This brief article introduces some initial features of that emerging marketplace. We survey recent efforts by tech executives to position the capture, manipulation, and commodification of human intentionality as a lucrative parallel to—and viable extension of—the now-dominant attention economy, which has bent consumer, civic, and media norms around users’ finite attention spans since the 1990s. We call this follow-on the intention economy. We characterize it in two ways. First, as a competition, initially, between established tech players armed with the infrastructural and data capacities needed to vie for first-mover advantage on a new frontier of persuasive technologies. Second, as a commodification of hitherto unreachable levels of explicit and implicit data that signal intent, namely those signals borne of combining (a) hyper-personalized manipulation via LLM-based sycophancy, ingratiation, and emotional infiltration and (b) increasingly detailed categorization of online activity elicited through natural language.

This new dimension of automated persuasion draws on the unique capabilities of LLMs and generative AI more broadly, which intervene not only on what users want, but also, to cite Williams, “what they want to want” (Williams, 2018, p. 122). We demonstrate through a close reading of recent technical and critical literature (including unpublished papers from ArXiv) that such tools are already being explored to elicit, infer, collect, record, understand, forecast, and ultimately manipulate, modulate, and commodify human plans and purposes, both mundane (e.g., selecting a hotel) and profound (e.g., selecting a political candidate)…(More)”.