Stefaan Verhulst
Article by William Hague: “I was born Scottish and I will never be British,” tweeted Fiona last year on X, with the hashtag “ScottishIndependence”. Jake joined in with a picture of the saltire, urging his followers to retweet it if they were proud to be Scottish. One Ewan McGregor added to the excitement, insisting “the call for independence is no longer a dream — it’s a democratic necessity”.
But then the internet in Iran was shut down as US bombers attacked the country’s nuclear sites. Suddenly, Fiona, Jake, Ewan and dozens of other keen advocates of Scottish independence stopped posting messages. Last month, as the regime launched its murderous crackdown on peaceful protesters, the same happened again. The truth has been revealed: large numbers of social media accounts with Scottish-sounding names, all advocating the break-up of the UK, are actually Iranian bots.
The disinformation-research firm Cyabra reported that in May and June last year, before the internet went dark in Iran, 26 per cent of all accounts arguing for Scottish independence were fake. An earlier study by Clemson University found that 4 per cent of all X content relating to independence was linked to a single network of Iranian-backed bots, generating several times more activity than the Scottish National Party.
It is time we recognised that democracy is under serious and sustained attack, not only in Ukraine by military invasion, or in Hong Kong where it has been ruthlessly quashed, but across the globe…
Yet democracy doesn’t just need defending. It needs renewing. We should expect our parties to produce plans to improve accountability, speed up government and involve responsible citizens. My own list of ideas would include allowing voters to recall MPs who defect to a different party, forcing them to face a by-election. Having served as an MP for 26 years, I cannot imagine how an elected member can look constituents in the eye after ignoring their wishes so completely. But that is a topical reaction to recent events. More fundamental would be the use of digital technology to dramatically speed up the processes of government. This has begun: the recent use of AI to rapidly analyse the thousands of responses to a consultation on abolishing Ofwat shows how we can use new technologies to improve decisions in a democracy.
Much more use could be made of citizens’ assemblies. Wouldn’t the debates on assisted dying have benefited from parliament convening a body of citizens giving their informed views, as Demos advocated? Or couldn’t ministers have saved themselves the endless U-turns on digital ID if they had asked such an assembly what they thought? If Ireland could sort out its abortion laws that way, many intractable issues could be tackled with the participation of voters…(More)”.
Paper by Maryam Lotfian et al: “The integration of Artificial Intelligence (AI) into Citizen Science (CS) is transforming how communities collect, analyze, and share data, offering opportunities for enhanced efficiency, accuracy, and scalability of CS projects. AI technologies such as natural language processing, anomaly detection systems, and predictive modeling are increasingly being used to address challenges such as data validation, participant engagement, and large-scale analysis in CS projects. However, this integration also introduces significant risks and challenges, including ethical concerns related to transparency, accountability, and bias, as well as the potential demotivation of participants through automation of meaningful tasks. Furthermore, issues such as algorithmic opacity and data ownership can undermine trust in community-driven projects. This paper explores the dual impact of AI on CS. It emphasizes the need for a balanced approach where technological advancements do not overshadow the foundational principles of community participation, openness, and volunteer-driven efforts. Drawing from insights shared during a panel discussion with experts from diverse fields, this paper provides a roadmap for the responsible integration of AI into CS. Key considerations include developing standards and legal and ethical frameworks, promoting digital inclusivity, balancing technology with human capacity, and ensuring environmental sustainability…(More)”.
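The data-validation challenge the paper mentions lends itself to a concrete illustration. Below is a minimal sketch, assuming a citizen-science project collecting numeric observations (say, water-temperature readings): an isolation forest flags submissions that deviate from the bulk of the data for human review. The data, thresholds, and function name are hypothetical, not from the paper.

```python
# A minimal sketch of AI-assisted validation of citizen-science submissions.
# All names and data here are illustrative; the paper does not prescribe a
# specific method. IsolationForest is one common anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated volunteer readings: mostly plausible water temperatures (deg C)
# plus a few implausible outliers (sensor errors or typos).
readings = np.concatenate([
    rng.normal(loc=15.0, scale=2.0, size=200),   # typical submissions
    np.array([-40.0, 95.0, 150.0]),              # suspect submissions
]).reshape(-1, 1)

detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(readings)  # -1 marks anomalies, 1 marks inliers

flagged = readings[labels == -1].ravel()
print(f"Flagged {flagged.size} readings for volunteer/expert review: {flagged}")
```

Note that the sketch only flags records for review rather than auto-rejecting them; keeping the final judgment with volunteers or experts is one way to automate the tedium without removing the meaningful tasks the paper warns against automating away.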
Paper by Charles I. Jones: “Artificial intelligence (A.I.) will likely be the most important technology we have ever developed. Technologies such as electricity, semiconductors, and the internet have been transformative, reshaping economic activity and dramatically increasing living standards throughout the world. In some sense, artificial intelligence is simply the latest of these general purpose technologies and at a minimum should continue the economic transformation that has been ongoing for the past century. However, the case can certainly be made that this time is different. Automating intelligence itself arguably has broader effects than electricity or semiconductors. What if machines—A.I. for cognitive tasks and A.I. plus advanced robots for physical tasks—can perform every task a human can do but more cheaply? What does economics have to say about this possibility, and what might our economic future look like?…(More)”.
Report by the IPPR: “Already, 24 per cent of people report using AI for information seeking every week. But there is widespread concern that the information provided will be inaccurate or biased, and that the rise in AI will threaten news organisations’ survival. As these risks materialise and undermine trusted information flows, we are missing opportunities for AI to become a positive force within the news ecosystem.
At present, AI acts as an opaque and at times unreliable interface for news, with AI companies making invisible editorial choices that reshape the public’s access to information. It’s also beginning to erode existing financial incentives to produce news, without a clear sense of how high-quality journalism will be financed in the future.
This direction for AI and news is not inevitable, and a more positive transformation is possible. If we act soon, this moment can in fact be an opportunity for a reset…(More)”.
Article by Miklós Koren, Gábor Békés, Julian Hinz, and Aaron Lohmann: “Generative AI is changing how software is produced and used. In vibe coding, an AI agent builds software by selecting and assembling open-source software (OSS), often without users directly reading documentation, reporting bugs, or otherwise engaging with maintainers. We study the equilibrium effects of vibe coding on the OSS ecosystem. We develop a model with endogenous entry and heterogeneous project quality in which OSS is a scalable input into producing more software. Users choose whether to use OSS directly or through vibe coding. Vibe coding raises productivity by lowering the cost of using and building on existing code, but it also weakens the user engagement through which many maintainers earn returns. When OSS is monetized only through direct user engagement, greater adoption of vibe coding lowers entry and sharing, reduces the availability and quality of OSS, and reduces welfare despite higher productivity. Sustaining OSS at its current scale under widespread vibe coding requires major changes in how maintainers are paid…(More)”.
Article by Tina Chakrabarty: “…Intelligent agents—autonomous software entities that can learn, act and collaborate to maintain and enhance data ecosystems—are shaping the next frontier of enterprise data. These agents can:
- Continuously scan datasets for drift, bias and integrity issues.
- Auto-classify data based on use-case sensitivity.
- Generate enriched metadata and context.
- Recommend access controls based on behavioral patterns.
- Validate whether data is fit for use by LLMs.
- Trigger alerts and remediation without human intervention.
Instead of humans managing data tasks manually, agents become active co-pilots—ensuring that every data element is AI-ready. This shift from passive governance to proactive enablement is already transforming how AI models scale globally…(More)”.
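The first task on that list, continuous drift scanning, is easy to make concrete. Below is a minimal sketch, assuming an agent that compares a live data column against a reference sample using a two-sample Kolmogorov-Smirnov test; the function name, threshold, and alerting step are illustrative assumptions, not details from the article.

```python
# A minimal sketch of one agent task from the list above: scanning a dataset
# column for distribution drift against a reference sample. The threshold and
# alerting hook are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def scan_for_drift(reference: np.ndarray, current: np.ndarray,
                   p_threshold: float = 0.01) -> dict:
    """Compare current data against a reference sample with a two-sample
    Kolmogorov-Smirnov test and flag drift when distributions diverge."""
    statistic, p_value = ks_2samp(reference, current)
    drifted = p_value < p_threshold  # low p-value: distributions differ
    return {"statistic": statistic, "p_value": p_value, "drifted": drifted}

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-era data
    current = rng.normal(loc=0.4, scale=1.0, size=5_000)    # shifted live data
    report = scan_for_drift(reference, current)
    if report["drifted"]:
        # In a fuller agent this would trigger the remediation workflow.
        print(f"Drift detected (p={report['p_value']:.2e}); raising alert.")
```

In a fuller agent, the final print would be replaced by the "trigger alerts and remediation" step from the list above, feeding a ticketing or rollback workflow rather than standard output.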
About: “The Better Deal for Data (BD4D) is a lightweight data governance standard for the social sector. It offers a practical alternative to the norm of collecting extensive data on individuals and organizations, and often using that data against their interests. In adopting the BD4D, and publicly declaring that they will uphold its seven Commitments, organizations will demonstrate that their data practices are trustworthy.
At the core of the BD4D is its Declaration and seven Commitments. These are plain language statements about an organization’s use of data. The Commitments are supported with explanatory text that details when the Commitments apply and don’t apply, and what an organization needs to do to comply with each. The Declaration, Commitments, and explanatory text make up the BD4D Standard.
Trust is key to the Better Deal for Data. The BD4D is not formulaic legal language, although adopting organizations are expected to be legally bound to the commitments they are making. Nor is the BD4D a technical standard with extensive specifications on what is and is not permitted in data handling. It is a trust standard, defined by a set of principles that the great majority of nonprofit leaders would find reasonable and consistent with their nonprofit mission.
We believe that the concept of “no surprises” is essential to trust: that the individuals and communities served by an organization should never be surprised by its actions when it comes to data. Thus, a BD4D Adopter should provide information about its data handling in a spirit of honest transparency. Its community should find that the organization’s use of their data is clearly consistent with its social mission. Organizations looking for a loophole, or to do the bare minimum on data responsibility, are not good candidates for BD4D adoption.
We encourage organizations to see the BD4D Commitments as a floor, a set of minimum requirements that could and should be exceeded, and never as a ceiling that limits their commitment to ethical data practices. Organizations in many fields and jurisdictions will have more stringent practices or requirements placed on their data activities, and we see complying with such requirements as wholly consistent with the BD4D…(More)”.
Paper by Andreas P. Distel, Christoph Grimpe, and Marion Poetz: “We examine the use of scientific research in the development of policy documents within the context of clinical practice guidelines (CPGs) for diagnosing, treating, and managing diabetes. Using natural language processing, we identify “hidden citations” (i.e., textual credit without formal citations) and “token citations” (i.e., formal citations without textual credit) to scientific research within CPGs to understand how scientific evidence is selected and integrated. We find that both types of citations are pervasive, calling into question the use of formal citations alone in understanding the societal impact of scientific research. Using data on scholarly citations and expert ratings, we find that hidden citations are positively associated with the actual impact of the research on patients and caregivers, while token citations associate positively with scientific impact. Qualitative insights gathered from interviews with senior guideline writers further illustrate why scientific research comes to serve these different functions: balancing scientific rigor with practical demands in the guideline-writing process, the need for local adaptations, political dynamics at the organizational level, and individual preferences for certain types of studies or for experiential knowledge. Our work underscores the critical role of research utilization in translating scientific evidence into policy, showing that policymaker decisions shape societal impact as much as the engagement efforts of scientists, and extends institutional accounts of symbolic and substantive knowledge use…(More)”.
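The paper's measurement idea can be illustrated with a toy pipeline. The sketch below substitutes a simple TF-IDF cosine-similarity matcher for the authors' actual NLP method (which the excerpt does not specify) and flags guideline sentences that echo a paper's abstract without a formal citation; all sentences, DOIs, and thresholds are hypothetical.

```python
# A hedged illustration of detecting "hidden citations" (textual credit
# without a formal citation). This is not the authors' pipeline; it swaps in
# a basic TF-IDF cosine-similarity match as a stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

guideline_sentences = [
    "Metformin is recommended as first-line therapy for type 2 diabetes.",
    "Patients should receive annual retinal screening.",
]
paper_abstracts = {  # hypothetical DOIs and abstracts
    "doi:10.0000/example-1": "Trial of metformin as first-line therapy in type 2 diabetes.",
    "doi:10.0000/example-2": "Cost-effectiveness of annual retinal screening programmes.",
}
formally_cited = {"doi:10.0000/example-2"}  # DOIs in the guideline's reference list

vectorizer = TfidfVectorizer()
corpus = guideline_sentences + list(paper_abstracts.values())
matrix = vectorizer.fit_transform(corpus)
n = len(guideline_sentences)
sims = cosine_similarity(matrix[:n], matrix[n:])

SIM_THRESHOLD = 0.5  # illustrative cutoff for "textual credit"
for i, sentence in enumerate(guideline_sentences):
    for j, doi in enumerate(paper_abstracts):
        if sims[i, j] >= SIM_THRESHOLD and doi not in formally_cited:
            print(f"Possible hidden citation: {doi!r} echoed in: {sentence!r}")
```

A "token citation" check would run the same comparison in reverse: entries in the reference list whose text is never echoed in the guideline body at any meaningful similarity.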
Article by Cheryl M. Danton and Christopher Graziul: “…Data sovereignty is a critical issue for Indigenous communities wary of extractive practices, a conversation that predates current debates (Kukutai and Taylor, 2016). We speak about the American context here, which is influenced by Canadian efforts to support First Nations (Carroll et al., 2020), but the tensions involved emerge in multiple contexts around the world (e.g., Australia, see Lovett et al., 2020). We cannot speak to these contexts individually but highlight relevant aspects of Indigenous data sovereignty in the United States as an example.
The FAIR principles—published in 2016 to promote best practices in scientific data sharing—are designed to make data “Findable, Accessible, Interoperable, and Reusable” (Wilkinson et al., 2016). A complementary set of principles, the CARE Principles—“Collective Benefit, Authority to Control, Responsibility, and Ethics”—were developed by the International Indigenous Data Sovereignty Interest Group, through consultations with Indigenous Peoples, academic experts, government representatives, and other affected parties, in response to increasing concerns regarding the secondary use of data belonging to Indigenous communities. According to their authors, the CARE Principles integrate Indigenous worldviews that center “people” and “purpose” to address critical gaps in conventional data frameworks by ensuring that Indigenous Peoples benefit from data activities and maintain control over their data (Carroll et al., 2020)…(More)”.
Paper by Giliberto Capano, Maria Tullia Galanti, Karin Ingold, Evangelia Petridou & Christopher M. Weible: “Theories of the policy process understand the dynamics of policymaking as the result of the interaction of structural and agency variables. While these theories tend to conceptualise structural variables in a careful manner, agency (i.e. the actions of individual agents, like policy entrepreneurs, policy leaders, policy brokers, and policy experts) is left as a residual piece in the puzzle of the causality of change and stability. This treatment of agency leaves room for conceptual overlaps, analytical confusion and empirical shortcomings that can complicate the life of the empirical researcher and, most importantly, hinder the ability of theories of the policy process to fully address the drivers of variation in policy dynamics. Drawing on Merton’s concept of function, this article presents a novel theorization of agency in the policy process. We start from the assumption that agency functions are a necessary component through which policy dynamics evolve. We then theorise that agency can fulfil four main functions – steering, innovation, intermediation and intelligence – that need to be performed by individual agents in any policy process through four patterns of action – leadership, entrepreneurship, brokerage and knowledge accumulation – and we provide a roadmap for operationalising and measuring these concepts. We then demonstrate what can be achieved in terms of analytical clarity and potential theoretical leverage by applying this novel conceptualisation to two major policy process theories: the Multiple Streams Framework (MSF) and the Advocacy Coalition Framework (ACF)…(More)”.