Stefaan Verhulst
Article by Ruchika Joshi and Miranda Bogen: “The ability to remember you and your preferences is rapidly becoming a big selling point for AI chatbots and agents.
Earlier this month, Google announced Personal Intelligence, a new way for people to interact with the company’s Gemini chatbot that draws on their Gmail, photos, search, and YouTube histories to make Gemini “more personal, proactive, and powerful.” It echoes similar moves by OpenAI, Anthropic, and Meta to add new ways for their AI products to remember and draw from people’s personal details and preferences. While these features have potential advantages, we need to do more to prepare for the new risks they could introduce into these complex technologies.
Personalized, interactive AI systems are built to act on our behalf, maintain context across conversations, and improve our ability to carry out all sorts of tasks, from booking travel to filing taxes. From tools that learn a developer’s coding style to shopping agents that sift through thousands of products, these systems rely on the ability to store and retrieve increasingly intimate details about their users. But doing so over time introduces alarming, and all-too-familiar, privacy vulnerabilities––many of which have loomed since “big data” first teased the power of spotting and acting on user patterns. Worse, AI agents now appear poised to plow through whatever safeguards had been adopted to avoid those vulnerabilities.
Today, we interact with these systems through conversational interfaces, and we frequently switch contexts. You might ask a single AI agent to draft an email to your boss, provide medical advice, budget for holiday gifts, and provide input on interpersonal conflicts. Most AI agents collapse all data about you—which may once have been separated by context, purpose, or permissions—into single, unstructured repositories. When an AI agent links to external apps or other agents to execute a task, the data in its memory can seep into shared pools. This technical reality creates the potential for unprecedented privacy breaches that expose not only isolated data points, but the entire mosaic of people’s lives…(More)”.
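The contrast the authors draw can be made concrete with a small sketch. The store below is purely illustrative (the `ScopedMemory` class, its buckets, and the grant table are invented for the example, not any vendor's design): memories stay partitioned by context, and a connected tool must pass a permission check to read them, which is precisely the separation that collapses when everything lands in one unstructured repository.

```python
# Illustrative sketch only: a context-partitioned agent memory with per-tool
# read grants, in contrast to the single shared pool described above.
from dataclasses import dataclass, field


@dataclass
class ScopedMemory:
    # Each context (e.g. "health", "work", "finance") gets its own bucket.
    buckets: dict[str, list[str]] = field(default_factory=dict)
    # Which contexts each connected tool or agent may read from.
    grants: dict[str, set[str]] = field(default_factory=dict)

    def remember(self, context: str, fact: str) -> None:
        self.buckets.setdefault(context, []).append(fact)

    def recall(self, requester: str, context: str) -> list[str]:
        # A tool sees only the contexts it was explicitly granted -- the
        # check that disappears when all memories live in one pool.
        if context not in self.grants.get(requester, set()):
            raise PermissionError(f"{requester} may not read {context!r}")
        return self.buckets.get(context, [])


memory = ScopedMemory()
memory.remember("health", "allergic to penicillin")
memory.remember("work", "prefers concise emails to the boss")
memory.grants["travel-booking-tool"] = {"work"}
print(memory.recall("travel-booking-tool", "work"))  # allowed
# memory.recall("travel-booking-tool", "health")     # raises PermissionError
```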
Article by Sarah Wray: “The UK Government Digital Service (GDS) has published new guidelines to help public sector organisations prepare their datasets for use with artificial intelligence. Alongside a four-pillar framework, the guidance includes an AI-ready data action plan and a self-assessment checklist.
The document states that: “The United Kingdom is at a critical inflection point in its adoption of artificial intelligence across sectors. While advances in machine learning, generative AI capabilities, and agentic AI capabilities continue at pace, the effectiveness, safety, and legitimacy of AI adoption remain fundamentally constrained by the quality, structure, and governance of underlying data.”
The guidelines, which were shaped by input from public sector bodies, departments and expert organisations, set out four pillars of AI-ready datasets to address these issues: technical optimisation; data and metadata quality; organisational and infrastructure context; and legal, security and ethical compliance.
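The guidance pairs the four pillars with a self-assessment checklist. As a rough sketch of how a team might operationalise pillar-level scoring (the questions below are invented for the example and are not the official GDS checklist items):

```python
# Hypothetical self-assessment along the four pillars named in the GDS
# guidance; the individual questions are illustrative placeholders.
PILLARS = {
    "technical optimisation": [
        "Is the dataset available in a machine-readable format?",
        "Are unstructured records (PDFs, scans) extracted into structured fields?",
    ],
    "data and metadata quality": [
        "Are fields documented with definitions, units, and provenance?",
        "Is missingness measured and recorded?",
    ],
    "organisational and infrastructure context": [
        "Is there a named owner accountable for the dataset?",
        "Can the hosting infrastructure support AI workloads?",
    ],
    "legal, security and ethical compliance": [
        "Is the legal basis for AI use of this data recorded?",
        "Has an ethical review been carried out at dataset level?",
    ],
}


def assess(answers: dict[str, bool]) -> dict[str, float]:
    """Score each pillar as the share of its questions answered 'yes'."""
    return {
        pillar: sum(answers.get(q, False) for q in questions) / len(questions)
        for pillar, questions in PILLARS.items()
    }


print(assess({"Is there a named owner accountable for the dataset?": True}))
```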
The document states that: “AI readiness is inherently socio-technical. Infrastructure modernisation, metadata fitness, and unstructured data pipelines are essential, but insufficient without clear accountability, sustained skills, and explicit legal and ethical decisioning at dataset level.”…The Department for Science, Innovation and Technology (DSIT) has also published a progress update on the National Data Library (NDL).
The forthcoming NDL is envisaged as a tool to make it “easier to find and reuse data across public sector organisations”. Its goal is to support “better prevention, intervention and detection, [and open] up data to industry, the voluntary sector, start-ups and academics to accelerate AI-driven innovation and boost growth”.
The creation of the NDL is backed by over £100m (US$138m) as part of a £1.9bn (US$2.6bn) total investment allocated to DSIT for cross-cutting digital priorities…(More)”.
Paper by Bruno Botas et al: “The increasing use of social media, particularly X (formerly Twitter), has enabled citizens to openly share their views, making it a valuable arena for examining public perceptions of immigration and its intersections with racial discrimination and xenophobia. This study analyzes Spanish digital debates from January 2020 to January 2023 through a mixed methodology that combines text pre-processing, semantic filtering of keywords, topic modeling, and sentiment analysis. A five-topic solution obtained through Latent Dirichlet Allocation (LDA) captured the main dimensions of the discourse: (1) economic and political debates on immigration, (2) international migration and refugee contexts, (3) racism and social discrimination, (4) insults, stereotypes, and xenophobic framings, and (5) small boat arrivals and maritime management. Sentiment analysis using a transformer-based model (roBERTuito) revealed a strong predominance of negativity across all topics, with sharp spikes linked to major migration crises, humanitarian emergencies, and highly mediatized cultural events. Qualitative readings of representative posts further showed that negativity was often articulated through invasion metaphors, securitarian framings, satire, and ridicule, indicating that hostility was not merely reactive but embedded in broader economic, political, and cultural registers. These findings demonstrate that discriminatory discourse in Spain is event-driven, becoming particularly salient during crises and symbolic moments, and underline the persistent role of social media in amplifying racialized exclusion and partisan polarization…(More)”.
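For readers who want to approximate the method, a condensed sketch of the pipeline follows. The LDA step uses scikit-learn with five topics to mirror the paper's five-topic solution; the sentiment step loads the public roBERTuito sentiment checkpoint from the Hugging Face Hub. Pre-processing, filtering, and hyperparameters are simplifying assumptions, not the authors' exact configuration.

```python
# Sketch of the paper's pipeline: five-topic LDA plus transformer sentiment.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from transformers import pipeline

tweets = [
    "El debate sobre inmigración domina la política económica",
    "Llegadas de pateras a las costas durante la crisis humanitaria",
    # ... in practice: the keyword-filtered corpus of Spanish posts, 2020-2023
]

# Topic modelling: LDA with k=5, mirroring the paper's five-topic solution.
vectorizer = CountVectorizer()  # real runs would also strip stop words, URLs, handles
doc_term = vectorizer.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(doc_term)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    print(f"topic {k}:", [terms[i] for i in weights.argsort()[-5:]])

# Sentiment: roBERTuito, a RoBERTa model pre-trained on Spanish tweets,
# here via its public sentiment checkpoint on the Hugging Face Hub.
sentiment = pipeline(
    "sentiment-analysis",
    model="pysentimiento/robertuito-sentiment-analysis",
)
print(sentiment(tweets[0]))  # e.g. [{'label': 'NEG', 'score': ...}]
```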
Book by Roger Kreuz: “Plagiarism and appropriation are hot topics when they appear in the news. A politician copies a section of a speech, a section of music sounds familiar, the plot of a novel follows the same pattern as an older story, a piece of scientific research is attributed to the wrong researcher… The list is endless. Allegations and convictions of such incidents can easily ruin a career and inspire gossip. People report worrying about unconsciously appropriating someone else’s work. But why do people plagiarise? How many claims of unconscious plagiarism are truthful? How is plagiarism detected, and what are the outcomes for the perpetrators and victims? Strikingly Similar uncovers the deeper psychology behind this controversial human behavior, as well as a cultural history that is far wider and more interesting than sensationalised news stories…(More)”.
Paper by Federico Bartolomucci, Edoardo Ramalli and Valeria Maria Urbano: “The potential benefits deriving from inter-organizational data sharing have increased over time, leading to an intensified interest in data ecosystems. The governance of these endeavors depends on both collaborative and data governance dimensions. However, previous research has often treated these dimensions separately, creating silos that hinder the capacity to deliver value considering their socio-technical nature. Addressing this gap, this study investigates the intertwined relationship between these two dimensions within data ecosystems. It does so by asking which relationships exist between them, which are most relevant, and what the nature of those relationships is. To this end, we adopt a multiple case study approach, analyzing five data ecosystems. The research led to the development of a conceptual framework for Integrated Governance, highlighting the need for a holistic socio-technical approach that addresses collaborative and data governance dimensions as intertwined. The framework unveils 24 core relationships between these dimensions in data ecosystems and provides insight into the nature of those relationships, distinguishing among causal, explanatory, concurrent, chronological, and overlapping ones. This work introduces a new perspective in the academic discourse on data sharing, providing actionable insights that enable practitioners to design and manage data ecosystems more effectively…(More)”.

Article by Julia Angwin: “We are in a phone war. Ever since cameras became embedded in cellphones, people have been using their devices to bear witness to state violence. But now, the state is striking back.
I don’t think it is any coincidence that Alex Pretti was holding his phone when he was shot to death by federal agents in Minneapolis. Or that Renee Good’s partner was filming a federal agent seconds before he killed Ms. Good. Agents have repeatedly knocked phones out of the hands of observers. They have beaten people filming them and followed them to their homes and threatened them. Of the 19 shootings by federal agents in the past year identified by The Trace, a news outlet that investigates gun violence, at least four involved people who were observing or documenting federal agents’ actions.
Courts have long granted citizens a First Amendment right to film in public. But this right on paper is now being increasingly contested on the streets as federal agents try to stop citizens from recording their activities…
Government officials have openly equated filming an agent with violence in statements and in court testimony. In July, Homeland Security Secretary Kristi Noem said that violence against agents includes “videotaping them where they are at, when they are out on operations.”
The nation’s founders worried that if the state had a monopoly on weapons, its citizens could be oppressed. Their answer was the Second Amendment. Now that our phones are the primary weapons of today’s information war, we should be as zealous about our right to bear phones as we are about our right to bear arms. To adopt the language of Second Amendment enthusiasts, perhaps the only thing that can eventually stop a bad guy with a gun is a good guy with a camera…(More)”.
Article by Sami Mahroum: “In my experimentation with AI-augmented policy analysis for government clients, I have found that these systems excel at what I call “sentiment-aware policy design.” While traditional tools might show that a congestion charge reduces traffic by 22%, AI systems can remind you that the term “congestion charge” polls substantially worse than “clean-air fee”; that implementation during election years multiplies political risk; and that exempting delivery vehicles creates coalition-building possibilities with small-business groups.
The point isn’t to replace human judgment. It is to make experienced insiders’ implicit political knowledge more explicit, systematic, and testable. With AI, the abstract-rationality crowd gets quantitative rigor, the bounded-rationality practitioners get political intelligence, and – crucially – both can see the other’s perspective clearly.
Moreover, when combined with web-search capabilities, AI tools can contribute near-real-time sentiment analysis. This matters because policies designed to address last quarter’s concerns might no longer fit the political terrain when they launch in the coming quarter. By the time a pension reform reaches parliament, murmurs of a recession may have changed voters’ priorities entirely.
AI-powered analysis can reveal how specific issues are being discussed across news, social media, parliamentary debates, stakeholder communications, and other channels. It can identify rising concerns and flag when a window of political opportunity has opened or closed. Such insights can help governments counter the perception that they are slow, deaf, and disconnected from everyday realities. AI cannot make governments omniscient, but it can make them more responsive and less blind to the political consequences of technical decisions.
The high-trust technocracies succeed partly because they have systematized the integration of technical excellence with political responsiveness. Now, AI offers democracies the means to do so as well…(More)”.
Article by Brian Callaci and Sandeep Vaheesan: “…The scholarship on state capacity emphasizes the plurality and unevenness of state capacities. For example, states can strengthen their capacities in some areas, such as repression, while self-consciously weakening their capacities in others, such as corporate regulation. States exercise their power for different ends and use assorted means, some good, some bad. Some state agencies deliver health care for millions, while others target working-class people through tax audits and imprisonment. Moreover, we care not just about the state’s capacity to act, but also about the democratic legitimacy of those actions. States make some decisions through democratic means, such as legislation and regulation based on public input and consultation, and others through undemocratic methods, such as court decisions. And while state capacity entails the ability of the state to pursue its own goals autonomously from powerful social groups such as large corporations, there are other social groups, like poor communities suffering the effects of environmental racism, that do not have enough power to influence the state.
Taxation, spending, regulation, and public provision are all aspects of state capacity. A prerequisite for a state that meets even Weber’s minimal criteria would be fiscal capacity: the ability to collect taxes and direct public resources to the state’s desired ends. On the taxation front, U.S. state capacity is clearly heading in the wrong direction, with corporations and wealthy individuals openly pursuing a multitude of tax avoidance strategies with little fear of negative consequences. At the sub-federal level, states and municipalities compete with one another for private investment via offers of deregulation and subsidies, allowing powerful corporations to choose the level of regulation and taxation they desire. On the spending side, the federal government’s reliance on private contractors and unwillingness to use its bargaining power as a large buyer means it has limited control over military procurement costs…(More)”.
Blog by Mohamed Shareef: “…For two decades, Asian governments have counted broadband subscriptions, celebrated connectivity percentages, and commissioned policy frameworks.
Meanwhile, fishing communities in the Maldives still can’t afford 1GB of data, Pakistani e-government services crash during internet disruptions, and Tongan government operations collapsed for five weeks after a volcanic eruption severed their only submarine cable.
The gap between digital strategy documents and actual service delivery has never been wider. Here’s how Asian governments can close it.
Measure what citizens actually experience
Your ministry reports 85 per cent internet penetration. But can your citizens actually access government services during monsoon season when submarine cables fail? Can rural hospitals use your telemedicine platform on 3G networks? What percentage of median household income does meaningful connectivity actually cost? For Asian governments, this means replacing vanity metrics with citizen-centered measurements, as in the examples below and the sketch that follows them:
Instead of: “Fiber deployed to 500 districts”. Measure: “Healthcare centers in 500 districts can access national health records during extreme weather events”
Instead of: “75 per cent smartphone penetration”. Measure: “Percentage of citizens who can afford data plans sufficient for essential government services”
Instead of: “E-government portal launched”. Measure: “Government services accessible to citizens using entry-level devices on congested networks”
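A minimal encoding of that shift (the field names and values are illustrative, not taken from the article) pairs each vanity metric with the outcome it should give way to and the stress condition under which it must hold:

```python
# Illustrative sketch: pairing reported vanity metrics with the
# citizen-experienced outcomes a digital ministry would track instead.
from dataclasses import dataclass


@dataclass
class ServiceMetric:
    vanity: str          # what gets reported today
    outcome: str         # what citizens actually experience
    measured_under: str  # the stress condition that matters

METRICS = [
    ServiceMetric(
        vanity="Fiber deployed to 500 districts",
        outcome="Health centers can reach national health records",
        measured_under="extreme weather / submarine cable outage",
    ),
    ServiceMetric(
        vanity="75 per cent smartphone penetration",
        outcome="Data plans for essential services fit median household budgets",
        measured_under="entry-level device, prepaid tariff",
    ),
    ServiceMetric(
        vanity="E-government portal launched",
        outcome="Services remain usable on entry-level devices",
        measured_under="congested 3G network",
    ),
]

for m in METRICS:
    print(f"Report '{m.outcome}' ({m.measured_under}), not '{m.vanity}'")
```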
Bangladesh’s experience with biometric identity systems, India’s Aadhaar implementation challenges, and Indonesia’s struggles with connectivity in remote islands offer lessons. The question isn’t whether you have digital infrastructure. It’s whether that infrastructure delivers services when citizens need them most…(More)”.
Paper by Pietro Bini, Lin William Cong, Xing Huang & Lawrence J. Jin: “Do generative AI models, particularly large language models (LLMs), exhibit systematic behavioral biases in economic and financial decisions? If so, how can these biases be mitigated? Drawing on the cognitive psychology and experimental economics literatures, we conduct the most comprehensive set of experiments to date—originally designed to document human biases—on prominent LLM families across model versions and scales. We document systematic patterns in LLM behavior. In preference-based tasks, responses become more human-like as models become more advanced or larger, while in belief-based tasks, advanced large-scale models frequently generate rational responses. Prompting LLMs to make rational decisions reduces biases…(More)”.
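As a toy illustration of how such a probe might be run (the prompt wording, model name, and trial count below are assumptions for the sketch, not the paper's protocol), one can present a classic certainty-effect lottery with and without a rationality instruction and compare choice frequencies:

```python
# Hypothetical bias probe in the spirit of the paper: a certainty-effect
# lottery choice, run with and without a "be rational" mitigation prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TASK = (
    "Choose A or B. "
    "A: $3,000 for sure. "
    "B: 80% chance of $4,000, 20% chance of nothing. "
    "Answer with a single letter."
)
RATIONAL = "You are a rational expected-value maximizer. "  # mitigation prompt


def ask(prompt: str, n: int = 20) -> list[str]:
    """Collect n sampled answers to the same prompt."""
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,
        )
        answers.append(resp.choices[0].message.content.strip()[:1].upper())
    return answers


baseline = ask(TASK)
prompted = ask(RATIONAL + TASK)
# B maximizes expected value ($3,200 vs $3,000); humans famously prefer A.
print("P(choose B) baseline :", baseline.count("B") / len(baseline))
print("P(choose B) prompted :", prompted.count("B") / len(prompted))
```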