Stefaan Verhulst
Press Release: “The U.S. Patent and Trademark Office (USPTO) is launching DesignVision, the first artificial intelligence (AI)-based image search tool available to design patent examiners via the Patents End-to-End (PE2E) search suite. DesignVision is the latest step in the agency’s broader efforts to streamline and modernize examination and reduce application pendency.
DesignVision is an AI-powered tool that is capable of searching U.S. and foreign industrial design collections using image(s) as an input query. The tool provides centralized access and federated searching of design patents, registrations, trademarks, and industrial designs from over 80 global registers, and returns search results based on, and sortable by, image similarity.
DesignVision will augment—not replace—design examiners’ other search tools. Examiners can continue using other PE2E search tools and non-patent literature when conducting their research. The complete text of the Official Gazette Notice can be found on the patent-related notices page of the USPTO website…(More)”.
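The notice does not describe DesignVision’s internals, but the core idea it names—returning results "based on, and sortable by, image similarity" across federated registers—can be sketched in a few lines. The sketch below is purely illustrative, not the USPTO’s implementation: the register names, record IDs, and embedding vectors are invented toy data, and real systems would compute embeddings with a trained vision model.

```python
# Illustrative sketch (not USPTO's actual implementation): image-similarity
# search typically embeds each image as a vector and ranks hits by cosine
# similarity to the query embedding. All data below is hypothetical.
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy "registers": each holds records with a precomputed image embedding.
registers = {
    "US_design_patents": [("D900,001", [0.9, 0.1, 0.2]),
                          ("D900,002", [0.1, 0.8, 0.3])],
    "EU_designs": [("EU-123", [0.85, 0.15, 0.25])],
}

def federated_image_search(query_embedding, registers):
    """Query every register, merge the results, sort by image similarity."""
    hits = []
    for register, records in registers.items():
        for record_id, emb in records:
            hits.append((register, record_id,
                         cosine_similarity(query_embedding, emb)))
    return sorted(hits, key=lambda h: h[2], reverse=True)

# A query image's embedding; the closest design surfaces first.
results = federated_image_search([1.0, 0.0, 0.1], registers)
```

The "federated" part is simply that every register is queried and the merged hit list is sorted by one similarity score, which is what makes results sortable across 80-plus collections.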
Reports by GMF Technology: “The People’s Republic of China (PRC) builds and exports digital infrastructure as part of its Belt and Road Initiative. While the PRC sells technology deals as a “win-win” with partner countries, the agreements also serve Beijing’s geopolitical interests.
That makes it imperative for European and US policymakers to understand the PRC’s tech footprint and assess its global economic and security impact.
To support policymakers in this endeavor, GMF Technology, using a “technology stack” or “tech stack” framework, has produced a series of reports that map the presence of the PRC and its affiliated entities across countries’ technology domains.
Newly released reports on Kazakhstan, Kyrgyzstan, Serbia, and Uzbekistan build on previous work: two studies by GMF’s Alliance for Securing Democracy (ASD) on the future internet and the digital information stack, released in 2020 and 2022 respectively. The new reports on Central Asian countries, where Russia maintains significant influence as a legacy of Soviet rule, also examine Kremlin influence there.
The “tech stack” framework features five layers:
- Network infrastructure: including optical cables (terrestrial and undersea), telecommunications equipment, satellites, and space-based connectivity infrastructure
- Data infrastructure: including cloud technology and data centers
- Devices: including hand-held consumer instruments such as mobile phones, tablets, and laptops, and more advanced Internet-of-Things and AI-enabled devices such as electric vehicles and surveillance cameras
- Applications: including hardware, software, data analytics, and digital platforms that deliver tailored solutions to consumers, sectors, and industries (e.g., robotic manufacturing)
- Governance: including the legal and normative framework that governs technology use across the entire tech stack…(More)”.
Article by Ali Shiri: “…There are two categories of emerging LLM-enhanced tools that support academic research:
1. AI research assistants: The number of AI research assistants that support different aspects and steps of the research process is growing at an exponential rate. These technologies have the potential to enhance and extend traditional research methods in academic work. Examples include AI assistants that support:
- Concept mapping (Kumu, GitMind, MindMeister);
- Literature and systematic reviews (Elicit, Undermind, NotebookLM, SciSpace);
- Literature search (Consensus, ResearchRabbit, Connected Papers, Scite);
- Literature analysis and summarization (Scholarcy, Paper Digest, Keenious);
- Research topic and trend detection and analysis (Scinapse, tlooto, Dimension AI).
2. ‘Deep research’ AI agents: The field of artificial intelligence is advancing quickly with the rise of “deep research” AI agents. These next-generation agents combine LLMs, retrieval-augmented generation and sophisticated reasoning frameworks to conduct in-depth, multi-step analyses.
Research is under way to evaluate the quality and effectiveness of deep research tools, and new criteria are being developed to assess their performance.
Criteria include elements such as cost, speed, editing ease and overall user experience — as well as citation and writing quality, and how these deep research tools adhere to prompts…(More)”.
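The article’s description of deep research agents—LLMs combined with retrieval-augmented generation and multi-step reasoning—follows a recognizable pipeline: decompose the question, retrieve evidence per sub-question, then synthesize a cited answer. The sketch below is a minimal, purely illustrative skeleton of that pattern; every function is a stand-in (no real LLM or search API is called), and the corpus is invented.

```python
# Illustrative skeleton of the "deep research" agent pattern: plan -> retrieve
# (RAG step) -> synthesize. All functions are hypothetical stand-ins.

def plan_subquestions(question):
    # Stand-in for an LLM decomposing the question into research steps.
    return [f"{question} -- background",
            f"{question} -- evidence",
            f"{question} -- open problems"]

def retrieve(subquestion, corpus):
    # Stand-in for retrieval: keep documents sharing a keyword with the query.
    terms = set(subquestion.lower().split())
    return [doc for doc in corpus if terms & set(doc.lower().split())]

def synthesize(question, evidence):
    # Stand-in for an LLM writing the final report with numbered citations.
    cited = "; ".join(f"[{i + 1}] {doc}" for i, doc in enumerate(evidence))
    return f"Report on '{question}': {cited}"

def deep_research(question, corpus):
    evidence = []
    for sub in plan_subquestions(question):  # the multi-step analysis loop
        evidence.extend(retrieve(sub, corpus))
    return synthesize(question, evidence)

corpus = ["vaccines background note", "unrelated cooking recipe"]
report = deep_research("vaccines", corpus)
```

Evaluation criteria like citation quality and prompt adherence, mentioned above, map onto the retrieval and synthesis stand-ins: those are the steps a benchmark would score in a real agent.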
Paper by Sara Marcucci and Stefaan Verhulst: “As Artificial Intelligence (AI) systems become increasingly embedded in societal decision-making, they have simultaneously deepened longstanding asymmetries of data, information and control. Central to this dynamic is what this paper terms agency asymmetry: the systematic lack of meaningful participation by individuals and communities in shaping the data and AI systems that inform decisions that impact their lives. This asymmetry is not merely a technical or procedural shortcoming; it is a structural feature of contemporary data and AI governance that underpins a range of interrelated harms, from algorithmic opacity and marginalization to ecological degradation.
This paper proposes Digital Self-Determination (DSD) as a normative and practical framework for addressing these challenges. Building on the principle of self-determination as both individual autonomy and collective agency, DSD offers tools for empowering communities and individuals to determine how data-based technologies are designed, implemented and used.
It seeks to contribute to current debates on AI governance and provides a systematic account of how it can be operationalized across the AI lifecycle. In particular, it identifies four domains of intervention–processes, policies, people, and technologies–and illustrates how DSD approaches can be mobilized to confront agency asymmetries, whether through data stewardship models, participatory audits, or inclusive policy instruments…(More)”.
Essay by Dan Williams: “America’s epistemic challenges run deeper than social media.
Many people sense that the United States is undergoing an epistemic crisis, a breakdown in the country’s collective capacity to agree on basic facts, distinguish truth from falsehood, and adhere to norms of rational debate.
This crisis encompasses many things: rampant political lies, misinformation, and conspiracy theories; widespread beliefs in demonstrable falsehoods (“misperceptions”); intense polarization in preferred information sources; and collapsing trust in institutions meant to uphold basic standards of truth and evidence (such as science, universities, professional journalism, and public health agencies).
According to survey data, over 60% of Republicans believe Joe Biden’s presidency was illegitimate. 20% of Americans think vaccines are more dangerous than the diseases they prevent, and 36% think the specific risks of COVID-19 vaccines outweigh their benefits. Only 31% of Americans have at least a “fair amount” of confidence in mainstream media, while a record-high 36% have no trust at all.
What is driving these problems? One influential narrative blames social media platforms like Facebook, Twitter (now X), and YouTube. In the most extreme form of this narrative, such platforms are depicted as technological wrecking balls responsible for shattering the norms and institutions that kept citizens tethered to a shared reality, creating an informational Wild West dominated by viral falsehoods, bias-confirming echo chambers, and know-nothing punditry.
The timing is certainly suspicious. Facebook launched in 2004, YouTube in 2005, and Twitter in 2006. As they and other platforms acquired hundreds of millions of users over the next decade, the health of American democracy and its public sphere deteriorated. By 2016, when Donald Trump was first elected president, many experts were writing about a new “post-truth” or “misinformation” age.
Moreover, the fundamental architecture of social media platforms seems hostile to rational discourse. Algorithms that recommend content prioritize engagement over accuracy. This can amplify sensational and polarizing material or bias-confirming content, which can drag users into filter bubbles. Meanwhile, the absence of traditional gatekeepers means that influencers with no expertise or ethical scruples can reach vast audiences.
The dangerous consequences of these problems seem obvious to many casual observers of social media. And some scientific research corroborates this widespread impression. For example, a systematic review of nearly five hundred studies finds suggestive evidence for a link between digital media use and declining political trust, increasing populism, and growing polarization. Evidence also consistently shows an association between social media use and beliefs in conspiracy theories and misinformation…(More)”.
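The essay’s claim that "algorithms that recommend content prioritize engagement over accuracy" describes a concrete ranking choice, and the dynamic can be made visible with a toy example. Nothing below reflects any platform’s real algorithm: the posts and their engagement/accuracy scores are invented solely to show how the choice of ranking key changes what surfaces first.

```python
# Toy illustration (not any platform's real system) of engagement-ranked
# versus accuracy-ranked feeds. All scores are invented.

posts = [
    {"title": "Measured take on vaccine data", "engagement": 0.30, "accuracy": 0.95},
    {"title": "OUTRAGEOUS conspiracy claim!!",  "engagement": 0.90, "accuracy": 0.10},
    {"title": "Careful policy explainer",       "engagement": 0.20, "accuracy": 0.90},
]

def rank_feed(posts, key):
    # Sort the feed descending by a single scoring key.
    return sorted(posts, key=lambda p: p[key], reverse=True)

# Same content, different ranking key, very different front page.
engagement_feed = rank_feed(posts, "engagement")
accuracy_feed = rank_feed(posts, "accuracy")
```

The point of the sketch is that the harm the essay describes needs no malicious content selection, only an objective function: optimizing a single engagement signal mechanically promotes the sensational item.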
Paper by Art Alishani, Vincent Homburg, and Ott Velsberg: “Public service providers around the world are now offering chatbots to answer citizens’ questions and deliver digital services. Using these artificial intelligence-powered technologies, citizens can engage in conversations with governments through systems that mimic face-to-face interactions and adjust their use of natural language to citizens’ communication styles. This paper examines emerging experiences with chatbots in government interactions, with a focus on exploring what public administration practitioners and scholars should expect from chatbots in public encounters. Furthermore, it seeks to identify what gaps exist in the general understanding of digital public encounters…(More)”.
Article by Rahmin Sarabi: “Across the United States, democracy faces mounting challenges from polarization, public distrust, and increasingly complex societal problems. Traditional systems of civic participation—and the broader foundations of democratic governance—have struggled to adapt as media and electoral incentives increasingly reward outrage over understanding.
Despite these challenges, new possibilities are emerging. Artificial intelligence—specifically large language models (LLMs)—is beginning to serve as a transformative tool for public engagement and policymaking, led by innovative governments and civic institutions.
When used thoughtfully, LLMs can help unlock public wisdom, rebuild trust, and enable better decision-making—not by replacing human judgment, but by strengthening it. This promise doesn’t dismiss the serious concerns about AI’s impact on social cohesion, work, and democracy—which remain vital to address. Yet these emerging capabilities can enhance both institutional efficiency and, more importantly, core democratic values: inclusiveness, meaningful participation, and deliberative reasoning.
By strengthening these foundations, AI can enable the collaborative problem-solving today’s interconnected problems demand and help us renew democracy to meet the challenges of our time. This piece examines concrete applications where LLM-based AI is already enhancing democratic processes—from citizen engagement to survey and context analysis—and explores principles for scaling these innovations responsibly…(More)”.
Essay by Antón Barba-Kay: “…Consider the casual assumption that the scope and accessibility of choice is itself “democratic,” such that Reddit, Facebook, Instagram, and TikTok might be called democratic platforms because everyone gets a vote and anyone can become someone. In fact, there’s nothing intrinsically democratic about participation as such at any level. Countries do not become more democratic when the people are more frequently consulted. Expertise and state secrets are part of it, as is the fact that states need to make long-term, binding commitments. Digital consultations (such as have been used by the Five Star Movement in Italy) give undue power to those who control the software through which they take place. There is also the better reason that, when questions are placed before a mass electorate to an extent that exceeds our capacity for deliberation, there occurs a populist or oligarchic reversal. In a direct democracy such as that of ancient Athens, everything begins to turn on who poses the questions and when and why.
The point is that mass populations can be good judges of what locally touches us but are not able to collectively pay attention to what doesn’t—and that this is an entirely democratic, republican situation. I mean that the will of the people is not a fact but an artifice: It is not the input of democratic government but the output of its codified processes. Intermediate institutions such as parties, caucuses, primaries, debates, and the Electoral College were designed in theory to sublimate citizen passions and interests into more substantive formulations than their raw expression allows. The Federalist Papers explicitly speak of it as a machine—a Newtonian “system” of “bodies,” “springs,” “energies,” and “balances” operating under universally observable laws. The point of this machine is to produce the popular will by representing it. The Constitution is intended to be the context within which popular decision can find its most articulate expression through the medium of good government—which is why any one person’s claim to immediately embody the will of the people is anti-democratic.
For digital services as well as democratic processes, there is a point beyond which the intensification of involvement degrades the objects of preference themselves. Just as social media has made us impatient for headlines spicy or outrageous, the focus on and capture of choice as such can destroy the conditions of meaningful choice themselves…(More)”.
Paper by Lidis Garbovan et al: “A major driver of the transition of population data science into Trusted Research Environments (TREs) is concern regarding ‘social licence’, which holds that factors such as demonstrable public good are essential for socially acceptable data use. Yet, whilst public dialogue has helped identify the criteria that define what is ‘in the public good’, we currently lack the methodologies needed to bring diverse perspectives to bear on whether we are achieving this. The ‘Citizen Panel’ concept may be a mechanism for integrating a diverse public voice into decision-making, as a form of public audit of data access decisions.
The concept of a ‘Citizen Panel’ was developed by the organisation Understanding Patient Data as a learning data governance model for data access in Longitudinal Population Studies (LPS), one that shifts public involvement in data access from a one-way direction to a feedback cycle. To date, this model has not been deployed in the context of access to public data as Understanding Patient Data recommends, so it is not known whether the model will work as intended or how a panel would be constituted and operate in practice.
LPS employ continuous or repeated measures to follow particular individuals over prolonged periods of time—often decades or lifetimes. They are usually observational in nature and include collections of quantitative and/or qualitative data on any combination of exposures and outcomes.
UK Longitudinal Linkage Collaboration (UK LLC) – the national TRE for data linkage in longitudinal research – has been funded by UK Research and Innovation (UKRI), the Economic and Social Research Council (ESRC) and Medical Research Council (MRC) to pilot the first operational Citizen Panel…(More)”.
JRC European Commission Report: “…explores how public administrations can foster more effective collaboration by intentionally shaping the behaviours, routines, and mindsets of public servants. Drawing on cutting-edge insights from cognitive, behavioural, and organisational sciences, it presents a rigorous review of the key drivers of collaboration and how to leverage them. It serves as an evidence-informed compass to help public administrations harness their collaborative potential to effectively address today’s complex policy challenges. The main findings are distilled into twelve actionable principles that outline a clear vision for working together more effectively…(More)”.