Stefaan Verhulst
Book by Susan C. Stokes: “Democracies around the world are getting swept up in a wave of democratic erosion. Since the beginning of the twenty-first century, two dozen presidents and prime ministers have attacked their countries’ democratic institutions, violating political norms, aggrandizing their own powers, and often trying to overstay their terms in office.
The Backsliders offers the first general explanation for this wave. Drawing on a wealth of original research, Susan Stokes shows that increasing income inequality, a legacy of late twentieth-century globalization, left some countries especially at risk of backsliding toward autocracy. Left-behind voters were drawn to right-wing ethnonationalist leaders in countries like the United States, India, and Brazil, and to left-wing populist ones in countries like Venezuela, Mexico, and South Africa.
Unlike military leaders who abruptly kill democracies in coups, elected leaders who erode them gradually must maintain some level of public support. They do so by encouraging polarization among citizens and also by trash-talking their democracies: claiming that the institutions they attack are corrupt and incompetent. They tell voters that these institutions should be torn down and replaced by ones under the executive’s control. The Backsliders describes how journalists, judges, NGOs, and opposition leaders can put the brakes on democratic erosion, and how voters can do so through political engagement and the power of the ballot box…(More)”.
Report by Careful Industries: “…discusses what we learnt about how local communities and everyday publics make sense of data-intensive and AI-based technologies in their everyday environments.
Through engagements with participants in four locations across the UK and at one site in Australia, our research found that there is a persistent perception that government discourse on the societal benefits of AI is disconnected from the needs of local communities in the urban environments where AI innovation takes place.
The report reflects on future opportunities for public participation in AI governance and offers recommendations aiming to grow public trust and deliver better outcomes…(More)”.
European Union: “The strategy identifies three priority areas for action based on:
- Scaling up access to data for AI to ensure our businesses have access to high-quality data needed for innovation
- Streamlining data rules to give legal certainty to businesses and reduce compliance costs
- Safeguarding the EU’s data sovereignty to strengthen our global position on international data flows
Scaling up access to data for AI
Flagship initiatives to address data bottlenecks:
- Launch the first data labs to scale up data availability and create links between data spaces and AI ecosystems – they will pool both private and public resources to make high-quality sectoral data available to companies, including SMEs, and researchers using AI and provide relevant data services for AI-driven innovation
- Scale up common European data spaces, supported by ongoing EU investment of around EUR 100 million, creating new data spaces across key sectors, including a defence data space.
- Explore horizontal enablers to boost the entire data economy. In particular, expanding high-value datasets under the Open Data Directive, making 30 million digitised cultural objects available for AI training, boosting the use of synthetic data and the EU’s production output…(More)”.
Paper by Itziar Moreno and Gorka Espiau: “This paper explores the concept of adaptive governance as a critical approach to addressing complex societal challenges. These multifaceted issues, often termed “wicked problems”, require experimental and collaborative solutions beyond the capacity of traditional governance systems. The paper identifies core capabilities of adaptive governance such as ecosystem visualisation, real-time listening to social perceptions, collective deliberation, and co-creation through experimentation portfolios. Additionally, it highlights the transformative potential of high-performance computing and artificial intelligence (AI) in managing complex societal data and enhancing anticipatory governance…(More)”.
Paper by Anne Mollen: “The widespread adoption of generative artificial intelligence (AI) has led to struggles on a global scale by individual and collective actors trying to secure their autonomy with reference to generative AI. There are several examples of how generative AI impacts the ability of individuals and collectives to self-govern and exercise their free will: for instance, training data copyright violations, cultural misrepresentations, precarious working conditions of data workers, and the environmental and social justice implications of generative AI development. Based on discussions from various fields, including machine learning, AI ethics, and critical data studies, this article presents how current struggles in generative AI relate to matters of autonomy, sovereignty, and self-determination. It specifically reflects on autonomy in relation to generative AI training data, accountability, and market concentration as well as social and environmental justice. Given that these struggles over autonomy significantly relate to the materiality of generative AI, the article proposes digital self-determination and an infrastructural perspective as an analytical concept for a multi-actor, process-oriented, and situated contextual analysis of how autonomy implications manifest in generative AI infrastructures…(More)”.
An excerpt from You Must Become an Algorithmic Problem on the internet’s social contract by Jose Marichal: “In our increasingly digital world, algorithmic models take every typed word or gesture (likes), eye movement, or swipe and break them down into nodes (things) or edges (attributes), which are placed into a database. This age of incessant collecting and analysing of our digital life is what I call an algorithmic age. In the algorithmic age, the priority is to create products that can predict our future behaviour. Prior to the 2010s, prediction was important, but understanding ourselves and other humans was the priority. We have moved into a regime where data is collected not simply to understand humans for marketing or surveillance purposes, but to create artificial replicas of human thought.
Research and lived experiences increasingly show that most people are not ‘swimming against the data tide’. They are content to accept a world of algorithmic classification despite its obvious harms because it gives them a (false) promise of comfort and security. We continue to be glued to our devices, increasingly using social media platforms as the foundation of our information diets. A 2024 Pew study found that 86 percent of users got their news from digital devices, and a growing number (54 percent) sometimes or often get their news from social media platforms. On social media sites popular with young people, 40 percent of Instagram users and 52 percent of TikTok users regularly get their news from each platform (St Aubin and Liedke 2024a). If we are in the throes of algorithmic overlords, we are not acting like it.
For this reason, we should think of our relationship to algorithms through the lens of contract theory. The concept of a social contract is a core element of political theory. Political theorists have used it to justify why individuals should form allegiances to a particular political system. It is a thought experiment designed to illustrate a relationship, one that cannot possibly be universal in practice since individuals have different reasons for their allegiance to a state. Nonetheless, it is one that provides legitimacy for state power. The three most prominent applications of contract theory come from Locke, Rousseau, and Hobbes, who posit that rational actors will willingly give up their theoretical position in a ‘state of nature’ either for protection of the self (Hobbes 1967 [1651]) or for an increased preservation of rights (Locke 1996 [1689]). In Rousseau’s (1920 [1762]) case, leaving the state of nature is a fact and the only way to restore a sense of meaning and an escape from the judgement and status consciousness of modernity is to submit to your political community – the general will. In each case, the social contract justifies adherence to a political system. The system will either protect your physical person (Hobbes 1967 [1651]), preserve your natural rights (Locke 1996 [1689]), or provide you with meaning (Rousseau 1920 [1762])…(More)”
Book by César A. Hidalgo: “We all understand that knowledge shapes the fate of business and the growth of nations, but few of us are aware of the principles that govern its motion. The Infinite Alphabet unravels the laws describing the growth and diffusion of knowledge by taking you from a failed attempt to build a city of knowledge in Ecuador to the growth of China’s innovation economy. Through dozens of stories, you will learn why aircraft manufacturers in Italy began manufacturing scooters after the Second World War and how migrants like Samuel Slater shaped the industrial fabric of the United States.
Knowledge is the secret to the wealth of nations. But to understand it, we must accept that it is not a single thing, but an ever-growing tapestry of unique ideas, experiences and received wisdom. An Infinite Alphabet that we are only beginning to fathom.
César A. Hidalgo, a scholar world-renowned for his work on economic complexity, will walk you through the “three laws” and the many principles that govern how knowledge grows, moves, and decays. By the end of this journey, you will understand why knowledge grows exponentially in the electronics industry and what mechanisms govern its diffusion across geographic borders, social networks, and professional boundaries.
Together these principles will teach you how knowledge shapes the world…(More)”.
Article by James Evans and Eamon Duede: “With the emergence of AI in science, we are witnessing the prelude to a curious inversion – our human ability to instrumentally control nature is beginning to outpace human understanding of nature, and in some instances, appears possible without understanding at all. With rapid adoption of AI across all scientific disciplines, what does this mean for the future of scientific inquiry? And what comes after science?…Historically, human curiosity motivated the search to increase our scientific understanding of nature, driving us to develop new methods (including science itself), which often also yielded increased control. Algorithms, understood as tools, were never invested with the same capacity. In the emerging new era, entire regions of the scientific enterprise may not yield the same degree of human understanding despite improved control. Such a result has the potential to reduce the curiosity that drives us to seek out new questions, methods, and solutions, attenuating scientist engagement. Moreover, although algorithms for science have increased in their decision-making capacity, only a few have explicitly sought to invest them with flexibility and explicitly encoded curiosity for sustained performance.
If successful, science “after science” must rely increasingly on human curiosity…(More)”.
Book by Brian Potter: “Efficiency is the engine of civilization. But where do improvements in production efficiency come from? In The Origins of Efficiency, Brian Potter argues that improving production efficiency—finding ways to produce goods and services in less time, with less labor, using fewer resources—is the force behind some of the most consequential changes in human history. He examines the fundamental characteristics of a production process and how each can be made faster, cheaper, and more reliable, with detailed examples from a range of industries: steel and semiconductors, wind turbines and container shipping, Tesla and the Ford Model T, and more. The Origins of Efficiency is a comprehensive companion for anyone seeking to understand how we arrived at this age of material abundance—and how we can push efficiency improvements into domains like housing, medicine, and education, where much work is left to be done…(More)”.
Paper by Sarah McKenna et al: “Co-production of research, where researchers and experts by experience work as equal partners throughout a research project, can improve the quality, relevance, implementation and impact of research. However, there is limited evidence on methods for successful co-production in data-intensive research with underserved groups. In partnership with the charity Voice of Young People in Care (VOYPIC) and a group of care experienced young people, the Administrative Data Research Centre Northern Ireland (ADRC NI) piloted and evaluated a co-production approach in a research project that used linked administrative data to examine the association between care experience and mental ill health and mortality.
The aim of this paper is to report the impact of co-production using the pilot as a case study, and assess the mechanisms involved against published principles of co-production. Additionally, we consider if co-production in this context is a special case that warrants bespoke guidance…(More)”.