The Limitations of Consent as a Legal Basis for Data Processing in the Digital Society


Paper by the Centre for Information Policy Leadership: “Contemporary everyday life is increasingly permeated by digital information, whether by creating, consuming or depending on it. Most of our professional and private lives now rely to a large degree on digital interactions. As a result, access to and the use of data, and in particular personal data, are key elements and drivers of the digital economy and society. This has brought us to a significant inflection point on the issue of legitimising the processing of personal data in the wide range of contexts that are essential to our data-driven, AI-enabled digital products and services. The time has come to seriously reconsider the status of consent as a privileged legal basis and to consider alternatives that are better suited to a wide range of essential data processing contexts. The most prominent among these alternatives are the “legitimate interest” and “contractual necessity” legal bases, which have found equivalents in a number of jurisdictions. One example is Singapore, where revisions to its data protection framework include a legitimate interest exemption…(More)”.

Towards Civic Digital Twins: Co-Design the Citizen-Centric Future of Bologna


Paper by Massimiliano Luca et al: “We introduce the Civic Digital Twin (CDT), an evolution of Urban Digital Twins designed to support a citizen-centric, transformative approach to urban planning and governance. The CDT is being developed within the scope of the Bologna Digital Twin initiative, launched one year ago by the city of Bologna to fulfill the city’s political and strategic goal of adopting innovative digital tools to support decision-making and civic engagement. The CDT, in addition to its capability of sensing the city through spatial, temporal, and social data, must be able to model and simulate social dynamics in a city: the behavior, attitudes, and preferences of citizens and collectives, and how they impact city life and urban transformation processes. Another distinctive feature of the CDT is that it must be able to engage citizens (individuals, collectives, and organized civil society) and other civic stakeholders (utilities, economic actors, the third sector) interested in co-designing the future of the city. In this paper, we discuss the motivations that led to the definition of the CDT, define its modeling aspects and key research challenges, and illustrate its intended use with two use cases in urban mobility and urban development…(More)”.
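To make the social-simulation requirement concrete, the sketch below shows the kind of minimal agent-based loop such a twin implies: agents with heterogeneous preferences choose a travel mode and react to the congestion their collective choices produce. It is purely illustrative; the agent model, parameters, and feedback rule are assumptions, not the Bologna CDT's actual implementation.

```python
"""Toy sketch of a social-simulation layer for a civic digital twin.
Entirely illustrative: the preference model, 0.3 threshold, and damped
feedback are assumptions, not the Bologna CDT's actual models."""
import random

random.seed(7)

class Citizen:
    def __init__(self):
        self.car_preference = random.uniform(0, 1)  # attitude toward driving

    def choose_mode(self, congestion):
        # Preference is dampened by observed congestion (social feedback)
        return "car" if self.car_preference * (1 - congestion) > 0.3 else "transit"

citizens = [Citizen() for _ in range(1000)]
congestion = 0.0
for day in range(10):
    car_share = sum(ct.choose_mode(congestion) == "car" for ct in citizens) / len(citizens)
    congestion = 0.5 * congestion + 0.5 * car_share  # yesterday's traffic shapes today's
    print(f"day {day}: car share {car_share:.2f}, congestion {congestion:.2f}")
```

Even this toy loop settles into an equilibrium car share, which is the kind of emergent, collective behaviour the paper argues a CDT must capture before it can usefully inform planning decisions.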

Revealed: bias found in AI system used to detect UK benefits fraud


Article by Robert Booth: “An artificial intelligence system used by the UK government to detect welfare fraud is showing bias according to people’s age, disability, marital status and nationality, the Guardian can reveal.

An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.

The admission was made in documents released under the Freedom of Information Act by the Department for Work and Pensions (DWP). The “statistically significant outcome disparity” emerged in a “fairness analysis” of the automated system for universal credit advances carried out in February this year.
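For readers unfamiliar with the term, a fairness analysis of this kind typically compares selection (referral) rates across groups and tests whether the differences could plausibly be due to chance. The sketch below is a minimal illustration using hypothetical counts; the group labels, numbers, and choice of a chi-square test are assumptions, not the DWP's actual data or methodology.

```python
"""Minimal sketch of an outcome-disparity check across groups.
Hypothetical counts and groups -- NOT the DWP's data or method."""
from scipy.stats import chi2_contingency

# (referred, not referred) counts per hypothetical group
counts = {
    "group_a": (120, 880),   # 12.0% referred for investigation
    "group_b": (70, 930),    #  7.0% referred
    "group_c": (95, 905),    #  9.5% referred
}

for group, (referred, not_referred) in counts.items():
    rate = referred / (referred + not_referred)
    print(f"{group}: referral rate {rate:.1%}")

# Chi-square test of independence between group and referral outcome
table = [list(pair) for pair in counts.values()]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")
if p < 0.05:
    print("Statistically significant outcome disparity across groups.")
```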

The emergence of the bias comes after the DWP this summer claimed the AI system “does not present any immediate concerns of discrimination, unfair treatment or detrimental impact on customers”.

This assurance came in part because the final decision on whether a person gets a welfare payment is still made by a human, and officials believe the continued use of the system – which is attempting to help cut an estimated £8bn a year lost in fraud and error – is “reasonable and proportionate”.

But no fairness analysis has yet been undertaken in respect of potential bias centring on race, sex, sexual orientation and religion, or pregnancy, maternity and gender reassignment status, the disclosures reveal.

Campaigners responded by accusing the government of a “hurt first, fix later” policy and called on ministers to be more open about which groups were likely to be wrongly suspected by the algorithm of trying to cheat the system…(More)”.

The British state is blind


The Economist: “Britain is a bit bigger than it thought. In 2023 net migration stood at 906,000 people, rather more than the 740,000 previously estimated, according to the Office for National Statistics. It is equivalent to discovering an extra Slough. New numbers for 2022 also arrived. At first the ONS thought net migration stood at 606,000. Now it reckons the figure was 872,000, a difference roughly the size of Stoke-on-Trent, a small English city.

If statistics enable the state to see, then the British government is increasingly short-sighted. Fundamental questions, such as how many people arrive each year, are now tricky to answer. How many people are in work? The answer is fuzzy. Just how big is the backlog of court cases? The Ministry of Justice will not say, because it does not know. Britain is a blind state.

This causes all sorts of problems. The Labour Force Survey, once a gold standard of data collection, now struggles to provide basic figures. At one point the Resolution Foundation, an economic think-tank, reckoned the ONS had underestimated the number of workers by almost 1m since 2019. Even after the ONS rejigged its tally on December 3rd, the discrepancy is still perhaps 500,000, Resolution reckons. Things are so bad that Andrew Bailey, the governor of the Bank of England, makes jokes about the inaccuracy of Britain’s job-market stats in after-dinner speeches—akin to a pilot bursting out of the cockpit mid-flight and asking to borrow a compass, with a chuckle.

Sometimes the sums in question are vast. When the Department for Work and Pensions put out a new survey on household income in the spring, it was missing about £40bn ($51bn) of benefit income, roughly 1.5% of GDP or 13% of all welfare spending. This makes things like calculating the rate of child poverty much harder. Labour MPs want this line to go down. Yet they have little idea where the line is to begin with.

Even small numbers are hard to count. Britain has a backlog of court cases. How big no one quite knows: the Ministry of Justice has not published any data on it since March. In the summer, concerned about reliability, it held back the numbers (which means the numbers it did publish are probably wrong, says the Institute for Government, another think-tank). And there is no way of tracking someone from charge to court to prison to probation. Justice is meant to be blind, but not to her own conduct…(More)”.

Informality in Policymaking


Book edited by Lindsey Garner-Knapp, Joanna Mason, Tamara Mulherin and E. Lianne Visser: “Public policy actors spend considerable time writing policy, advising politicians, eliciting stakeholder views on policy concerns, and implementing initiatives. Yet, they also ‘hang out’ chatting at coffee machines, discuss developments in the hallway walking from one meeting to another, or wander outside to car parks for a quick word and to avoid prying eyes. Rather than interrogating the rules and procedures which govern how policies are made, this volume asks readers to begin with the informal as a concept and extend this to what people do, how they relate to each other, and how this matters.

Emerging from a desire to enquire into the lived experience of policy professionals, and to conceptualise afresh the informal in the making of public policy, Informality in Policymaking explores how informality manifests in different contexts, spaces, places, and policy arenas, and the implications of this. Including nine empirical chapters, this volume presents studies from around the world and across policy domains spanning the rural and urban, and the local to the supranational. The chapters employ interdisciplinary approaches and integrate creative elements, such as drawings of hand gestures and fieldwork photographs, in conjunction with ethnographic ‘thick descriptions’.

In unveiling the realities of how policy is made, this deeply meaningful and thoughtfully constructed collection argues that the formal is only part of the story of policymaking, and thus only part of the solutions it seeks to create. Informality in Policymaking will be of interest to researchers and policymakers alike…(More)”.

Can AI review the scientific literature — and figure out what it all means?


Article by Helen Pearson: “When Sam Rodriques was a neurobiology graduate student, he was struck by a fundamental limitation of science. Even if researchers had already produced all the information needed to understand a human cell or a brain, “I’m not sure we would know it”, he says, “because no human has the ability to understand or read all the literature and get a comprehensive view.”

Five years later, Rodriques says he is closer to solving that problem using artificial intelligence (AI). In September, he and his team at the US start-up FutureHouse announced that an AI-based system they had built could, within minutes, produce syntheses of scientific knowledge that were more accurate than Wikipedia pages. The team promptly generated Wikipedia-style entries on around 17,000 human genes, most of which previously lacked a detailed page.

Rodriques is not the only one turning to AI to help synthesize science. For decades, scholars have been trying to accelerate the onerous task of compiling bodies of research into reviews. “They’re too long, they’re incredibly intensive and they’re often out of date by the time they’re written,” says Iain Marshall, who studies research synthesis at King’s College London. The explosion of interest in large language models (LLMs), the generative-AI programs that underlie tools such as ChatGPT, is prompting fresh excitement about automating the task…(More)”.

AI adoption in the public sector


Two studies from the Joint Research Centre: “…delve into the factors that influence the adoption of Artificial Intelligence (AI) in public sector organisations.

The first report analyses a survey conducted among 574 public managers across seven EU countries, identifying the current main drivers of AI adoption and providing three key recommendations to practitioners.

Strong expertise and various organisational factors emerge as key contributors to AI adoption, while the second study sheds light on the essential competences and governance practices required for the effective adoption and use of AI in the public sector across Europe…

The study finds that AI adoption is no longer a promise for public administration but a reality, particularly in service delivery and internal operations, and to a lesser extent in policy decision-making. It also highlights the importance of organisational factors such as leadership support, an innovative culture, a clear AI strategy, and in-house expertise in fostering AI adoption. Anticipated citizen needs are also identified as a key external factor driving AI adoption.

Based on these findings, the report offers three policy recommendations. First, it suggests paying attention to AI and digitalisation in leadership programmes, organisational development and strategy building. Second, it recommends broadening in-house expertise on AI, which should include not only technical expertise but also expertise on ethics, governance, and law. Third, it advises monitoring citizen needs and levels of readiness for digital improvements in government service delivery, and exchanging views on them, for instance through focus groups and surveys…(More)”.

Access to data for research: lessons for the National Data Library from the front lines of AI innovation


Report by the Minderoo Centre for Technology and Democracy and the Bennett Institute for Public Policy: “…a series of case studies on access to data for research. These case studies illustrate the barriers that researchers are grappling with, and suggest how a new wave of policy development could help address these.

Each case study shows innovative uses of data for research in areas that are critically important to science and society.

The projects highlight crucial design considerations for the UK’s National Data Library and the need for a digital infrastructure that connects data, researchers, and the resources that enable data use. By centring the experiences of researchers on the front line of AI innovation, this report hopes to bring some of those barriers into focus and inform continued conversations in this area…(More)”.

Launching the Data-Powered Positive Deviance Course


Blog by Robin Nowok: “Data-Powered Positive Deviance (DPPD) is a new method that combines the principles of Positive Deviance with the power of digital data and advanced analytics. Positive Deviance is based on the observation that in every community or organization, some individuals achieve significantly better outcomes than their peers, despite having similar challenges and resources. These individuals or groups are referred to as positive deviants.

The DPPD method follows the same logic as the Positive Deviance approach but leverages existing, non-traditional data sources, either instead of or in conjunction with traditional data sources. This allows for the identification of positive deviants on larger geographic and temporal scales. Once identified, we can then uncover the behaviors that lead to their success, enabling others to adopt these practices.
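As a rough illustration of the identification step, the sketch below fits a baseline model of the outcome expected from each unit's resources and flags units that substantially outperform it. The synthetic data, linear baseline, and 90th-percentile cut-off are illustrative assumptions rather than part of the DPPD method specification.

```python
"""Sketch of positive-deviant identification: model the outcome expected
from each unit's context/resources, then flag units that beat expectations.
Synthetic data and thresholds are illustrative assumptions only."""
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 200
# Hypothetical resource features, e.g. budget, staffing, rainfall
resources = rng.uniform(0, 1, size=(n, 3))
outcome = resources @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.1, n)

# Baseline: outcome expected given resources alone
baseline = LinearRegression().fit(resources, outcome)
residuals = outcome - baseline.predict(resources)

# Positive deviants: units far above their expected outcome
threshold = np.quantile(residuals, 0.90)
positive_deviants = np.flatnonzero(residuals > threshold)
print(f"{len(positive_deviants)} candidate positive deviants:", positive_deviants[:10])
```

In practice the baseline would be built from the non-traditional data sources the method leverages (satellite imagery, mobility traces, administrative records), and the flagged units would then be studied qualitatively to uncover the behaviours behind their outperformance.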

In a world where top-down solutions often fall short, DPPD offers a fresh perspective. It focuses on finding what’s already working within communities, rather than imposing external solutions. This can lead to more sustainable, culturally appropriate, and effective interventions.

Our online course is designed to get you started on your DPPD journey. Through five modules, you’ll gain both theoretical knowledge and practical skills to apply DPPD in your own work…(More)”.

Access, Signal, Action: Data Stewardship Lessons from Valencia’s Floods


Article by Marta Poblet, Stefaan Verhulst, and Anna Colom: “Valencia has a rich history in water management, a legacy shaped by both triumphs and tragedies. This connection to water is embedded in the city’s identity, yet modern floods test its resilience in new ways.

During the recent floods, Valencians experienced a troubling paradox. In today’s connected world, digital information flows through traditional and social media, weather apps, and government alert systems designed to warn us of danger and guide rapid responses. Despite this abundance of data, a tragedy unfolded last month in Valencia. This raises a crucial question: how can we ensure access to the right data, filter it for critical signals, and transform those signals into timely, effective action?

Data stewardship becomes essential in this process.

In particular, the devastating floods in Valencia underscore the importance of:

  • having access to data to strengthen the signal (first mile challenges)
  • separating signal from noise
  • translating signal into action (last mile challenges)…(More)”.