Why more AI researchers should collaborate with governments


Article by Mohamed Ibrahim: “Artificial intelligence (AI) is beginning to transform many industries, yet its use to improve public services remains limited globally. AI-based tools could streamline access to government benefits through online chatbots or automate systems by which citizens report problems such as potholes.

Currently, scholarly advances in AI are mostly confined to academic papers and conferences, rarely translating into actionable government policies or products. This means that the expertise at universities is not used to solve real-world problems. As a No10 Innovation Fellow with the UK government and a lecturer in spatial data science, I have explored the potential of AI-driven rapid prototyping in public policy.

Take Street.AI, a prototype smartphone app that I developed, which lets citizens report issues including potholes, street violence or illegal litter dumping by simply taking a picture through the app. The AI model classifies the problem automatically and alerts the relevant local authority, passing on the location and details of the issue. A key feature of the app is its on-device processing, which ensures privacy and reduces operational costs. Similar tools were tested as an early-warning system during the riots that swept the United Kingdom in July and August 2024.
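The article does not include Street.AI's code, but the on-device pattern it describes is easy to sketch: classify the photo locally with a compact pretrained model, then pass on only the label and the location. In the sketch below, the class labels, the MobileNet backbone and the helper functions are illustrative assumptions, not details of the actual app.

```python
# Hypothetical sketch of the on-device pattern described above, not
# Street.AI's actual implementation. A compact pretrained backbone gets a
# small classification head for the report categories; a real app would
# load fine-tuned weights and run a mobile-optimised export of the model.
import torch
from PIL import Image
from torchvision import models, transforms

ISSUE_CLASSES = ["pothole", "litter_dumping", "street_violence", "other"]  # assumed labels

model = models.mobilenet_v3_small(weights="IMAGENET1K_V1")
model.classifier[-1] = torch.nn.Linear(
    model.classifier[-1].in_features, len(ISSUE_CLASSES)
)
model.eval()  # in practice: load fine-tuned weights for the new head here

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify(image_path: str) -> str:
    """Run inference locally; the photo never leaves the device."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)
    return ISSUE_CLASSES[int(logits.argmax())]

def build_report(image_path: str, lat: float, lon: float) -> dict:
    # Only the predicted label and the location are transmitted to the
    # local authority, which is what keeps costs down and data private.
    return {"issue": classify(image_path), "lat": lat, "lon": lon}
```

Keeping inference on the handset is what delivers the privacy and cost benefits the article highlights: the raw image never needs to reach a server.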

AI models can also aid complex decision-making — for instance, that involved in determining where to build houses. The UK government plans to construct 1.5 million homes in the next five years, but planning laws require that several parameters be considered — such as proximity to schools, noise levels, a neighbourhood’s built-up ratio and flood risk. The current strategy is to compile voluminous academic reports on viable locations, but an online dashboard powered by AI that can optimize across parameters would be much more useful to policymakers…(More)”.
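To make "optimize across parameters" concrete, here is a minimal sketch of the weighted multi-criteria scoring such a dashboard might run. The sites, thresholds and weights are invented for illustration; a real tool would draw them from planning datasets and let policymakers adjust the weights interactively.

```python
# Invented sites, thresholds and weights, purely for illustration.
candidate_sites = [
    {"name": "Site A", "school_km": 0.8, "noise_db": 55, "built_up": 0.40, "flood_risk": 0.10},
    {"name": "Site B", "school_km": 2.5, "noise_db": 48, "built_up": 0.20, "flood_risk": 0.30},
    {"name": "Site C", "school_km": 1.2, "noise_db": 62, "built_up": 0.65, "flood_risk": 0.05},
]

WEIGHTS = {"school": 0.3, "noise": 0.2, "built_up": 0.2, "flood": 0.3}

def normalise(value: float, worst: float, best: float) -> float:
    """Map a raw criterion value onto [0, 1], where 1 is best."""
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

def score(site: dict) -> float:
    # Weighted sum of normalised criteria: closer schools, less noise,
    # lower built-up ratio and lower flood risk all score higher.
    return (WEIGHTS["school"]   * normalise(site["school_km"], worst=5.0, best=0.0)
          + WEIGHTS["noise"]    * normalise(site["noise_db"], worst=70.0, best=40.0)
          + WEIGHTS["built_up"] * normalise(site["built_up"], worst=1.0, best=0.0)
          + WEIGHTS["flood"]    * normalise(site["flood_risk"], worst=1.0, best=0.0))

for site in sorted(candidate_sites, key=score, reverse=True):
    print(f"{site['name']}: suitability {score(site):.2f}")
```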

Europe’s GDPR privacy law is headed for red tape bonfire within ‘weeks’


Article by Ellen O’Regan: “Europe’s most famous technology law, the GDPR, is next on the hit list as the European Union pushes ahead with its regulatory killing spree to slash laws it reckons are weighing down its businesses.

The European Commission plans to present a proposal to cut back the General Data Protection Regulation, or GDPR for short, in the next couple of weeks. Slashing regulation is a key focus for Commission President Ursula von der Leyen, as part of an attempt to make businesses in Europe more competitive with rivals in the United States, China and elsewhere. 

The EU’s executive arm has already unveiled packages to simplify rules around sustainability reporting and accessing EU investment. The aim is for companies to waste less time and money on complying with complex legal and regulatory requirements imposed by EU laws… Seven years after the GDPR took effect, Brussels is taking out the scissors to give its (in)famous privacy law a trim.

There are “a lot of good things about GDPR, [and] privacy is completely necessary. But we don’t need to regulate in a stupid way. We need to make it easy for businesses and for companies to comply,” Danish Digital Minister Caroline Stage Olsen told reporters last week. Denmark will chair the work in the EU Council in the second half of 2025 as part of its rotating presidency.

The criticism of the GDPR echoes the views of former Italian Prime Minister Mario Draghi, who released a landmark economic report last September warning that Europe’s complex laws were preventing its economy from catching up with the United States and China. “The EU’s regulatory stance towards tech companies hampers innovation,” Draghi wrote, singling out the Artificial Intelligence Act and the GDPR…(More)”.

Researching data discomfort: The case of Statistics Norway’s quest for billing data


Paper by Lisa Reutter: “National statistics offices are increasingly exploring the possibilities of utilizing new data sources to position themselves in emerging data markets. In 2022, Statistics Norway announced that the national agency will require the biggest grocers in Norway to hand over all collected billing data to produce consumer behavior statistics, which had previously been produced by other sampling methods. An online article discussing this proposal sparked a surprisingly (at least to Statistics Norway) high level of interest among readers, many of whom expressed concerns about this intended change in data practice. This paper focuses on the multifaceted online discussions of the proposal, as these enable us to study citizens’ reactions and feelings towards increased data collection and emerging public-private data flows in a Nordic context. Through an explorative empirical analysis of comment sections, this paper investigates what is discussed by commenters and reflects upon why this case sparked so much interest among citizens in the first place. It therefore contributes to the growing literature on citizens’ voices in data-driven administration and to a wider discussion on how to research public feeling towards datafication. I argue that this presents an interesting case of discomfort voiced by citizens, which demonstrates the contested nature of data practices among citizens, and their ability to regard data as deeply intertwined with power and politics. This case also reminds researchers to pay attention to seemingly benign and small changes in administration beyond artificial intelligence…(More)”

Integrating Social Media into Biodiversity Databases: The Next Big Step?


Article by Muhammad Osama: “Digital technologies and social media have transformed ecology and conservation biology data collection. Traditional biodiversity monitoring often relies on field surveys, which can be time-consuming and biased toward rural habitats.

The Global Biodiversity Information Facility (GBIF) serves as a key repository for biodiversity data, but it faces challenges such as delayed data availability and underrepresentation of urban habitats.

Social media platforms have become valuable tools for rapid data collection, enabling users to share georeferenced observations instantly, reducing time lags associated with traditional methods. The widespread use of smartphones with cameras allows individuals to document wildlife sightings in real-time, enhancing biodiversity monitoring. Integrating social media data with traditional ecological datasets offers significant advancements, particularly in tracking species distributions in urban areas.

In this paper, the authors evaluated the Jersey tiger moth’s (JTM’s) habitat usage by comparing occurrence data from social media platforms (Instagram and Flickr) with traditional records from GBIF and iNaturalist. They hypothesized that social media data would reveal significant JTM occurrences in urban environments, which may be underrepresented in traditional datasets…(More)”.
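As a rough illustration of the comparison the authors describe, the sketch below tallies how urban each source's records are once occurrences have been labelled by habitat. The records here are fabricated; the study itself works with real GBIF, iNaturalist, Instagram and Flickr data.

```python
# Fabricated records for illustration; the study uses real occurrence data.
# In practice the habitat label comes from intersecting each record's
# coordinates with a land-cover layer.
from collections import Counter

records = [
    ("gbif", "rural"), ("gbif", "rural"), ("gbif", "rural"), ("gbif", "urban"),
    ("social_media", "urban"), ("social_media", "urban"), ("social_media", "rural"),
]

by_source: dict[str, Counter] = {}
for source, habitat in records:
    by_source.setdefault(source, Counter())[habitat] += 1

for source, counts in by_source.items():
    urban_share = counts["urban"] / sum(counts.values())
    print(f"{source}: {urban_share:.0%} of occurrences in urban habitat")
```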

Uniting the UK’s Health Data: A Huge Opportunity for Society


The Sudlow Review (UK): “…Surveys show that people in the UK overwhelmingly support the use of their health data with appropriate safeguards to improve lives. One of the review’s recommendations calls for continued engagement with patients, the public, and healthcare professionals to drive forward developments in health data research.

The review also features several examples of harnessing health data for public benefit in the UK, such as the national response to the COVID-19 pandemic. But successes like these are few and far between due to complex systems and governance. The review reveals that:

  • Access to datasets is difficult or slow, often taking months or even years.
  • Data is accessible for analysis and research related to COVID-19, but not to tackle other health conditions, such as other infectious diseases, cancer, heart disease, stroke, diabetes and dementia.
  • More complex types of health data generally don’t have national data systems (for example, most lab testing data and radiology imaging).
  • Barriers like these can delay or prevent hundreds of studies, holding back progress that could improve lives…

The Sudlow Review’s recommendations provide a pathway to establishing a secure and trusted health data system for the UK:

  1. Major national public bodies with responsibility for or interest in health data should agree a coordinated joint strategy to recognise England’s health data for what they are: critical national infrastructure.
  2. Key government health, care and research bodies should establish a national health data service in England with accountable senior leadership.
  3. The Department of Health and Social Care should oversee and commission ongoing, coordinated engagement with patients, the public, health professionals, policymakers and politicians.
  4. The health and social care departments in the four UK nations should set a UK-wide approach to streamline data access processes and foster proportionate, trustworthy data governance.
  5. National health data organisations and statistical authorities in the four UK nations should develop a UK-wide system for standards and accreditation of secure data environments (SDEs) holding data from the health and care system…(More)”.

Public Value of Data: B2G data-sharing Within the Data Ecosystem of Helsinki


Paper by Vera Djakonoff: “Datafication penetrates all levels of society. In order to harness public value from an expanding pool of privately produced data, there has been growing interest in facilitating business-to-government (B2G) data-sharing. This research examines the development of B2G data-sharing within the data ecosystem of the City of Helsinki. The research has identified expectations ecosystem actors have for B2G data-sharing and factors that influence the city’s ability to unlock public value from privately produced data.

The research context is smart cities, with a specific focus on the City of Helsinki. Smart cities are in an advantageous position to develop novel public-private collaborations. Helsinki, on the international stage, stands out as a pioneer in the realm of data-driven smart city development. For this research, nine data ecosystem actors representing the city and companies participated in semi-structured thematic interviews through which their perceptions and experiences were mapped.

The theoretical framework of this research draws on the public value management (PVM) approach to examine the smart city data ecosystem and the alignment of diverse interests around a shared purpose. Additionally, the research goes beyond examining these interests in isolation and looks at how technological artefacts shape the social context and interests surrounding them. Here, the focus is on the properties of data as an artefact with anti-rival value-generation potential.

The findings of this research reveal that while ecosystem actors recognise that more value can be drawn from data through collaboration, this is not apparent at the level of individual initiatives and transactions. This research shows that the city’s commitment to and facilitation of a long-term shared sense of direction and purpose among ecosystem actors is central to developing B2G data-sharing for public value outcomes. Here, participatory experimentation is key, promoting an understanding of the value of data and rendering visible the diverse motivations and concerns of ecosystem actors, enabling learning for wise, data-driven development…(More)”.

The big idea: should governments run more experiments?


Article by Stian Westlake: “…Conceived in haste in the early days of the pandemic, Recovery (which stands for Randomised Evaluation of Covid-19 Therapy) sought to find drugs to help treat people seriously ill with the novel disease. It brought together epidemiologists, statisticians and health workers to test a range of promising existing drugs at massive scale across the NHS.

The secret of Recovery’s success is that it was a series of large, fast, randomised experiments, designed to be as easy as possible for doctors and nurses to administer in the midst of a medical emergency. And it worked wonders: within three months, it had demonstrated that dexamethasone, a cheap and widely available steroid, reduced Covid deaths by a fifth to a third. In the months that followed, Recovery identified four more effective drugs, and along the way showed that various popular treatments, including hydroxychloroquine, President Trump’s tonic of choice, were useless. All in all, it is thought that Recovery saved a million lives around the world, and it’s still going.
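The "fifth to a third" figure is a relative risk reduction, which falls straight out of the arm-level counts a randomised trial produces. The numbers below are hypothetical, not Recovery's published results; because randomisation makes the two arms comparable, dividing one arm's death rate by the other's gives a fair estimate of the drug's effect.

```python
# Hypothetical arm counts, not Recovery's published results.
treated_deaths, treated_n = 95, 1000    # treatment arm
control_deaths, control_n = 140, 1000   # usual-care arm

risk_treated = treated_deaths / treated_n    # 0.095
risk_control = control_deaths / control_n    # 0.140
risk_ratio = risk_treated / risk_control

print(f"risk ratio: {risk_ratio:.2f}")               # 0.68
print(f"relative reduction: {1 - risk_ratio:.0%}")   # 32%, i.e. "about a third"
```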

But Recovery’s incredible success should prompt us to ask a more challenging question: why don’t we do this more often? The question of which drugs to use was far from the only unknown we had to navigate in the early days of the pandemic. Consider the decision to delay second doses of the vaccine, the timing of school closures, or the right regime for Covid testing. In each case, the UK took a calculated risk and hoped for the best. But as the Royal Statistical Society pointed out at the time, it would have been cheap and quick to undertake trials so we could know for sure what the right choice was, and then double down on it.

There is a growing movement to apply randomised trials not just in healthcare but in other things government does…(More)”.

Government must earn public trust that AI is being used safely and responsibly


Article by Sue Bateman and Felicity Burch: “Algorithms have the potential to improve so much of what we do in the public sector, from the delivery of frontline public services to informing policy development across every sector. From first responders to first permanent secretaries, artificial intelligence has the potential to enable individuals to make better and more informed decisions.

In order to realise that potential over the long term, however, it is vital that we earn the public’s trust that AI is being used in a way that is safe and responsible.

One way to build that trust is transparency. That is why today, we’re delighted to announce the launch of the Algorithmic Transparency Recording Standard (the Standard), a world-leading, simple and clear format to help public sector organisations to record the algorithmic tools they use. The Standard has been endorsed by the Data Standards Authority, which recommends the standards, guidance and other resources government departments should follow when working on data projects.
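To give a feel for what such a record contains, here is a hypothetical entry sketched as a Python dict. The field names are paraphrased assumptions for illustration only; the published Standard defines its own fields and tiers.

```python
# Illustrative only: field names are paraphrased assumptions, not the
# Standard's exact schema.
transparency_record = {
    "tool_name": "Street-defect image classifier",
    "organisation": "Example Borough Council",
    "purpose": "Triage citizen-submitted photos of street problems",
    "role_in_decisions": "Routes reports to inspection teams; a person reviews every case",
    "data_used": ["citizen-submitted photos", "report location"],
    "risks_and_mitigations": "Misclassification, mitigated by routine human review",
    "contact": "data-ethics@example.gov.uk",
}

for field, value in transparency_record.items():
    print(f"{field}: {value}")
```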

Enabling transparent public sector use of algorithms and AI is vital for a number of reasons. 

Firstly, transparency can support innovation in organisations, whether that is helping senior leaders to engage with how their teams are using AI, sharing best practice across organisations, or simply doing both of those things better and more consistently than before. The Information Commissioner’s Office took part in the piloting of the Standard and they have noted how it “encourages different parts of an organisation to work together and consider ethical aspects from a range of perspectives”, as well as how it “helps different teams… within an organisation – who may not typically work together – learn about each other’s work”.

Secondly, transparency can help to improve engagement with the public, and reduce the risk of people opting out of services – where that is an option. If a significant proportion of the public opt out, this can mean that the information the algorithms use is not representative of the wider public and risks perpetuating bias. Transparency can also facilitate greater accountability: enabling citizens to understand or, if necessary, challenge a decision.

Finally, transparency is a gateway to enabling other goals in data ethics that increase justified public trust in algorithms and AI. 

For example, the team at The National Archives described the benefit of using the Standard as a “checklist of things to think about” when procuring algorithmic systems, and the Thames Valley Police team who piloted the Standard emphasised how transparency could “prompt the development of more understandable models”…(More)”.

Cutting through complexity using collective intelligence


Blog by the UK Policy Lab: “In November 2021 we established a Collective Intelligence Lab (CILab), with the aim of improving policy outcomes by tapping into collective intelligence (CI). We define CI as the diversity of thought and experience that is distributed across groups of people, from public servants and domain experts to members of the public. We have been experimenting with a digital tool, Pol.is, to capture diverse perspectives and new ideas on key government priority areas. To date we have run eight debates on issues as diverse as Civil Service modernisation, fisheries management and national security. Across these debates over 2400 civil servants, subject matter experts and members of the public have participated…

From our experience using CILab on live policy issues, we have identified a series of policy use cases that echo findings from the government of Taiwan and organisations such as Nesta. These use cases include: 1) stress-testing existing policies and current thinking, 2) drawing out consensus and divergence on complex, contentious issues, and 3) identifying novel policy ideas.

1) Stress-testing existing policy and current thinking

CI could be used to gauge expert and public sentiment towards existing policy ideas by asking participants to discuss existing policies and current thinking on Pol.is. This is well suited to testing public and expert opinions on current policy proposals, especially where their success depends on securing buy-in and action from stakeholders. It can also help collate views and identify barriers to effective implementation of existing policy.

From the initial set of eight CILab policy debates, we have learnt that it is sometimes useful to design a ‘crossover point’ into the process. This is where part way through a debate, statements submitted by policymakers, subject matter experts and members of the public can be shown to each other, in a bid to break down groupthink across those groups. We used this approach in a Pol.is debate on a topic relating to UK foreign policy, and think it could help test how existing policies on complex areas such as climate change or social care are perceived within and outside government…(More)”
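The second use case in the list above, drawing out consensus and divergence, is where Pol.is-style analysis does its work: participants' agree/disagree/pass votes form a matrix, and clustering that matrix surfaces opinion groups plus the statements that unite or split them. The sketch below shows that family of analysis on fabricated votes; it is not Pol.is's actual pipeline.

```python
# Fabricated votes; a sketch of opinion clustering, not Pol.is's pipeline.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# 12 participants x 6 statements; votes are agree (+1), disagree (-1), pass (0).
# Construct two loose opinion blocs with some noise.
bloc_a = rng.choice([1, -1, 0], size=(6, 6), p=[0.7, 0.2, 0.1])
bloc_b = np.clip(-bloc_a + rng.integers(-1, 2, size=(6, 6)), -1, 1)
votes = np.vstack([bloc_a, bloc_b])

groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

for g in (0, 1):
    print(f"group {g} mean vote per statement:",
          np.round(votes[groups == g].mean(axis=0), 1))

# Statements every group leans positive on are consensus points; statements
# with opposite signs across groups mark the dividing lines.
means = [votes[groups == g].mean(axis=0) for g in (0, 1)]
consensus = np.where((means[0] > 0.5) & (means[1] > 0.5))[0]
divisive = np.where(means[0] * means[1] < -0.25)[0]
print("consensus statements:", consensus, "| divisive statements:", divisive)
```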

Can politicians and citizens deliberate together? Evidence from a local deliberative mini-public


Paper by Kimmo Grönlund, Kaisa Herne, Maija Jäske, and Mikko Värttö: “In a deliberative mini-public, a representative number of citizens receive information and discuss given policy topics in facilitated small groups. Typically, mini-publics are most effective politically and can have the most impact on policy-making when they are connected to democratic decision-making processes. Theorists have put forward possible mechanisms that may enhance this linkage, one of which is involving politicians in mini-publics alongside citizens. However, although much research to date has focussed on mini-publics with many citizen participants, there is little analysis of mini-publics with politicians as co-participants. In this study, we ask how involving politicians in mini-publics influences both participating citizens’ opinions and citizens’ and politicians’ perceptions of the quality of the mini-public deliberations. We organised an online mini-public, together with the City of Turku, Finland, on the topic of transport planning. The participants (n = 171) were recruited from a random sample and discussed the topic in facilitated small groups (n = 21). Pre- and post-deliberation surveys were collected. The effect of politicians on mini-publics was studied using an experimental intervention: in half of the groups, local politicians (two per group) participated, whereas in the other half, citizens deliberated among themselves. Although the participating citizens’ opinions changed, we found no differences between the two treatment groups. We conclude that politicians, at least when they are in a clear minority in the deliberating small groups, can deliberate with citizens without negatively affecting internal inclusion and the quality of deliberation within mini-publics…(More)”.
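For readers wondering what "no differences between the two treatment groups" means operationally: the design compares pre-to-post opinion shifts between groups with and without politicians. The simulation below sketches that comparison with fabricated data; the real study surveyed 171 participants across 21 groups, and a full analysis would also account for clustering within discussion groups.

```python
# Simulated data, not the study's results; illustrates the comparison of
# opinion shifts between the two experimental conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Opinion shift (post minus pre) per participant on some policy scale.
mixed_shift = rng.normal(loc=0.4, scale=1.0, size=85)     # politicians present
citizen_shift = rng.normal(loc=0.4, scale=1.0, size=86)   # citizens only

t, p = stats.ttest_ind(mixed_shift, citizen_shift)
diff = mixed_shift.mean() - citizen_shift.mean()
print(f"difference in mean opinion shift: {diff:+.2f} (p = {p:.2f})")
# A large p-value here is consistent with the paper's finding: opinions
# moved, but politicians' presence made no detectable difference.
```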