Paper by Fanni Kertesz: “The European healthcare sector is transforming toward patient-centred and value-based healthcare delivery. The European Health Data Space (EHDS) Regulation aims to unlock the potential of health data by establishing a single market for its primary and secondary use. This paper examines the legal challenges associated with the secondary use of health data within the EHDS and offers recommendations for improvement. Key issues include the compatibility between the EHDS and the General Data Protection Regulation (GDPR), barriers to cross-border data sharing, and intellectual property concerns. Resolving these challenges is essential for realising the full potential of health data and advancing healthcare research and innovation within the EU…(More)”.
Definitions, digital, and distance: on AI and policymaking
Article by Gavin Freeguard: “Our first question is less, ‘to what extent can AI improve public policymaking?’, but ‘what is currently wrong with policymaking?’, and then, ‘is AI able to help?’.
Ask those in and around policymaking about the problems and you’ll get a list likely to include:
- the practice not having changed in decades (or centuries)
- it being an opaque ‘dark art’ with little transparency
- defaulting to easily accessible stakeholders and evidence
- a separation between policy and delivery (and digital and other disciplines), and failure to recognise the need for agility and feedback as opposed to distinct stages
- the challenges in measuring or evaluating the impact of policy interventions and understanding what works, with a lack of awareness, let alone sharing, of case studies elsewhere
- difficulties in sharing data
- the siloed nature of government complicating cross-departmental working
- policy asks often being dictated by politics, with electoral cycles leading to short-termism, ministerial churn changing priorities and personal style, events prompting rushed reactions, or political priorities dictating ‘policy-based evidence making’
- a rush to answers before understanding the problem
- definitional issues about what policy actually is, making it hard to get a grip on the practice or to develop professional expertise.
If we’re defining ‘policy’ and the problem, we also need to define ‘AI’, or at least acknowledge that we are not only talking about new, shiny generative AI, but a world of other techniques for automating processes and analysing data that have been used in government for years.
So is ‘AI’ able to help? It could support us to make better use of a wider range of data more quickly; but it could privilege that which is easier to measure, strip data of vital context, and embed biases and historical assumptions. It could ‘make decisions more transparent (perhaps through capturing digital records of the process behind them, or by visualising the data that underpins a decision)’; or make them more opaque with ‘black-box’ algorithms, and distract from overcoming the very human cultural problems around greater openness. It could help synthesise submissions or generate ideas to brainstorm; or fail to compensate for deficiencies in underlying government knowledge infrastructure, and generate gibberish. It could be a tempting silver bullet for better policy; or it could paper over the cracks, while underlying technical, organisational and cultural plumbing goes unfixed. It could have real value in some areas, or cause harms in others…(More)”.
Using internet search data as part of medical research
Blog by Susan Thomas and Matthew Thompson: “…In the UK, almost 50 million health-related searches are made using Google per year. Globally there are hundreds of millions of health-related searches every day. And, of course, people are doing these searches in real-time, looking for answers to their concerns in the moment. It’s also possible that, even if people aren’t noticing and searching about changes to their health, their behaviour is changing. Maybe they are searching more at night because they are having difficulty sleeping, or maybe they are spending more (or less) time online. Maybe an individual’s search history could actually be really useful for researchers. This realisation has led medical researchers to start to explore whether individuals’ online search activity could help provide those subtle, almost unnoticeable signals that point to the beginning of a serious illness.
Our recent review found that 23 studies have been published so far that have done exactly this. These studies suggest that online search activity among people later diagnosed with a variety of conditions, ranging from pancreatic cancer and stroke to mood disorders, differed from that of people who did not have one of these conditions.
One of these studies was published by researchers at Imperial College London, who used online search activity to identify signals of women with gynaecological malignancies. They found that women with malignant (e.g. ovarian cancer) and benign conditions had different search patterns, up to two months prior to a GP referral.
Pause for a moment, and think about what this could mean. Ovarian cancer is one of the most devastating cancers women get. It’s desperately hard to detect early – and yet there are signals of this cancer visible in women’s internet searches months before diagnosis?…(More)”.
Advocating an International Decade for Data under G20 Sponsorship
G20 Policy Brief by Lorrayne Porciuncula, David Passarelli, Muznah Siddiqui, and Stefaan Verhulst: “This brief draws attention to the important role of data in social and economic development. It advocates the establishment of an International Decade for Data (IDD) from 2025 to 2035 under G20 sponsorship. The IDD can be used to bridge existing data governance initiatives and deliver global ambitions to use data for social impact, innovation, economic growth, research, and social development. Despite the critical importance of data governance to achieving the SDGs and to emerging topics such as artificial intelligence, there is no unified space that brings together stakeholders to coordinate and shape the data dimension of digital societies.
While various data governance processes exist, they often operate in silos, without effective coordination and interoperability. This fragmented landscape inhibits progress toward a more inclusive and sustainable digital future. The envisaged IDD would foster an integrated approach to data governance that supports all stakeholders in navigating complex data landscapes. Central to this proposal are new institutional frameworks (e.g. data collaboratives), mechanisms (e.g. digital social licenses and sandboxes), and professional domains (e.g. data stewards) that can respond to the multifaceted issue of data governance and the multiplicity of actors involved.
The G20 can capitalize on the Global Digital Compact’s momentum and create a task force to position itself as a data champion through the launch of the IDD, enabling collective progress and steering global efforts towards a more informed and responsible data-centric society…(More)”.
Frontier AI: double-edged sword for public sector
Article by Zeynep Engin: “The power of the latest AI technologies, often referred to as ‘frontier AI’, lies in their ability to automate decision-making by harnessing complex statistical insights from vast amounts of unstructured data, using models that surpass human understanding. The introduction of ChatGPT in late 2022 marked a new era for these technologies, making advanced AI models accessible to a wide range of users, a development poised to permanently reshape how our societies function.
From a public policy perspective, this capacity offers the optimistic potential to enable personalised services at scale, potentially revolutionising healthcare, education, local services, democratic processes, and justice, tailoring them to everyone’s unique needs in a digitally connected society. The ambition is to achieve better outcomes than humanity has managed so far without AI assistance. There is certainly a vast opportunity for improvement, given the current state of global inequity, environmental degradation, polarised societies, and other chronic challenges facing humanity.
However, it is crucial to temper this optimism by recognising the significant risks. On their current trajectories, these technologies are already starting to undermine hard-won democratic gains and civil rights. Integrating AI into public policy and decision-making processes risks exacerbating existing inequalities and unfairness, potentially leading to new, uncontrollable forms of discrimination at unprecedented speed and scale. The environmental impacts, both direct and indirect, could be catastrophic, while the rise of AI-powered personalised misinformation and behavioural manipulation is contributing to increasingly polarised societies.
Steering the direction of AI to be in the public interest requires a deeper understanding of its characteristics and behaviour. To imagine and design new approaches to public policy and decision-making, we first need a comprehensive understanding of what this remarkable technology offers and its potential implications…(More)”.
Children and Young People’s Participation in Climate Assemblies
Guide by KNOCA: “This guide draws on the experiences and advice of children, young people and adults involved in citizens’ assemblies that have taken place at national, city and community levels across nine countries, highlighting that:
- Involving children and young people can enrich the intergenerational legitimacy and impact of climate assemblies: adult assembly members are reminded of their responsibilities to younger and future generations, and children and young people feel listened to, valued and taken seriously.
- Involving children and young people has significant potential to strengthen the future of democracy and climate governance by enhancing democratic and climate literacy within education systems.
- Children and young people can and should be involved in climate assemblies in different ways. Most importantly, children and young people should be involved from the very beginning of the process to ensure it reflects children and young people’s own ideas.
- There are practical, ethical and design factors to consider when working with children and young people which can often be positively navigated by taking a child rights-based approach to the conceptualisation, design and delivery of climate assemblies…(More)”.
Even laypeople use legalese
Paper by Eric Martínez, Francis Mollica and Edward Gibson: “Whereas principles of communicative efficiency and legal doctrine dictate that laws be comprehensible to the common world, empirical evidence suggests legal documents are largely incomprehensible to lawyers and laypeople alike. Here, a corpus analysis (n = 59 million words) first replicated and extended prior work revealing laws to contain strikingly higher rates of complex syntactic structures relative to six baseline genres of English. Next, two preregistered text generation experiments (n = 286) tested two leading hypotheses regarding how these complex structures enter into legal documents in the first place. In line with the magic spell hypothesis, we found people tasked with writing official laws wrote in a more convoluted manner than when tasked with writing unofficial legal texts of equivalent conceptual complexity. Contrary to the copy-and-edit hypothesis, we did not find evidence that people editing a legal document wrote in a more convoluted manner than when writing the same document from scratch. From a cognitive perspective, these results suggest law to be a rare exception to the general tendency in human language toward communicative efficiency. In particular, these findings indicate law’s complexity to be derived from its performativity, whereby low-frequency structures may be inserted to signal law’s authoritative, world-state-altering nature, at the cost of increased processing demands on readers. From a law and policy perspective, these results suggest that the tension between the ubiquity and impenetrability of the law is not an inherent one, and that laws can be simplified without a loss or distortion of communicative content…(More)”.
The Power of Volunteers: Remote Mapping Gaza and Strategies in Conflict Areas
Blog by Jessica Pechmann: “…In Gaza, increased conflict since October 2023 has caused a prolonged humanitarian crisis. Understanding the impact of the conflict on buildings has been challenging, since pre-existing datasets from artificial intelligence and machine learning (AI/ML) models and OSM were not accurate enough to create a full building footprint baseline. The area’s buildings were too dense, and information on the ground was impossible to collect safely. In these hard-to-reach areas, HOT’s remote and crowdsourced mapping methodology was a good fit for collecting detailed information visible on aerial imagery.
In February 2024, after consultation with humanitarian and UN actors working in Gaza, HOT decided to create a pre-conflict dataset of all building footprints in the area in OSM. HOT’s community of OpenStreetMap volunteers did all the data work, coordinating through HOT’s Tasking Manager. The volunteers made meticulous edits to add missing data and to improve existing data. Due to protection and data quality concerns, only expert volunteer teams were assigned to map and validate the area. As in other areas that are hard to reach due to conflict, HOT balanced the data needs with responsible data practices based on the context.
Comparing AI/ML with human-verified OSM building datasets in conflict zones
AI/ML is becoming an increasingly common and quick way to obtain building footprints across large areas. Sources for automated building footprints range from worldwide datasets by Microsoft or Google to smaller-scale open community-managed tools such as HOT’s new application, fAIr.
Now that HOT volunteers have completely updated and validated all OSM buildings visible in pre-conflict imagery, OSM has 18% more individual buildings in the Gaza Strip than Microsoft’s ML buildings dataset (an estimated 330,079 buildings vs 280,112 buildings). However, in contexts where there has not been a coordinated update effort in OSM, the numbers may differ. For example, in Sudan, where there has not been a large organized editing campaign, there are just under 1,500,000 buildings in OSM, compared to over 5,820,000 buildings in Microsoft’s ML data. It is important to note that the ML datasets have not been human-verified and their accuracy is not known. Google Open Buildings has over 26 million building features in Sudan, but on visual inspection, many of these features are noise in the data that the model incorrectly identified as buildings in the uninhabited desert…(More)”.
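The 18% figure quoted above can be reproduced directly from the two counts cited in the text (a minimal sketch; the variable names are ours):

```python
# Relative difference between the two building counts cited for the Gaza Strip.
osm_buildings = 330_079  # OSM count after HOT's coordinated update effort
ml_buildings = 280_112   # Microsoft ML buildings dataset count

pct_more = (osm_buildings - ml_buildings) / ml_buildings * 100
print(f"OSM has {pct_more:.0f}% more buildings than the ML dataset")  # ~18%
```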
Relational ethics in health care automation
Paper by Frances Shaw and Anthony McCosker: “Despite the transformative potential of automation and clinical decision support technology in health care, there is growing urgency for more nuanced approaches to ethics. Relational ethics is an approach that can guide the responsible use of a range of automated decision-making systems including the use of generative artificial intelligence and large language models as they affect health care relationships.
There is an urgent need for sector-wide training and scrutiny regarding the effects of automation using relational ethics touchstones, such as patient-centred health care, informed consent, patient autonomy, shared decision-making, empathy and the politics of care.
The purpose of this review is to offer a provocation for health care practitioners, managers and policy makers to consider the use of automated tools in practice settings and to examine how these tools might affect relationships and hence care outcomes…(More)”.
Governing mediation in the data ecosystem: lessons from media governance for overcoming data asymmetries
Chapter by Stefaan Verhulst in Handbook of Media and Communication Governance edited by Manuel Puppis, Robin Mansell, and Hilde Van den Bulck: “The internet and the accompanying datafication were heralded to usher in a golden era of disintermediation. Instead, the modern data ecology witnessed a process of remediation, or ‘hyper-mediation’, resulting in governance challenges, many of which underlie broader socioeconomic difficulties. In particular, the rise of data asymmetries and silos creates new forms of scarcity and dominance with deleterious political, economic and cultural consequences. Responding to these challenges requires a new data governance framework, focused on unlocking data and developing a more data pluralistic ecosystem. We argue for regulation and policy focused on promoting data collaboratives, an emerging form of cross-sectoral partnership; and on the establishment of data stewards, individuals/groups tasked with managing and responsibly sharing organizations’ data assets. Some regulatory steps are discussed, along with the various ways in which these two emerging stakeholders can help alleviate data scarcities and their associated problems…(More)”