Definitions, digital, and distance: on AI and policymaking


Article by Gavin Freeguard: “Our first question is less ‘to what extent can AI improve public policymaking?’ than ‘what is currently wrong with policymaking?’, and then, ‘is AI able to help?’.

Ask those in and around policymaking about the problems and you’ll get a list likely to include:

  • the practice not having changed in decades (or centuries)
  • it being an opaque ‘dark art’ with little transparency
  • defaulting to easily accessible stakeholders and evidence
  • a separation between policy and delivery (and digital and other disciplines), and failure to recognise the need for agility and feedback as opposed to distinct stages
  • the challenges in measuring or evaluating the impact of policy interventions and understanding what works, with a lack of awareness, let alone sharing, of case studies elsewhere
  • difficulties in sharing data
  • the siloed nature of government complicating cross-departmental working
  • policy asks often being dictated by politics, with electoral cycles leading to short-termism, ministerial churn changing priorities and personal style, events prompting rushed reactions, or political priorities dictating ‘policy-based evidence making’
  • a rush to answers before understanding the problem
  • definitional issues about what policy actually is, which make it hard to get a grip on the field or develop professional expertise.

If we’re defining ‘policy’ and the problem, we also need to define ‘AI’, or at least acknowledge that we are not only talking about new, shiny generative AI, but a world of other techniques for automating processes and analysing data that have been used in government for years.

So is ‘AI’ able to help? It could support us to make better use of a wider range of data more quickly; but it could privilege that which is easier to measure, strip data of vital context, and embed biases and historical assumptions. It could ‘make decisions more transparent (perhaps through capturing digital records of the process behind them, or by visualising the data that underpins a decision)’; or make them more opaque with ‘black-box’ algorithms, and distract from overcoming the very human cultural problems around greater openness. It could help synthesise submissions or generate ideas to brainstorm; or fail to compensate for deficiencies in underlying government knowledge infrastructure, and generate gibberish. It could be a tempting silver bullet for better policy; or it could paper over the cracks, while underlying technical, organisational and cultural plumbing goes unfixed. It could have real value in some areas, or cause harms in others…(More)”.

Geographies of missing data: Spatializing counterdata production against feminicide


Paper by Catherine D’Ignazio et al: “Feminicide is the gender-related killing of cisgender and transgender women and girls. It reflects patriarchal and racialized systems of oppression and reveals how territories and socio-economic landscapes configure everyday gender-related violence. In recent decades, many grassroots data production initiatives have emerged with the aim of monitoring this extreme but invisibilized phenomenon. We bridge scholarship in feminist and information geographies with data feminism to examine the ways in which space, broadly defined, shapes the counterdata production strategies of feminicide data activists. Drawing on a qualitative study of 33 monitoring efforts led by civil society organizations across 15 countries, primarily in Latin America, we provide a conceptual framework for examining the spatial dimensions of data activism. We show how there are striking transnational patterns related to where feminicide goes unrecorded, resulting in geographies of missing data. In response to these omissions, activists deploy multiple spatialized strategies to make these geographies visible, to situate and contextualize each case of feminicide, to reclaim databases as spaces for memory and witnessing, and to build transnational networks of solidarity. In this sense, we argue that data activism about feminicide constitutes a space of resistance and resignification of everyday forms of gender-related violence…(More)”.

On Slicks and Satellites: An Open Source Guide to Marine Oil Spill Detection


Article by Wim Zwijnenburg: “The sheer scale of ocean oil pollution is staggering. In Europe, an estimated 3,000 major illegal oil dumps take place annually, releasing between 15,000 and 60,000 tonnes of oil into the North Sea. In the Mediterranean, figures provided by the Regional Marine Pollution Emergency Response Centre estimate there are 1,500 to 2,000 oil spills every year.

The impact of any single oil spill on a marine or coastal ecosystem can be devastating and long-lasting. Animals such as birds, turtles, dolphins and otters can suffer from ingesting or inhaling oil, as well as getting stuck in the slick. The loss of water and soil quality can be toxic to both flora and fauna. Heavy metals enter the food chain, poisoning everything from plankton to shellfish, which in turn affects the livelihoods of coastal communities dependent on fishing and tourism.

However, with a wealth of open source earth observation tools at our fingertips, during such environmental disasters it’s possible for us to identify and monitor these spills, highlight at-risk areas, and even hold perpetrators accountable. …

There are several different types of remote sensing sensors we can use for collecting data about the Earth’s surface. In this article we’ll focus on two: optical and radar sensors. 

Optical imagery captures the broad light spectrum reflected from the Earth, also known as passive remote sensing. In contrast, Synthetic Aperture Radar (SAR) uses active remote sensing, sending radio waves down to the Earth’s surface and capturing them as they are reflected back. Any change in the reflection can indicate a change on the ground, which can then be investigated. For more background, see Bellingcat contributor Ollie Ballinger’s Remote Sensing for OSINT Guide…(More)”.
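The SAR approach the article describes rests on a simple physical signal: oil dampens the small capillary waves that scatter radar energy back to the sensor, so slicks appear as unusually dark patches in backscatter imagery. A minimal sketch of that dark-spot logic, using a toy NumPy array rather than a real Sentinel-1 product (the 5 dB offset below the scene mean is an illustrative assumption, not an operational threshold):

```python
import numpy as np

# Toy SAR backscatter scene in decibels (dB). Open water typically
# returns around -10 to -15 dB; oil suppresses the capillary waves
# that scatter radar energy, so slicks show up darker (lower dB).
scene_db = np.full((5, 5), -12.0)
scene_db[1:4, 1:4] = -24.0  # synthetic dark patch standing in for a slick

# Flag pixels well below the scene mean as potential slick candidates.
threshold_db = scene_db.mean() - 5.0
candidate_mask = scene_db < threshold_db

print(int(candidate_mask.sum()))  # → 9 flagged pixels in this toy scene
```

Real detection pipelines add steps this sketch omits, such as speckle filtering, adaptive thresholding, and ruling out look-alikes (low-wind areas, algal blooms) before attributing a dark patch to oil.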

New AI standards group wants to make data scraping opt-in


Article by Kate Knibbs: “The first wave of major generative AI tools were trained largely on “publicly available” data—basically, anything and everything that could be scraped from the Internet. Now, sources of training data are increasingly restricting access and pushing for licensing agreements. With the hunt for additional data sources intensifying, new licensing startups have emerged to keep the source material flowing.

The Dataset Providers Alliance, a trade group formed this summer, wants to make the AI industry more standardized and fair. To that end, it has just released a position paper outlining its stances on major AI-related issues. The alliance is made up of seven AI licensing companies, including music copyright-management firm Rightsify, Japanese stock-photo marketplace Pixta, and generative-AI copyright-licensing startup Calliope Networks. (At least five new members will be announced in the fall.)

The DPA advocates for an opt-in system, meaning that data can be used only after consent is explicitly given by creators and rights holders. This represents a significant departure from the way most major AI companies operate. Some have developed their own opt-out systems, which put the burden on data owners to pull their work on a case-by-case basis. Others offer no opt-outs whatsoever…(More)”.

Building LLMs for the social sector: Emerging pain points


Blog by Edmund Korley: “…One of the sprint’s main tracks focused on using LLMs to enhance the impact and scale of chat services in the social sector.

Six organizations participated, with operations spanning Africa and India. Bandhu empowers India’s blue-collar workers and migrants by connecting them to jobs and affordable housing, helping them take control of their livelihoods and future stability. Digital Green enhances rural farmers’ agency with AI-driven insights to improve agricultural productivity and livelihoods. Jacaranda Health provides mothers in sub-Saharan Africa with essential information and support to improve maternal and newborn health outcomes. Kabakoo equips youth in Francophone Africa with digital skills, fostering self-reliance and economic independence. Noora Health teaches Indian patients and caregivers critical health skills, enhancing their ability to manage care. Udhyam provides micro-entrepreneurs with education, mentorship, and financial support to build sustainable businesses.

These organizations demonstrate diverse ways one can boost human agency: they help people in underserved communities take control of their lives, make more informed choices, and build better futures – and they are piloting AI interventions to scale these efforts…(More)”.

Using internet search data as part of medical research


Blog by Susan Thomas and Matthew Thompson: “…In the UK, almost 50 million health-related searches are made using Google per year. Globally, there are hundreds of millions of health-related searches every day. And, of course, people are doing these searches in real-time, looking for answers to their concerns in the moment. It’s also possible that, even if people aren’t noticing and searching about changes to their health, their behaviour is changing. Maybe they are searching more at night because they are having difficulty sleeping, or maybe they are spending more (or less) time online. Maybe an individual’s search history could actually be really useful for researchers. This realisation has led medical researchers to start to explore whether individuals’ online search activity could help provide those subtle, almost unnoticeable signals that point to the beginning of a serious illness.

Our recent review found that 23 studies have been published so far that have done exactly this. These studies suggest that online search activity among people later diagnosed with a variety of conditions, ranging from pancreatic cancer and stroke to mood disorders, was different from that of people who did not have one of these conditions.

One of these studies was published by researchers at Imperial College London, who used online search activity to identify signals of gynaecological malignancies in women. They found that women with malignant (e.g. ovarian cancer) and benign conditions had different search patterns, up to two months prior to a GP referral.

Pause for a moment, and think about what this could mean. Ovarian cancer is one of the most devastating cancers women get. It’s desperately hard to detect early – and yet there are signals of this cancer visible in women’s internet searches months before diagnosis?…(More)”.

Advocating an International Decade for Data under G20 Sponsorship


G20 Policy Brief by Lorrayne Porciuncula, David Passarelli, Muznah Siddiqui, and Stefaan Verhulst: “This brief draws attention to the important role of data in social and economic development. It advocates the establishment of an International Decade for Data (IDD) from 2025 to 2035 under G20 sponsorship. The IDD can be used to bridge existing data governance initiatives and deliver global ambitions to use data for social impact, innovation, economic growth, research, and social development. Despite the critical importance of data governance to achieving the SDGs and to emerging topics such as artificial intelligence, there is no unified space that brings together stakeholders to coordinate and shape the data dimension of digital societies.

While various data governance processes exist, they often operate in silos, without effective coordination and interoperability. This fragmented landscape inhibits progress toward a more inclusive and sustainable digital future. The envisaged IDD fosters an integrated approach to data governance that supports all stakeholders in navigating complex data landscapes. Central to this proposal are new institutional frameworks (e.g. data collaboratives), mechanisms (e.g. digital social licenses and sandboxes), and professional domains (e.g. data stewards) that can respond to the multifaceted issue of data governance and the multiplicity of actors involved.

The G20 can capitalize on the Global Digital Compact’s momentum and create a task force to position itself as a data champion through the launch of the IDD, enabling collective progress and steering global efforts towards a more informed and responsible data-centric society…(More)”.

Frontier AI: double-edged sword for public sector


Article by Zeynep Engin: “The power of the latest AI technologies, often referred to as ‘frontier AI’, lies in their ability to automate decision-making by harnessing complex statistical insights from vast amounts of unstructured data, using models that surpass human understanding. The introduction of ChatGPT in late 2022 marked a new era for these technologies, making advanced AI models accessible to a wide range of users, a development poised to permanently reshape how our societies function.

From a public policy perspective, this capacity offers the optimistic potential to enable personalised services at scale, potentially revolutionising healthcare, education, local services, democratic processes, and justice, tailoring them to everyone’s unique needs in a digitally connected society. The ambition is to achieve better outcomes than humanity has managed so far without AI assistance. There is certainly a vast opportunity for improvement, given the current state of global inequity, environmental degradation, polarised societies, and other chronic challenges facing humanity.

However, it is crucial to temper this optimism by recognising the significant risks. On their current trajectories, these technologies are already starting to undermine hard-won democratic gains and civil rights. Integrating AI into public policy and decision-making processes risks exacerbating existing inequalities and unfairness, potentially leading to new, uncontrollable forms of discrimination at unprecedented speed and scale. The environmental impacts, both direct and indirect, could be catastrophic, while the rise of AI-powered personalised misinformation and behavioural manipulation is contributing to increasingly polarised societies.

Steering the direction of AI to be in the public interest requires a deeper understanding of its characteristics and behaviour. To imagine and design new approaches to public policy and decision-making, we first need a comprehensive understanding of what this remarkable technology offers and its potential implications…(More)”.

Data sovereignty for local governments. Considerations and enablers


Report by the JRC: “Data sovereignty for local governments refers to a capacity to control and/or access data, and to foster a digital transformation aligned with societal values and EU Commission political priorities. Data sovereignty clauses are an instrument that local governments may use to compel companies to share data of public interest. Although the instrument is promising, little is known about its peculiarities and how it has been implemented so far. This policy brief aims to fill that gap by systematising existing knowledge and providing policy-relevant recommendations for its wider implementation…(More)”.

Breaking the Wall of Digital Heteronomy


Interview with Julia Janssen: “The walls of algorithms increasingly shape your life, telling you what to buy, where to go, what news to believe or which songs to listen to. Data helps to navigate the world’s complexity and its endless possibilities. Artificial intelligence promises frictionless experiences, tailored and targeted, seamless and optimized to serve you best. But at what cost? Frictionlessness comes with obedience. To the machine, the market and your own prophecy.

Mapping the Oblivion explores the influence of data and AI on human autonomy. The installation visualized Netflix’s percentage-based prediction models to provoke questions about the extent to which we want to quantify choices. Will you only watch movies that are over 64% to your liking? Dine only at restaurants that match your appetite above 76%? Date people with a compatibility rate of 89%? Will you never choose the career you want when there is only a 12% chance you’ll succeed? Do you want to outsmart your intuition with systems you do not understand, and follow the map of probabilities and statistics?

Digital heteronomy is a condition in which one is guided by data, governed by AI and ordained by the industry. Homo Sapiens, the knowing being becomes Homo Stultus, the controllable being.

Living a quantified life in a numeric world. Not having to choose, doubt or wonder. Kept safe, risk-free and predictable within algorithmic walls. Exhausted of autonomy, creativity and randomness. Imprisoned in bubbles, profiles and behavioural tribes. Controllable, observable and monetizable.

Breaking the wall of digital heteronomy means taking back control over our data, identity, choices and chances in life. Honouring the unexpected, risk, doubt and having an unknown future. Shattering the power structures created by Big Tech to harvest information and capitalize on unfairness, vulnerabilities and fears. Breaking the wall of digital heteronomy means breaking down a system where profit is more important than people…(More)”.