Participatory seascape mapping: A community-based approach to ocean governance and marine conservation


Paper by Isabel James: “Despite the global proliferation of ocean governance frameworks that feature socioeconomic variables, the inclusion of community needs and local ecological knowledge remains underrepresented. Participatory mapping or Participatory GIS (PGIS) has emerged as a vital method to address this gap by engaging communities that are conventionally excluded from ocean planning and marine conservation. Originally developed for forest management and Indigenous land reclamation, the scholarship on PGIS remains predominantly focused on terrestrial landscapes. This review explores recent research that employs the method in the marine realm, detailing common methodologies, data types and applications in governance and conservation. A typology of ocean-centered PGIS studies was identified, comprising three main categories: fisheries, habitat classification and blue economy activities. Marine Protected Area (MPA) design and conflict management are the most prevalent conservation applications of PGIS. Case studies also demonstrate the method’s effectiveness in identifying critical marine habitats such as fish spawning grounds and monitoring endangered megafauna. Participatory mapping shows particular promise in resource- and data-limited contexts due to its ability to generate large quantities of relatively reliable data quickly and at low cost. Validation steps, including satellite imagery and ground-truthing, suggest encouraging accuracy of PGIS data, despite potential limitations related to human error and spatial resolution. This review concludes that participatory mapping not only enriches scientific research but also fosters trust and cooperation among stakeholders, ultimately contributing to more resilient and equitable ocean governance…(More)”.

To Whom Does the World Belong?


Essay by Alexander Hartley: “For an idea of the scale of the prize, it’s worth remembering that 90 percent of recent U.S. economic growth, and 65 percent of the value of its largest 500 companies, is already accounted for by intellectual property. By any estimate, AI will vastly increase the speed and scale at which new intellectual products can be minted. The provision of AI services themselves is estimated to become a trillion-dollar market by 2032, but the value of the intellectual property created by those services—all the drug and technology patents; all the images, films, stories, virtual personalities—will eclipse that sum. It is possible that the products of AI may, within my lifetime, come to represent a substantial portion of all the world’s financial value.

In this light, the question of ownership takes on its true scale, revealing itself as a version of Bertolt Brecht’s famous query: To whom does the world belong?


Questions of AI authorship and ownership can be divided into two broad types. One concerns the vast troves of human-authored material fed into AI models as part of their “training” (the process by which their algorithms “learn” from data). The other concerns ownership of what AIs produce. Call these, respectively, the input and output problems.

So far, attention—and lawsuits—have clustered around the input problem. The basic business model for LLMs relies on the mass appropriation of human-written text, and there simply isn’t anywhere near enough in the public domain. OpenAI hasn’t been very forthcoming about its training data, but GPT-4 was reportedly trained on around thirteen trillion “tokens,” roughly the equivalent of ten trillion words. This text is drawn in large part from online repositories known as “crawls,” which scrape the internet for troves of text from news sites, forums, and other sources. Fully aware that vast data scraping is legally untested—to say the least—developers charged ahead anyway, resigning themselves to litigating the issue in retrospect. Lawyer Peter Schoppert has called the training of LLMs without permission the industry’s “original sin”—to be added, we might say, to the technology’s mind-boggling consumption of energy and water in an overheating planet. (In September, Bloomberg reported that plans for new gas-fired power plants have exploded as energy companies are “racing to meet a surge in demand from power-hungry AI data centers.”)…(More)”.

Beyond checking a box: how a social licence can help communities benefit from data reuse and AI


Article by Stefaan Verhulst and Peter Addo: “In theory, consent offers a mechanism to reduce power imbalances. In reality, existing consent mechanisms are limited and, in many respects, archaic, based on binary distinctions – typically presented in check-the-box forms that most websites use to ask you to register for marketing e-mails – that fail to appreciate the nuance and context-sensitive nature of data reuse. Consent today generally means individual consent, a notion that overlooks the broader needs of communities and groups.

While we understand the need to safeguard information about an individual such as, say, their health status, this information can help address or even prevent societal health crises. Individualised notions of consent fail to consider the potential public good of reusing individual data responsibly. This makes them particularly problematic in societies that have more collective orientations, where prioritising individual choices could disrupt the social fabric.

The notion of a social licence, which has its roots in the 1990s within the extractive industries, refers to the collective acceptance of an activity, such as data reuse, based on its perceived alignment with community values and interests. Social licences go beyond the priorities of individuals and help balance the risks of data misuse and missed use (for example, the risks of violating privacy vs. neglecting to use private data for public good). Social licences permit a broader notion of consent that is dynamic, multifaceted and context-sensitive.

Policymakers, citizens, health providers, think tanks, interest groups and private industry must accept the concept of a social licence before it can be established. The goal for all stakeholders is to establish widespread consensus on community norms and an acceptable balance of social risk and opportunity.

Community engagement can create a consensus-based foundation for preferences and expectations concerning data reuse. Engagement could take place via dedicated “data assemblies” or community deliberations about data reuse for particular purposes under particular conditions. The process would need to involve voices as representative as possible of the different parties involved, and include those that are traditionally marginalised or silenced…(More)”.

Global Trends in Government Innovation 2024


OECD Report: “Governments worldwide are transforming public services through innovative approaches that place people at the center of design and delivery. This report analyses nearly 800 case studies from 83 countries and identifies five critical trends in government innovation that are reshaping public services. First, governments are working with users and stakeholders to co-design solutions and anticipate future needs to create flexible, responsive, resilient and sustainable public services. Second, governments are investing in scalable digital infrastructure, experimenting with emergent technologies (such as automation, AI and modular code), and expanding innovative and digital skills to make public services more efficient. Third, governments are making public services more personalised and proactive to better meet people’s needs and expectations and reduce psychological costs and administrative frictions, ensuring they are more accessible, inclusive and empowering, especially for persons and groups in vulnerable and disadvantaged circumstances. Fourth, governments are drawing on traditional and non-traditional data sources to guide public service design and execution. They are also increasingly using experimentation to navigate highly complex and unpredictable environments. Finally, governments are reframing public services as opportunities and channels for citizens to exercise their civic engagement and hold governments accountable for upholding democratic values such as openness and inclusion…(More)”.

Direct democracy in the digital age: opportunities, challenges, and new approaches


Article by Pattharapong Rattanasevee, Yared Akarapattananukul & Yodsapon Chirawut: “This article delves into the evolving landscape of direct democracy, particularly in the context of the digital era, where ICT and digital platforms play a pivotal role in shaping democratic engagement. Through a comprehensive analysis of empirical data and theoretical frameworks, it evaluates the advantages and inherent challenges of direct democracy, such as majority tyranny, short-term focus, polarization, and the spread of misinformation. It proposes the concept of Liquid democracy as a promising hybrid model that combines direct and representative elements, allowing for voting rights delegation to trusted entities, thereby potentially mitigating some of the traditional drawbacks of direct democracy. Furthermore, the article underscores the necessity for legal regulations and constitutional safeguards to protect fundamental rights and ensure long-term sustainability within a direct democracy framework. This research contributes to the ongoing discourse on democratic innovation and highlights the need for a balanced approach to integrating digital tools with democratic processes…(More)”.

It Was the Best of Times, It Was the Worst of Times: The Dual Realities of Data Access in the Age of Generative AI


Article by Stefaan Verhulst: “It was the best of times, it was the worst of times… It was the spring of hope, it was the winter of despair.” –Charles Dickens, A Tale of Two Cities

Charles Dickens’s famous line captures the contradictions of the present moment in the world of data. On the one hand, data has become central to addressing humanity’s most pressing challenges — climate change, healthcare, economic development, public policy, and scientific discovery. On the other hand, despite the unprecedented quantity of data being generated, significant obstacles remain to accessing and reusing it. As our digital ecosystems evolve, including the rapid advances in artificial intelligence, we find ourselves both on the verge of a golden era of open data and at risk of slipping deeper into a restrictive “data winter.”

A Tale of Two Cities by Charles Dickens (1902)

The article examines these two concurrent realities: the challenges posed by growing restrictions on data reuse, and the countervailing potential brought by advancements in privacy-enhancing technologies (PETs), synthetic data, and data commons approaches. It argues that while current trends toward closed data ecosystems threaten innovation, new technologies and frameworks could lead to a “Fourth Wave of Open Data,” potentially ushering in a new era of data accessibility and collaboration…(More)” (First published in the Industry Data for Society Partnership’s (IDSP) 2024 Year in Review).

Space, Satellites, and Democracy: Implications of the New Space Age for Democratic Processes and Recommendations for Action


NDI Report: “The dawn of a new space age is upon us, marked by unprecedented engagement from both state and private actors. Driven by technological innovations such as reusable rockets and miniaturized satellites, this era presents a double-edged sword for global democracy. On one side, democratized access to space offers powerful tools for enhancing civic processes. Satellite technology now enables real-time election monitoring, improved communication in remote areas, and more effective public infrastructure planning. It also equips democratic actors with means to document human rights abuses and circumvent authoritarian internet restrictions.

However, the accessibility of these technologies also raises significant concerns. The potential for privacy infringements and misuse by authoritarian regimes or malicious actors casts a shadow over these advancements.

This report discusses the opportunities and risks that space and satellite technologies pose to democracy, human rights, and civic processes globally. It examines the current regulatory and normative frameworks governing space activities and highlights key considerations for stakeholders navigating this increasingly competitive domain.

It is essential that the global democracy community be familiar with emerging trends in space and satellite technology and their implications for the future. Failure to do so will leave the community unprepared to harness the opportunities or address the challenges that space capabilities present. It would also cede influence over the development of global norms and standards in this arena to states and private sector interests alone and, in turn, ensure those standards are not rooted in democratic norms and human rights, but rather in principles such as state sovereignty and profit maximization…(More)”.

Synthetic content and its implications for AI policy: a primer


UNESCO Paper: “The deployment of advanced Artificial Intelligence (AI) models, particularly generative AI, has sparked discussions regarding the creation and use of synthetic content – i.e. AI-generated or modified outputs, including text, images, sounds, and combinations thereof – and its impact on individuals, societies, and economies. This note explores the different ways in which synthetic content can be generated and used, and proposes a taxonomy that encompasses synthetic media and deepfakes, among others. The taxonomy aims to systematize key characteristics, enhancing understanding and informing policy discussions. Key findings highlight both the potential benefits and concerns associated with synthetic content in fields like data analytics, environmental sustainability, education, creativity, and mis/disinformation, and point to the need to frame them ethically, in line with the principles and values of the UNESCO Recommendation on the Ethics of Artificial Intelligence. Finally, the note brings to the fore critical questions that policymakers and experts alike need to address to ensure that the development of AI technologies aligns with human rights, human dignity, and fundamental freedoms…(More)”.

Setting the Standard: Statistical Agencies’ Unique Role in Building Trustworthy AI


Article by Corinna Turbes: “As our national statistical agencies grapple with new challenges posed by artificial intelligence (AI), many agencies face intense pressure to embrace generative AI as a way to reach new audiences and demonstrate technological relevance. However, the rush to implement generative AI applications risks undermining these agencies’ fundamental role as authoritative data sources. Statistical agencies’ foundational mission—producing and disseminating high-quality, authoritative statistical information—requires a more measured approach to AI adoption.

Statistical agencies occupy a unique and vital position in our data ecosystem, entrusted with creating the reliable statistics that form the backbone of policy decisions, economic planning, and social research. The work of these agencies demands exceptional precision, transparency, and methodological rigor. Implementation of generative AI interfaces, while technologically impressive, could inadvertently compromise the very trust and accuracy that make these agencies indispensable.

While public-facing interfaces play a valuable role in democratizing access to statistical information, statistical agencies need not—and often should not—rely on generative AI to be effective in that effort. For statistical agencies, an extractive AI approach – which retrieves and presents existing information from verified databases rather than generating new content – offers a more appropriate path forward. By pulling from verified, structured datasets and providing precise, accurate responses, extractive AI systems can maintain the high standards of accuracy required while making statistical information more accessible to users who may find traditional databases overwhelming. An extractive, rather than generative, approach allows agencies to modernize data delivery while preserving their core mission of providing reliable, verifiable statistical information…(More)”
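The extractive approach the article describes can be sketched in a few lines: answers are looked up verbatim in a verified, structured dataset and returned with a citation, or the system declines to answer rather than generate a guess. The table, query keys, and figures below are hypothetical placeholders for illustration, not real statistics or any agency's actual API:

```python
# Minimal sketch of an extractive (lookup-based) statistics assistant.
# All data below is an illustrative placeholder, not a real statistic.
VERIFIED_STATS = {
    ("unemployment rate", 2023): ("4.2%", "Labour Force Survey, Table 1 (placeholder)"),
    ("median household income", 2022): ("$70,000", "Household Income Report (placeholder)"),
}

def extractive_answer(topic: str, year: int) -> str:
    """Return an exact, citable value from the verified table; never guess."""
    record = VERIFIED_STATS.get((topic.strip().lower(), year))
    if record is None:
        return "No verified statistic is available for that query."
    value, source = record
    return f"{value} (source: {source})"

print(extractive_answer("Unemployment rate", 2023))
print(extractive_answer("GDP growth", 1999))
```

The design choice is the point: because every response is either a verbatim record with its provenance or an explicit refusal, the system cannot hallucinate a figure, which is exactly the property a statistical agency needs.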

Revealed: bias found in AI system used to detect UK benefits fraud


Article by Robert Booth: “An artificial intelligence system used by the UK government to detect welfare fraud is showing bias according to people’s age, disability, marital status and nationality, the Guardian can reveal.

An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.

The admission was made in documents released under the Freedom of Information Act by the Department for Work and Pensions (DWP). The “statistically significant outcome disparity” emerged in a “fairness analysis” of the automated system for universal credit advances carried out in February this year.

The emergence of the bias comes after the DWP this summer claimed the AI system “does not present any immediate concerns of discrimination, unfair treatment or detrimental impact on customers”.

This assurance came in part because the final decision on whether a person gets a welfare payment is still made by a human, and officials believe the continued use of the system – which is attempting to help cut an estimated £8bn a year lost in fraud and error – is “reasonable and proportionate”.

But no fairness analysis has yet been undertaken in respect of potential bias centring on race, sex, sexual orientation and religion, or pregnancy, maternity and gender reassignment status, the disclosures reveal.
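The “statistically significant outcome disparity” described above is, at its core, a comparison of selection rates between groups. A minimal sketch of such a check is a two-proportion z-test; the counts below are made up for illustration, and the DWP's actual methodology has not been published in detail:

```python
import math

def selection_rate_disparity(sel_a: int, n_a: int, sel_b: int, n_b: int):
    """Two-proportion z-test for a difference in selection (referral) rates.

    Returns (rate_a, rate_b, z); |z| > 1.96 indicates a disparity that is
    statistically significant at roughly the 5% level.
    """
    p_a, p_b = sel_a / n_a, sel_b / n_b
    p_pool = (sel_a + sel_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_a - p_b) / se

# Hypothetical counts: group A flagged 90 of 1,000 claims, group B 60 of 1,000.
rate_a, rate_b, z = selection_rate_disparity(90, 1000, 60, 1000)
print(f"rates: {rate_a:.1%} vs {rate_b:.1%}, z = {z:.2f}")
```

With these illustrative numbers the z-statistic exceeds 1.96, so the 3-percentage-point gap in referral rates would count as a significant disparity even though, as in the DWP's defence of the system, a human still makes the final decision on each flagged claim.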

Campaigners responded by accusing the government of a “hurt first, fix later” policy and called on ministers to be more open about which groups were likely to be wrongly suspected by the algorithm of trying to cheat the system…(More)”.