The Global Cooperation Barometer 2024


WEF Report: “From 2012 up until the COVID-19 pandemic, there was an increase in cooperation across four of the five pillars, with peace and security being the only exception. Innovation and technology saw the biggest increase in cooperation – at more than 30%.

The report shows a “stark deterioration” in the peace and security pillar due to a rapid rise in the number of forcibly displaced people and deaths from conflict. However, there has been “continued growth” in the climate and nature pillar due to increased commitments from countries.

[Figure: Cooperation trends by pillar; how cooperation has developed over the past decade. Image: World Economic Forum]

Here’s what you need to know about cooperation across the five pillars.

  • Trade and capital

Global trade and capital flows rose moderately between 2012 and 2022. During the pandemic, these areas experienced volatility, with labour migration patterns dropping. But metrics such as goods trade, development assistance, and developing countries’ share of foreign direct investment and manufacturing exports have returned to strong growth in the post-pandemic period, says the report.

  • Innovation and technology

In the eight years until the pandemic, innovation and technology cooperation “maintained strong and significant growth” across most indicators, especially cross-border data flows and IT services trade. But this has plateaued since 2020, with some key metrics, including cross-border patent applications and international student flows, falling.

  • Climate and natural capital

This is the only pillar in which the majority of indicators rose across the whole decade, with growing financial commitments to mitigation and adaptation and a significant expansion of marine protected areas. However, emissions continue to rise and “progress towards ecological outcomes is stagnant”, says the report.

  • Health and wellness

Between 2012 and 2020, cooperation on health and wellness rose consistently and was “essential” to navigating the COVID-19 pandemic, says the report, citing vaccine development, if not necessarily distribution, as an example. But cooperation has dipped slightly since its peak in 2020.

  • Peace and security

Trends in peace and security cooperation have declined considerably since 2016, driven by a rise in forcibly displaced people and cyberattacks, as well as a recent increase in the number of conflicts and conflict-related deaths. The report notes these metrics suggest an “increasingly unstable global security environment and increased intensity of conflicts”…(More)”.

Power to the standards


Report by Gergana Baeva, Michael Puntschuh and Matthieu Binder: “Standards and norms will be of central importance when it comes to the practical implementation of legal requirements for AI systems as they are developed and deployed.

Using expert interviews, our study “Power to the standards” documents the existing obstacles on the way to the standardization of AI. In addition to practical and technological challenges, questions of democratic policy arise. After all, requirements such as fairness or transparency are often regarded as criteria to be determined by the legislator, meaning that they are only partially susceptible to standardization.

Our study concludes that the targeted and comprehensive participation of civil society actors is particularly necessary in order to compensate for existing participation deficits within the standardization process…(More)”.

Navigating the Metrics Maze: Lessons from Diverse Domains for Federal Chief Data Officers


Paper by the CDO Council: “In the rapidly evolving landscape of government, Federal Chief Data Officers (CDOs) have emerged as crucial leaders tasked with harnessing the power of data to drive organizational success. However, the relative newness of this role brings forth unique challenges, particularly in the realm of measuring and communicating the value of their efforts.

To address this measurement conundrum, this paper delves into lessons from non-data domains such as asset management, inventory management, manufacturing, and customer experience. While these fields share common ground with CDOs in facing critical questions, they stand apart in possessing established performance metrics. Drawing parallels with domains that have successfully navigated similar challenges offers a roadmap for establishing metrics that can transcend organizational boundaries.

By learning from the experiences of other domains and adopting a nuanced approach to metrics, CDOs can pave the way for a clearer understanding of the impact and value of their vital contributions to the data-driven future…(More)”.

Technology, Data and Elections: An Updated Checklist on the Election Cycle


Checklist by Privacy International: “In the last few years, electoral processes and related activities have undergone significant changes, driven by the development of digital technologies.

The use of personal data has redefined political campaigning and enabled the proliferation of political advertising tailor-made for audiences sharing specific characteristics or personalised to the individual. These new practices, combined with the platforms that enable them, create an environment that facilitates the manipulation of opinion and, in some cases, the exclusion of voters.

In parallel, governments are continuing to invest in modern infrastructure that is inherently data-intensive. Several states are turning to biometric voter registration and verification technologies ostensibly to curtail fraud and vote manipulation. This modernisation often results in the development of nationwide databases containing masses of personal, sensitive information that require heightened safeguards and protection.

The number and nature of actors involved in the election process is also changing, and so are the relationships between electoral stakeholders. The introduction of new technologies, for example for purposes of voter registration and verification, often goes hand-in-hand with the involvement of private companies, a costly investment that is not without risk and requires robust safeguards to avoid abuse.

This new electoral landscape comes with many challenges that must be addressed in order to protect free and fair elections: a fact that is increasingly recognised by policymakers and regulatory bodies…(More)”.

Uses and Purposes of Beneficial Ownership Data


Report by Andres Knobel: “This report describes more than 30 uses and purposes of beneficial ownership data (beyond anti-money laundering) for a whole-of-government approach. It covers 5 cases for exposing corruption; 6 cases for protecting democracy, the rule of law and national assets; 9 cases for exposing tax abuse; 4 cases for exposing fraud and administrative violations; 3 cases for protecting the environment and mitigating climate change; 5 cases for ensuring fair market conditions; and 4 cases for creating fairer societies…(More)”.

The Transferability Question


Report by Geoff Mulgan: “How should we think about the transferability of ideas and methods? If something works in one place and one time, how do we know if it, or some variant of it, will work in another place or another time?

This – the transferability question – is one that many organisations face: businesses, from retailers and taxi firms to restaurants and accountants, wanting to expand to other regions or countries; governments wanting to adopt and adapt policies from elsewhere; and professionals, such as doctors, wanting to know whether a kind of surgery, or a smoking cessation programme, will work in another context…

Here I draw on this literature to suggest not so much a generalisable method but rather an approach that starts by asking four basic questions of any promising idea:  

  • SPREAD: has the idea already spread to diverse contexts and been shown to work?  
  • ESSENTIALS: do we know what the essentials are, the crucial ingredients that make it effective?  
  • EASE: how easy is it to adapt or adopt (in other words, how many other things need to change for it to be implemented successfully)? 
  • RELEVANCE: how relevant is the evidence (or how similar is the context of evidence to the context of action)? 

Asking these questions is a protection against the vice of hoping that you can just ‘cut and paste’ an idea from elsewhere, but also an encouragement to be hungry for good ideas that can be adopted or adapted.    
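
As a minimal illustration (not part of Mulgan's report), the four questions can be treated as a simple screening rubric. The scoring scale, field names and threshold in the sketch below are hypothetical assumptions:

```python
from dataclasses import dataclass

@dataclass
class TransferabilityCheck:
    """Mulgan's four screening questions, each scored here on a
    hypothetical 0 (weak) to 3 (strong) scale."""
    spread: int      # has the idea already worked in diverse contexts?
    essentials: int  # do we know the crucial ingredients that make it effective?
    ease: int        # how little else must change for it to be implemented?
    relevance: int   # how similar is the context of evidence to the context of action?

    def worth_pursuing(self, threshold: int = 8) -> bool:
        # A hypothetical cut-off: pursue ideas scoring 8+ out of 12.
        return self.spread + self.essentials + self.ease + self.relevance >= threshold

# Example: screening a smoking-cessation programme for adoption elsewhere.
idea = TransferabilityCheck(spread=3, essentials=2, ease=1, relevance=2)
print(idea.worth_pursuing())  # True (8 out of 12)
```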

I conclude by arguing that it is healthy for any society or government to assume that there are good ideas that could be adopted or adapted; it’s healthy to cultivate a hunger to learn; healthy to understand methods for analysing what aspects of an idea or model could be transferable; and that there is great value in having institutions that are good at promoting and spreading ideas, at adoption and adaptation as well as innovation…(More)”.

Foundational Research Gaps and Future Directions for Digital Twins


Report by the National Academy of Engineering; National Academies of Sciences, Engineering, and Medicine: “Across multiple domains of science, engineering, and medicine, excitement is growing about the potential of digital twins to transform scientific research, industrial practices, and many aspects of daily life. A digital twin couples computational models with a physical counterpart to create a system that is dynamically updated through bidirectional data flows as conditions change. Going beyond traditional simulation and modeling, digital twins could enable improved medical decision-making at the individual patient level, predictions of future weather and climate conditions over longer timescales, and safer, more efficient engineering processes. However, many challenges remain before these applications can be realized.
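
To make that definition concrete, the sketch below shows the bidirectional loop in miniature: measurements flow from the physical counterpart into the model, and the updated model feeds a recommendation back. The asset, state variable and update rule are hypothetical illustrations, not drawn from the report.

```python
import random

class DigitalTwin:
    """A minimal digital twin: model state kept in sync with a physical
    counterpart through bidirectional data flow (hypothetical example)."""

    def __init__(self, initial_temp: float, gain: float = 0.5):
        self.temp = initial_temp  # the model's estimate of the asset's temperature
        self.gain = gain          # how strongly new sensor data corrects the model

    def assimilate(self, measurement: float) -> None:
        # Physical -> digital: blend the sensor reading into the model state.
        self.temp += self.gain * (measurement - self.temp)

    def recommend(self, setpoint: float) -> float:
        # Digital -> physical: use the updated model to propose a control action.
        return setpoint - self.temp  # positive = add heat, negative = cool

twin = DigitalTwin(initial_temp=20.0)
actual = 20.0
for step in range(5):
    actual += random.uniform(-0.5, 1.5)     # the real asset drifts
    sensor = actual + random.gauss(0, 0.2)  # a noisy measurement of it
    twin.assimilate(sensor)                 # update the twin from the data
    action = twin.recommend(setpoint=22.0)  # feed a decision back to the asset
    print(f"step {step}: actual={actual:.2f} twin={twin.temp:.2f} action={action:+.2f}")
```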

This report identifies the foundational research and resources needed to support the development of digital twin technologies. The report presents critical future research priorities and an interdisciplinary research agenda for the field, including how federal agencies and researchers across domains can best collaborate…(More)”.

Considerations for Governing Open Foundation Models


Brief by Rishi Bommasani et al: “Foundation models (e.g., GPT-4, Llama 2) are at the epicenter of AI, driving technological innovation and billions in investment. This paradigm shift has sparked widespread demands for regulation. Animated by factors as diverse as declining transparency and unsafe labor practices, limited protections for copyright and creative work, as well as market concentration and productivity gains, many have called for policymakers to take action.

Central to the debate about how to regulate foundation models is the process by which foundation models are released. Some foundation models like Google DeepMind’s Flamingo are fully closed, meaning they are available only to the model developer; others, such as OpenAI’s GPT-4, are limited access, available to the public but only as a black box; and still others, such as Meta’s Llama 2, are more open, with widely available model weights enabling downstream modification and scrutiny. As of August 2023, the U.K.’s Competition and Markets Authority documents, based on data from Stanford’s Ecosystem Graphs, that the most common release approach for publicly disclosed models is open release. Developers like Meta, Stability AI, Hugging Face, Mistral, Together AI, and EleutherAI frequently release models openly.

Governments around the world are issuing policy related to foundation models. As part of these efforts, open foundation models have garnered significant attention: The recent U.S. Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence tasks the National Telecommunications and Information Administration with preparing a report on open foundation models for the president. In the EU, open foundation models trained with fewer than 10^25 floating point operations (a measure of the amount of compute expended) appear to be exempted under the recently negotiated AI Act. The U.K.’s AI Safety Institute will “consider open-source systems as well as those deployed with various forms of access controls” as part of its initial priorities. Beyond governments, the Partnership on AI has introduced guidelines for the safe deployment of foundation models, recommending against open release for the most capable foundation models.
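
For a sense of scale, training compute is often approximated with the rule of thumb of roughly 6 FLOPs per parameter per training token. The sketch below applies that approximation to a hypothetical 70-billion-parameter model trained on 2 trillion tokens; the figures are illustrative, and this is not the AI Act's official measurement method.

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate via the common ~6 * N * D rule of
    thumb (forward plus backward pass for each parameter and token)."""
    return 6 * n_params * n_tokens

EU_THRESHOLD = 1e25  # floating point operations, per the negotiated AI Act text

# Hypothetical open model: 70 billion parameters, 2 trillion training tokens.
flops = training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")    # ~8.4e23
print("exempt" if flops < EU_THRESHOLD else "not exempt")  # well below 1e25
```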

Policy on foundation models should support the open foundation model ecosystem, while providing resources to monitor risks and create safeguards to address harms. Open foundation models provide significant benefits to society by promoting competition, accelerating innovation, and distributing power. For example, small businesses hoping to build generative AI applications could choose among a variety of open foundation models that offer different capabilities and are often less expensive than closed alternatives. Further, open models are marked by greater transparency and, thereby, accountability. When a model is released with its training data, independent third parties can better assess the model’s capabilities and risks…(More)”.

Observer Theory


Article by Stephen Wolfram: “We call it perception. We call it measurement. We call it analysis. But in the end it’s about how we take the world as it is, and derive from it the impression of it that we have in our minds.

We might have thought that we could do science “purely objectively” without any reference to observers or their nature. But what we’ve discovered particularly dramatically in our Physics Project is that the nature of us as observers is critical even in determining the most fundamental laws we attribute to the universe.

But what ultimately does an observer—say like us—do? And how can we make a theoretical framework for it? Much as we have a general model for the process of computation—instantiated by something like a Turing machine—we’d like to have a general model for the process of observation: a general “observer theory”.

Central to what we think of as an observer is the notion that the observer will take the raw complexity of the world and extract from it some reduced representation suitable for a finite mind. There might be zillions of photons impinging on our eyes, but all we extract is the arrangement of objects in a visual scene. Or there might be zillions of gas molecules impinging on a piston, yet all we extract is the overall pressure of the gas.

In the end, we can think of it fundamentally as being about equivalencing. There are immense numbers of different individual configurations for the photons or the gas molecules—that are all treated as equivalent by an observer who’s just picking out the particular features needed for some reduced representation.
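
That equivalencing can be phrased directly in code: a reduction function maps many distinct microstates to one coarse observable, and finite precision collapses them into a single perceived state. The sketch below is an illustration of the idea, not code from the article; the choice of observable and precision are assumptions.

```python
import random

def mean_sq_speed(velocities):
    """A gas observable proportional to temperature: the observer's
    reduced representation of a full molecular configuration."""
    return sum(v * v for v in velocities) / len(velocities)

def observe(velocities, precision=-4):
    """Equivalencing: round the observable to the observer's finite
    precision, collapsing many microstates into one perceived state."""
    return round(mean_sq_speed(velocities), precision)

# Two entirely different microscopic configurations...
state_a = [random.gauss(0, 500) for _ in range(100_000)]
state_b = [random.gauss(0, 500) for _ in range(100_000)]

# ...that a finite observer treats as the same macrostate.
print(observe(state_a) == observe(state_b))  # almost always True
```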

There’s in a sense a certain duality between computation and observation. In computation one’s generating new states of a system. In observation, one’s equivalencing together different states.

That equivalencing must in the end be implemented “underneath” by computation. But in observer theory what we want to do is just characterize the equivalencing that’s achieved. For us as observers it might in practice be all about how our senses work, what our biological or cultural nature is—or what technological devices or structures we’ve built. But what makes a coherent concept of observer theory possible is that there seem to be general, abstract characterizations that capture the essence of different kinds of observers…(More)”.

Privacy-Enhancing and Privacy-Preserving Technologies: Understanding the Role of PETs and PPTs in the Digital Age


Paper by the Centre for Information Policy Leadership: “…explores how organizations are approaching privacy-enhancing technologies (“PETs”) and how PETs can advance data protection principles, and provides examples of how specific types of PETs work. It also explores potential challenges to the use of PETs and possible solutions to those challenges.
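
As one concrete illustration of how a specific PET works (an example chosen here for illustration, not necessarily one drawn from the paper), differential privacy adds calibrated noise to query results so that the presence or absence of any single individual cannot be confidently inferred. The dataset, query and epsilon below are hypothetical:

```python
import random

def dp_count(data, predicate, epsilon: float = 0.5) -> float:
    """Differentially private count via the Laplace mechanism. A counting
    query has sensitivity 1 (adding or removing one person changes the
    count by at most 1), so noise is drawn from Laplace(1 / epsilon)."""
    true_count = sum(1 for row in data if predicate(row))
    # The difference of two Exp(epsilon) draws is Laplace with scale 1/epsilon.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical dataset: ages of survey respondents.
ages = [23, 35, 41, 29, 52, 38, 61, 27]
print(dp_count(ages, lambda age: age > 30, epsilon=0.5))  # true count 5, plus noise
```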

CIPL emphasizes the enormous potential inherent in these technologies to mitigate privacy risks and support innovation, and recommends a number of steps to foster further development and adoption of PETs. In particular, CIPL calls for policymakers and regulators to incentivize the use of PETs through clearer guidance on key legal concepts that impact the use of PETs, and by adopting a pragmatic approach to the application of these concepts.

CIPL’s recommendations towards wider adoption are as follows:

  • Issue regulatory guidance and incentives regarding PETs: Official regulatory guidance addressing PETs in the context of specific legal obligations or concepts (such as anonymization) will incentivize greater investment in PETs.
  • Increase education and awareness about PETs: PET developers and providers need to show tangible evidence of the value of PETs and help policymakers, regulators and organizations understand how such technologies can facilitate responsible data use.
  • Develop industry standards for PETs: Industry standards would help facilitate interoperability for the use of PETs across jurisdictions and help codify best practices to support technical reliability to foster trust in these technologies.
  • Recognize PETs as a demonstrable element of accountability: PETs complement robust data privacy management programs and should be recognized as an element of organizational accountability…(More)”.