Privacy-Enhancing and Privacy-Preserving Technologies: Understanding the Role of PETs and PPTs in the Digital Age


Paper by the Centre for Information Policy Leadership: “The paper explores how organizations are approaching privacy-enhancing technologies (“PETs”) and how PETs can advance data protection principles, and provides examples of how specific types of PETs work. It also examines potential challenges to the use of PETs and possible solutions to those challenges.

CIPL emphasizes the enormous potential inherent in these technologies to mitigate privacy risks and support innovation, and recommends a number of steps to foster further development and adoption of PETs. In particular, CIPL calls for policymakers and regulators to incentivize the use of PETs through clearer guidance on key legal concepts that impact the use of PETs, and by adopting a pragmatic approach to the application of these concepts.

CIPL’s recommendations towards wider adoption are as follows:

  • Issue regulatory guidance and incentives regarding PETs: Official regulatory guidance addressing PETs in the context of specific legal obligations or concepts (such as anonymization) will incentivize greater investment in PETs.
  • Increase education and awareness about PETs: PET developers and providers need to show tangible evidence of the value of PETs and help policymakers, regulators and organizations understand how such technologies can facilitate responsible data use.
  • Develop industry standards for PETs: Industry standards would help facilitate interoperability for the use of PETs across jurisdictions and codify best practices that support technical reliability and foster trust in these technologies.
  • Recognize PETs as a demonstrable element of accountability: PETs complement robust data privacy management programs and should be recognized as an element of organizational accountability…(More)”.
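As a concrete illustration of how one widely discussed PET works (this sketch is illustrative and not drawn from the CIPL paper, and all names are hypothetical): differential privacy adds calibrated noise to query answers so that no individual record can be singled out, which is one technical route toward the anonymization concepts the paper discusses. A minimal Laplace-mechanism sketch:

```python
import math
import random

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count: the true count plus Laplace noise
    with scale 1/epsilon (a count query has sensitivity 1)."""
    true_count = sum(1 for r in records if predicate(r))
    # Inverse-CDF sample from a Laplace(0, 1/epsilon) distribution
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Hypothetical example: release how many people in a dataset are 30 or older
ages = [25, 41, 37, 62, 19]
noisy = dp_count(ages, lambda a: a >= 30, epsilon=1.0)
```

Smaller epsilon means more noise and stronger privacy: the released answer stays useful in aggregate while limiting what can be inferred about any single record.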

Testing the Assumptions of the Data Revolution


Report by SDSN TReNDS: “Ten years have passed since the release of A World that Counts and the formal adoption of the Sustainable Development Goals (SDGs). This seems an appropriate time for national governments and the global data community to reflect on where progress has been made so far.

This report supports this objective in three ways: it evaluates the assumptions that underpin A World that Counts’ core hypothesis that the data revolution would lead to better outcomes across the 17 SDGs, it summarizes where and how we have made progress, and it identifies knowledge gaps related to each assumption. These knowledge gaps will serve as the foundation for the next phase of the SDSN TReNDS research program, guiding our exploration of emerging data-driven paradigms and their implications for the SDGs. By analyzing these assumptions, we can consider how SDSN TReNDS and other development actors might adapt their activities to a new set of circumstances in the final six years of the SDG commitments.

Given that the 2030 Agenda established a 15-year timeframe for SDG attainment, it is to be expected that some of A World that Counts’ key assumptions would fall short or require recalibration along the way. Unforeseen events such as the COVID-19 pandemic would inevitably shift global attention and priorities away from the targets set out in the SDG framework, at least temporarily…(More)”.

Tackling Today’s Data Dichotomy: Unveiling the Paradox of Abundant Supply and Restricted Access in the Quest for Social Equity


Article by Stefaan Verhulst: “…One of the ironies of this moment, however, is that an era of unprecedented supply is simultaneously an era of constricted access to data. Much of the data we generate is privately “owned,” hidden away in private or public sector silos, or otherwise inaccessible to those who are most likely to benefit from it or generate valuable insights. These restrictions on access are grafted onto existing socioeconomic inequalities, driven by broader patterns of exclusion and marginalization, and also exacerbating them. Critically, restricted or unequal access to data does not only harm individuals: it causes untold public harm by limiting the potential of data to address social ills. It also limits attempts to improve the output of AI both in terms of bias and trustworthiness.

In this paper, we outline two potential approaches that could help address—or at least mitigate—the harms: social licensing and a greater role for data stewards. While not comprehensive solutions, we believe that these represent two of the most promising avenues to introduce greater efficiencies into how data is used (and reused), and thus lead to more targeted, responsive, and responsible policymaking…(pages 22-25)”.

Digital Self-Determination


New Website and Resource by the International Network on Digital Self Determination: “Digital Self-Determination seeks to empower individuals and communities to decide how their data is managed in ways that benefit themselves and society. Translating this principle into practice requires a multi-faceted examination from diverse perspectives and in distinct contexts.

Our network connects different actors from around the world to consider how to apply Digital Self-Determination in real-life settings to inform both theory and practice.

Our main objectives are the following:

  • Inform policy development;
  • Accelerate the creation of new DSD processes and technologies;
  • Establish new professions that can help implement DSD (such as data stewards);
  • Contribute to the regulatory and policy debate;
  • Raise awareness and build bridges between the public and private sectors and data subjects…(More)”.

Generative AI for economic research: Use cases and implications for economists  


Paper by Anton Korinek: “…This article describes use cases of modern generative AI to interested economic researchers based on the author’s exploration of the space. The main emphasis is on LLMs, which are the type of generative AI that is currently most useful for research. I have categorized their use cases into six areas: ideation and feedback, writing, background research, data analysis, coding, and mathematical derivations. I provide general instructions for how to take advantage of each of these capabilities and demonstrate them using specific examples. Moreover, I classify the capabilities of the most commonly used LLMs from experimental to highly useful to provide an overview. My hope is that this paper will be a useful guide both for researchers starting to use generative AI and for expert users interested in use cases beyond those they already have experience with, helping them take advantage of the rapidly growing capabilities of LLMs. The online resources associated with this paper are available at the journal website and will provide semi-annual updates on the capabilities and use cases of the most advanced generative AI tools for economic research. In addition, they offer a guide on “How do I start?” as well as a page with “Useful Resources on Generative AI for Economists.”…(More)”

Why Philanthropists Should Become Heretics


Article by Mark Malloch-Brown: “…There is a legitimate role for philanthropy in troubled times, but one that has to reflect them. No longer is it enough for established figures to use foundations and other philanthropies to prop up an existing order. The world of Hoffman or Bundy no longer exists, let alone that of Carnegie and Rockefeller. Today, the sector will find legitimacy only in its ability to help confront the manifold crises in ways others cannot.

In his 2018 book Just Giving, the political scientist Rob Reich brought a skeptical eye to the question of whether foundations have any valid purpose in liberal democracies but concluded that they can indeed be beneficial by fulfilling roles that only they can take on, through their distinctive constitutions. Reich identified two in particular: pluralism (foundations can challenge orthodoxies by pursuing idiosyncratic goals without clear electoral or market rationales) and discovery (foundations can serve as the “risk capital” for democratic societies, experimenting and investing for the long term). Precisely because entities in the philanthropic sector do not answer to voters or shareholders, they can be both radically urgent and radically patient: moving faster than other actors in response to a crisis or opportunity but also possessing far greater staying power, thus the ability to back projects whose success is judged in decades rather than months.

This approach demands that those who were once secular priests—the leaders of the philanthropic sector—abandon their cassocks and accept the mantle of the heretic. Only by challenging the system and agitating on its fringes can they realize their full potential in today’s crisis-bound world…(More)”

The Global Cooperation Barometer 2024


WEF Report: “From 2012 up until the COVID-19 pandemic, there was an increase in cooperation across four of the five pillars, with peace and security being the only exception. Innovation and technology saw the biggest increase in cooperation – at more than 30%.

The report shows a “stark deterioration” in the peace and security pillar due to a rapid rise in the number of forcibly displaced people and deaths from conflict. However, there has been “continued growth” in the climate and nature pillar due to increased commitments from countries.

[Figure: Cooperation trends by pillar. How cooperation has developed over the past decade, by pillar. Image: World Economic Forum]

Here’s what you need to know about cooperation across the five pillars.

  • Trade and capital

Global trade and capital flows rose moderately between 2012 and 2022. During the pandemic, these areas experienced volatility, with labour migration patterns dropping. But metrics such as goods trade, development assistance, and developing countries’ share of foreign direct investment and manufacturing exports have returned to strong growth in the post-pandemic period, says the report.

  • Innovation and technology

In the eight years until the pandemic, innovation and technology cooperation “maintained strong and significant growth” across most indicators, especially cross-border data flows and IT services trade. But this has plateaued since 2020, with some key metrics, including cross-border patent applications and international student flows, falling.


  • Climate and natural capital

This is the only pillar that has seen the majority of indicators rise across the whole decade, with financial commitments to mitigation and adaptation and a significant expansion of marine protected areas. However, emissions continue to rise and “progress towards ecological outcomes is stagnant”, says the report.

  • Health and wellness

Between 2012 and 2020, cooperation on health and wellness rose consistently and was “essential” to navigating the COVID-19 pandemic, says the report, citing vaccine development, if not necessarily distribution, as an example. But cooperation has dipped slightly since its peak in 2020.

  • Peace and security

Trends in peace and security cooperation have declined considerably since 2016, driven by a rise in forcibly displaced people and cyberattacks, as well as a recent increase in the number of conflicts and conflict-related deaths. The report notes these metrics suggest an “increasingly unstable global security environment and increased intensity of conflicts”…(More)”.

Do Policy Schools Still Have a Point?


Article by Stephen M. Walt: “Am I proposing that we toss out the current curriculum, stop teaching microeconomics, democratic theory, public accounting, econometrics, foreign policy, applied ethics, history, or any of the other building blocks of today’s public policy curriculum? Not yet. But we ought to devote more time and effort to preparing students for a world that is going to be radically different from the one we’ve known in the past—and sooner than they think.

I have three modest proposals.

First, and somewhat paradoxically, the prospect of radical change highlights the importance of basic theories. Empirical patterns derived from past experience (e.g., “democracies don’t fight each other”) may be of little value if the political and social conditions under which those laws were discovered no longer exist. To make sense of radically new circumstances, we will have to rely on causal explanations (i.e., theories) to help us foresee what is likely to occur and to anticipate the results of different policy choices. Knowledge derived from simplistic hypothesis testing or simple historical analogies will be less useful than rigorous and refined theories that tell us what’s causing what and help us understand the effects of different actions. Even more sophisticated efforts to teach “applied history” will fail if past events are not properly interpreted. The past never speaks to us directly; all historical interpretation is in some sense dependent on the theories or frameworks that we bring to these events. We need to know not just what happened in some earlier moment; we need to understand why it happened as it did and whether similar causal forces are at work today. Providing a causal explanation requires theory.

At the same time, some of our existing theories will need to be revised (or even abandoned), and new ones may need to be invented. We cannot escape reliance on some sort of theory, but rigid and uncritical adherence to a particular worldview can be just as dangerous as trying to operate solely with one’s gut instincts. For this reason, public policy schools should expose students to a wider range of theoretical approaches than they currently do and teach students how to think critically about them and to identify their limitations along with their strengths…(More)”.

The Branding Dilemma of AI: Steering Towards Efficient Regulation


Blog by Zeynep Engin: “…Undoubtedly, the term ‘Artificial Intelligence’ has captured the public imagination, proving to be an excellent choice from a marketing standpoint (particularly serving the marketing goals of big AI tech companies). However, this has not been without its drawbacks. The field has experienced several ‘AI winters’ when lofty promises failed to translate into real-world outcomes. More critically, this term has anthropomorphized what are, at their core, high-dimensional statistical optimization processes. Such representation has obscured their true nature and the extent of their potential. Moreover, as computing capacities have expanded exponentially, the ability of these systems to process large datasets quickly and precisely, identifying patterns autonomously, has often been misinterpreted as evidence of human-like or even superhuman intelligence. Consequently, AI systems have been elevated to almost mystical status, perceived as incomprehensible to humans and, thus, uncontrollable by humans…

A profound shift in the discourse surrounding AI is urgently necessary. The quest to replicate or surpass human intelligence, while technologically fascinating, does not fully encapsulate the field’s true essence and progress. Indeed, AI has seen significant advances, uncovering a vast array of functionalities. However, its core strength still lies in computational speed and precision — a mechanical prowess. The ‘magic’ of AI truly unfolds when this computational capacity intersects with the wealth of real-world data generated by human activities and the environment, transforming human directives into computational actions. Essentially, we are now outsourcing complex processing tasks to machines, moving beyond crafting bespoke solutions for each problem in favour of leveraging the vast computational resources we have. This transition does not yield an ‘artificial intelligence’, but poses a new challenge to human intelligence in the knowledge creation cycle: the responsibility to formulate the ‘right’ questions and vigilantly monitor the outcomes of such intricate processing, ensuring the mitigation of any potential adverse impacts…(More)”.
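The blog’s characterization of AI as high-dimensional statistical optimization can be made concrete with a deliberately tiny sketch (illustrative only, not from the post): gradient descent adjusting two parameters to fit data. Scaled up to billions of parameters, the same mechanical process is what gets marketed as “intelligence.”

```python
def fit_line(xs, ys, lr=0.01, steps=2000):
    """Fit y = w*x + b by gradient descent on mean squared error --
    iterative statistical optimization in its smallest form."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of the mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # converges near w=2, b=1
```

Nothing in the loop “understands” the data; it mechanically reduces an error measure, which is the point the post makes about anthropomorphized terminology.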

The Data Revolution and the Study of Social Inequality: Promise and Perils


Paper by Mario L. Small: “The social sciences are in the midst of a revolution in access to data, as governments and private companies have accumulated vast digital records of rapidly multiplying aspects of our lives and made those records available to researchers. The accessibility and comprehensiveness of the data are unprecedented. How will the data revolution affect the study of social inequality? I argue that the speed, breadth, and low cost with which large-scale data can be acquired promise a dramatic transformation in the questions we can answer, but this promise can be undercut by size-induced blindness, the tendency to ignore important limitations amidst a source with billions of data points. The likely consequences for what we know about the social world remain unclear…(More)”.
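Small’s notion of “size-induced blindness” can be illustrated with a toy simulation (not from the paper; all numbers and names are hypothetical): when a data source systematically over-represents some group, collecting more records does not remove the bias, it only makes the wrong answer more precise.

```python
import random

def platform_mean_income(n, seed=0):
    """Estimate average income from a hypothetical platform whose
    coverage skews toward high earners. The true population mean is
    50,000, but the estimate stays biased no matter how large n is."""
    rng = random.Random(seed)
    total, kept = 0.0, 0
    while kept < n:
        income = rng.gauss(50_000, 15_000)           # true population
        keep_prob = 0.9 if income > 50_000 else 0.5  # selection bias
        if rng.random() < keep_prob:
            total += income
            kept += 1
    return total / kept

small = platform_mean_income(1_000)
large = platform_mean_income(100_000)  # 100x more data, same bias
```

Both estimates land several thousand above the true mean of 50,000: the larger sample shrinks the random error but leaves the systematic error untouched, which is exactly the limitation that billions of data points can tempt researchers to ignore.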