Data4Philanthropy


New Resource and Peer-to-Peer Learning Network: “Today’s global challenges have become increasingly complex and interconnected – from a global pandemic to the climate crisis. Solving these complex problems not only requires new solutions, it also demands new methods for developing solutions and making decisions. By responsibly analyzing and using data, we can transform our understanding of and approach to societal issues and drive impact through our work.

However, many of these data-driven methods have not yet been adopted by the social sector or integrated across the grant-making cycle.

So we asked: how can innovations in data-driven methods and tools from multiple sectors transform decision-making within philanthropy and improve the act of grant giving?

DATA4Philanthropy is a peer-to-peer learning network that aims to identify and advance the responsible use and value of data innovations across philanthropic functions.

Philanthropies can learn more about the potential of data for their sector, whom to connect with to learn more about data, and how innovations in data-driven methods and tools are increasingly relevant across the cycle from strategy to grant-making to impact.

Rapid changes in both data supply and data methods can now be integrated across philanthropy, civil society and government decision-making cycles – from developing joint priorities to improving implementation efficacy to evaluating the impact of investments…(More)”

Will governments ever learn? A study of current provision and the key gaps


Paper by Geoff Mulgan: “The paper describes the history of training from ancient China onwards and the main forms it now takes. It suggests 10 areas where change may be needed and goes on to discuss how skills are learned, suggesting the need for more continuous learning and new approaches to capacity.

I hope anyone interested in this field will at least find it stimulating. I couldn’t find an overview of this kind available, so I tried to fill the gap, if only with a personal view. This topic is particularly important for the UK, which allowed its training system to collapse over the last decade. But the issues are relevant everywhere, since the capacity of governments arguably has more impact on human wellbeing than anything else…(More)”.

Defending the rights of refugees and migrants in the digital age


Primer by Amnesty International: “This is an introduction to the pervasive and rapid deployment of digital technologies in asylum and migration management systems across the globe, including in the United States, the United Kingdom and the European Union. Defending the rights of refugees and migrants in the digital age highlights some of the key digital technology developments in asylum and migration management systems, in particular systems that process large quantities of data, and the human rights issues arising from their use. This introductory briefing aims to build our collective understanding of these emerging technologies and hopes to add to wider advocacy efforts to stem their harmful effects…(More)”.

Governing Data and AI to Protect Inner Freedoms Includes a Role for IP


Article by Giuseppina (Pina) D’Agostino and Robert Fay: “Generative artificial intelligence (AI) has caught regulators everywhere by surprise. Its ungoverned and growing ubiquity is similar to that of the large digital platforms that play an important role in the work and personal lives of billions of individuals worldwide. These platforms rely on advertising revenue that depends on user data derived from numerous undisclosed sources, including covert tracking of interactions on digital platforms, surveillance of conversations, monitoring of activity across platforms and the acquisition of biometric data through immersive virtual reality games, to name just a few.

This complex milieu creates a suite of public policy challenges. One of the most important yet least explored is the intersection of intellectual property (IP), data governance, AI and the platforms’ underlying business model. The global scale, the quasi-monopolistic dominance enjoyed by the large platforms, and their control over data and data analytics have explicit implications for fundamental human rights, including freedom of thought…(More)”.

Winning the Battle of Ideas: Exposing Global Authoritarian Narratives and Revitalizing Democratic Principles


Report by Joseph Siegle: “Democracies are engaged in an ideological competition with autocracies that could reshape the global order. Narratives are a potent, asymmetric instrument of power, as they reframe events in a way that conforms to and propagates a particular worldview. Over the past decade and a half, autocracies like Russia and China have led the effort to disseminate authoritarian narratives globally, seeking to normalize authoritarianism as an equally viable and legitimate form of government. How do authoritarian narratives reframe an unappealing value proposition, with the aim of making the democratic path seem less attractive and offering authoritarianism as an alternative model? How can democracies reemphasize their core principles and remind audiences of democracy’s moral, developmental, and security advantages?…(More)”.

Collective action for responsible AI in health


OECD Report: “Artificial intelligence (AI) will have profound impacts across health systems, transforming health care, public health, and research. Responsible AI can accelerate efforts toward health systems that are more resilient, sustainable, equitable, and person-centred. This paper provides an overview of the background and current state of artificial intelligence in health, along with perspectives on opportunities, risks, and barriers to success. The paper proposes several areas for policy-makers to explore in order to advance a future of responsible AI in health that is adaptable to change, respects individuals, champions equity, and achieves better health outcomes for all.

The areas to be explored relate to trust, capacity building, evaluation, and collaboration. This recognises that the primary forces needed to unlock the value of artificial intelligence are people-based, not technical…(More)”

Regulating AI Deepfakes and Synthetic Media in the Political Arena


Report by Daniel Weiner and Lawrence Norden: “…Part I of this resource defines the terms deepfake, synthetic media, and manipulated media in more detail. Part II sets forth some necessary considerations for policymakers, specifically:

  • The most plausible rationales for regulating deepfakes and other manipulated media when used in the political arena. In general, the necessity of promoting an informed electorate and the need to safeguard the overall integrity of the electoral process are among the most compelling rationales for regulating manipulated media in the political space.
  • The types of communications that should be regulated. Regulations should reach synthetic images and audio as well as video. Policymakers should focus on curbing or otherwise limiting depictions of events or statements that did not actually occur, especially those appearing in paid campaign ads and certain other categories of paid advertising or otherwise widely disseminated communications. All new rules should have clear carve-outs for parody, news media stories, and potentially other types of protected speech.
  • How such media should be regulated. Transparency rules — for example, rules requiring a manipulated image or audio recording to be clearly labeled as artificial and not a portrayal of real events — will usually be easiest to defend in court. Transparency will not always be enough, however; lawmakers should also consider outright bans of certain categories of manipulated media, such as deceptive audio and visual material seeking to mislead people about the time, place, and manner of voting.
  • Who regulations should target. Both bans and less burdensome transparency requirements should primarily target those who create or disseminate deceptive media, although regulation of the platforms used to transmit deepfakes may also make sense…(More)”.

2023 OECD Digital Government Index


OECD Report: “Digital government is essential to transform government processes and services in ways that improve the responsiveness and reliability of the public sector. During the COVID-19 pandemic it also proved crucial to governments’ ability to continue operating in times of crisis and to provide timely services to citizens and businesses. Yet, for the digital transformation to be sustainable in the long term, it needs solid foundations, including adaptable governance arrangements, reliable and resilient digital public infrastructure, and a prospective approach to governing with emerging technologies such as artificial intelligence. This paper presents the main findings of the 2023 edition of the OECD Digital Government Index (DGI), which benchmarks the efforts made by governments to establish the foundations necessary for a coherent, human-centred digital transformation of the public sector. It comprises 155 data points from 33 member countries, 4 accession countries and 1 partner country, collected in 2022 and covering the period between 1 January 2020 and 31 October 2022…(More)”

The global reach of the EU’s approach to digital transformation


Report by the European Parliament’s Think Tank: “The EU’s approach to digital transformation is rooted in protecting fundamental rights, sustainability, ethics and fairness. With this human-centric vision of the digital economy and society, the EU seeks to empower citizens and businesses, regardless of their size. In the EU’s view, the internet should remain open, fair, inclusive and focused on people. Digital technologies should work for citizens and help them to engage in society. Companies should be able to compete on equal terms, and consumers should be confident that their rights are respected.

The European Commission has recently published a number of strategies and action plans that outline the EU’s vision for the digital future and set concrete targets for achieving it. The Commission has also proposed several digital regulations, including the artificial intelligence act, the Digital Services Act and the Digital Markets Act. These regulations are intended to ensure a safe online environment and fair and open digital markets, strengthen Europe’s competitiveness, improve algorithmic transparency and give citizens better control over how they share their personal data. Although some of these regulations have not yet been adopted, and others have been in force for only a short time, they are expected to have an impact not only in the EU but also beyond its borders. For instance, several regulations target businesses – regardless of where they are based – that offer services to EU citizens or businesses. In addition, through the phenomenon known as ‘the Brussels effect’, these rules may influence tech business practices and national legislation around the world.

The EU is an active participant in developing global digital cooperation and global governance frameworks for specific areas. Various international organisations are developing instruments to ensure that people and businesses can take advantage of artificial intelligence’s benefits and limit negative consequences. In these global negotiations, the EU promotes respect for various fundamental rights and freedoms, as well as compatibility with EU law….(More)”.

A Guide to Designing New Institutions


Guide by TIAL: “We have created this guide as part of TIAL’s broader programme of work to help with the design of new institutions needed in fields ranging from environmental change to data stewardship and AI to mental health. This toolkit offers a framework for thinking about the design of new public institutions — whether at the level of a region or city, a nation, or at a transnational level. We welcome comments, critiques and additions.

This guide covers all the necessary steps of creating a new institution:

  • Preparation
  • Design (from structures and capabilities to processes and resources)
  • Socialisation (to ensure buy-in and legitimacy)
  • Implementation…(More)”.