Collective action for responsible AI in health


OECD Report: “Artificial intelligence (AI) will have profound impacts across health systems, transforming health care, public health, and research. Responsible AI can accelerate efforts to make health systems more resilient, sustainable, equitable, and person-centred. This paper provides an overview of the background and current state of artificial intelligence in health, along with perspectives on opportunities, risks, and barriers to success. The paper proposes several areas for policy-makers to explore to advance a future of responsible AI in health that is adaptable to change, respects individuals, champions equity, and achieves better health outcomes for all.

The areas to be explored relate to trust, capacity building, evaluation, and collaboration. This recognises that the primary forces needed to unlock the value of artificial intelligence are people-based, not technical…(More)”

Regulating AI Deepfakes and Synthetic Media in the Political Arena


Report by Daniel Weiner and Lawrence Norden: “…Part I of this resource defines the terms deepfake, synthetic media, and manipulated media in more detail. Part II sets forth some necessary considerations for policymakers, specifically:

  • The most plausible rationales for regulating deepfakes and other manipulated media when used in the political arena. In general, the necessity of promoting an informed electorate and the need to safeguard the overall integrity of the electoral process are among the most compelling rationales for regulating manipulated media in the political space.
  • The types of communications that should be regulated. Regulations should reach synthetic images and audio as well as video. Policymakers should focus on curbing or otherwise limiting depictions of events or statements that did not actually occur, especially those appearing in paid campaign ads and certain other categories of paid advertising or otherwise widely disseminated communications. All new rules should have clear carve-outs for parody, news media stories, and potentially other types of protected speech.
  • How such media should be regulated. Transparency rules — for example, rules requiring a manipulated image or audio recording to be clearly labeled as artificial and not a portrayal of real events — will usually be easiest to defend in court. Transparency will not always be enough, however; lawmakers should also consider outright bans of certain categories of manipulated media, such as deceptive audio and visual material seeking to mislead people about the time, place, and manner of voting.
  • Who regulations should target. Both bans and less burdensome transparency requirements should primarily target those who create or disseminate deceptive media, although regulation of the platforms used to transmit deepfakes may also make sense…(More)”.

2023 OECD Digital Government Index


OECD Report: “Digital government is essential to transform government processes and services in ways that improve the responsiveness and reliability of the public sector. During the COVID-19 pandemic it also proved crucial to governments’ ability to continue operating in times of crisis and provide timely services to citizens and businesses. Yet, for the digital transformation to be sustainable in the long term, it needs solid foundations, including adaptable governance arrangements, reliable and resilient digital public infrastructure, and a prospective approach to governing with emerging technologies such as artificial intelligence. This paper presents the main findings of the 2023 edition of the OECD Digital Government Index (DGI), which benchmarks the efforts made by governments to establish the foundations necessary for a coherent, human-centred digital transformation of the public sector. It comprises 155 data points from 33 member countries, 4 accession countries and 1 partner country collected in 2022, covering the period between 01 January 2020 and 31 October 2022…(More)”

The global reach of the EU’s approach to digital transformation


Report by the European Parliament’s Think Tank: “The EU’s approach to digital transformation is rooted in protecting fundamental rights, sustainability, ethics and fairness. With this human-centric vision of the digital economy and society, the EU seeks to empower citizens and businesses, regardless of their size. In the EU’s view, the internet should remain open, fair, inclusive and focused on people. Digital technologies should work for citizens and help them to engage in society. Companies should be able to compete on equal terms, and consumers should be confident that their rights are respected.

The European Commission has recently published a number of strategies and action plans that outline the EU’s vision for the digital future and set concrete targets for achieving it. The Commission has also proposed several digital regulations, including the artificial intelligence act, the Digital Services Act and the Digital Markets Act. These regulations are intended to ensure a safe online environment and fair and open digital markets, strengthen Europe’s competitiveness, improve algorithmic transparency and give citizens better control over how they share their personal data.

Although some of these regulations have not yet been adopted, and others have been in force for only a short time, they are expected to have an impact not only in the EU but also beyond its borders. For instance, several regulations target businesses – regardless of where they are based – that offer services to EU citizens or businesses. In addition, through the phenomenon known as ‘the Brussels effect’, these rules may influence tech business practices and national legislation around the world. The EU is an active participant in developing global digital cooperation and global governance frameworks for specific areas.
Various international organisations are developing instruments to ensure that people and businesses can take advantage of artificial intelligence’s benefits and limit negative consequences. In these global negotiations, the EU promotes respect for various fundamental rights and freedoms, as well as compatibility with EU law…(More)”.

A Guide to Designing New Institutions


Guide by TIAL: “We have created this guide as part of TIAL’s broader programme of work to help with the design of new institutions needed in fields ranging from environmental change to data stewardship and AI to mental health. This toolkit offers a framework for thinking about the design of new public institutions — whether at the level of a region or city, a nation, or at a transnational level. We welcome comments, critiques and additions.

This guide covers all the necessary steps of creating a new institution:

  • Preparation
  • Design (from structures and capabilities to processes and resources)
  • Socialisation (to ensure buy-in and legitimacy)
  • Implementation…(More)”.

Data Science for Social Impact in Higher Education: First Steps


Data.org playbook: “… was designed to help you expand opportunities for social impact data science learning. As you browse, you will see a range of these opportunities, including courses, modules for other courses, research and internship opportunities, and a variety of events and activities. The playbook also offers lessons learned to guide you through your process. Additionally, the playbook includes profiles of students who have engaged in data science for social impact, guidance for engaging partners, and additional resources relating to evaluation and courses. We hope that this playbook will inspire and support your efforts to bring social impact data science to your institutions…

As you look at the range of ways you might bring data science for social impact to your students, remember that the intention is not to replicate what is here, but to adapt it to your local contexts and conditions. You might draw pieces from several activities and combine them into a customized strategy that works for you. Consider the assets you have around you and how you might leverage them. At the same time, consider how some of the lessons learned might also reflect barriers you could face. Most importantly, know that it is possible to create data science for social impact at your institution, bringing benefit to your students and society…(More)”.

2024 Edelman Trust Barometer


Edelman: “The 2024 Edelman Trust Barometer reveals a new paradox at the heart of society. Rapid innovation offers the promise of a new era of prosperity, but instead risks exacerbating trust issues, leading to further societal instability and political polarization.

Innovation is accelerating – in regenerative agriculture, messenger RNA, renewable energy, and most of all in artificial intelligence. But society’s ability to process and accept rapid change is under pressure, with skepticism about science’s relationship with Government and the perception that the benefits skew towards the wealthy.

There is one issue on which the world stands united: innovation is being poorly managed – defined by lagging government regulation, uncertain impacts, lack of transparency, and an assumption that science is immutable. Our respondents cite this as a problem by nearly a two-to-one margin across most developed and developing countries, and across all age groups, income levels, education levels, and genders. There is consensus among those who say innovation is poorly managed that society is changing too quickly and not in ways that benefit “people like me” (69%).

Many are concerned that Science is losing its independence: to Government, to the political process, and to the wealthy. In the U.S., two-thirds assert that science is too politicized. For the first time in China, we see a contrast to their high trust in government: three-quarters of respondents believe that Government and organizations that fund research have too much influence on science. There is concern about excessive influence of the elites, with 82% of those who say innovation is managed poorly believing that the system is biased in favor of the rich – 30 percentage points higher than among those who feel innovation is managed well…(More)”.

Facial Recognition: Current Capabilities, Future Prospects, and Governance


A National Academies of Sciences, Engineering, and Medicine study: “Facial recognition technology is increasingly used for identity verification and identification, from aiding law enforcement investigations to identifying potential security threats at large venues. However, advances in this technology have outpaced laws and regulations, raising significant concerns related to equity, privacy, and civil liberties.

This report explores the current capabilities, future possibilities, and necessary governance for facial recognition technology. Facial Recognition Technology discusses legal, societal, and ethical implications of the technology, and recommends ways that federal agencies and others developing and deploying the technology can mitigate potential harms and enact more comprehensive safeguards…(More)”.

Introducing RegBox: using serious games in regulatory development


Toolkit by UK Policy Lab: “…enabling policymakers to convene stakeholders and work together to make decisions affecting regulation, using serious games. The toolkit will consist of game patterns for different use cases, a collection of case studies, guidance, and a set of tools to help policymakers decide which approach to take. Work on RegBox is still in progress, but in the spirit of being open and iterative, we wanted to share and communicate it early. Our overarching challenge question is:

How can we provide engaging and participatory tools that help policymakers to develop and test regulations and make effective decisions? …  

Policy Lab has worked on a range of projects that intersect with regulation, and we’ve noticed a growing demand for more anticipatory and participatory approaches in this area. Regulators are having to respond to emerging technologies which are disrupting markets and posing new risks to individuals and institutions. Additionally, the government has just launched the Smarter Regulation programme, which is encouraging officials to use regulations only where necessary, and to ensure their use is proportionate and future-proof. Because a change in regulation can have significant effects on businesses, organisations, and individuals, it is important to understand the potential effects before deciding. We hypothesise that serious games can be used to understand regulatory challenges and stress-test solutions at pace…(More)”.

Representative Bodies in the Age of AI


Report by POPVOX: “The report tracks current developments in the U.S. Congress and internationally, while assessing the prospects for future innovations. The report also serves as a primer for those in Congress on AI technologies and methods in an effort to promote responsible use and adoption. POPVOX endorses a considered, step-wise strategy for AI experimentation, underscoring the importance of capacity building, data stewardship, ethical frameworks, and insights gleaned from global precedents of AI in parliamentary functions. This ensures AI solutions are crafted with human discernment and supervision at their core.

Legislatures worldwide are progressively embracing AI tools such as machine learning, natural language processing, and computer vision to refine the precision, efficiency, and, to a small extent, the participatory aspects of their operations. The advent of generative AI platforms, such as ChatGPT, which excel in interpreting and organizing textual data, marks a transformative shift for the legislative process, inherently a task of converting rules into language.

While nations such as Brazil, India, Italy, and Estonia lead with applications ranging from the transcription and translation of parliamentary proceedings to enhanced bill drafting and sophisticated legislative record searches, the U.S. Congress is prudently venturing into the realm of generative AI. The House and Senate have initiated AI working groups and secured licenses for platforms like ChatGPT. They have also issued guidance on responsible use…(More)”.