A Guide to Designing New Institutions


Guide by TIAL: “We have created this guide as part of TIAL’s broader programme of work to help with the design of new institutions needed in fields ranging from environmental change to data stewardship and AI to mental health. This toolkit offers a framework for thinking about the design of new public institutions — whether at the level of a region or city, a nation, or at a transnational level. We welcome comments, critiques and additions.

This guide covers all the necessary steps of creating a new institution:

  • Preparation
  • Design (from structures and capabilities to processes and resources)
  • Socialisation (to ensure buy-in and legitimacy)
  • Implementation…(More)”.

Data Science for Social Impact in Higher Education: First Steps


Data.org playbook: “… was designed to help you expand opportunities for social impact data science learning. As you browse, you will see a range of these opportunities, including courses, modules for other courses, research and internship opportunities, and a variety of events and activities. The playbook also offers lessons learned to guide you through your process. Additionally, the playbook includes profiles of students who have engaged in data science for social impact, guidance for engaging partners, and additional resources relating to evaluation and courses. We hope that this playbook will inspire and support your efforts to bring social impact data science to your institutions…

As you look at the range of ways you might bring data science for social impact to your students, remember that the intention is not for you to replicate what is here, but rather to adapt it to your local contexts and conditions. You might draw pieces from several activities and combine them to create a customized strategy that works for you. Consider the assets you have around you and how you might be able to leverage them. At the same time, consider how some of the lessons learned might point to barriers you could face as well. Most importantly, know that it is possible for you to create data science for social impact at your institution to bring benefit to your students and society…(More)”.

2024 Edelman Trust Barometer


Edelman: “The 2024 Edelman Trust Barometer reveals a new paradox at the heart of society. Rapid innovation offers the promise of a new era of prosperity, but instead risks exacerbating trust issues, leading to further societal instability and political polarization.

Innovation is accelerating – in regenerative agriculture, messenger RNA, renewable energy, and most of all in artificial intelligence. But society’s ability to process and accept rapid change is under pressure, with skepticism about science’s relationship with Government and the perception that the benefits skew towards the wealthy.

There is one issue on which the world stands united: innovation is being poorly managed – defined by lagging government regulation, uncertain impacts, lack of transparency, and an assumption that science is immutable. Our respondents cite this as a problem by nearly a two-to-one margin across most developed and developing countries, plus all age groups, income levels, educational levels, and genders. There is consensus among those who say innovation is poorly managed that society is changing too quickly and not in ways that benefit “people like me” (69%).

Many are concerned that Science is losing its independence: to Government, to the political process, and to the wealthy. In the U.S., two-thirds assert that science is too politicized. For the first time in China, we see a contrast with respondents’ high trust in government: three-quarters believe that Government and organizations that fund research have too much influence on science. There is concern about excessive influence of the elites, with 82% of those who say innovation is managed poorly believing that the system is biased in favor of the rich – this is 30 percentage points higher than among those who feel innovation is managed well…(More)”.

Facial Recognition: Current Capabilities, Future Prospects, and Governance


A National Academies of Sciences, Engineering, and Medicine study: “Facial recognition technology is increasingly used for identity verification and identification, from aiding law enforcement investigations to identifying potential security threats at large venues. However, advances in this technology have outpaced laws and regulations, raising significant concerns related to equity, privacy, and civil liberties.

This report explores the current capabilities, future possibilities, and necessary governance for facial recognition technology. Facial Recognition Technology discusses legal, societal, and ethical implications of the technology, and recommends ways that federal agencies and others developing and deploying the technology can mitigate potential harms and enact more comprehensive safeguards…(More)”.

Introducing RegBox: using serious games in regulatory development


Toolkit by UK Policy Lab: “…enabling policymakers to convene stakeholders and work together to make decisions affecting regulation, using serious games. The toolkit will consist of game patterns for different use cases, a collection of case studies, guidance, and a set of tools to help policymakers decide which approach to take. Work on RegBox is still in progress, but in the spirit of being open and iterative, we wanted to share and communicate it early. Our overarching challenge question is:

How can we provide engaging and participatory tools that help policymakers to develop and test regulations and make effective decisions? …  

Policy Lab has worked on a range of projects that intersect with regulation, and we’ve noticed a growing demand for more anticipatory and participatory approaches in this area. Regulators are having to respond to emerging technologies which are disrupting markets and posing new risks to individuals and institutions. Additionally, the government has just launched the Smarter Regulation programme, which is encouraging officials to use regulations only where necessary, and to ensure their use is proportionate and future-proof. Because a change in regulation can have significant effects on businesses, organisations, and individuals, it is important to understand the potential effects before deciding. We hypothesise that serious games can be used to understand regulatory challenges and stress-test solutions at pace…(More)”.

Representative Bodies in the Age of AI


Report by POPVOX: “The report tracks current developments in the U.S. Congress and internationally, while assessing the prospects for future innovations. The report also serves as a primer for those in Congress on AI technologies and methods in an effort to promote responsible use and adoption. POPVOX endorses a considered, step-wise strategy for AI experimentation, underscoring the importance of capacity building, data stewardship, ethical frameworks, and insights gleaned from global precedents of AI in parliamentary functions. This ensures AI solutions are crafted with human discernment and supervision at their core.

Legislatures worldwide are progressively embracing AI tools such as machine learning, natural language processing, and computer vision to refine the precision, efficiency, and, to a small extent, the participatory aspects of their operations. The advent of generative AI platforms, such as ChatGPT, which excel in interpreting and organizing textual data, marks a transformative shift for the legislative process, inherently a task of converting rules into language.

While nations such as Brazil, India, Italy, and Estonia lead with applications ranging from the transcription and translation of parliamentary proceedings to enhanced bill drafting and sophisticated legislative record searches, the U.S. Congress is prudently venturing into the realm of Generative AI. The House and Senate have initiated AI working groups and secured licenses for platforms like ChatGPT. They have also issued guidance on responsible use…(More)”.

Privacy-Enhancing and Privacy-Preserving Technologies: Understanding the Role of PETs and PPTs in the Digital Age


Paper by the Centre for Information Policy Leadership: “The paper explores how organizations are approaching privacy-enhancing technologies (“PETs”) and how PETs can advance data protection principles, and provides examples of how specific types of PETs work. It also explores potential challenges to the use of PETs and possible solutions to those challenges.
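
To make the idea of “how specific types of PETs work” concrete, below is a minimal sketch of one commonly cited category of PET, differential privacy, in which calibrated noise is added to a statistic before release so that aggregate insights remain useful while any single individual’s contribution is obscured. The sketch is illustrative only and is not drawn from the CIPL paper; the function name, epsilon value, and counts are hypothetical.

    # Illustrative sketch of a differential-privacy-style PET (not from the CIPL paper).
    # The function name, epsilon value, and figures below are hypothetical.
    import numpy as np

    def laplace_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
        """Release a count with Laplace noise scaled to sensitivity / epsilon,
        bounding how much any single record can shift the published figure."""
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
        return true_count + noise

    # Example: a data custodian publishes a respondent count without exposing individuals.
    true_respondents = 10_482  # hypothetical raw figure held by the custodian
    private_release = laplace_count(true_respondents, epsilon=1.0)
    print(f"Noisy count released to analysts: {private_release:.0f}")

Smaller values of epsilon add more noise and therefore stronger privacy protection, a trade-off between privacy risk and analytic utility that organizations deploying PETs must weigh.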

CIPL emphasizes the enormous potential inherent in these technologies to mitigate privacy risks and support innovation, and recommends a number of steps to foster further development and adoption of PETs. In particular, CIPL calls for policymakers and regulators to incentivize the use of PETs through clearer guidance on key legal concepts that impact the use of PETs, and by adopting a pragmatic approach to the application of these concepts.

CIPL’s recommendations towards wider adoption are as follows:

  • Issue regulatory guidance and incentives regarding PETs: Official regulatory guidance addressing PETs in the context of specific legal obligations or concepts (such as anonymization) will incentivize greater investment in PETs.
  • Increase education and awareness about PETs: PET developers and providers need to show tangible evidence of the value of PETs and help policymakers, regulators and organizations understand how such technologies can facilitate responsible data use.
  • Develop industry standards for PETs: Industry standards would help facilitate interoperability for the use of PETs across jurisdictions and help codify best practices that support technical reliability and foster trust in these technologies.
  • Recognize PETs as a demonstrable element of accountability: PETs complement robust data privacy management programs and should be recognized as an element of organizational accountability…(More)”.

Testing the Assumptions of the Data Revolution


Report by SDSN TReNDS: “Ten years have passed since the release of A World that Counts and the formal adoption of the Sustainable Development Goals (SDGs). This seems an appropriate time for national governments and the global data community to reflect on where progress has been made so far.

This report supports this objective in three ways: it evaluates the assumptions that underpin A World that Counts’ core hypothesis that the data revolution would lead to better outcomes across the 17 SDGs, it summarizes where and how we have made progress, and it identifies knowledge gaps related to each assumption. These knowledge gaps will serve as the foundation for the next phase of the SDSN TReNDS research program, guiding our exploration of emerging data-driven paradigms and their implications for the SDGs. By analyzing these assumptions, we can consider how SDSN TReNDS and other development actors might adapt their activities to a new set of circumstances in the final six years of the SDG commitments.

Given that the 2030 Agenda established a 15-year timeframe for SDG attainment, it is to be expected that some of A World that Counts’ key assumptions would fall short or require recalibration along the way. Unforeseen events such as the COVID-19 pandemic would inevitably shift global attention and priorities away from the targets set out in the SDG framework, at least temporarily…(More)”.

Tackling Today’s Data Dichotomy: Unveiling the Paradox of Abundant Supply and Restricted Access in the Quest for Social Equity


Article by Stefaan Verhulst: “…One of the ironies of this moment, however, is that an era of unprecedented supply is simultaneously an era of constricted access to data. Much of the data we generate is privately “owned,” hidden away in private or public sector silos, or otherwise inaccessible to those who are most likely to benefit from it or generate valuable insights. These restrictions on access are grafted onto existing socioeconomic inequalities: they are driven by broader patterns of exclusion and marginalization, and they exacerbate those inequalities in turn. Critically, restricted or unequal access to data does not only harm individuals: it causes untold public harm by limiting the potential of data to address social ills. It also limits attempts to improve the output of AI in terms of both bias and trustworthiness.

In this paper, we outline two potential approaches that could help address—or at least mitigate—the harms: social licensing and a greater role for data stewards. While not comprehensive solutions, we believe that these represent two of the most promising avenues to introduce greater efficiencies into how data is used (and reused), and thus lead to more targeted, responsive, and responsible policymaking…(page 22-25)”.

Digital Self-Determination


New Website and Resource by the International Network on Digital Self Determination: “Digital Self-Determination seeks to empower individuals and communities to decide how their data is managed in ways that benefit themselves and society. Translating this principle into practice requires a multi-faceted examination from diverse perspectives and in distinct contexts.

Our network connects different actors from around the world to consider how to apply Digital Self-Determination in real-life settings to inform both theory and practice.

Our main objectives are the following:

  • Inform policy development;
  • Accelerate the creation of new DSD processes and technologies;
  • Establish new professions that can help implement DSD (such as data stewards);
  • Contribute to the regulatory and policy debate;
  • Raise awareness and build bridges between the public and private sector and data subjects…(More)”.