The digitalisation of social protection before and since the onset of Covid-19: opportunities, challenges and lessons


Paper by the Overseas Development Institute: “…discusses the main opportunities and challenges associated with digital social protection, drawing on trends pre-Covid and since the onset of the pandemic. It offers eight lessons to help social protection actors capitalise on technology’s potential in a risk-sensitive manner.

  • The response to Covid-19 accelerated the trend of increasing digitalisation of social protection delivery.
  • Studies from before and during the pandemic suggest that well-used technology holds potential to enhance provision for some service users, and that it played a notable role in rapid social protection expansion during Covid-19. It may also help reduce leakage or inclusion errors, lower costs and support improvements in programme design.
  • However, unless designed and implemented with careful mitigating measures, digitalisation may in some cases do more harm than good. Key concerns relate to potential risks and challenges of exclusion, protection and privacy violations, ‘technosolutionism’ and obscured transparency and accountability.
  • Ultimately, technology is a tool, and its outcomes depend on the needs it is expected to meet, the goals it is deployed to pursue, and the specific ways in which it is designed and implemented…(More)”.

Understanding Criminal Justice Innovations


Paper by Meghan J. Ryan: “Burgeoning science and technology have provided the criminal justice system with the opportunity to address some of its shortcomings. And the criminal justice system has significant shortcomings. Among other issues, we have a mass incarceration problem; clearance rates are surprisingly low; there are serious concerns about wrongful convictions; and the system is layered with racial, religious, and other biases. Innovations that are widely used across industries, as well as those directed specifically at the criminal justice system, have the potential to mitigate such problems. But it is important to recognize that these innovations also have downsides, and criminal justice actors must proceed with caution and understand not only the potential of these interventions but also their limitations. Relevant to this calculation of caution is whether the innovation is broadly used across industry sectors or, rather, whether it has been specifically developed for use within the criminal justice system. These latter innovations have a record of not being sufficiently vetted for accuracy and reliability. Accordingly, criminal justice actors must be sufficiently well versed in basic science and technology so that they have the ability and the confidence to critically assess the usefulness of the various criminal justice innovations in light of their limitations. Considering lawyers’ general lack of competency in these areas, scientific and technological training is necessary to mold them into modern competent criminal justice actors. This training must be more than superficial subject-specific training, though; it must dig deeper, delving into critical thinking skills that include evaluating the accuracy and reliability of the innovation at issue, as well as assessing broader concerns such as the need for development transparency, possible intrusions on individual privacy, and incentives to curtail individual liberties given the innovation at hand….(More)”

AI Can Predict Potential Nutrient Deficiencies from Space


Article by Rachel Berkowitz: “Micronutrient deficiencies afflict more than two billion people worldwide, including 340 million children. This lack of vitamins and minerals can have serious health consequences. But diagnosing deficiencies early enough for effective treatment requires expensive, time-consuming blood draws and laboratory tests.

New research provides a more efficient approach. Computer scientist Elizabeth Bondi and her colleagues at Harvard University used publicly available satellite data and artificial intelligence to reliably pinpoint geographical areas where populations are at high risk of micronutrient deficiencies. This analysis could potentially pave the way for early public health interventions.

Existing AI systems can use satellite data to predict localized food security issues, but they typically rely on directly observable features. For example, agricultural productivity can be estimated from views of vegetation. Micronutrient availability is harder to calculate. After seeing research showing that areas near forests tend to have better dietary diversity, Bondi and her colleagues were inspired to identify lesser-known markers for potential malnourishment. Their work shows that combining data such as vegetation cover, weather and water presence can suggest where populations will lack iron, vitamin B12 or vitamin A.

The team examined raw satellite measurements and consulted with local public health officials, then used AI to sift through the data and pinpoint key features. For instance, a food market, inferred from visible roads and buildings, was vital for predicting a community’s risk level. The researchers then linked these features to specific nutrients lacking in populations across four regions of Madagascar. They used real-world biomarker data (blood samples tested in labs) to train and test their AI program….(More)”.
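The workflow described above, satellite-derived features on one side and lab-confirmed biomarker labels on the other, follows a familiar supervised-learning pattern. The sketch below is a minimal, hypothetical illustration of that pattern rather than the authors’ actual pipeline; the feature names, the synthetic data, and the random-forest model are assumptions made for illustration only.

```python
# Hypothetical sketch: predict community-level micronutrient deficiency risk from
# satellite-derived features, trained against lab-confirmed biomarker labels.
# Feature names, synthetic data, and model choice are illustrative assumptions,
# not the authors' actual pipeline.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500  # placeholder number of surveyed communities

# Synthetic stand-in for features extracted from satellite imagery.
X = pd.DataFrame({
    "vegetation_index": rng.uniform(0.0, 1.0, n),     # NDVI-style greenness
    "rainfall_mm": rng.gamma(2.0, 40.0, n),           # recent precipitation
    "distance_to_water_km": rng.exponential(5.0, n),
    "market_inferred": rng.integers(0, 2, n),         # roads/buildings suggest a market
})

# Synthetic stand-in for biomarker labels (True = deficient, e.g. low serum vitamin A).
y = (0.6 * (1 - X["vegetation_index"])
     + 0.4 * (1 - X["market_inferred"])
     + rng.normal(0, 0.2, n)) > 0.5

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Evaluate on held-out communities and inspect which features drive the prediction.
probs = model.predict_proba(X_test)[:, 1]
print("AUC on held-out communities:", round(roc_auc_score(y_test, probs), 3))
print("Feature importances:", dict(zip(X.columns, model.feature_importances_.round(2))))
```

In a real deployment, the synthetic DataFrame would be replaced by features extracted from imagery for each surveyed community, with deficiency labels taken from the corresponding blood-sample results.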

“Co-construction” in deliberative democracy: lessons from the French Citizens’ Convention for Climate


Paper by Louis-Gaëtan Giraudet et al: “Launched in 2019, the French Citizens’ Convention for Climate (CCC) tasked 150 randomly chosen citizens with proposing fair and effective measures to fight climate change. This was to be fulfilled through an “innovative co-construction procedure”, involving some unspecified external input alongside that from the citizens. Did inputs from the steering bodies undermine the citizens’ accountability for the output? Did co-construction help the output resonate with the general public, as is expected from a citizens’ assembly? To answer these questions, we build on our unique experience in observing the CCC proceedings and documenting them with qualitative and quantitative data. We find that the steering bodies’ input, albeit significant, did not impair the citizens’ agency, creativity, and freedom of choice. While succeeding in creating consensus among the citizens who were involved, this co-constructive approach failed to generate significant support among the broader public. These results call for a strengthening of the commitment structure that determines how follow-up on the proposals from a citizens’ assembly should be conducted…(More)”.

10 learnings from considering AI Ethics through global perspectives


Blog by Sampriti Saxena and Stefaan G. Verhulst: “Artificial Intelligence (AI) technologies have the potential to solve the world’s biggest challenges. However, they also come with certain risks to individuals and groups. As these technologies become more prevalent around the world, we need to consider the ethical ramifications of AI use to identify and rectify potential harms. Equally, we need to consider the various associated issues from a global perspective, not assuming that a single approach will satisfy different cultural and societal expectations.

In February 2021, The Governance Lab (The GovLab), the NYU Tandon School of Engineering, the Global AI Ethics Consortium (GAIEC), the Center for Responsible AI @ NYU (R/AI), and the Technical University of Munich’s (TUM) Institute for Ethics in Artificial Intelligence (IEAI) launched AI Ethics: Global Perspectives. …A year and a half later, the course has grown to 38 modules, contributed by 40 faculty members representing over 20 countries. Our conversations with faculty members and our experiences with the course modules have yielded a wealth of knowledge about AI ethics. In keeping with the values of openness and transparency that underlie the course, we summarized these insights into ten learnings to share with a broader audience. In what follows, we outline our key lessons from experts around the world.

Our Ten Learnings:

  1. Broaden the Conversation
  2. The Public as a Stakeholder
  3. Centering Diversity and Inclusion in Ethics
  4. Building Effective Systems of Accountability
  5. Establishing Trust
  6. Ask the Right Questions
  7. The Role of Independent Research
  8. Humans at the Center
  9. Our Shared Responsibility
  10. The Challenge and Potential for a Global Framework…(More)”.

The Behavioral Economics Guide 2022


Editorial by Kathleen Vohs & Avni Shah: “This year’s Behavioral Economics Guide editorial reviews recent work in the areas of self-control and goals. To do so, we distilled the latest findings and advanced a set of guiding principles termed the FRESH framework: Fatigue, Reminders, Ease, Social influence, and Habits. Example findings reviewed include physicians giving out more prescriptions for opioids later in the workday compared to earlier (fatigue); the use of digital reminders to prompt people to re-engage with goals, such as for personal savings, from which they may have turned away (reminders); visual displays that give people data on their behavioral patterns so as to enable feedback and active monitoring (ease); the importance of geographically local peers in changing behaviors such as residential water use (social influence); and digital and other tools that help people break the link between aspects of the environment and problematic behaviors (habits). We used the FRESH framework as a potential guide for thinking about the kinds of behaviors people can perform in achieving the goal of being environmental stewards of a more sustainable future…(More)”.

Technology is Not Neutral: A Short Guide to Technology Ethics


Book by Stephanie Hare: “It seems that just about every new technology that we bring to bear on improving our lives brings with it some downside, side effect or unintended consequence.

These issues can pose very real and growing ethical problems for all of us. For example, automated facial recognition can make life easier and safer for us – but it also poses huge issues with regard to privacy, ownership of data and even identity theft. How do we understand and frame these debates, and work out strategies at personal and governmental levels?

Technology Is Not Neutral: A Short Guide to Technology Ethics addresses one of today’s most pressing problems: how to create and use tools and technologies to maximize benefits and minimize harms? Drawing on the author’s experience as a technologist, political risk analyst and historian, the book offers a practical and cross-disciplinary approach that will inspire anyone creating, investing in or regulating technology, and it will empower all readers to better hold technology to account…(More)”.

New laws to strengthen Canadians’ privacy protection and trust in the digital economy


Press Release: “Canadians increasingly rely on digital technology to connect with loved ones, to work and to innovate. That’s why the Government of Canada is committed to making sure Canadians can benefit from the latest technologies, knowing that their personal information is safe and secure and that companies are acting responsibly.

Today, the Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry, together with the Honourable David Lametti, Minister of Justice and Attorney General of Canada, introduced the Digital Charter Implementation Act, 2022, which will significantly strengthen Canada’s private sector privacy law, create new rules for the responsible development and use of artificial intelligence (AI), and continue advancing the implementation of Canada’s Digital Charter. As such, the Digital Charter Implementation Act, 2022 will include three proposed acts: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act.

The proposed Consumer Privacy Protection Act will address the needs of Canadians who rely on digital technology and respond to feedback received on previous proposed legislation. This law will ensure that the privacy of Canadians will be protected and that innovative businesses can benefit from clear rules as technology continues to evolve. This includes:

  • increasing control and transparency when Canadians’ personal information is handled by organizations;
  • giving Canadians the freedom to move their information from one organization to another in a secure manner;
  • ensuring that Canadians can request that their information be disposed of when it is no longer needed;
  • establishing stronger protections for minors, including by limiting organizations’ right to collect or use information on minors and holding organizations to a higher standard when handling minors’ information;
  • providing the Privacy Commissioner of Canada with broad order-making powers, including the ability to order a company to stop collecting data or using personal information; and
  • establishing significant fines for non-compliant organizations—with fines of up to 5% of global revenue or $25 million, whichever is greater, for the most serious offences.

The proposed Personal Information and Data Protection Tribunal Act will enable the creation of a new tribunal to facilitate the enforcement of the Consumer Privacy Protection Act. 

The proposed Artificial Intelligence and Data Act will introduce new rules to strengthen Canadians’ trust in the development and deployment of AI systems, including:

  • protecting Canadians by ensuring high-impact AI systems are developed and deployed in a way that identifies, assesses and mitigates the risks of harm and bias;
  • establishing an AI and Data Commissioner to support the Minister of Innovation, Science and Industry in fulfilling ministerial responsibilities under the Act, including by monitoring company compliance, ordering third-party audits, and sharing information with other regulators and enforcers as appropriate; and
  • outlining clear criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment…(More)”.

To Play Is the Thing: How Game Design Principles Can Make Online Deliberation Compelling


Paper by John Gastil: “This essay draws from game design to improve the prospects of democratic deliberation during government consultation with the public. The argument begins by reviewing the problem of low-quality deliberation in contemporary discourse, then explains how games can motivate participants to engage in demanding behaviors, such as deliberation. Key design features include: the origin, governance, and oversight of the game; the networked small groups at the center of the game; the objectives of these groups; the purpose of artificial intelligence and automated metrics for measuring deliberation; the roles played by public officials and nongovernmental organizations during the game; and the long-term payoff of playing the game for both its convenors and its participants. The essay concludes by considering this project’s wider theoretical significance for deliberative democracy, the first steps for governments and nonprofit organizations adopting this design, and the hazards of using advanced digital technology…(More)”.
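One design feature listed above, automated metrics for measuring deliberation, can be made concrete with a small, hypothetical example. A common proxy for deliberative quality is how evenly contributions are distributed across a small group; the sketch below computes such a participation-balance score from a toy transcript. The function and its interpretation are illustrative assumptions, not metrics specified in the essay.

```python
# Hypothetical illustration of an automated deliberation metric: how evenly
# contributions are spread across members of a networked small group.
# A score near 1.0 means balanced participation; near 0.0 means one voice dominates.
# This is an illustrative proxy, not a metric proposed in the essay.
import math
from collections import Counter

def participation_balance(turns: list[str]) -> float:
    """Normalised Shannon entropy of word counts per speaker."""
    words_per_speaker = Counter()
    for turn in turns:
        speaker, _, text = turn.partition(":")
        words_per_speaker[speaker.strip()] += len(text.split())
    total = sum(words_per_speaker.values())
    if total == 0 or len(words_per_speaker) < 2:
        return 0.0
    entropy = -sum((c / total) * math.log(c / total) for c in words_per_speaker.values())
    return entropy / math.log(len(words_per_speaker))  # normalise to [0, 1]

# Toy transcript: "speaker: utterance" strings from one small-group session.
transcript = [
    "Ana: I think the proposal should prioritise transit funding over road expansion.",
    "Ben: Agreed, though we should hear the cost estimates first.",
    "Chloe: The budget office projects both options, let me summarise them.",
    "Ana: Thanks, that helps me weigh the trade-offs.",
]
print(f"Participation balance: {participation_balance(transcript):.2f}")
```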

Smartphone apps in the COVID-19 pandemic


Paper by Jay A. Pandit, Jennifer M. Radin, Giorgio Quer & Eric J. Topol: “At the beginning of the COVID-19 pandemic, analog tools such as nasopharyngeal swabs for PCR tests were center stage and the major prevention tactics of masking and physical distancing were a throwback to the 1918 influenza pandemic. Overall, there has been scant regard for digital tools, particularly those based on smartphone apps, which is surprising given the ubiquity of smartphones across the globe. Smartphone apps, given their accessibility at a time of physical distancing, were widely used for tracking, tracing and educating the public about COVID-19. Despite limitations, such as concerns around data privacy, data security, digital health illiteracy and structural inequities, there is ample evidence that apps are beneficial for understanding outbreak epidemiology, individual screening and contact tracing. While there were successes and failures in each category, outbreak epidemiology and individual screening were substantially enhanced by the reach of smartphone apps and accessory wearables. Continued use of apps within the digital infrastructure promises to provide an important tool for rigorous investigation of outcomes both in the ongoing outbreak and in future epidemics…(More)”.