National AI Strategies from a human rights perspective


Report by Global Partners Digital: “…looks at existing strategies adopted by governments and regional organisations since 2017. It assesses the extent to which human rights considerations have been incorporated and makes a series of recommendations to policymakers looking to develop or revise AI strategies in the future…”.

Our report found that while the majority of National AI Strategies mention human rights, very few contain a deep human rights-based analysis or a concrete assessment of how various AI applications affect human rights. In all but a few cases, they also lacked depth or specificity on how human rights should be protected in the context of AI, in contrast to the level of specificity on other issues such as economic competitiveness or innovation advantage.

The report provides recommendations to help governments develop human rights-based national AI strategies. These recommendations fall under six broad themes:

  • Include human rights explicitly and throughout the strategy: Thinking about the impact of AI on human rights, and how to mitigate the risks associated with those impacts, should be core to a national strategy. Each section should consider the risks and opportunities AI presents for human rights, with a specific focus on at-risk, vulnerable and marginalized communities.
  • Outline specific steps to be taken to ensure human rights are protected: As strategies engage with human rights, they should include specific goals, commitments or actions to ensure that human rights are protected.
  • Build in incentives or specific requirements to ensure rights-respecting practice: Governments should take steps within their strategies to incentivize human rights-respecting practices and actions across all sectors, as well as to ensure that their goals with regards to the protection of human rights are fulfilled.
  • Set out grievance and remediation processes for human rights violations: A National AI Strategy should look at the existing grievance and remedial processes available to victims of human rights violations relating to AI. The strategy should assess whether those processes need revision in light of the particular nature of AI as a technology, or whether capacity-building is needed so that those involved are able to receive complaints concerning AI.
  • Recognize the regional and international dimensions to AI policy: National strategies should clearly identify relevant regional and global fora and processes relating to AI, and the means by which the government will promote human rights-respecting approaches and outcomes at them through proactive engagement.
  • Include human rights experts and other stakeholders in the drafting of National AI Strategies: When drafting a national strategy, the government should ensure that experts on human rights and the impact of AI on human rights are a core part of the drafting process….(More)”.

Tear down this wall: Microsoft embraces open data


The Economist: “Two decades ago Microsoft was a byword for a technological walled garden. One of its bosses called free open-source programs a “cancer”. That was then. On April 21st the world’s most valuable tech firm joined a fledgling movement to liberate the world’s data. Among other things, the company plans to launch 20 data-sharing groups by 2022 and give away some of its digital information, including data it has aggregated on covid-19.

Microsoft is not alone in its newfound fondness for sharing in the age of the coronavirus. “The world has faced pandemics before, but this time we have a new superpower: the ability to gather and share data for good,” Mark Zuckerberg, the boss of Facebook, a social-media conglomerate, wrote in the Washington Post on April 20th. Despite the EU’s strict privacy rules, some Eurocrats now argue that data-sharing could speed up efforts to fight the coronavirus. 

But the argument for sharing data is much older than the virus. The OECD, a club mostly of rich countries, reckons that if data were more widely exchanged, many countries could enjoy gains worth between 1% and 2.5% of GDP. The estimate is based on heroic assumptions (such as putting a number on business opportunities created for startups). But economists agree that readier access to data is broadly beneficial, because data are “non-rivalrous”: unlike oil, say, they can be used and re-used without being depleted, for instance to power various artificial-intelligence algorithms at once. 

Many governments have recognised the potential. Cities from Berlin to San Francisco have “open data” initiatives. Companies have been cagier, says Stefaan Verhulst, who heads the Governance Lab at New York University, which studies such things. Firms worry about losing intellectual property, imperilling users’ privacy and hitting technical obstacles. Standard data formats (eg, JPEG images) can be shared easily, but much that a Facebook collects with its software would be meaningless to a Microsoft, even after reformatting. Less than half of the 113 “data collaboratives” identified by the lab involve corporations. Those that do, including initiatives by BBVA, a Spanish bank, and GlaxoSmithKline, a British drugmaker, have been small or limited in scope. 

Microsoft’s campaign is the most consequential by far. Besides encouraging more non-commercial sharing, the firm is developing software, licences and (with the Governance Lab and others) governance frameworks that permit firms to trade data or provide access to them without losing control. Optimists believe that the giant’s move could be to data what IBM’s embrace in the late 1990s of the Linux operating system was to open-source software. Linux went on to become a serious challenger to Microsoft’s own Windows and today underpins Google’s Android mobile software and much of cloud-computing…(More)”.

The global pandemic has spawned new forms of activism – and they’re flourishing


Erica Chenoweth, Austin Choi-Fitzpatrick, Jeremy Pressman, Felipe G Santos and Jay Ulfelder at The Guardian: “Before the Covid-19 pandemic, the world was experiencing unprecedented levels of mass mobilization. The decade from 2010 to 2019 saw more mass movements demanding radical change around the world than in any period since World War II. Since the pandemic struck, however, street mobilization – mass demonstrations, rallies, protests, and sit-ins – has largely ground to an abrupt halt in places as diverse as India, Lebanon, Chile, Hong Kong, Iraq, Algeria, and the United States.

The near cessation of street protests does not mean that people power has dissipated. We have been collecting data on the various methods that people have used to express solidarity or adapted to press for change in the midst of this crisis. In just a few weeks, we’ve identified nearly 100 distinct methods of nonviolent action that include physical, virtual and hybrid actions – and we’re still counting. Far from condemning social movements to obsolescence, the pandemic – and governments’ responses to it – are spawning new tools, new strategies, and new motivation to push for change.

In terms of new tools, all across the world, people have turned to methods like car caravans, cacerolazos (collectively banging pots and pans inside the home), and walkouts from workplaces with health and safety challenges to voice personal concerns, make political claims, and express social solidarity. Activists have developed alternative institutions such as coordinated mask-sewing, community mutual aid pods, and crowdsourced emergency funds. Communities have placed teddy bears in their front windows for children to find during scavenger hunts, authors have posted live-streamed readings, and musicians have performed from their balconies and rooftops. Technologists are experimenting with drones adapted to deliver supplies, disinfect common areas, check individual temperatures, and monitor high-risk areas. And, of course, many movements are moving their activities online, with digital rallies, teach-ins, and information-sharing.

Such activities have had important impacts. Perhaps the most immediate and life-saving efforts have been those where movements have begun to coordinate and distribute critical resources to people in need. Local mutual aid pods, like those in Massachusetts, have emerged to highlight urgent needs and provide for crowdsourced and volunteer rapid response. Pop-up food banks, reclaiming vacant housing, crowdsourced hardship funds, free online medical-consultation clinics, mass donations of surgical masks, gloves, gowns, goggles and sanitizer, and making masks at home are all methods that people have developed in the past several weeks. Most people have made these items by hand. Others have even used 3D printers to make urgently-needed medical supplies. These actions of movements and communities have already saved countless lives….(More)”.

Mind the app – considerations on the ethical risks of COVID-19 apps


Blog by Luciano Floridi: “There is a lot of talk about apps to deal with the pandemic. Some of the best solutions use the Bluetooth connection of mobile phones to determine the contact between people and therefore the probability of contagion.

In theory, it’s simple. In practice, it is a minefield of ethical problems, not only technical ones. To understand them, it is useful to distinguish between the validation and the verification of a system. 
The validation of a system answers the question: “are we building the right system?”. The answer is no if the app

  • is illegal;
  • is unnecessary, for example, there are better solutions; 
  • is a disproportionate solution to the problem, for example, there are only a few cases in the country; 
  • goes beyond the purpose for which it was designed, for example, it is used to discriminate against people;
  • continues to be used even after the end of the emergency.

Assuming the app passes the validation stage, then it needs to be verified.
The verification of a system answers the question: “are we building the system in the right way?”. Here too the difficulties are considerable. I have become increasingly aware of them as I collaborate with two national projects about a coronavirus app, as an advisor on their ethical implications. 
For once, the difficult problem is not privacy. Of course, it is trivially true that there are and there might always be privacy issues. The point is that, in this case, they can be made much less pressing than other issues. However, once (or if you prefer, even if) privacy is taken care of, other difficulties appear to remain intractable. A Bluetooth-based app can use anonymous data, recorded only on the mobile phone and used exclusively to send alerts in case of contact with infected people. It is not easy, but it is feasible, as demonstrated by the approach adopted by the Pan-European Privacy Preserving Proximity Tracing initiative (PEPP-PT). The apparently intractable problems are the effectiveness and fairness of the app.

To be effective, an app must be adopted by many people. In Britain, I was told that it would be useless if used by less than 20% of the population. According to the PEPP-PT, real effectiveness seems to be reached around the threshold of 60% of the whole population. This means that in Italy, for example, the app would have to be consistently and correctly used by between 11m and 33m people, out of a population of 55m. Consider that in 2019 Facebook Messenger was used by 23m Italians. Even the often-mentioned app TraceTogether has been downloaded by an insufficient number of people in Singapore.
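The back-of-the-envelope arithmetic above is easy to make explicit. The sketch below simply turns the quoted adoption shares (20% and 60%) and Italy's roughly 55m population into user counts; these are the illustrative figures from the text, not parameters of any official model.

```python
# Illustrative calculation of contact-tracing app adoption thresholds.
# The shares and population figure are the ones quoted in the text.

def users_needed(population_millions: float, share: float) -> float:
    """Number of users (in millions) implied by an adoption share."""
    return population_millions * share

italy = 55  # approximate population of Italy, in millions
low = users_needed(italy, 0.20)   # minimum-usefulness figure quoted for Britain
high = users_needed(italy, 0.60)  # PEPP-PT effectiveness threshold
print(f"The app would need roughly {low:.0f}m to {high:.0f}m consistent users")
```

Comparing these counts against the installed base of widely used apps (23m Italian Facebook Messenger users) is what makes the voluntary-adoption problem vivid.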


Given that it is unlikely that the app will be adopted so extensively just voluntarily, out of social responsibility, and that governments are reluctant to impose it as mandatory (and rightly so, for it would be unfair, see below), it is clear that it will be necessary to encourage its use, but this only shifts the problem….

Therefore, one should avoid the risk of transforming the production of the app into a signalling process. To do so, the verification should not be severed from, but must feed back on, the validation. This means that if the verification fails, so should the validation, and the whole project ought to be reconsidered. It follows that a clear deadline by which (and by whom) the whole project is to be assessed (validation + verification) and, if necessary, terminated, improved, or simply renewed as it is, is essential. At least this level of transparency and accountability should be in place.

An app will not save us. And the wrong app will be worse than useless, as it will cause ethical problems and potentially exacerbate health-related risks, e.g. by generating a false sense of security, or deepening the digital divide. A good app must be part of a wider strategy, and it needs to be designed to support a fair future. If this is not possible, better do something else, avoid its positive, negative and opportunity costs, and not play the political game of merely signalling that something (indeed anything) has been tried…(More)”.

Embracing digital government during the pandemic and beyond


UN DESA Policy Brief: “…Involving civil society organizations, businesses, social entrepreneurs and the general public in managing the COVID-19 pandemic and its aftermath can prove to be highly effective for policy- and decision-makers. Online engagement initiatives led by governments can help people cope with the crisis as well as improve government operations. In a crisis situation, it becomes more important than ever to reach out to vulnerable groups in society, respond to their needs and ensure social stability. Engaging with civil society allows governments to tackle socio-economic challenges in a more productive way that leaves no one behind….

Since the crisis has put public services under stress, governments are urged to deploy effective digital technologies to contain the outbreak. Most innovative quick-to-market solutions have stemmed from the private sector. However, the crisis has exposed the need for government leadership in the development and adoption of new technologies such as artificial intelligence (AI) and robotics to ensure an effective provision of public services…

The efforts in developing digital government strategies after the COVID-19 crisis should focus on improving data protection and digital inclusion policies as well as on strengthening the policy and technical capabilities of public institutions. Even though public-private partnerships are essential for implementing innovative technologies, government leadership, strong institutions and effective public policies are crucial to tailor digital solutions to countries’ needs as well as prioritize security, equity and the protection of people’s rights. The COVID-19 pandemic has emphasized the importance of technology, but also the pivotal role of an effective, inclusive and accountable government….(More)”.

How can digital tools support deliberation?


Claudia Chwalisz at the OECD: “As part of our work on Innovative Citizen Participation, we’ve launched a series of articles to open a discussion and gather evidence on the use of digital tools and practices in representative deliberative processes…. The current context is obliging policy makers and practitioners to think outside the box and adapt to the impossibility of physical deliberation. How can digital tools allow planned or ongoing processes like Citizens’ Assemblies to continue, ensuring that policy makers can still garner informed citizen recommendations to inform their decision making? New experiments are getting underway, and the evidence gathered could also be applied to other situations when face-to-face gathering is not possible or more difficult, such as international processes or any situation that prevents physical gathering.

This series will cover the core phases that a representative deliberative process should follow, as established in the forthcoming OECD report: learning, deliberation, decision making, and collective recommendations. Due to the different nature of conducting a process online, we will additionally consider a phase required before learning: skills training. The articles will explore the use of digital tools at each phase, covering questions about the appropriate tools, methods, evidence, and limitations.

They will also consider how the use of certain digital tools could enhance good practice principles such as impact, transparency, and evaluation:

  • Impact: Digital tools can help participants and the public to better monitor the status of the proposed recommendations and the impact they had on final decision-making. A parallel can be drawn with the extensive use of this methodology by the United Nations for the monitoring and evaluation of the impact of the Sustainable Development Goals (SDGs).
  • Transparency: Digital tools can facilitate transparency across the process. The use of collaborative tools allows for transparency regarding who wrote the final outcome of the process (the ability to trace the contributors to the document and its different versions). Publishing the code and the algorithms applied for the random selection (sortition) process, along with the data or statistics used for the stratification, could give total transparency on how participants are selected.
  • Evaluation: Data collection and analysis can help researchers and policy makers assess the process (e.g. deliberation quality, participant surveys, opinion evolution). Publishing this data in a structured and open format can allow for a broader evaluation and contribute to research. Over the course of the next year, the OECD will be preparing evaluation guidelines in accordance with the good practice principles to enable comparative data analysis.
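On the transparency point about publishing the sortition code: a minimal, purely hypothetical stratified selection routine might look like the sketch below. The function and field names are invented for illustration, and this is not the code of any actual assembly; the idea is that publishing something of this sort, together with the seed and the stratification targets, would let anyone re-run and audit the draw.

```python
import random

def stratified_sortition(pool, strata_key, targets, seed=2020):
    """Randomly select participants so that each stratum meets its target.

    pool: list of dicts describing the volunteer pool; strata_key: the
    attribute to stratify on (e.g. an age band); targets: mapping from
    stratum value to the number of participants to draw from it.
    A fixed, published seed makes the draw reproducible and auditable.
    """
    rng = random.Random(seed)
    selected = []
    for stratum, count in targets.items():
        # restrict the draw to volunteers belonging to this stratum
        candidates = [p for p in pool if p[strata_key] == stratum]
        selected.extend(rng.sample(candidates, count))
    return selected
```

Re-running with the same pool, targets, and seed yields exactly the same assembly, which is what makes the published-code approach verifiable by outsiders.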

The series will also consider how the use of emerging technologies and digital tools could complement face-to-face processes, for instance:

  • Artificial intelligence (AI) and text-based technologies (i.e. natural language processing, NLP): Could the use of AI-based tools enrich deliberative processes? For example: mapping opinion clusters, consensus building, analysis of massive inputs from external participants in the early stage of stakeholder input. Could NLP allow for simultaneous translation into other languages, sentiment analysis, and automated transcription? These possibilities already exist, but raise more pertinent questions around reliability and user experience. How could they be connected to human analysis, discussion, and decision making?
  • Virtual/Augmented reality: Could the development of these emerging technologies allow participants to be immersed in virtual environments and thereby simulate face-to-face deliberation or experiences that enable and build empathy with possible futures or others’ lived experiences?…(More)”.
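To make "mapping opinion clusters" slightly more concrete, here is a deliberately naive sketch: word-overlap similarity and greedy grouping, standing in for the much richer NLP techniques the article has in mind. The threshold and the method are illustrative assumptions, not anything the OECD series prescribes.

```python
def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two statements, from 0 to 1."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def cluster_opinions(statements, threshold=0.3):
    """Greedily group statements: each joins the first cluster whose
    representative (first member) it resembles enough, else starts a new one."""
    clusters = []
    for s in statements:
        for cluster in clusters:
            if jaccard(s, cluster[0]) >= threshold:
                cluster.append(s)
                break
        else:
            clusters.append([s])
    return clusters
```

In practice one would use embeddings and proper clustering algorithms, but even this toy version shows how free-text input from thousands of participants could be condensed into a handful of positions for deliberation.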

Global AI Ethics Consortium


About: “…The newly founded Global AI Ethics Consortium (GAIEC) on Ethics and the Use of Data and Artificial Intelligence in the Fight Against COVID-19 and other Pandemics aims to:

  1. Support immediate needs for expertise related to the COVID-19 crisis and the emerging ethical questions related to the use of AI in managing the pandemic.
  2. Create a repository that includes avenues of communication for sharing and disseminating current research, new research opportunities, and past research findings.
  3. Coordinate internal funding and research initiatives to allow for maximum opportunities to pursue vital research related to health crises and the ethical use of AI.
  4. Discuss research findings and opportunities for new areas of collaboration.

Read the Statement of Purpose and find out more about the Global AI Ethics Consortium and its founding members: Christoph Lütge (TUM Institute for Ethics in Artificial Intelligence, Technical University of Munich), Jean-Gabriel Ganascia (LIP6-CNRS, Sorbonne Université), Mark Findlay (Centre for AI and Data Governance, Law School, Singapore Management University), Ken Ito and Kan Hiroshi Suzuki (The University of Tokyo), Jeannie Marie Paterson (Centre for AI and Digital Ethics, University of Melbourne), Huw Price (Leverhulme Centre for the Future of Intelligence, University of Cambridge), Stefaan G. Verhulst (The GovLab, New York University), Yi Zeng (Research Center for AI Ethics and Safety, Beijing Academy of Artificial Intelligence), and Adrian Weller (The Alan Turing Institute).

If you or your organization is interested in the GAIEC — Global AI Ethics Consortium please contact us at [email protected]…(More)”.

The Atlas of Inequality and Cuebiq’s Data for Good Initiative


Data Collaborative Case Study by Michelle Winowatan, Andrew Young, and Stefaan Verhulst: “The Atlas of Inequality is a research initiative led by scientists at the MIT Media Lab and Universidad Carlos III de Madrid. It is a project within the larger Human Dynamics research initiative at the MIT Media Lab, which investigates how computational social science can improve society, government, and companies. Using multiple big data sources, MIT Media Lab researchers seek to understand how people move in urban spaces and how that movement influences or is influenced by income. Among the datasets used in this initiative was location data provided by Cuebiq, through its Data for Good initiative. Cuebiq offers location-intelligence services to approved research and nonprofit organizations seeking to address public problems. To date, the Atlas has published maps of inequality in eleven cities in the United States. Through the Atlas, the researchers hope to raise public awareness about segregation of social mobility in United States cities resulting from economic inequality and support evidence-based policymaking to address the issue.

Data Collaborative Model: Based on the typology of data collaborative practice areas developed by The GovLab, the use of Cuebiq’s location data by MIT Media Lab researchers for the Atlas of Inequality initiative is an example of the research and analysis partnership model of data collaboration, specifically a data transfer approach. In this approach, companies provide data to partners for analysis, sometimes under the banner of “data philanthropy.” Access to data remains highly restrictive, with only specific partners able to analyze the assets provided. Approved uses are also determined in a somewhat cooperative manner, often with some agreement outlining how and why parties requesting access to data will put it to use….(More)”.

A Data Ecosystem to Defeat COVID-19


Paper by Bapon Fakhruddin: “…A wide range of approaches could be applied to understand transmission, outbreak assessment, risk communication, and the cascading impacts on essential and other services. Network-based modelling of Systems of Systems (SoS), mobile technology, frequentist statistics and maximum-likelihood estimation, interactive data visualization, geostatistics, graph theory, Bayesian statistics, mathematical modelling, evidence-synthesis approaches and complex-thinking frameworks for systems interactions could all be utilized to understand COVID-19 impacts. Examples of tools and technologies that could be used to act decisively and early to prevent further spread or quickly suppress the transmission of COVID-19, strengthen the resilience of health systems, save lives, and urgently support developing countries, businesses and corporations are shown in Figure 2. There is also WHO guidance on ‘Health Emergency and Disaster Risk Management[8]’, the UNDRR-supported ‘Public Health Scorecard Addendum[9]’, and other guidelines (e.g. WHO practical considerations and recommendations for religious leaders and faith-based communities in the context of COVID-19[10]) that could enhance pandemic response plans. It needs to be ensured that any such use is proportionate, specific and protected, and does not increase risks to civil liberties. It is essential, therefore, to examine in detail the challenge of maximising data use in emergency situations while ensuring it is task-limited, proportionate and respectful of necessary protections and limitations. This is a complex task, and COVID-19 will provide us with important test cases. It is also important that data is interpreted accurately; otherwise, misinterpretations could lead sectors down incorrect paths.
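Of the approaches listed above, "mathematical modelling" of transmission is the most straightforward to illustrate. The sketch below is a textbook SIR compartmental model stepped forward with Euler integration; the parameter values are made up for illustration and are not taken from the paper.

```python
# Minimal SIR (Susceptible-Infected-Recovered) model of disease spread.
# s, i, r are population fractions; beta is the transmission rate and
# gamma the recovery rate. All parameter values here are illustrative.

def sir_step(s, i, r, beta, gamma, dt=1.0):
    """Advance the population fractions by one time step (Euler)."""
    new_infections = beta * s * i * dt  # susceptibles meeting infectious people
    new_recoveries = gamma * i * dt     # infectious people leaving the pool
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def simulate(days, s=0.99, i=0.01, r=0.0, beta=0.3, gamma=0.1):
    """Run the model for `days` steps and return the final fractions."""
    for _ in range(days):
        s, i, r = sir_step(s, i, r, beta, gamma)
    return s, i, r
```

Lowering beta (for instance through distancing or a well-adopted tracing app) is what "flattens the curve" in this picture: the epidemic peaks later and lower.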

Figure 2: Tools to strengthen resilience for COVID-19

Many countries are still learning how to make use of data for their decision making in this critical time. The COVID-19 pandemic will provide important lessons on the need for cross-domain research and on how, in such emergencies, to balance the use of technological opportunities and data to counter pandemics against fundamental protections….(More)”.

Epistemic Humility—Knowing Your Limits in a Pandemic


Essay by Erik Angner: “Ignorance,” wrote Charles Darwin in 1871, “more frequently begets confidence than does knowledge.”

Darwin’s insight is worth keeping in mind when dealing with the current coronavirus crisis. That includes those of us who are behavioral scientists. Overconfidence—and a lack of epistemic humility more broadly—can cause real harm.

In the middle of a pandemic, knowledge is in short supply. We don’t know how many people are infected, or how many people will be. We have much to learn about how to treat the people who are sick—and how to help prevent infection in those who aren’t. There’s reasonable disagreement on the best policies to pursue, whether about health care, economics, or supply distribution. Although scientists worldwide are working hard and in concert to address these questions, final answers are some ways away.

Another thing that’s in short supply is the realization of how little we know. Even a quick glance at social or traditional media will reveal many people who express themselves with way more confidence than they should…

Frequent expressions of supreme confidence might seem odd in light of our obvious and inevitable ignorance about a new threat. The thing about overconfidence, though, is that it afflicts most of us much of the time. That’s according to cognitive psychologists, who’ve studied the phenomenon systematically for half a century. Overconfidence has been called “the mother of all psychological biases.” The research has led to findings that are at the same time hilarious and depressing. In one classic study, for example, 93 percent of U.S. drivers claimed to be more skillful than the median—which is not possible.

“But surely,” you might object, “overconfidence is only for amateurs—experts would not behave like this.” Sadly, being an expert in some domain does not protect against overconfidence. Some research suggests that the more knowledgeable are more prone to overconfidence. In a famous study of clinical psychologists and psychology students, researchers asked a series of questions about a real person described in psychological literature. As the participants received more and more information about the case, their confidence in their judgment grew—but the quality of their judgment did not. And psychologists with a Ph.D. did no better than the students….(More)”.