Power to the People


Book by Audrey Kurth Cronin on “How Open Technological Innovation Is Arming Tomorrow’s Terrorists”: “Never have so many possessed the means to be so lethal. The diffusion of modern technology (robotics, cyber weapons, 3-D printing, autonomous systems, and artificial intelligence) to ordinary people has given them access to weapons of mass violence previously monopolized by the state. In recent years, states have attempted to stem the flow of such weapons to individuals and non-state groups, but their efforts are failing.

As Audrey Kurth Cronin explains in Power to the People, what we are seeing now is an exacerbation of an age-old trend. Over the centuries, the most surprising developments in warfare have occurred because of advances in technologies combined with changes in who can use them. Indeed, accessible innovations in destructive force have long driven new patterns of political violence. When Nobel invented dynamite and Kalashnikov designed the AK-47, each inadvertently spurred terrorist and insurgent movements that killed millions and upended the international system.

That history illuminates our own situation, in which emerging technologies are altering society and redistributing power. The twenty-first century “sharing economy” has already disrupted every institution, including the armed forces. New “open” technologies are transforming access to the means of violence. Just as importantly, higher-order functions that previously had been exclusively under state military control – mass mobilization, force projection, and systems integration – are being harnessed by non-state actors. Cronin closes by focusing on how to respond so that we preserve the benefits of emerging technologies while reducing the risks. Power, in the form of lethal technology, is flowing to the people, but the same technologies that empower can imperil global security – unless we act strategically….(More)”.

Inside the ‘Wikipedia of Maps,’ Tensions Grow Over Corporate Influence


Corey Dickinson at Bloomberg: “What do Lyft, Facebook, the International Red Cross, the U.N., the government of Nepal and Pokémon Go have in common? They all use the same source of geospatial data: OpenStreetMap, a free, open-source online mapping service akin to Google Maps or Apple Maps. But unlike those corporate-owned mapping platforms, OSM is built on a network of mostly volunteer contributors. Researchers have described it as the “Wikipedia for maps.”

Since it launched in 2004, OpenStreetMap has become an essential part of the world’s technology infrastructure. Hundreds of millions of monthly users interact with services derived from its data, from ridehailing apps, to social media geotagging on Snapchat and Instagram, to humanitarian relief operations in the wake of natural disasters. 

But recently the map has been changing, due to the growing impact of the private sector companies that rely on it. In a 2019 paper published in the ISPRS International Journal of Geo-Information, a cross-institutional team of researchers traced how Facebook, Apple, Microsoft and other companies have gained prominence as editors of the map. Their priorities, the researchers say, are driving significant changes in what gets mapped compared to the past. 

“OpenStreetMap’s data is crowdsourced, which has always made spectators to the project a bit wary about the quality of the data,” says Dipto Sarkar, a professor of geoscience at Carleton University in Ottawa, and one of the paper’s co-authors. “As the data becomes more valuable and is used for an ever-increasing list of projects, the integrity of the information has to be almost perfect. These companies need to make sure there’s a good map of the places they want to expand in, and nobody else is offering that, so they’ve decided to fill it in themselves.”…(More)”.

Critical Perspectives on Open Development


Book edited by Arul Chib, Caitlin M. Bentley, and Matthew L. Smith: “Over the last ten years, “open” innovations—the sharing of information and communications resources without access restrictions or cost—have emerged within international development. But do these innovations empower poor and marginalized populations? This book examines whether, for whom, and under what circumstances the free, networked, public sharing of information and communication resources contributes (or not) to a process of positive social transformation. The contributors offer cross-cutting theoretical frameworks and empirical analyses that cover a broad range of applications, emphasizing the underlying aspects of open innovations that are shared across contexts and domains.

The book first outlines theoretical frameworks that span knowledge stewardship, trust, situated learning, identity, participation, and power decentralization. It then investigates these frameworks across a range of institutional and country contexts, considering each in terms of the key emancipatory principles and structural impediments it seeks to address. Taken together, the chapters offer an empirically tested theoretical direction for the field….(More)”.

A definition, benchmark and database of AI for social good initiatives


Paper by Josh Cowls, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi: “Initiatives relying on artificial intelligence (AI) to deliver socially beneficial outcomes—AI for social good (AI4SG)—are on the rise. However, existing attempts to understand and foster AI4SG initiatives have so far been limited by the lack of normative analyses and a shortage of empirical evidence. In this Perspective, we address these limitations by providing a definition of AI4SG and by advocating the use of the United Nations’ Sustainable Development Goals (SDGs) as a benchmark for tracing the scope and spread of AI4SG. We introduce a database of AI4SG projects gathered using this benchmark, and discuss several key insights, including the extent to which different SDGs are being addressed. This analysis makes possible the identification of pressing problems that, if left unaddressed, risk hampering the effectiveness of AI4SG initiatives….(More)”.
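The benchmarking exercise the paper describes — tagging each AI4SG project with the SDGs it addresses and then tracing which goals are covered and which are neglected — can be sketched in a few lines. The project names and SDG tags below are invented for illustration, not entries from the authors’ database:

```python
from collections import Counter

# Hypothetical AI4SG projects tagged with the SDGs they address
# (names and tags are invented for illustration).
projects = [
    {"name": "crop-disease-detector", "sdgs": [2, 15]},   # Zero Hunger, Life on Land
    {"name": "tb-screening-model",    "sdgs": [3]},       # Good Health and Well-being
    {"name": "flood-forecasting",     "sdgs": [11, 13]},  # Sustainable Cities, Climate Action
    {"name": "dropout-early-warning", "sdgs": [4]},       # Quality Education
]

# Count how many projects address each of the 17 SDGs,
# exposing both concentrations and blind spots.
coverage = Counter(sdg for p in projects for sdg in p["sdgs"])
unaddressed = sorted(set(range(1, 18)) - set(coverage))

print(coverage.most_common())
print(unaddressed)  # SDGs with no projects -> pressing gaps of the kind the paper flags
```

Even this toy tally shows the kind of insight the benchmark enables: which goals attract AI4SG effort, and which remain unaddressed.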

Public-Private Partnerships: Compound and Data Sharing in Drug Discovery and Development


Paper by Andrew M. Davis et al: “Collaborative efforts between public and private entities such as academic institutions, governments, and pharmaceutical companies form an integral part of scientific research, and notable instances of such initiatives have been created within the life science community. Several examples of alliances exist with the broad goal of collaborating toward scientific advancement and improved public welfare. Such collaborations can be essential in catalyzing groundbreaking areas of science within high-risk or global public-health strategies that might not otherwise have progressed. A common term used to describe these alliances is public-private partnership (PPP). This review discusses different aspects of such partnerships in drug discovery/development and provides example applications as well as successful case studies. Specific areas that are covered include PPPs for sharing compounds at various phases of the drug discovery process—from compound collections for hit identification to sharing clinical candidates. Instances of PPPs to support better data integration and build better machine learning models are also discussed. The review also provides examples of PPPs that address the gap in knowledge or resources among involved parties and advance drug discovery, especially in disease areas with unmet medical and/or social needs, like neurological disorders, cancer, and neglected and rare diseases….(More)”.

Time to evaluate COVID-19 contact-tracing apps


Letter to the Editor of Nature by Vittoria Colizza et al: “Digital contact tracing is a public-health intervention. Real-time monitoring and evaluation of the effectiveness of app-based contact tracing is key for improvement and public trust.

SARS-CoV-2 is likely to become endemic in many parts of the world, and there is still no certainty about how quickly vaccination will become available or how long its protection will last. For the foreseeable future, most countries will rely on a combination of various measures, including vaccination, social distancing, mask wearing and contact tracing.

Digital contact tracing via smartphone apps was established as a new public-health intervention in many countries in 2020. Most of these apps are now at a stage at which they need to be evaluated as public-health tools. We present here five key epidemiological and public-health requirements for COVID-19 contact-tracing apps and their evaluation.

1. Integration with local health policy. App notifications should be consistent with local health policies. The app should be integrated into access to testing, medical care and advice on isolation, and should work in conjunction with conventional contact tracing where available1. Apps should be interoperable across countries, as envisaged by the European Commission’s eHealth Network.

2. High user uptake and adherence. Contact-tracing apps can reduce transmission at low levels of uptake, including for those without smartphones2. However, large numbers of users increase effectiveness3,4. An effective communication strategy that explains the apps’ role and addresses privacy concerns is essential for increasing adoption5. Design, implementation and deployment should make the apps accessible to harder-to-reach communities. Adherence to quarantine should be encouraged and supported.

3. Quarantine infectious people as accurately as possible. The purpose of contact tracing is to quarantine as many potentially infectious people as possible, but to minimize the time spent in quarantine by uninfected people. To achieve optimal performance, apps’ algorithms must be ‘tunable’, to adjust to the epidemic as it evolves6.

4. Rapid notification. The time between the onset of symptoms in an index case and the quarantine of their contacts is of key importance in COVID-19 contact tracing7,8. Where a design feature introduces a delay, it needs to be outweighed by gains in, for example, specificity, uptake or adherence. If the delays exceed the period during which most contacts transmit the disease, the app will fail to reduce transmission.

5. Ability to evaluate effectiveness transparently. The public must be provided with evidence that notifications are based on the best available data. The tracing algorithm should therefore be transparent, auditable, under oversight and subject to review. Aggregated data (not linked to individual people) are essential for evaluation of and improvement in the performance of the app. Data on local uptake at a sufficiently coarse-grained spatial resolution are equally key. As apps in Europe do not ‘geolocate’ people, this additional information can be provided by the user or through surveys. Real-time monitoring should be performed whenever possible….(More)”.
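The trade-off at the heart of requirements 3 and 4 can be sketched numerically: a ‘tunable’ exposure-risk score, where raising the notification threshold quarantines fewer uninfected people but risks missing infectious contacts, and where notification delay erodes any benefit. The weights, thresholds, and transmission window below are all illustrative assumptions, not parameters from any real contact-tracing app:

```python
# Illustrative sketch of a "tunable" exposure-risk score of the kind
# requirement 3 describes. All weights, thresholds, and contact data
# are hypothetical assumptions, not values from any deployed app.

def risk_score(duration_min: float, attenuation_db: float) -> float:
    """Toy risk score: longer, closer contacts score higher.
    Lower Bluetooth attenuation is treated as closer proximity."""
    proximity_weight = 1.0 if attenuation_db < 55 else 0.5 if attenuation_db < 70 else 0.0
    return duration_min * proximity_weight

def should_notify(duration_min: float, attenuation_db: float,
                  threshold: float = 15.0) -> bool:
    """The threshold is the tunable parameter: raising it reduces
    quarantines of uninfected people but may miss infectious contacts."""
    return risk_score(duration_min, attenuation_db) >= threshold

# Requirement 4: a notification only helps if it arrives before most
# onward transmission has occurred (window length is an assumption).
TRANSMISSION_WINDOW_DAYS = 5

def notification_useful(delay_days: float) -> bool:
    return delay_days < TRANSMISSION_WINDOW_DAYS

print(should_notify(30, 50))   # long, close contact -> notify
print(should_notify(5, 75))    # short, distant contact -> no notification
print(notification_useful(2))  # arrives within the transmission window
```

Evaluating an app against these requirements then amounts to asking, with real-world data, whether the chosen threshold and end-to-end delay actually catch transmission in time — which is precisely why the authors call for tunability and transparent evaluation.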

Practical Fairness


Book by Aileen Nielsen: “Fairness is becoming a paramount consideration for data scientists. Mounting evidence indicates that the widespread deployment of machine learning and AI in business and government is reproducing the same biases we’re trying to fight in the real world. But what does fairness mean when it comes to code? This practical book covers basic concerns related to data security and privacy to help data and AI professionals use code that’s fair and free of bias.

Many realistic best practices are emerging at all steps along the data pipeline today, from data selection and preprocessing to closed model audits. Author Aileen Nielsen guides you through technical, legal, and ethical aspects of making code fair and secure, while highlighting up-to-date academic research and ongoing legal developments related to fairness and algorithms.

  • Identify potential bias and discrimination in data science models
  • Use preventive measures to minimize bias when developing data modeling pipelines
  • Understand what data pipeline components implicate security and privacy concerns
  • Write data processing and modeling code that implements best practices for fairness
  • Recognize the complex interrelationships between fairness, privacy, and data security created by the use of machine learning models
  • Apply normative and legal concepts relevant to evaluating the fairness of machine learning models…(More)”.
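As a minimal illustration of the kind of bias check the first bullet describes — a sketch with synthetic data, not an example drawn from the book — one widely used measure is the demographic parity difference, the gap in favorable-outcome rates between two groups:

```python
# Minimal sketch of one common fairness check: demographic parity
# difference, the gap in positive-outcome rates between two groups.
# The outcome data below is synthetic, for illustration only.

def positive_rate(outcomes):
    """Share of favorable outcomes (1 = approved, 0 = denied)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_diff(group_a, group_b):
    """Difference in favorable-outcome rates between two groups;
    values near 0 suggest parity, large values warrant investigation."""
    return positive_rate(group_a) - positive_rate(group_b)

group_a = [1, 1, 1, 0]  # 75% approved
group_b = [1, 0, 0, 0]  # 25% approved
gap = demographic_parity_diff(group_a, group_b)
print(gap)  # 0.5 -> a large disparity a model audit should flag
```

A single metric like this is only a starting point — the book’s broader argument is that fairness also involves preprocessing choices, audits, and the legal and normative concepts in the later bullets.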

Data Responsibility in Humanitarian Action


InterAgency Standing Committee: “Data responsibility in humanitarian action is the safe, ethical and effective management of personal and non-personal data for operational response. It is a critical issue for the humanitarian system to address and the stakes are high. Ensuring we ‘do no harm’ while maximizing the benefits of data requires collective action that extends across all levels of the humanitarian system. Humanitarians must be careful when handling data to avoid placing already vulnerable individuals and communities at further risk. This is especially important in contexts where the urgency of humanitarian needs drives pressure for fast, sometimes untested, data solutions, and the politicization of data can have more extreme consequences for people. 

The implementation of data responsibility in practice is often inconsistent within and across humanitarian response contexts. This is true despite established principles, norms and professional standards regarding respect for the rights of affected populations; the range of resources on data responsibility available in the wider international data community; as well as significant efforts by many humanitarian organizations to develop and update their policies and guidance in this area. However, given that the humanitarian data ecosystem is inherently interconnected, no individual organization can tackle all these challenges alone. 

This system-wide Operational Guidance, the first of its kind, sets out concrete steps for data responsibility in all phases of humanitarian action. It is the result of an inclusive and consultative process, involving more than 250 stakeholders from the humanitarian sector. Partners across the system will implement these guidelines in accordance with their respective mandates and the decisions of their governing bodies….(More)”.

AI Ethics: Global Perspectives


“The Governance Lab (The GovLab), NYU Tandon School of Engineering, Global AI Ethics Consortium (GAIEC), Center for Responsible AI @ NYU (R/AI), and Technical University of Munich (TUM) Institute for Ethics in Artificial Intelligence (IEAI) jointly launched a free, online course, AI Ethics: Global Perspectives, on February 1, 2021. Designed for a global audience, it conveys the breadth and depth of the ongoing interdisciplinary conversation on AI ethics and seeks to bring together diverse perspectives from the field of ethical AI, to raise awareness and help institutions work towards more responsible use.

“The use of data and AI is steadily growing around the world – there should be simultaneous efforts to increase literacy, awareness, and education around the ethical implications of these technologies,” said Stefaan Verhulst, Co-Founder and Chief Research and Development Officer of The GovLab. “The course will allow experts to jointly develop a global understanding of AI.”

“AI is a global challenge, and so is AI ethics,” said Christoph Lütge, the director of IEAI. “The ethical challenges related to the various uses of AI require multidisciplinary and multi-stakeholder engagement, as well as collaboration across cultures, organizations, academic institutions, etc. This online course is GAIEC’s attempt to approach and apply AI ethics effectively in practice.”

The course modules comprise pre-recorded lectures on AI Applications, Data and AI, or Governance Frameworks, along with supplemental readings. New course lectures will be released the first week of every month. 

“The goal of this course is to create a nuanced understanding of the role of technology in society so that we, the people, have tools to make AI work for the benefit of society,” said Julia Stoyanovich, a Tandon Assistant Professor of Computer Science and Engineering, Director of the Center for Responsible AI at NYU Tandon, and an Assistant Professor at the NYU Center for Data Science. “It is up to us — current and future data scientists, business leaders, policy makers, and members of the public — to make AI what we want it to be.”

The collaboration will release four new modules in February. These include lectures from: 

  • Idoia Salazar, President and Co-Founder of OdiselA, who presents “Alexa vs Alice: Cultural Perspectives on the Impact of AI.” Salazar explores why it is important to take into account the cultural, geographical, and temporal aspects of AI, and to identify them precisely, in order to develop and implement AI systems correctly; 
  • Jerry John Kponyo, Associate Professor of Telecommunication Engineering at KNUST, who sheds light on the fundamentals of Artificial Intelligence in Transportation System (AITS) and safety, and looks at the technologies at play in its implementation; 
  • Danya Glabau, Director of Science and Technology studies at the NYU Tandon School of Engineering, asks and answers the question, “Who is artificial intelligence for?” and presents evidence that AI systems do not always help their intended users and constituencies; 
  • Mark Findlay, Director of the Centre for AI and Data Governance at SMU, reviews the ethical challenges — discrimination, lack of transparency, neglect of individual rights, and more — which have arisen from COVID-19 technologies and their resultant mass data accumulation.

To learn more and sign up to receive updates as new modules are added, visit the course website at aiethicscourse.org.

A Worldwide Assessment of COVID-19 Pandemic-Policy Fatigue


Paper by Anna Petherick et al: “As the COVID-19 pandemic lingers, signs of “pandemic-policy fatigue” have raised worldwide concerns. But the phenomenon itself is yet to be thoroughly defined, documented, and delved into. Based on self-reported behaviours from samples of 238,797 respondents, representative of the populations of 14 countries, as well as global mobility and policy data, we systematically examine the prevalence and shape of people’s alleged gradual reduction in adherence to governments’ protective-behaviour policies against COVID-19. Our results show that from March through December 2020, pandemic-policy fatigue was empirically meaningful and geographically widespread. It emerged for high-cost and sensitising behaviours (physical distancing) but not for low-cost and habituating ones (mask wearing), and was less intense among retired people, people with chronic diseases, and in countries with high interpersonal trust. Particularly due to fatigue reversal patterns in high- and upper-middle-income countries, we observe an arch rather than a monotonic decline in global pandemic-policy fatigue….(More)”.
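The “arch rather than a monotonic decline” finding is essentially a claim about curve shape. As a toy illustration — with synthetic adherence numbers, not the paper’s data — fitting a quadratic alongside a straight line shows how a decline-then-reversal pattern (an arch in fatigue, a U-shape in adherence) can be detected where a linear trend would miss it:

```python
import numpy as np

# Synthetic monthly adherence (%) from March to December: a decline
# followed by partial reversal, i.e. a U-shape in adherence and hence
# an arch in fatigue. These are invented numbers, not the paper's data.
months = np.arange(10)
adherence = np.array([90, 84, 78, 73, 70, 69, 70, 73, 77, 80])

lin = np.polyfit(months, adherence, 1)    # monotonic trend only
quad = np.polyfit(months, adherence, 2)   # allows a turning point

lin_resid = adherence - np.polyval(lin, months)
quad_resid = adherence - np.polyval(quad, months)

# The quadratic captures the reversal far better than the line does.
print(np.sum(lin_resid**2) > np.sum(quad_resid**2))  # True

# A positive leading coefficient means adherence is U-shaped:
# fatigue peaks mid-period and then reverses, as the paper reports
# for high- and upper-middle-income countries.
print(quad[0] > 0)  # True
```

The paper’s actual analysis is far richer — representative survey samples, mobility data, and behaviour-specific models — but the shape distinction this sketch draws is the one its headline finding turns on.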