AI Ethics: Global Perspectives


“The Governance Lab (The GovLab), NYU Tandon School of Engineering, Global AI Ethics Consortium (GAIEC), Center for Responsible AI @ NYU (R/AI), and Technical University of Munich (TUM) Institute for Ethics in Artificial Intelligence (IEAI) jointly launched a free, online course, AI Ethics: Global Perspectives, on February 1, 2021. Designed for a global audience, it conveys the breadth and depth of the ongoing interdisciplinary conversation on AI ethics and seeks to bring together diverse perspectives from the field of ethical AI, to raise awareness and help institutions work towards more responsible use.

“The use of data and AI is steadily growing around the world – there should be simultaneous efforts to increase literacy, awareness, and education around the ethical implications of these technologies,” said Stefaan Verhulst, Co-Founder and Chief Research and Development Officer of The GovLab. “The course will allow experts to jointly develop a global understanding of AI.”

“AI is a global challenge, and so is AI ethics,” said Christoph Lütge, the director of IEAI. “Τhe ethical challenges related to the various uses of AI require multidisciplinary and multi-stakeholder engagement, as well as collaboration across cultures, organizations, academic institutions, etc. This online course is GAIEC’s attempt to approach and apply AI ethics effectively in practice.”

The course modules comprise pre-recorded lectures on one of three topics — AI Applications, Data and AI, or Governance Frameworks — along with supplemental readings. New course lectures will be released the first week of every month. 

“The goal of this course is to create a nuanced understanding of the role of technology in society so that we, the people, have tools to make AI work for the benefit of society,” said Julia Stoyanovich, a Tandon Assistant Professor of Computer Science and Engineering, Director of the Center for Responsible AI at NYU Tandon, and an Assistant Professor at the NYU Center for Data Science. “It is up to us — current and future data scientists, business leaders, policy makers, and members of the public — to make AI what we want it to be.”

The collaboration will release four new modules in February. These include lectures from: 

  • Idoia Salazar, President and Co-Founder of OdiselA, who presents “Alexa vs Alice: Cultural Perspectives on the Impact of AI.” Salazar explores why the cultural, geographical, and temporal aspects of AI must be precisely identified and taken into account in order to develop and implement AI systems correctly; 
  • Jerry John Kponyo, Associate Professor of Telecommunication Engineering at KNUST, who sheds light on the fundamentals of Artificial Intelligence in Transportation System (AITS) and safety, and looks at the technologies at play in its implementation; 
  • Danya Glabau, Director of Science and Technology Studies at the NYU Tandon School of Engineering, who asks and answers the question, “Who is artificial intelligence for?” and presents evidence that AI systems do not always help their intended users and constituencies; 
  • Mark Findlay, Director of the Centre for AI and Data Governance at SMU, reviews the ethical challenges — discrimination, lack of transparency, neglect of individual rights, and more — which have arisen from COVID-19 technologies and their resultant mass data accumulation.

To learn more and sign up to receive updates as new modules are added, visit the course website at aiethicscourse.org.”

Solving Public Problems


“Today the Governance Lab (The GovLab) at the NYU Tandon School of Engineering launched a free, online course on Solving Public Problems. The 12-part program, presented by Beth Simone Noveck and more than two dozen global changemakers, trains participants in the skills needed to move from demanding change to making it. 

Taking a practical approach to addressing entrenched problems, from systemic racism to climate change, the course combines the teaching of quantitative and qualitative methods with participatory and equitable techniques for tapping the collective wisdom of communities to design and deliver powerful solutions to contemporary problems. 

“We cannot expect to tackle tomorrow’s problems with yesterday’s toolkit,” said Noveck, a former advisor on open government to President Barack Obama. “In the 21st century, we must equip ourselves with the skills to solve public problems. But those skills are not innate, and this program is designed to help people learn how to implement workable solutions to our hardest but most important challenges.”  

Based on Professor Noveck’s new book, Solving Public Problems: A Practical Guide to Fix Government and Change the World (Yale University Press 2021), this online program is intended to democratize access to public problem-solving education, providing citizens with innovative tools to tap the collective wisdom of communities and take effective, organized action for change. …(More)”.

Are New Technologies Changing the Nature of Work? The Evidence So Far


Report by Kristyn Frank and Marc Frenette for the Institute for Research on Public Policy (Canada): “In recent years, groundbreaking advances in artificial intelligence and their implications for automation technology have fuelled speculation that the very nature of work is being altered in unprecedented ways. News headlines regularly refer to the “changing nature of work,” but what does it mean? Is there evidence that work has already been transformed by the new technologies? And if so, are these changes more dramatic than those experienced before?

In this paper, Kristyn Frank and Marc Frenette offer insights on these questions, based on the new research they conducted with their colleague Zhe Yang at Statistics Canada. Two aspects of work are under the microscope: the mix of work activities (or tasks) that constitute a job, and the mix of jobs in the economy. If new automation technologies are indeed changing the nature of work, the authors argue, then nonautomatable tasks should be increasingly important, and employment should be shifting toward occupations primarily involving such tasks.

According to the authors, nonroutine cognitive tasks (analytical or interpersonal) did become more important between 2011 and 2018. However, the changes were relatively modest, ranging from a 1.5 percent increase in the average importance of establishing and maintaining interpersonal relationships, to a 3.7 percent increase in analyzing data or information. Routine cognitive tasks — such as data entry — also gained importance, but these gains were even smaller. The picture is less clear for routine manual tasks, as the importance of tasks for which the pace is determined by the speed of equipment declined by close to 3 percent, whereas other tasks in that category became slightly more important.

Looking at longer-term shifts in overall employment, between 1987 and 2018, the authors find a gradual increase in the share of workers employed in occupations associated with nonroutine tasks, and a decline in routine-task-related occupations. The most pronounced shift in employment was away from production, craft, repair and operative occupations toward managerial, professional and technical occupations. However, they note that this shift to nonroutine occupations was not more pronounced between 2011 and 2018 than it was in the preceding decades. For instance, the share of employment in managerial, professional and technical occupations increased by 1.8 percentage points between 2011 and 2018, compared with a 6 percentage point increase between 1987 and 2010.

Most sociodemographic groups experienced the shift toward nonroutine jobs, although there were some exceptions. For instance, the employment share of workers in managerial, professional and technical occupations increased for all workers, but much more so for women than for men. Interestingly, there was a decline in the employment shares of workers in these occupations among those with a post-secondary education. The explanation for this lies in the major increase over the past three decades in the proportion of workers with post-secondary education, which led some of them to move into jobs for which they were overqualified….(More)”.

Mining Twitter Data to Identify Topics of Discussion by Indian Feminist Activists


Brief by the Center on Gender Equity and Health at the University of California at San Diego (UC San Diego): “Over the past decade, social media platforms have become ubiquitous, serving as a democratic space for activism and providing new opportunities for social movements. Twitter has emerged as a popular tool used by feminist activists for spreading awareness and organizing. Research examining feminist movements on social media has highlighted the role of Twitter in emphasizing issues related to gender-based violence (GBV) victimization, including the MeToo movement, as well as in calling out male privilege and regressive gender norms.

Scholars have examined the high levels of engagement in Twitter discussions and debates by grassroots feminists, as well as the effect of this activity on advancing the feminist agenda in the digital space and amplifying minority voices. Studying Twitter conversations of feminist activists can help identify gender issues that need attention but are underprioritized politically. This brief presents findings from
our analysis of a corpus of tweets by 59 Indian feminist activists, tweeted between March and August 2020. The analysis examines how the feminist community in India has used Twitter as a tool for activism during the COVID-19 pandemic. In addition to providing insights related to mainstream gender issues in India, this analysis hopes to contribute to methodological advancement in gender research….(More)”.
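The brief does not detail the authors’ analytic method. As a purely illustrative sketch of the kind of corpus mining described above, the following snippet counts hashtags and content words across a toy tweet corpus to surface frequently discussed themes; the tweets, stopword list, and thresholds are all invented for the example.

```python
# Minimal, illustrative sketch of mining a tweet corpus for discussion
# topics via term frequency. Not the method used in the brief; the corpus
# below is a toy example.
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "to", "a", "in", "for", "on", "is", "are", "during"}

def extract_terms(tweet: str):
    """Return hashtags and lowercase content words from one tweet."""
    hashtags = re.findall(r"#\w+", tweet.lower())
    words = [w for w in re.findall(r"[a-z]+", tweet.lower())
             if w not in STOPWORDS and len(w) > 3]
    return hashtags + words

def top_topics(tweets, k=5):
    """Rank the k most frequent hashtags/terms across the corpus."""
    counts = Counter()
    for t in tweets:
        counts.update(extract_terms(t))
    return counts.most_common(k)

tweets = [
    "Domestic violence calls rose during lockdown #GBV",
    "Support survivors of domestic violence #GBV #lockdown",
    "Unpaid care work falls disproportionately on women",
]

print(top_topics(tweets, k=3))
```

In practice, researchers would typically follow such frequency counts with topic modeling or qualitative coding, but even simple term counts can flag underprioritized issues in a large activist corpus.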

Guide to Good Practice on the Use of New Technologies for the Administration of Justice


Report by México Evalúa: “This document offers a brief review of the decisions, initiatives and implementation processes through which various judiciaries have incorporated new technologies into their work. We are interested in highlighting the role these tools can play not only in diversifying the means through which the public accesses justice services, but also in facilitating and improving the organization of work in courts and tribunals. We also analyze how the application of certain technological developments to judicial tasks, in particular tele- or videoconferencing, has redefined the traditional structure of judicial proceedings by allowing remote, simultaneous and collective interaction among the parties involved. Finally, we reflect on the dilemmas, the viability and the sometimes unintended effects of the use of new technologies in the administration of justice.

(…)

We chose to analyze them according to the procedural moment at which they intervene, that is, from the user’s perspective, because although technological solutions may have a wide range of objectives, it seems to us that, behind any technological development, the goal of facilitating, expanding and improving citizens’ access to justice should always prevail. We report several experiences aimed at reorganizing the processing of legal proceedings in the various phases that structure them, from the procedural activation stage (the filing of a lawsuit or the judicialization of a criminal investigation), through the processing of cases (hearings, proceedings), to the execution of court rulings (judgments, arbitral awards). We would like to emphasize that access to justice includes everything from the processing of cases to the timely enforcement of court rulings. That vision can be summarized with the following figure:…(More)”.

Facebook Data for Good


Foreword by Sheryl Sandberg: “When Facebook launched the Data for Good program in 2017, we never imagined it would play a role so soon in response to a truly global emergency. The COVID-19 pandemic is not just a public health crisis, but also a social and economic one. It has caused hardship in every part of the world, but its impact hasn’t been felt equally. It has hit women and the most disadvantaged communities the hardest – something this work has helped shine a light on.

In response to the pandemic, Facebook has been part of an unprecedented collaboration between technology companies, the public sector, universities, nonprofits and others. Our partners operate in some of the most challenging environments in the world, where lengthy analysis and debate are often a luxury they don’t have. The policies that govern delivery of vaccines, masks, and financial support can mean the difference between life and death. By sharing tools that provide real-time insights, Facebook can make decision-making on the ground just a little bit easier and more effective.

This report highlights some of the ways Facebook data – shared in a way that protects the privacy of individuals – assisted the response efforts to the pandemic and other major crises in 2020. I hope the examples included help illustrate what successful data sharing projects can look like, and how future projects can be improved. Above all, I hope we can continue to work together in 2021 and beyond to save lives and mitigate the damage caused by the pandemic and any crises that may follow….(More)”.

Enabling the future of academic research with the Twitter API


Twitter Developer Blog: “When we introduced the next generation of the Twitter API in July 2020, we also shared our plans to invest in the success of the academic research community with tailored solutions that better serve their goals. Today, we’re excited to launch the Academic Research product track on the new Twitter API. 

Why we’re launching this & how we got here

Since the Twitter API was first introduced in 2006, academic researchers have used data from the public conversation to study topics as diverse as the conversation on Twitter itself – from state-backed efforts to disrupt the public conversation to floods and climate change, from attitudes and perceptions about COVID-19 to efforts to promote healthy conversation online. Today, academic researchers are one of the largest groups of people using the Twitter API. 

Our developer platform hasn’t always made it easy for researchers to access the data they need, and many have had to rely on their own resourcefulness to find the right information. Despite this, for over a decade, academic researchers have used Twitter data for discoveries and innovations that help make the world a better place.

Over the past couple of years, we’ve taken iterative steps to improve the experience for researchers, like when we launched a webpage dedicated to Academic Research, and updated our Twitter Developer Policy to make it easier to validate or reproduce others’ research using Twitter data.

We’ve also made improvements to help academic researchers use Twitter data to advance their disciplines, answer urgent questions during crises, and even help us improve Twitter. For example, in April 2020, we released the COVID-19 stream endpoint – the first free, topic-based stream built solely for researchers to use data from the global conversation for the public good. Researchers from around the world continue to use this endpoint for a number of projects.

Over two years ago, we started our own extensive research to better understand the needs, constraints and challenges that researchers have when studying the public conversation. In October 2020, we tested this product track in a private beta program where we gathered additional feedback. This gave us a glimpse into some of the important work that the free Academic Research product track we’re launching today can now enable….(More)”.
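To make the mechanics concrete, here is a hedged sketch of how a researcher on the Academic Research track might call the full-archive search endpoint (`GET /2/tweets/search/all`) described in the post. The endpoint URL and parameter names follow the Twitter API v2 documentation at launch; `BEARER_TOKEN` is a placeholder that would have to come from an approved academic developer app, and the query string is an invented example.

```python
# Illustrative sketch of a Twitter API v2 full-archive search request
# (Academic Research track). BEARER_TOKEN is a placeholder; the query
# and time window are example values.
import json
import os
import urllib.parse
import urllib.request

SEARCH_URL = "https://api.twitter.com/2/tweets/search/all"

def build_query_params(query: str, start: str, end: str, max_results: int = 100) -> dict:
    """Assemble parameters for a full-archive search (academic track only)."""
    return {
        "query": query,
        "start_time": start,
        "end_time": end,
        "max_results": max_results,
        "tweet.fields": "created_at,public_metrics,lang",
    }

def search_all(params: dict, bearer_token: str) -> dict:
    """Issue one page of the search; pagination via next_token is omitted."""
    url = SEARCH_URL + "?" + urllib.parse.urlencode(params)
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {bearer_token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    token = os.environ.get("BEARER_TOKEN")  # only runs with real credentials
    if token:
        params = build_query_params(
            "(covid OR coronavirus) lang:en -is:retweet",
            "2020-03-01T00:00:00Z", "2020-04-01T00:00:00Z",
        )
        print(search_all(params, token))
```

A real study would loop over the `next_token` field in the response to page through the archive, which this sketch leaves out for brevity.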

Facebook will let researchers study how advertisers targeted users with political ads prior to Election Day


Nick Statt at The Verge: “Facebook is aiming to improve transparency around political advertising on its platform by opening up more data to independent researchers, including targeting information on more than 1.3 million ads that ran in the three months prior to the US election on November 3rd of last year. Researchers interested in studying the ads can apply for access to the Facebook Open Research and Transparency (FORT) platform here.

The move is significant because Facebook has long resisted willfully allowing access to data around political advertising, often citing user privacy. The company has even gone so far as to disable third-party web plugins, like ProPublica’s Facebook Political Ad Collector tool, that collect such data without Facebook’s express consent.

Numerous research groups around the globe have spent years now studying Facebook’s impact on everything from democratic elections to news dissemination, but sometimes without full access to all the desired data. Only last year, after partnering with Harvard University’s Social Science One (the group overseeing applications for the new political ad targeting initiative), did Facebook better formalize the process of granting anonymized user data for research studies.

In the past, Facebook has made some crucial political ad information in its Ad Library available to the public, including the amount spent on certain ads and demographic information about who saw those ads. But now the company says it wants to do more to improve transparency, specifically around how advertisers target certain subsets of users with political advertising….(More)”.
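The public Ad Library data mentioned above is also queryable programmatically via the Graph API’s `ads_archive` endpoint. The sketch below only composes a request URL for that endpoint; the API version, field names, and token are placeholders drawn from Facebook’s public documentation, and access to the FORT platform itself is by application, not through this API.

```python
# Illustrative only: composing a query against Facebook's public Ad Library
# API (the Graph API `ads_archive` endpoint). Version string, fields, and
# token are placeholders; no request is sent.
import urllib.parse

def ad_library_url(search_terms: str, countries: str, access_token: str) -> str:
    """Build a request URL for the public ads_archive endpoint."""
    base = "https://graph.facebook.com/v12.0/ads_archive"
    params = {
        "search_terms": search_terms,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": countries,
        "fields": "ad_creative_bodies,ad_delivery_start_time,spend",
        "access_token": access_token,
    }
    return base + "?" + urllib.parse.urlencode(params)

url = ad_library_url("election", "['US']", "YOUR_TOKEN")
print(url)
```

Note that this public endpoint exposes spend ranges and reach, not the per-ad targeting criteria, which is precisely the gap the FORT release is meant to address.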

Twitter’s misinformation problem is much bigger than Trump. The crowd may help solve it.


Elizabeth Dwoskin at the Washington Post: “A pilot program called Birdwatch lets selected users write corrections and fact checks on potentially misleading tweets…

The presidential election is over, but the fight against misinformation continues.

The latest volley in that effort comes from Twitter, which on Monday announced Birdwatch, a pilot project that uses crowdsourcing techniques to combat falsehoods and misleading statements on its service.

The pilot, which is open to only about 1,000 select users who can apply to be contributors, will allow people to write notes with corrections and accurate information directly into misleading tweets — a method that has the potential to get quality information to people more quickly than traditional fact-checking. Fact checks that are rated by other contributors as high quality may get bumped up or rewarded with greater visibility.

Birdwatch represents Twitter’s most experimental response to one of the biggest lessons that social media companies drew from the historic events of 2020: that their existing efforts to combat misinformation — including labeling, fact-checking and sometimes removing content — were not enough to prevent falsehoods about a stolen election or the coronavirus from reaching and influencing broad swaths of the population. Researchers who studied enforcement actions by social media companies last year found that fact checks and labels are usually implemented too late, after a post or a tweet has gone viral.

The Birdwatch project — which for the duration of the pilot will function as a separate website — is novel in that it attempts to build new mechanisms into Twitter’s product that foreground fact-checking by its community of 187 million daily users worldwide. Rather than having to comb through replies to tweets to sift through what’s true or false — or having Twitter employees append to a tweet a label providing additional context — users will be able to click on a separate notes folder attached to a tweet where they can see the consensus-driven responses from the community. Twitter will have a team reviewing winning responses to prevent manipulation, though a major question is whether any part of the process will be automated and therefore more easily gamed….(More)”
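Twitter had not published Birdwatch’s ranking mechanism at the pilot stage, so the following is a purely hypothetical sketch of how consensus-driven note ranking could work: contributors rate notes as helpful or not, and notes clearing a minimum number of ratings are ranked by a smoothed helpfulness score. All names, numbers, and the scoring rule are invented for illustration.

```python
# Hypothetical sketch of consensus-driven ranking of community fact-check
# notes. NOT Twitter's actual Birdwatch algorithm; shown only to illustrate
# the idea of surfacing high-quality notes from contributor ratings.
from dataclasses import dataclass

@dataclass
class Note:
    text: str
    helpful: int      # "helpful" ratings from other contributors
    not_helpful: int  # "not helpful" ratings

def helpfulness(note: Note) -> float:
    # Laplace smoothing so a 1-helpful/0-unhelpful note does not outrank
    # a well-rated 90/10 note
    return (note.helpful + 1) / (note.helpful + note.not_helpful + 2)

def rank_notes(notes, min_ratings=5):
    """Keep notes with enough ratings, ranked by smoothed helpfulness."""
    eligible = [n for n in notes if n.helpful + n.not_helpful >= min_ratings]
    return sorted(eligible, key=helpfulness, reverse=True)

notes = [
    Note("Claim is false; see official data.", helpful=90, not_helpful=10),
    Note("Misleading without context.", helpful=4, not_helpful=0),   # too few ratings
    Note("Source misquoted here.", helpful=30, not_helpful=30),
]

for n in rank_notes(notes):
    print(round(helpfulness(n), 2), n.text)
```

Any real deployment would also need defenses against coordinated rating manipulation, which, as the article notes, is exactly the open question Twitter’s review team is meant to address.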

Sharing Student Health Data with Health Agencies: Considerations and Recommendations


Memo by the Center for Democracy and Technology: “As schools respond to COVID-19 on their campuses, some have shared student information with state and local health agencies, often to aid in contact tracing or to provide services to students. Federal and state student privacy laws, however, do not necessarily permit that sharing, and schools should seek to protect both student health and student privacy.

How Are Schools Sharing COVID-Related Student Data?

When it comes to sharing student data, schools’ practices vary widely. For example, the New York City Department of Education provides a consent form for sharing COVID-related student data. Other schools do not have consent forms, but instead, share COVID-related data as required by local or state health agencies. For instance, Orange County Public Schools in Florida assists the local health agency in contact tracing by collecting information such as students’ names and dates of birth. Some districts, such as the Dallas Independent School District in Texas, report positive cases to the county, but do not publicly specify what information is reported. Many schools, however, do not publicly disclose their collection and sharing of COVID-related student data….(More)”