AI Ethics: Global Perspectives


The AI Ethics: Global Perspectives course released three new video modules this week.

  • In “AI Ethics and Hate Speech”, Maha Jouini from the African Center for Artificial Intelligence and Digital Technology explores the intersection of AI and hate speech in the context of the MENA region.
  • Maxime Ducret of the University of Lyon and Carl Mörch from the FARI AI Institute for the Common Good introduce the ethical implications of AI technologies in dentistry in their module “How Your Teeth, Your Smile and AI Ethics are Related”.
  • And finally, in “Ethics in AI for Peace”, AI for Peace’s Branka Panic talks about how the “algo age” brought with it many technical, legal, and ethical questions that exceeded the scope of existing peacebuilding and peacetech ethics frameworks.

To watch these lectures in full and register for the course, visit our website.

The Low Threshold for Face Recognition in New Delhi


Article by Varsha Bansal: “Indian law enforcement is starting to place huge importance on facial recognition technology. Delhi police, seeking to identify people involved in civil unrest in northern India in the past few years, said that they would treat 80 percent accuracy and above as a “positive” match, according to documents obtained by the Internet Freedom Foundation through a public records request.

Facial recognition’s arrival in India’s capital region marks the expansion of Indian law enforcement officials using facial recognition data as evidence for potential prosecution, ringing alarm bells among privacy and civil liberties experts. There are also concerns about the 80 percent accuracy threshold, which critics say is arbitrary and far too low, given the potential consequences for those marked as a match. India’s lack of a comprehensive data protection law makes matters even more concerning.

The documents further state that even if a match is under 80 percent, it would be considered a “false positive” rather than a negative, which would make that individual “subject to due verification with other corroborative evidence.”

“This means that even though facial recognition is not giving them the result that they themselves have decided is the threshold, they will continue to investigate,” says Anushka Jain, associate policy counsel for surveillance and technology with the IFF, who filed for this information. “This could lead to harassment of the individual just because the technology is saying that they look similar to the person the police are looking for.” She added that this move by the Delhi Police could also result in harassment of people from communities that have been historically targeted by law enforcement officials…(More)”
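
Stripped of context, the rule described in the documents reduces to a one-line decision procedure. Here is a minimal sketch of that threshold logic, assuming a similarity score between 0 and 1; the function name and labels are illustrative, not the actual Delhi Police system:

```python
# Illustrative sketch of the threshold rule described in the documents.
# The scoring scale and labels are hypothetical, not the actual system.

MATCH_THRESHOLD = 0.80

def classify_match(similarity: float) -> str:
    """Classify a face-match score per the reported Delhi Police rule."""
    if similarity >= MATCH_THRESHOLD:
        return "positive"  # treated as a match
    # Below the threshold, the documents say, the result is a "false
    # positive" rather than a negative: still subject to verification
    # against other corroborative evidence.
    return "false positive (subject to due verification)"

print(classify_match(0.83))  # positive
print(classify_match(0.41))  # false positive (subject to due verification)
```

Note that under this rule no score is ever classified as a negative; every below-threshold result stays open for further investigation, which is precisely the concern critics raise.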

Spirals of Delusion: How AI Distorts Decision-Making and Makes Dictators More Dangerous


Essay by Henry Farrell, Abraham Newman, and Jeremy Wallace: “In policy circles, discussions about artificial intelligence invariably pit China against the United States in a race for technological supremacy. If the key resource is data, then China, with its billion-plus citizens and lax protections against state surveillance, seems destined to win. Kai-Fu Lee, a famous computer scientist, has claimed that data is the new oil, and China the new OPEC. If superior technology is what provides the edge, however, then the United States, with its world-class university system and talented workforce, still has a chance to come out ahead. For either country, pundits assume that superiority in AI will lead naturally to broader economic and military superiority.

But thinking about AI in terms of a race for dominance misses the more fundamental ways in which AI is transforming global politics. AI will not transform the rivalry between powers so much as it will transform the rivals themselves. The United States is a democracy, whereas China is an authoritarian regime, and machine learning challenges each political system in its own way. The challenges to democracies such as the United States are all too visible. Machine learning may increase polarization—reengineering the online world to promote political division. It will certainly increase disinformation in the future, generating convincing fake speech at scale. The challenges to autocracies are more subtle but possibly more corrosive. Just as machine learning reflects and reinforces the divisions of democracy, it may confound autocracies, creating a false appearance of consensus and concealing underlying societal fissures until it is too late.

Early pioneers of AI, including the political scientist Herbert Simon, realized that AI technology has more in common with markets, bureaucracies, and political institutions than with simple engineering applications. Another pioneer of artificial intelligence, Norbert Wiener, described AI as a “cybernetic” system—one that can respond and adapt to feedback. Neither Simon nor Wiener anticipated how machine learning would dominate AI, but its evolution fits with their way of thinking. Facebook and Google use machine learning as the analytic engine of a self-correcting system, which continually updates its understanding of the data depending on whether its predictions succeed or fail. It is this loop between statistical analysis and feedback from the environment that has made machine learning such a formidable force…(More)”
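
That prediction-feedback loop is the basic shape of online machine learning. A toy sketch of such a self-correcting system, with synthetic data standing in for the environment (an illustration, not any platform's actual engine):

```python
# Minimal sketch of a cybernetic prediction-feedback loop: the model
# predicts, the environment responds, and the model updates itself.
# Toy example with synthetic data; not any platform's actual system.
import random

weight = 0.0          # single-parameter model: predict y = weight * x
learning_rate = 0.05

def environment(x: float) -> float:
    """Hidden process the model is trying to track (unknown to it)."""
    return 2.0 * x + random.gauss(0, 0.1)

for step in range(1000):
    x = random.uniform(-1, 1)
    prediction = weight * x
    feedback = environment(x)            # observe the actual outcome
    error = prediction - feedback
    weight -= learning_rate * error * x  # self-correct from feedback

print(f"learned weight = {weight:.2f}")  # converges toward 2.0
```

The same loop that lets the system self-correct is what misleads it when the feedback itself is skewed, the dynamic the essay traces in autocracies.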

Voices in the Code: A Story about People, Their Values, and the Algorithm They Made


Book by David G. Robinson: “Algorithms—rules written into software—shape key moments in our lives: from who gets hired or admitted to a top public school, to who should go to jail or receive scarce public benefits. Today, high-stakes software is rarely open to scrutiny, but its code navigates moral questions: Which of a person’s traits are fair to consider as part of a job application? Who deserves priority in accessing scarce public resources, whether those are school seats, housing, or medicine? When someone first appears in a courtroom, how should their freedom be weighed against the risks they might pose to others?

Policymakers and the public often find algorithms to be complex, opaque, and intimidating—and it can be tempting to pretend that hard moral questions have simple technological answers. But that approach leaves technical experts holding the moral microphone, and it stops people who lack technical expertise from making their voices heard. Today, policymakers and scholars are seeking better ways to share the moral decision-making within high-stakes software—exploring ideas like public participation, transparency, forecasting, and algorithmic audits. But there are few real examples of those techniques in use.

In Voices in the Code, scholar David G. Robinson tells the story of how one community built a life-and-death algorithm in a relatively inclusive, accountable way. Between 2004 and 2014, a diverse group of patients, surgeons, clinicians, data scientists, public officials and advocates collaborated and compromised to build a new transplant matching algorithm – a system to offer donated kidneys to particular patients from the U.S. national waiting list…(More)”.
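
For a sense of what "values written into code" means in practice, consider a deliberately simplified, hypothetical points-based ranking. The factors and weights below are invented for illustration and are not the actual U.S. kidney allocation rules; every numeric weight is a moral choice of exactly the kind the book's participants debated.

```python
# Hypothetical points-based ranking in the spirit of a transplant
# matching algorithm. Factors and weights are invented for
# illustration; they are NOT the actual U.S. allocation rules.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    years_waiting: float
    tissue_mismatches: int   # 0 (best biological match) to 6 (worst)
    is_pediatric: bool

def score(c: Candidate) -> float:
    pts = 1.0 * c.years_waiting             # waiting time accrues points
    pts += 2.0 * (6 - c.tissue_mismatches)  # reward better tissue match
    pts += 4.0 if c.is_pediatric else 0.0   # priority for children
    return pts

waiting_list = [
    Candidate("A", years_waiting=5.0, tissue_mismatches=4, is_pediatric=False),
    Candidate("B", years_waiting=2.0, tissue_mismatches=0, is_pediatric=True),
]
# Offer the kidney to candidates in descending score order.
for c in sorted(waiting_list, key=score, reverse=True):
    print(c.name, round(score(c), 1))
```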

China May Be Chasing Impossible Dream by Trying to Harness Internet Algorithms


Article by Karen Hao: “China’s powerful cyberspace regulator has taken the first step in a pioneering—and uncertain—government effort to rein in the automated systems that shape the internet.

Earlier this month, the Cyberspace Administration of China published summaries of 30 core algorithms belonging to two dozen of the country’s most influential internet companies, including TikTok owner ByteDance Ltd., e-commerce behemoth Alibaba Group Holding Ltd. and Tencent Holdings Ltd., owner of China’s ubiquitous WeChat super app.

The milestone marks the first systematic effort by a regulator to compel internet companies to reveal information about the technologies powering their platforms, which have shown the capacity to radically alter everything from pop culture to politics. It also puts Beijing on a path that some technology experts say few governments, if any, are equipped to handle….

One important question the effort raises, algorithm experts say, is whether direct government regulation of algorithms is practically possible.

The majority of today’s internet platform algorithms are based on a technology called machine learning, which automates decisions such as ad-targeting by learning to predict user behaviors from vast repositories of data. Unlike traditional algorithms that contain explicit rules coded by engineers, most machine-learning systems are black boxes, making it hard to decipher their logic or anticipate the consequences of their use.
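
The contrast can be made concrete. In the sketch below (a toy example using scikit-learn, not any company's actual system), the first function is a traditional algorithm whose rule could be disclosed verbatim; the second is a learned model whose behavior lives in thousands of fitted parameters:

```python
# Contrast: an explicit hand-coded rule vs. a learned black box.
# Toy example; not any company's actual ranking or targeting system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rule_based_targeting(age: int, clicked_sports: bool) -> bool:
    """Traditional algorithm: the rule itself can be disclosed verbatim."""
    return age < 30 and clicked_sports

# Machine-learning version: behavior is induced from data.
rng = np.random.default_rng(0)
X = rng.random((1000, 20))     # 20 behavioral features per user
y = X[:, 3] + X[:, 7] > 1.0    # hidden pattern in the training data
model = RandomForestClassifier(n_estimators=100).fit(X, y)

# The "algorithm" is now ~100 trees of fitted thresholds. A summary
# filed with a regulator can describe its inputs and purpose, but not
# a rule a human can read off.
print(model.predict(rng.random((1, 20))))
```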

Beijing’s interest in regulating algorithms started in 2020, after TikTok sought an American buyer to avoid being banned in the U.S., according to people familiar with the government’s thinking. When several bidders for the short-video platform lost interest after Chinese regulators announced new export controls on information-recommendation technology, it tipped off Beijing to the importance of algorithms, the people said…(More)”.

Public preferences for governing AI technology: Comparative evidence


Paper by Soenke Ehret: “Citizens’ attitudes concerning aspects of AI such as transparency, privacy, and discrimination have received considerable attention. However, it is an open question to what extent economic consequences affect preferences for public policies governing AI. When does the public demand imposing restrictions on – or even prohibiting – emerging AI technologies? Do average citizens’ preferences depend causally on normative and economic concerns or only on one of these causes? If both, how might economic risks and opportunities interact with assessments based on normative factors? And to what extent does the balance between the two kinds of concerns vary by context? I answer these questions using a comparative conjoint survey experiment conducted in Germany, the United Kingdom, India, Chile, and China. The data analysis suggests strong effects regarding AI systems’ economic and normative attributes. Moreover, I find considerable cross-country variation in normative preferences regarding the prohibition of AI systems vis-à-vis economic concerns…(More)”.
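
Conjoint designs of this kind are typically analyzed by regressing the choice outcome on dummy-coded attributes to estimate average marginal component effects (AMCEs). A minimal sketch with invented data follows; the column names and values are hypothetical, not the paper's actual design or dataset.

```python
# Sketch of a typical conjoint analysis: regress the choice outcome on
# dummy-coded attributes to approximate AMCEs. Data and columns are
# invented; this is not the paper's actual design or dataset.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "chose_ban": [1, 0, 0, 1, 1, 0, 1, 0],        # respondent chose to prohibit
    "job_losses": ["high", "low", "low", "high",
                   "high", "low", "high", "low"],  # economic attribute
    "discriminates": ["yes", "no", "yes", "yes",
                      "no", "no", "yes", "no"],    # normative attribute
})

# Linear probability model; a real analysis would cluster standard
# errors by respondent and use far more profiles.
model = smf.ols("chose_ban ~ C(job_losses) + C(discriminates)", data=df).fit()
print(model.params)  # coefficients approximate the AMCEs
```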

Artificial intelligence was supposed to transform health care. It hasn’t.


Article by Ben Leonard and Ruth Reader: “Artificial intelligence is spreading into health care, often as software or a computer program capable of learning from large amounts of data and making predictions to guide care or help patients.

Investors see health care’s future as inextricably linked with artificial intelligence. That’s obvious from the cash pouring into AI-enabled digital health startups, including more than $3 billion in the first half of 2022 alone and nearly $10 billion in 2021, according to a Rock Health investment analysis commissioned by POLITICO.

And no wonder, considering the bold predictions technologists have made. At a conference in 2016, Geoffrey Hinton, British cognitive psychologist and “godfather” of AI, said radiologists would soon go the way of typesetters and bank tellers: “People should stop training radiologists now. It’s just completely obvious that, within five years, deep learning is going to do better.”

But more than five years since Hinton’s forecast, radiologists are still training to read image scans. Instead of replacing doctors, health system administrators now see AI as a tool clinicians will use to improve everything from their diagnoses to billing practices. AI hasn’t lived up to the hype, medical experts said, because health systems’ infrastructure isn’t ready for it yet. And the government is just beginning to grapple with its regulatory role.

“Companies come in promising the world and often don’t deliver,” said Bob Wachter, head of the department of medicine at the University of California, San Francisco. “When I look for examples of … true AI and machine learning that’s really making a difference, they’re pretty few and far between. It’s pretty underwhelming.”

Administrators say algorithms — the software that processes data — from outside companies don’t always work as advertised because each health system has its own technological framework. So hospitals are building out engineering teams and developing artificial intelligence and other technology tailored to their own needs.

But it’s slow going. Research based on job postings shows health care trailing every industry except construction in adopting AI…(More)”.

Designing Human-Centric AI Experiences


Book by Akshay Kore: “User experience (UX) design practices have seen a fundamental shift as more and more software products incorporate machine learning (ML) components and artificial intelligence (AI) algorithms at their core. This book probes UX design’s role in making technologies inclusive and enabling user collaboration with AI.

AI/ML-based systems have changed traditional UX design. Instead of programming a method to perform a specific action, the creators of these systems provide data and train them to produce outcomes based on inputs. These systems are dynamic: while the AI changes over time, its user experience in many cases does not adapt to this dynamic nature.

Applied UX Design for Artificial Intelligence explores this problem, addressing the challenges and opportunities in UX design for AI/ML systems, looking at best practices for designers, managers, and product creators, and showcasing how individuals from a non-technical background can collaborate effectively with AI and machine learning teams…(More)”.

Algorithms for Decision Making


Book by Mykel J. Kochenderfer, Tim A. Wheeler and Kyle H. Wray: “Automated decision-making systems or decision-support systems—used in applications that range from aircraft collision avoidance to breast cancer screening—must be designed to account for various sources of uncertainty while carefully balancing multiple objectives. This textbook provides a broad introduction to algorithms for decision making under uncertainty, covering the underlying mathematical problem formulations and the algorithms for solving them.

The book first addresses the problem of reasoning about uncertainty and objectives in simple decisions at a single point in time, and then turns to sequential decision problems in stochastic environments where the outcomes of our actions are uncertain. It goes on to address model uncertainty, when we do not start with a known model and must learn how to act through interaction with the environment; state uncertainty, in which we do not know the current state of the environment due to imperfect perceptual information; and decision contexts involving multiple agents. The book focuses primarily on planning and reinforcement learning, although some of the techniques presented draw on elements of supervised learning and optimization. Algorithms are implemented in the Julia programming language. Figures, examples, and exercises convey the intuition behind the various approaches presented…(More)”
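
The book's own implementations are in Julia; for a flavor of the subject matter, here is a minimal Python sketch of value iteration on a toy Markov decision process. The MDP itself is invented for illustration.

```python
# Minimal value iteration on a toy MDP, illustrating "sequential
# decision problems in stochastic environments." The MDP is invented
# for illustration. (The book's own implementations are in Julia.)

# States 0..2, actions 0..1; T[s][a] = list of (prob, next_state, reward).
T = {
    0: {0: [(0.8, 1, 0.0), (0.2, 0, 0.0)], 1: [(1.0, 0, 0.1)]},
    1: {0: [(0.9, 2, 0.0), (0.1, 0, 0.0)], 1: [(1.0, 1, 0.1)]},
    2: {0: [(1.0, 2, 1.0)],                1: [(1.0, 2, 1.0)]},
}
gamma = 0.95  # discount factor

V = {s: 0.0 for s in T}
for _ in range(200):  # iterate the Bellman backup to near-convergence
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in T[s][a])
            for a in T[s]
        )
        for s in T
    }
print({s: round(v, 2) for s, v in V.items()})
```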

AI-powered cameras to enforce bus lanes


Article by Chris Teale: “New York’s Metropolitan Transportation Authority will use an automated camera system to ensure bus lanes in New York City are free from illegally parked vehicles.

The MTA is partnering with Hayden AI to deploy Automated Bus Lane Enforcement camera systems, mounted on the interior of the windshield and powered by artificial intelligence, to 300 buses. The agency has the option to add the cameras to 200 more buses if it chooses.

Chris Carson, Hayden AI’s CEO and co-founder, said when the cameras detect an encroachment on a bus lane, they use real-time automated license plate recognition and edge computing to compile a packet of evidence that includes the time, date and location of the offense, as well as a brief video that shows the violator’s license plate. 

That information is encrypted and sent securely to the cloud, where MTA officials can access and analyze it for violations. If there is no encroachment on a bus lane, the cameras do not record anything…
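
The flow Carson describes (detect, package, encrypt, upload) is easy to sketch. The field names and encryption choice below are illustrative assumptions, not Hayden AI's actual format or code:

```python
# Rough sketch of the detect -> package -> encrypt -> upload flow
# described above. Field names and the encryption scheme are
# illustrative; this is not Hayden AI's actual format or code.
import json
from dataclasses import dataclass, asdict
from cryptography.fernet import Fernet

@dataclass
class EvidencePacket:
    timestamp_utc: str   # time and date of the offense
    lat: float           # location of the offense
    lon: float
    plate: str           # from automated license plate recognition
    video_clip_id: str   # short clip showing the violator's plate

def package_violation(packet: EvidencePacket, key: bytes) -> bytes:
    """Serialize and encrypt an evidence packet before cloud upload."""
    payload = json.dumps(asdict(packet)).encode()
    return Fernet(key).encrypt(payload)

key = Fernet.generate_key()
packet = EvidencePacket("2022-09-26T08:14:02Z", 40.7527, -73.9772,
                        "ABC1234", "clip-000017")
ciphertext = package_violation(packet, key)  # ready to send to the cloud
```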

An MTA spokesperson said the agency will also use data from the system to identify the locations with the most instances of vehicles blocking bus lanes. New York City has 140 miles of bus lanes and plans to build 150 more miles in the next four years, but congestion and lane violations by other road users slow the buses. The city already uses cameras and police patrols to attempt to enforce proper bus lane use…(More)”.