Paper by Nathalie A. Smuha: “In this paper, I distinguish three types of harm that can arise in the context of artificial intelligence (AI): individual harm, collective harm and societal harm. Societal harm is often overlooked, yet not reducible to the former two types of harm. Moreover, mechanisms to tackle individual and collective harm raised by AI are not always suitable to counter societal harm. As a result, policymakers’ gap analysis of the current legal framework for AI not only risks being incomplete, but proposals for new legislation to bridge these gaps may also inadequately protect societal interests that are adversely impacted by AI. By conceptualising AI’s societal harm, I argue that a shift in perspective is needed beyond the individual, towards a regulatory approach to AI that addresses its effects on society at large. Drawing on a legal domain specifically aimed at protecting a societal interest—environmental law—I identify three ‘societal’ mechanisms that EU policymakers should consider in the context of AI. These concern (1) public oversight mechanisms to increase accountability, including mandatory impact assessments with the opportunity to provide societal feedback; (2) public monitoring mechanisms to ensure independent information gathering and dissemination about AI’s societal impact; and (3) the introduction of procedural rights with a societal dimension, including a right of access to information, access to justice, and participation in public decision-making on AI, regardless of the demonstration of individual harm. Finally, I consider to what extent the European Commission’s new proposal for an AI regulation takes these mechanisms into consideration, before offering concluding remarks….(More)”.
Americans Need a Bill of Rights for an AI-Powered World
Article by Eric Lander and Alondra Nelson: “…Soon after ratifying our Constitution, Americans adopted a Bill of Rights to guard against the powerful government we had just created—enumerating guarantees such as freedom of expression and assembly, rights to due process and fair trials, and protection against unreasonable search and seizure. Throughout our history we have had to reinterpret, reaffirm, and periodically expand these rights. In the 21st century, we need a “bill of rights” to guard against the powerful technologies we have created.
Our country should clarify the rights and freedoms we expect data-driven technologies to respect. What exactly those are will require discussion, but here are some possibilities: your right to know when and how AI is influencing a decision that affects your civil rights and civil liberties; your freedom from being subjected to AI that hasn’t been carefully audited to ensure that it is accurate, unbiased, and trained on sufficiently representative data sets; your freedom from pervasive or discriminatory surveillance and monitoring in your home, community, and workplace; and your right to meaningful recourse if the use of an algorithm harms you.
Of course, enumerating the rights is just a first step. What might we do to protect them? Possibilities include the federal government refusing to buy software or technology products that fail to respect these rights, requiring federal contractors to use technologies that adhere to this “bill of rights,” or adopting new laws and regulations to fill gaps. States might choose to adopt similar practices….(More)”.
Algorithms Are Not Enough: Creating General Artificial Intelligence
Book by Herbert L. Roitblat: “Since the inception of artificial intelligence, we have been warned about the imminent arrival of computational systems that can replicate human thought processes. Before we know it, computers will become so intelligent that humans will be lucky to be kept as pets. And yet, although artificial intelligence has become increasingly sophisticated—with such achievements as driverless cars and humanless chess-playing—computer science has not yet created general artificial intelligence. In Algorithms Are Not Enough, Herbert Roitblat explains how artificial general intelligence may be possible and why a robopocalypse is neither imminent nor likely.
Existing artificial intelligence, Roitblat shows, has been limited to solving path problems, in which the entire problem consists of navigating a path of choices—finding specific solutions to well-structured problems. Human problem-solving, on the other hand, includes problems that consist of ill-structured situations, including the design of problem-solving paths themselves. These are insight problems, and insight is an essential part of intelligence that has not been addressed by computer science. Roitblat draws on cognitive science, including psychology, philosophy, and history, to identify the essential features of intelligence needed to achieve general artificial intelligence.
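To make the “path problem” framing concrete, here is a minimal sketch (our illustration, not code from the book), assuming a toy graph of choices: breadth-first search navigates a fixed, well-structured space from a given start to a given goal.

```python
from collections import deque

def solve_path_problem(start, goal, successors):
    """Breadth-first search over a well-structured state space.

    Every path problem in this narrow sense shares the same shape:
    a fixed start state, a fixed goal test, and a fixed set of moves.
    The algorithm never questions or redesigns the problem itself.
    """
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path  # first path found is a shortest one
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no path exists

# Toy example: navigate a tiny graph of choices.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(solve_path_problem("A", "D", lambda s: graph[s]))  # ['A', 'B', 'D']
```

What the sketch leaves out is precisely the point: nothing in this code can notice that a problem is badly posed or invent a new representation of it, which is the insight step Roitblat argues lies beyond current computer science.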
Roitblat describes current computational approaches to intelligence, including the Turing Test, machine learning, and neural networks. He identifies building blocks of natural intelligence, including perception, analogy, ambiguity, common sense, and creativity. General intelligence can create new representations to solve new problems, but current computational intelligence cannot. The human brain, like the computer, uses algorithms; but general intelligence, he argues, is more than algorithmic processes…(More)”.
Statistics and Data Science for Good
Introduction to Special Issue of CHANCE by Caitlin Augustin, Matt Brems, and Davina P. Durgana: “One lesson that our team has taken from the past 18 months is that no individual, no team, and no organization can be successful on their own. We’ve been grateful and humbled to witness incredible collaboration—taking the form of resource sharing, knowledge exchange, and reimagined outcomes. Some advances, like breakthrough medicine, have been widely publicized. Other advances have received less fanfare. All of these advances are in the public interest and demonstrate how collaborations can be done “for good.”
In reading this issue, we hope that you realize the power of diverse multidisciplinary collaboration; you recognize the positive social impact that statisticians, data scientists, and technologists can have; and you learn that this isn’t limited to companies with billions of dollars or teams of dozens of people. You, our reader, can get involved in similar positive social change.
This special edition of CHANCE focuses on using data and statistics for the public good and on highlighting collaborations and innovations that have been sparked by partnerships between pro bono institutions and social impact partners. We recognize that the “pro bono” or “for good” field is vast, and we welcome all actors working in the public interest into the big tent.
Through the focus of this edition, we hope to demonstrate how new or novel collaborations might spark meaningful and lasting positive change in communities, sectors, and industries. Anchored by work led through Statistics Without Borders and DataKind, this edition features reporting on projects that touch on many of the United Nations Sustainable Development Goals (SDGs).
Pro bono volunteerism is one way of democratizing access to high-skill, high-expense services that are often unattainable for social impact organizations. Statistics Without Borders (founded in 2008), DataKind (founded in 2012), and numerous other volunteer organizations began with this model in mind: If there was an organizing or galvanizing body that could coordinate the myriad requests for statistical, data science, machine learning, or data engineering help, there would be a ready supply of talented individuals who would want to volunteer to see those projects through. Or, put another way, “If you build it, they will come.”
Doing pro bono work requires more than positive intent. Plenty of well-meaning organizations and individuals charitably donate their time, their energy, and their expertise, only to have an unintended adverse impact. To do work for good, ethics must be an integral part of every project. In this issue, you’ll notice the writers’ attention to institutional review boards (IRBs), respecting client and data privacy, discussing ethical considerations of methods used, and so on.
While no single publication can fully capture the great work of pro bono organizations working in “data for good,” we hope readers will be inspired to contribute to open source projects, solve problems in a new way, or even volunteer themselves for a future cohort of projects. We’re thrilled that this special edition represents programs, partners, and volunteers from around the world. You will learn about work that is truly representative of the SDGs, such as international health organizations’ work in Uganda, political justice organizations in Kenya, and conservationists in Madagascar, to name a few.
Several articles describe projects that are contextualized with the SDGs. While many of the goals are interconnected, such as the intertwining of economic attainment and poverty reduction, we hope that calling out key themes here will whet your appetite for exploration.
- Multiple articles focused on tackling aspects of SDG 3: Ensuring healthy lives and promoting well-being for people at all ages.
- An article tackling SDG 8: Promote sustained, inclusive, and sustainable economic growth; full and productive employment; and decent work for all.
- Several articles touching on SDG 9: Build resilient infrastructure, promote inclusive and sustainable industrialization, and foster innovation; one is a reflection on building and sustaining free and open source software as a public good.
- A handful of articles highlighting the need for capacity-building and systems-strengthening aligned to SDG 16: Promote peaceful and inclusive societies for sustainable development; provide access to justice for all; and build effective, accountable, and inclusive institutions at all levels.
- An article about migration along the southern borders of the United States addressing multiple issues related to poverty (SDG 1), opportunity (SDG 10), and peace and justice (SDG 16)….(More)”.
Gathering Strength, Gathering Storms
The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report: “In the five years since we released the first AI100 report, much has been written about the state of artificial intelligence and its influences on society. Nonetheless, AI100 remains unique in its combination of two key features. First, it is written by a Study Panel of core multidisciplinary researchers in the field—experts who create artificial intelligence algorithms or study their influence on society as their main professional activity, and who have been doing so for many years. The authors are firmly rooted within the field of AI and provide an “insider’s” perspective. Second, it is a longitudinal study, with reports by such Study Panels planned once every five years, for at least one hundred years.
This report, the second in that planned series of studies, is being released five years after the first. Published on September 1, 2016, the first report was covered widely in the popular press and is known to have influenced discussions on governmental advisory boards and workshops in multiple countries. It has also been used in a variety of artificial intelligence curricula.
In preparation for the second Study Panel, the Standing Committee commissioned two study-workshops held in 2019. These workshops were a response to feedback on the first AI100 report. Through them, the Standing Committee aimed to engage a broader, multidisciplinary community of scholars and stakeholders in its next study. The goal of the workshops was to draw on the expertise of computer scientists and engineers, scholars in the social sciences and humanities (including anthropologists, economists, historians, media scholars, philosophers, psychologists, and sociologists), law and public policy experts, and representatives from business management as well as the private and public sectors…(More)”.
Old Cracks, New Tech
Paper for the Oxford Commission on AI & Good Governance: “Artificial intelligence (AI) systems are increasingly touted as solutions to many complex social and political issues around the world, particularly in developing countries like Kenya. Yet AI has also exacerbated cleavages and divisions in society, in part because those who build the technology often do not have a strong understanding of the politics of the societies in which the technology is deployed.
In her new report ‘Old Cracks, New Tech: Artificial Intelligence, Human Rights, and Good Governance in Highly Fragmented and Socially Stratified Societies: The Case of Kenya’, writer and activist Nanjala Nyabola explores the Kenyan government’s policy on AI and blockchain technology and evaluates its success. Commissioned by the Oxford Commission on AI & Good Governance (OxCAIGG), the report highlights lessons learnt from the Kenyan experience and sets out four key recommendations to help government officials and policymakers ensure good governance in AI in public and private contexts in Kenya.
The report recommends:
- Conducting a deeper and more wide-ranging analysis of the political implications of existing and proposed applications of AI in Kenya, including comparisons with other countries where similar technology has been deployed.
- Carrying out a comprehensive review of ongoing implementations of AI in both private and public contexts in Kenya in order to identify existing legal and policy gaps.
- Conducting deeper legal research into developing meaningful legislation to govern the development and deployment of AI technology in Kenya. In particular, a framework for the implementation of the Data Protection Act (2019) vis-à-vis AI and blockchain technology is urgently required.
- Arranging training for local political actors and researchers on the risks and opportunities of AI, empowering them to independently evaluate proposed interventions with due attention to the local context…(More)”.
Harms of AI
Paper by Daron Acemoglu: “This essay discusses several potential economic, political and social costs of the current path of AI technologies. I argue that if AI continues to be deployed along its current trajectory and remains unregulated, it may produce various social, economic and political harms. These include: damaging competition, consumer privacy and consumer choice; excessively automating work, fueling inequality, inefficiently pushing down wages, and failing to improve worker productivity; and damaging political discourse, democracy’s most fundamental lifeblood. Although there is no conclusive evidence suggesting that these costs are imminent or substantial, it may be useful to understand them before they are fully realized and become harder or even impossible to reverse, precisely because of AI’s promising and wide-reaching potential. I also suggest that these costs are not inherent to the nature of AI technologies, but are related to how they are being used and developed at the moment – to empower corporations and governments against workers and citizens. As a result, efforts to limit and reverse these costs may need to rely on regulation and policies to redirect AI research. Attempts to contain them just by promoting competition may be insufficient….(More)”.
UN urges moratorium on use of AI that imperils human rights
Jamey Keaten and Matt O’Brien at the Washington Post: “The U.N. human rights chief is calling for a moratorium on the use of artificial intelligence technology that poses a serious risk to human rights, including face-scanning systems that track people in public spaces.
Michelle Bachelet, the U.N. High Commissioner for Human Rights, also said Wednesday that countries should expressly ban AI applications which don’t comply with international human rights law.
Applications that should be prohibited include government “social scoring” systems that judge people based on their behavior, and certain AI-based tools that categorize people into clusters by characteristics such as ethnicity or gender.
AI-based technologies can be a force for good but they can also “have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said in a statement.
Her comments came along with a new U.N. report that examines how countries and businesses have rushed into applying AI systems that affect people’s lives and livelihoods without setting up proper safeguards to prevent discrimination and other harms.
“This is not about not having AI,” Peggy Hicks, the rights office’s director of thematic engagement, told journalists as she presented the report in Geneva. “It’s about recognizing that if AI is going to be used in these human rights — very critical — function areas, that it’s got to be done the right way. And we simply haven’t yet put in place a framework that ensures that happens.”
Bachelet didn’t call for an outright ban of facial recognition technology, but said governments should halt the scanning of people’s features in real time until they can show the technology is accurate, won’t discriminate and meets certain privacy and data protection standards….(More)” (Report).
Rule of the Robots
Book by Martin Ford: “If you have a smartphone, you have AI in your pocket. AI is impossible to avoid online. And it has already changed everything from how doctors diagnose disease to how you interact with friends or read the news. But in Rule of the Robots, Martin Ford argues that the true revolution is yet to come.
In this sequel to his prescient New York Times bestseller Rise of the Robots, Ford presents us with a striking vision of the very near future. He argues that AI is a uniquely powerful technology that is altering every dimension of human life, often for the better. For example, advanced science is being done by machines, solving devilish problems in molecular biology that humans could not, and AI can help us fight climate change or the next pandemic. It also has a capacity for profound harm. Deep fakes—AI-generated audio or video of events that never happened—are poised to cause havoc throughout society. AI empowers authoritarian regimes like China with unprecedented mechanisms for social control. And AI can be deeply biased, learning bigoted attitudes from us and perpetuating them.
In short, this is not a technology to simply embrace, or let others worry about. The machines are coming, and they won’t stop, and each of us needs to know what that means if we are to thrive in the twenty-first century. And Rule of the Robots is the essential guide to all of it: both AI and the future of our economy, our politics, our lives…(More)”.
Enrollment algorithms are contributing to the crises of higher education
Paper by Alex Engler: “Hundreds of higher education institutions are procuring algorithms that strategically allocate scholarships to convince more students to enroll. In doing so, these enrollment management algorithms help colleges tailor the cost of attendance to students’ willingness to pay, a crucial aspect of competition in the higher education market. This paper elaborates on the specific two-stage process by which these algorithms first predict how likely prospective students are to enroll, and second help decide how to disburse scholarships to convince more of those prospective students to attend the college. These algorithms are valuable to colleges for institutional planning and financial stability, as well as for reaching their preferred financial, demographic, and scholastic outcomes for the incoming student body.
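As a rough illustration of that two-stage process, the sketch below pairs a predictive model with a greedy aid allocator. Everything in it is a stand-in chosen for clarity, not the vendors’ actual method: the feature names, the synthetic training data, the logistic model, and the budget rule are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stage 1: fit an enrollment-probability model on (synthetic) past admits.
# Hypothetical features per student: academic index, need index, aid offered ($1k).
X = rng.normal(size=(500, 3))
y = (X @ np.array([0.5, -0.8, 1.2]) + rng.normal(size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def enroll_prob(features):
    """Predicted probability of enrolling, for each row of features."""
    return model.predict_proba(features)[:, 1]

# Stage 2: spend a fixed scholarship budget one $1k increment at a time,
# always on the prospect whose predicted enrollment probability rises the
# most, i.e., targeting willingness to pay rather than financial need.
prospects = rng.normal(size=(20, 3))
awards = np.zeros(len(prospects))
for _ in range(15):  # fifteen $1k increments of budget
    bumped = prospects.copy()
    bumped[:, 2] += 1.0  # simulate offering each prospect $1k more aid
    gains = enroll_prob(bumped) - enroll_prob(prospects)
    best = int(np.argmax(gains))
    prospects[best, 2] += 1.0
    awards[best] += 1.0

print("Extra aid ($1k units) per prospect:", awards)
```

Even this toy reproduces the dynamic the paper worries about: the budget flows to the prospects whose enrollment decisions are most price-sensitive, not to those with the greatest need, and nothing in the loop accounts for persistence or graduation.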
Unfortunately, the widespread use of enrollment management algorithms may also be hurting students, especially due to their narrow focus on enrollment. The prevailing evidence suggests that these algorithms generally reduce the amount of scholarship funding offered to students. Further, algorithms excel at identifying a student’s exact willingness to pay, meaning they may drive enrollment while also reducing students’ chances to persist and graduate. The use of this two-step process also opens many subtle channels for algorithmic discrimination to perpetuate unfair financial aid practices. Higher education is already suffering from low graduation rates, high student debt, and stagnant inequality for racial minorities—crises that enrollment algorithms may be making worse.
This paper offers a range of recommendations to ameliorate the risks of enrollment management algorithms in higher education. Categorically, colleges should not use predicted likelihood of enrollment in either the admissions process or in awarding need-based aid—these determinations should be made based only on the applicant’s merit and financial circumstances, respectively. When colleges do use algorithms to distribute scholarships, they should proceed cautiously and document their data, processes, and goals. Colleges should also examine how scholarship changes affect students’ likelihood of graduating, and whether they may deepen inequities between student populations. They should likewise ensure an active role for humans in these processes, such as exclusively using people to evaluate application quality and hiring internal data scientists who can challenge algorithmic specifications. State policymakers should consider the expanding role of these algorithms too, and should try to create more transparency about their use in public institutions. More broadly, policymakers should view enrollment management algorithms as a concerning symptom of pre-existing trends towards higher tuition, more debt, and reduced accessibility in higher education….(More)”.