Paper by Jamie Danemayer, Andrew Young, Siobhan Green, Lydia Ezenwa and Michael Klein: “Innovative, responsible data use is a critical need in the global response to the coronavirus disease-2019 (COVID-19) pandemic. Yet potentially impactful data are often unavailable to those who could utilize them, particularly in data-poor settings, posing a serious barrier to effective pandemic mitigation. Data challenges, a public call-to-action for innovative data use projects, can identify and address these specific barriers. To understand gaps and progress relevant to effective data use in this context, this study thematically analyses three sets of qualitative data focused on/based in low/middle-income countries: (a) a survey of innovators responding to a data challenge, (b) a survey of organizers of data challenges, and (c) a focus group discussion with professionals using COVID-19 data for evidence-based decision-making. Data quality and accessibility and human resources/institutional capacity were frequently reported limitations to effective data use among innovators. New fit-for-purpose tools and the expansion of partnerships were the most frequently noted areas of progress. Discussion participants identified that building capacity for external/national actors to understand the needs of local communities can address a lack of partnerships while de-siloing information. A synthesis of themes demonstrated that gaps, progress, and needs commonly identified by these groups are relevant beyond COVID-19, highlighting the importance of a healthy data ecosystem to address emerging threats. This is supported by data holders prioritizing the availability and accessibility of their data without causing harm; funders and policymakers committed to integrating innovations with existing physical, data, and policy infrastructure; and innovators designing sustainable, multi-use solutions based on principles of good data governance…(More)”.
Eye of the Beholder: Defining AI Bias Depends on Your Perspective
Article by Mike Barlow: “…Today’s conversations about AI bias tend to focus on high-visibility social issues such as racism, sexism, ageism, homophobia, transphobia, xenophobia, and economic inequality. But there are dozens and dozens of known biases (e.g., confirmation bias, hindsight bias, availability bias, anchoring bias, selection bias, loss aversion bias, outlier bias, survivorship bias, omitted variable bias, and many, many others). Jeff Desjardins, founder and editor-in-chief at Visual Capitalist, has published a fascinating infographic depicting 188 cognitive biases – and those are just the ones we know about.
Ana Chubinidze, founder of AdalanAI, a Berlin-based AI governance startup, worries that AIs will develop their own invisible biases. Currently, the term “AI bias” refers mostly to human biases that are embedded in historical data. “Things will become more difficult when AIs begin creating their own biases,” she says.
She foresees that AIs will find correlations in data and assume they are causal relationships—even if those relationships don’t exist in reality. Imagine, she says, an edtech system with an AI that poses increasingly difficult questions to students based on their ability to answer previous questions correctly. The AI would quickly develop a bias about which students are “smart” and which aren’t, even though we all know that answering questions correctly can depend on many factors, including hunger, fatigue, distraction, and anxiety.
Nevertheless, the edtech AI’s “smarter” students would get challenging questions and the rest would get easier questions, resulting in unequal learning outcomes that might not be noticed until the semester is over—or might not be noticed at all. Worse yet, the AI’s bias would likely find its way into the system’s database and follow the students from one class to the next…
As we apply AI more widely and grapple with its implications, it becomes clear that bias itself is a slippery and imprecise term, especially when it is conflated with the idea of unfairness. Just because a solution to a particular problem appears “unbiased” doesn’t mean that it’s fair, and vice versa.
“There is really no mathematical definition for fairness,” Stoyanovich says. “Things that we talk about in general may or may not apply in practice. Any definitions of bias and fairness should be grounded in a particular domain. You have to ask, ‘Whom does the AI impact? What are the harms and who is harmed? What are the benefits and who benefits?’”…(More)”.
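Chubinidze’s edtech scenario is easy to make concrete. Below is a minimal, hypothetical Python sketch (our illustration, not from the article; every name and threshold is invented) of a naive adaptive-difficulty loop: a brief stretch of fatigue early on can push a student toward easier questions, and the “struggling” label the system writes to its database is never revisited, even after performance recovers.

```python
import random

class AdaptiveQuizzer:
    """Toy adaptive-difficulty engine (hypothetical, for illustration only)."""

    def __init__(self):
        self.difficulty = 0.5   # 0 = easiest question pool, 1 = hardest
        self.label = "average"  # the judgment the system stores about the student

    def update(self, answered_correctly: bool) -> None:
        # Naive feedback loop: difficulty tracks recent performance only.
        step = 0.1 if answered_correctly else -0.1
        self.difficulty = min(1.0, max(0.0, self.difficulty + step))
        # Once written, the label persists in the database and is never revisited.
        if self.label == "average":
            if self.difficulty > 0.7:
                self.label = "smart"
            elif self.difficulty < 0.3:
                self.label = "struggling"


def simulate(skill: float, fatigued_rounds: int, rounds: int = 50) -> str:
    """One student with fixed skill; fatigue temporarily halves their accuracy."""
    quiz = AdaptiveQuizzer()
    for i in range(rounds):
        p_correct = skill * (0.5 if i < fatigued_rounds else 1.0)
        # Harder questions are also harder to answer correctly.
        p_correct *= 1.0 - 0.4 * quiz.difficulty
        quiz.update(random.random() < p_correct)
    return quiz.label


random.seed(0)
print(simulate(skill=0.8, fatigued_rounds=0))   # typically "smart"
print(simulate(skill=0.8, fatigued_rounds=15))  # typically "struggling": same skill,
                                                # but early fatigue anchors the label
```

The specific thresholds do not matter; the structure does. A label derived from a noisy, confounded signal is treated as durable ground truth and follows the student forward, which is exactly the kind of self-made bias the article warns about.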
AI Ethics
Textbook by Paula Boddington: “This book introduces readers to critical ethical concerns in the development and use of artificial intelligence. Offering clear and accessible information on central concepts and debates in AI ethics, it explores how related problems are now forcing us to address fundamental, age-old questions about human life, value, and meaning. In addition, the book shows how foundational and theoretical issues relate to concrete controversies, with an emphasis on understanding how ethical questions play out in practice.
All topics are explored in depth, with clear explanations of relevant debates in ethics and philosophy, drawing on both historical and current sources. Questions in AI ethics are explored in the context of related issues in technology, regulation, society, religion, and culture, to help readers gain a nuanced understanding of the scope of AI ethics within broader debates and concerns…(More)”.
Data and Democracy at Work: Advanced Information Technologies, Labor Law, and the New Working Class
Book by Brishen Rogers: “As our economy has shifted away from industrial production and service industries have become dominant, many of the nation’s largest employers are now in fields like retail, food service, logistics, and hospitality. These companies have turned to data-driven surveillance technologies that operate over a vast distance, enabling cheaper oversight of massive numbers of workers. Data and Democracy at Work argues that companies often use new data-driven technologies as a power resource—or even a tool of class domination—and that our labor laws allow them to do so.
Employers have established broad rights to use technology to gather data on workers and their performance, to exclude others from accessing that data, and to use that data to refine their managerial strategies. Through these means, companies have suppressed workers’ ability to organize and unionize, thereby driving down wages and eroding working conditions. Labor law today encourages employer dominance in many ways—but labor law can also be reformed to become a tool for increased equity. The COVID-19 pandemic and the subsequent Great Resignation have spurred increased political mobilization among the pandemic’s so-called essential workers, many of them service industry workers. This book describes the necessary legal reforms to increase workers’ associational power and democratize workplace data, establishing more balanced relationships between workers and employers and ensuring a brighter and more equitable future for us all…(More)”.
Prediction Fiction
Essay by Madeline Ashby: “…This contributes to what my colleague Scott Smith calls “flat-pack futures”, or what the Canadian scholar Sun-ha Hong calls “technofutures”, which “preach revolutionary change while practicing a politics of inertia”. These visions of possible future realities possess a mass-market sameness. They look like what happens when you tell an AI image generator to draw the future: just a slurry of genuine human creativity machined into a fine paste. Drone delivery, driverless cars, blockchain this, alt-currency that, smart mirrors, smart everything, and not a speck of dirt or illness or poverty or protest anywhere. Bloodless, bland, boring, banal. It is like ordering your future from the kids’ menu.
When we cannot acknowledge how bad things are, we cannot imagine how to improve them. As with so many challenges, the first step is admitting there is a problem. But if you are isolated, ignored, or ridiculed at work or at home for acknowledging that problem, the problem becomes impossible to deal with. How we treat existential threats to the planet today is how doctors treated women’s cancers until the latter half of the 20th century: by refusing to tell the patient she was dying.
But the issue is not just toxic positivity. Remember those myths about the warnings that go unheeded? The moral of those stories is not that some people are doomed never to be listened to. The moral of those stories is that people in power do not want to hear how they might lose it. It is not that the predictions were wrong, but that they were simply not what people wanted to hear. To work in futures, you have to tell people things they don’t want to hear. And this is when it is useful to tell a story…(More)”.
Am I Normal? The 200-Year Search for Normal People (and Why They Don’t Exist)
Book by Sarah Chaney: “Before the 19th century, the term ‘normal’ was rarely, if ever, associated with human behaviour. Normal was a term used in maths, for right angles. People weren’t normal; triangles were.
But from the 1830s, the study of the ‘normal’ took off across Europe and North America, with a proliferation of IQ tests, sex studies, a census of hallucinations – even a UK beauty map (which concluded that the women of Aberdeen were “the most repellent”). This book tells the surprising history of how the very notion of the normal came about and how it shaped us all, often while entrenching oppressive values.
Sarah Chaney looks at why we’re still asking the internet: Do I have a normal body? Is my sex life normal? Are my kids normal? And along the way, she challenges why we ever thought being normal might be a desirable thing in the first place…(More)”.
The Normative Challenges of AI in Outer Space: Law, Ethics, and the Realignment of Terrestrial Standards
Paper by Ugo Pagallo, Eleonora Bassi & Massimo Durante: “The paper examines the open problems that experts in space law will increasingly have to address over the next few years, grouped into four different sets of legal issues. This differentiation sheds light on what is old and what is new about today’s troubles of space law, e.g., the privatization of space, vis-à-vis the challenges that AI raises in this field. Some AI challenges depend on its unique features, e.g., autonomy and opacity, and on how they affect pillars of the law, whether on Earth or in space missions. The paper insists, however, on a further class of legal issues that AI systems raise only in outer space: we should never overlook the constraints of a hazardous and hostile environment, such as on a mission between Mars and the Moon. The aim of this paper is to illustrate what is still mostly unexplored or in its infancy in this kind of research, namely, the fourfold way in which the uniqueness of AI and that of outer space affect both ethical and legal standards. Such standards shall provide thresholds of evaluation according to which courts and legislators weigh the pros and cons of technology. Our claim is that this twofold uniqueness of AI and of outer space will give rise to a new generation of sui generis standards of space law: stricter or more flexible standards for AI systems in outer space, down to a “principle of equality” between human standards and robotic standards…(More)”.
The Moral Economy of High-Tech Modernism
Essay by Henry Farrell and Marion Fourcade: “While people in and around the tech industry debate whether algorithms are political at all, social scientists take the politics as a given, asking instead how this politics unfolds: how algorithms concretely govern. What we call “high-tech modernism”—the application of machine learning algorithms to organize our social, economic, and political life—has a dual logic. On the one hand, like traditional bureaucracy, it is an engine of classification, even if it categorizes people and things very differently. On the other, like the market, it provides a means of self-adjusting allocation, though its feedback loops work differently from the price system. Perhaps the most important consequence of high-tech modernism for the contemporary moral political economy is how it weaves hierarchy and data-gathering into the warp and woof of everyday life, replacing visible feedback loops with invisible ones, and suggesting that highly mediated outcomes are in fact the unmediated expression of people’s own true wishes…(More)”.
Protecting the Integrity of Survey Research
Paper by Kathleen Hall Jamieson et al.: “Although polling is not irredeemably broken, changes in technology and society create challenges that, if not addressed well, can threaten the quality of election polls and other important surveys on topics such as the economy. This essay describes some of these challenges and recommends remediations to protect the integrity of all kinds of survey research, including election polls. These 12 recommendations specify ways that survey researchers, and those who use polls and other public-oriented surveys, can increase the accuracy and trustworthiness of their data and analyses. Many of these recommendations align practice with the scientific norms of transparency, clarity, and self-correction. The transparency recommendations focus on improving disclosure of factors that affect the nature and quality of survey data. The clarity recommendations call for more precise use of terms such as “representative sample” and clear description of survey attributes that can affect accuracy. The recommendation about correcting the record urges the creation of a publicly available, professionally curated archive of identified technical problems and their remedies. The paper also calls for development of better benchmarks and for additional research on the effects of panel conditioning. Finally, the authors suggest ways to help people who want to use or learn from survey research understand the strengths and limitations of surveys and distinguish legitimate and problematic uses of these methods…(More)”.
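One of the clarity points above, precise use of the term “representative sample”, can be motivated with a toy example. The sketch below (our illustration, not from the paper; the population margins and response rates are invented) shows a panel that over-recruits young respondents: the raw sample average misses badly, while post-stratification weighting against a known benchmark recovers the true figure.

```python
import random

random.seed(42)

# Hypothetical population: 30% young / 70% old, with different support rates.
POP_SHARE = {"young": 0.30, "old": 0.70}   # known census benchmark
SUPPORT   = {"young": 0.60, "old": 0.30}   # true (unknown) opinion by group

true_mean = sum(POP_SHARE[g] * SUPPORT[g] for g in POP_SHARE)  # 0.39

# An online panel that over-recruits young respondents (70% of the sample).
sample = []
for _ in range(2000):
    group = "young" if random.random() < 0.70 else "old"
    sample.append((group, 1 if random.random() < SUPPORT[group] else 0))

raw_mean = sum(y for _, y in sample) / len(sample)

# Post-stratification: weight each respondent by population share / sample share.
n = len(sample)
sample_share = {g: sum(1 for s, _ in sample if s == g) / n for g in POP_SHARE}
weights = {g: POP_SHARE[g] / sample_share[g] for g in POP_SHARE}
weighted_mean = (sum(weights[g] * y for g, y in sample)
                 / sum(weights[g] for g, _ in sample))

print(f"true support:    {true_mean:.3f}")
print(f"raw sample:      {raw_mean:.3f}   # skewed toward the young")
print(f"post-stratified: {weighted_mean:.3f}   # close to the benchmark")
```

The catch, and the reason the paper presses for precise language, is that weighting only corrects for attributes you measure and benchmark; a sample can match the population on age yet still be unrepresentative on the attribute that actually drives the outcome.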
The Incredible Challenge of Counting Every Global Birth and Death
Jeneen Interlandi at The New York Times: “…The world’s wealthiest nations are awash in so much personal data that data theft has become a lucrative business and its protection a common concern. From such a vantage point, it can be difficult to even fathom the opposite — a lack of any identifying information at all — let alone grapple with its implications. But the undercounting of human lives is pervasive, data scientists say. The resulting ills are numerous and consequential, and recent history is littered with missed opportunities to solve the problem.
More than two decades ago, 147 nations rallied around the Millennium Development Goals, the United Nations’ bold new plan for halving extreme poverty, curbing childhood mortality and conquering infectious diseases like malaria and H.I.V. The health goals became the subject of countless international summits and steady news coverage, ultimately spurring billions of dollars in investment from the world’s wealthiest nations, including the United States. But a fierce debate quickly ensued. Critics said that health officials at the United Nations and elsewhere had almost no idea what the baseline conditions were in many of the countries they were trying to help. They could not say whether maternal mortality was increasing or decreasing, or how many people were being infected with malaria, or how fast tuberculosis was spreading. In a 2004 paper, the World Health Organization’s former director of evidence, Chris Murray, and other researchers described the agency’s estimates as “serial guessing.” Without that baseline data, progress toward any given goal — to halve hunger, for example — could not be measured…(More)”.