Metrics at Work: Journalism and the Contested Meaning of Algorithms


Book by Angèle Christin: “When the news moved online, journalists suddenly learned what their audiences actually liked, through algorithmic technologies that scrutinize web traffic and activity. Has this advent of audience metrics changed journalists’ work practices and professional identities? In Metrics at Work, Angèle Christin documents the ways that journalists grapple with audience data in the form of clicks, and analyzes how new forms of clickbait journalism travel across national borders.

Drawing on four years of fieldwork in web newsrooms in the United States and France, including more than one hundred interviews with journalists, Christin reveals many similarities among the media groups examined—their editorial goals, technological tools, and even office furniture. Yet she uncovers crucial and paradoxical differences in how American and French journalists understand audience analytics and how these affect the news produced in each country. American journalists routinely disregard traffic numbers and primarily rely on the opinion of their peers to define journalistic quality. Meanwhile, French journalists fixate on internet traffic and view these numbers as a sign of their resonance in the public sphere. Christin offers cultural and historical explanations for these disparities, arguing that distinct journalistic traditions structure how journalists make sense of digital measurements in the two countries.

Contrary to the popular belief that analytics and algorithms are globally homogenizing forces, Metrics at Work shows that computational technologies can have surprisingly divergent ramifications for work and organizations worldwide….(More)”.

Why Modeling the Spread of COVID-19 Is So Damn Hard



Matthew Hutson at IEEE Spectrum: “…Researchers say they’ve learned a lot of lessons modeling this pandemic, lessons that will carry over to the next.

The first set of lessons is all about data. Garbage in, garbage out, they say. Jarad Niemi, an associate professor of statistics at Iowa State University who helps run the forecast hub used by the CDC, says it’s not clear what we should be predicting. Infections, deaths, and hospitalization numbers each have problems, which affect their usefulness not only as inputs for the model but also as outputs. It’s hard to know the true number of infections when not everyone is tested. Deaths are easier to count, but they lag weeks behind infections. Hospitalization numbers have immense practical importance for planning, but not all hospitals release those figures. How useful is it to predict those numbers if you never have the true numbers for comparison? What we need, he says, is systematized random testing of the population, to provide clear statistics of both the number of people currently infected and the number of people who have antibodies against the virus, indicating recovery. Prakash, of Georgia Tech, says governments should collect and release data quickly in centralized locations. He also advocates for central repositories of policy decisions, so modelers can quickly see which areas are implementing which distancing measures.

Researchers also talked about the need for a diversity of models. At the most basic level, averaging an ensemble of forecasts improves reliability. More important, each type of model has its own uses—and pitfalls. An SEIR model is a relatively simple tool for making long-term forecasts, but the devil is in the details of its parameters: How do you set those to match real-world conditions now and into the future? Get them wrong and the model can head off into fantasyland. Data-driven models can make accurate short-term forecasts, and machine learning may be good for predicting complicated factors. But will the inscrutable computations of, for instance, a neural network remain reliable when conditions change? Agent-based models look ideal for simulating possible interventions to guide policy, but they’re a lot of work to build and tricky to calibrate.
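
To make the parameter-sensitivity point concrete, below is a minimal sketch of a discrete-time SEIR model in Python. The transmission, incubation, and recovery rates, the population size, and the horizon are illustrative assumptions for demonstration only, not values from the article or from any calibrated COVID-19 model.

```python
# A minimal discrete-time SEIR sketch. All parameter values below are
# illustrative placeholders, not calibrated to COVID-19 data.
import numpy as np

def seir(population, days, beta=0.3, sigma=1 / 5.2, gamma=1 / 10, initial_infected=10):
    """Simulate Susceptible, Exposed, Infectious, Recovered compartments with daily steps."""
    S = population - initial_infected
    E, I, R = 0.0, float(initial_infected), 0.0
    history = []
    for _ in range(days):
        new_exposed = beta * S * I / population   # new transmissions
        new_infectious = sigma * E                # end of incubation period
        new_recovered = gamma * I                 # recovery or removal
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered
        history.append((S, E, I, R))
    return np.array(history)

# Small changes in beta (the contact/transmission rate) send long-term
# forecasts to very different places -- the "fantasyland" problem above.
for beta in (0.25, 0.30, 0.35):
    peak_infectious = seir(1_000_000, days=300, beta=beta)[:, 2].max()
    print(f"beta={beta:.2f} -> peak infectious ~ {peak_infectious:,.0f}")
```

Averaging the forecasts of several such runs, or of entirely different model families, is the ensemble idea the researchers describe above.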

Finally, researchers emphasize the need for agility. Niemi of Iowa State says software packages have made it easier to build models quickly, and the code-sharing site GitHub lets people share and compare their models. COVID-19 is giving modelers a chance to try out all their newest tools, says Meyers, of the University of Texas. “The pace of innovation, the pace of development, is unlike ever before,” she says. “There are new statistical methods, new kinds of data, new model structures.”…(More)”.

AI planners in Minecraft could help machines design better cities


Article by Will Douglas Heaven: “A dozen or so steep-roofed buildings cling to the edges of an open-pit mine. High above them, on top of an enormous rock arch, sits an inaccessible house. Elsewhere, a railway on stilts circles a group of multicolored tower blocks. Ornate pagodas decorate a large paved plaza. And a lone windmill turns on an island, surrounded by square pigs. This is Minecraft city-building, AI style.

Minecraft has long been a canvas for wild invention. Fans have used the hit block-building game to create replicas of everything from downtown Chicago and King’s Landing to working CPUs. In the decade since its first release, anything that can be built has been.

Since 2018, Minecraft has also been the setting for a creative challenge that stretches the abilities of machines. The annual Generative Design in Minecraft (GDMC) competition asks participants to build an artificial intelligence that can generate realistic towns or villages in previously unseen locations. The contest is just for fun, for now, but the techniques explored by the various AI competitors are precursors of ones that real-world city planners could use….(More)”.

Models and Modeling in the Sciences: A Philosophical Introduction


Book by Stephen M. Downes: “Biologists, climate scientists, and economists all rely on models to move their work forward. In this book, Stephen M. Downes explores the use of models in these and other fields to introduce readers to the various philosophical issues that arise in scientific modeling. Readers learn that paying attention to models plays a crucial role in appraising scientific work. 

This book first presents a wide range of models from a number of different scientific disciplines. After assembling some illustrative examples, Downes demonstrates how models shed light on many perennial issues in philosophy of science and in philosophy in general. Reviewing the range of views on how models represent their targets introduces readers to the key issues in debates on representation, not only in science but in the arts as well. Also, standard epistemological questions are cast in new and interesting ways when readers confront the question, “What makes for a good (or bad) model?”…(More)”.

How Algorithms Can Fight Bias Instead of Entrench It


Essay by Tobias Baer: “…How can we build algorithms that correct for biased data and that live up to the promise of equitable decision-making?

When we consider changing an algorithm to eliminate bias, it is helpful to distinguish what we can change at three different levels (from least to most technical): the decision algorithm, formula inputs, and the formula itself.

In discussing the levels, I will use a fictional example, involving Martians and Zeta Reticulans. I do this because picking a real-life example would, in fact, be stereotyping—I would perpetuate the very biases I try to fight by reiterating a simplified version of the world, and every time I state that a particular group of people is disadvantaged, I also can negatively affect the self-perception of people who consider themselves members of these groups. I do apologize if I unintentionally insult any Martians reading this article!

On the simplest and least technical level, we would adjust only the overall decision algorithm that takes one or more statistical formulas (typically to predict unknown outcomes such as academic success, recidivism, or marital bliss) as an input and applies rules to translate the predictions of these formulas into decisions (e.g., by comparing predictions with externally chosen cutoff values or contextually picking one prediction over another). Such rules can be adjusted without touching the statistical formulas themselves.

An example of such an intervention is called boxing. Imagine you have a score of astrological ability. The astrological ability score is a key criterion for shortlisting candidates for the Interplanetary Economic Forecasting Institute. You would have no objective reason to believe that Martians are any less apt at prognosticating white noise than Zeta Reticulans; however, due to racial prejudice in our galaxy, Martian children tend to get asked a lot less for their opinion and therefore have a lot less practice in gabbing than Zeta Reticulans, and as a result only one percent of Martian applicants achieve the minimum score required to be hired for the Interplanetary Economic Forecasting Institute as compared to three percent of Zeta Reticulans.

Boxing would posit that for hiring decisions to be race-neutral, two percent of applicants from each race should be eligible, and boxing would achieve this by calibrating different cut-off scores (i.e., different implied probabilities of astrological success) for Martians and Zeta Reticulans.
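
As a purely illustrative sketch (sticking with the fictional example, with randomly generated stand-in scores rather than real data), boxing amounts to picking a separate cutoff per group so that the same share of applicants qualifies in each group:

```python
# A minimal sketch of "boxing": choose a separate cutoff per group so that
# the same share of applicants (here, the top 2%) qualifies in each group.
# The scores are random stand-ins for the fictional astrological ability
# score; nothing here comes from real data.
import numpy as np

rng = np.random.default_rng(0)
scores = {
    "Martians":        rng.normal(50, 10, 10_000),  # less practice -> lower mean
    "Zeta Reticulans": rng.normal(55, 10, 10_000),
}

TARGET_SHARE = 0.02  # two percent eligible in every group

cutoffs = {
    group: np.quantile(group_scores, 1 - TARGET_SHARE)
    for group, group_scores in scores.items()
}

for group, group_scores in scores.items():
    eligible = (group_scores >= cutoffs[group]).mean()
    print(f"{group}: cutoff={cutoffs[group]:.1f}, eligible share={eligible:.1%}")
```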

Another example of a level-one adjustment would be to use multiple rank-ordering scores and to admit everyone who achieves a high score on any one of them. This approach is particularly well suited if you have different methods of assessment at your disposal, but each method implies a particular bias against one or more subsegments. An example of a crude version of this approach is admissions to medical school in Germany, where routes include college grades, a qualitative assessment through an interview, and a waitlist….(More)”.
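
A similarly hedged sketch of the multiple-scores idea: shortlist anyone who ranks in the top share on at least one of several assessment methods. The methods, shares, and score distributions below are invented for illustration.

```python
# A minimal sketch of the level-one "multiple scores" approach: shortlist
# anyone who ranks in the top share on at least one of several assessment
# methods. All methods and numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_candidates = 1_000
assessments = {
    "grades":    rng.normal(size=n_candidates),
    "interview": rng.normal(size=n_candidates),
    "test":      rng.normal(size=n_candidates),
}

TOP_SHARE = 0.05  # being in the top 5% on any single method is enough

shortlisted = np.zeros(n_candidates, dtype=bool)
for name, score in assessments.items():
    cutoff = np.quantile(score, 1 - TOP_SHARE)
    shortlisted |= score >= cutoff  # union across methods

print(f"Shortlisted {shortlisted.sum()} of {n_candidates} candidates "
      f"({shortlisted.mean():.1%}) via the union of three methods.")
```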

AI ethics groups are repeating one of society’s classic mistakes


Article by Abhishek Gupta and Victoria Heath: “International organizations and corporations are racing to develop global guidelines for the ethical use of artificial intelligence. Declarations, manifestos, and recommendations are flooding the internet. But these efforts will be futile if they fail to account for the cultural and regional contexts in which AI operates.

AI systems have repeatedly been shown to cause problems that disproportionately affect marginalized groups while benefiting a privileged few. The global AI ethics efforts under way today—of which there are dozens—aim to help everyone benefit from this technology, and to prevent it from causing harm. Generally speaking, they do this by creating guidelines and principles for developers, funders, and regulators to follow. They might, for example, recommend routine internal audits or require protections for users’ personally identifiable information.

We believe these groups are well-intentioned and are doing worthwhile work. The AI community should, indeed, agree on a set of international definitions and concepts for ethical AI. But without more geographic representation, they’ll produce a global vision for AI ethics that reflects the perspectives of people in only a few regions of the world, particularly North America and northwestern Europe.

This work is not easy or straightforward. “Fairness,” “privacy,” and “bias” mean different things (pdf) in different places. People also have disparate expectations of these concepts depending on their own political, social, and economic realities. The challenges and risks posed by AI also differ depending on one’s locale.

If organizations working on global AI ethics fail to acknowledge this, they risk developing standards that are, at best, meaningless and ineffective across all the world’s regions. At worst, these flawed standards will lead to more AI systems and tools that perpetuate existing biases and are insensitive to local cultures….(More)”.

AI Governance through Political Fora and Standards Developing Organizations


Report by Philippe Lorenz: “Shaping international norms around the ethics of Artificial Intelligence (AI) is perceived as a new responsibility by foreign policy makers. This responsibility is accompanied by a desire to play an active role in the most important international fora. Given the limited resources in terms of time and budget, foreign ministries need to set priorities for their involvement in the governance of AI. First and foremost, this requires an understanding of the entire AI governance landscape and the actors involved. The intention of this paper is to take a step back and familiarize foreign policy makers with the internal structures of the individual AI governance initiatives and the relationships between the involved actors. A basic understanding of the landscape also makes it easier to classify thematic developments and emerging actors, their agendas, and strategies.

This paper provides foreign policy practitioners with a mapping that can serve as a compass to navigate the complex web of stakeholders that shape the international debate on AI ethics. It plots political fora that serve as a platform for actors to agree upon ethical principles and pursue binding regulation. The mapping supplements the political purview with key actors who create technical standards on the ethics of AI. Furthermore, it describes the dynamic relationships between actors from these two domains. International governance addresses AI ethics through two different dimensions: political fora and Standards Developing Organizations (SDOs). Although it may be tempting to only engage on the diplomatic stage, this would be insufficient to help shape AI policy. Foreign policy makers must tend to both dimensions. While both governance worlds share the same topics and themes (in this case, AI ethics), they differ in their stakeholders, goals, outputs, and reach.

Key political and economic organizations such as the United Nations (UN), the Organisation for Economic Co-operation and Development (OECD), and the European Commission (EC) address ethical concerns raised by AI technologies. But so do SDOs such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the IEEE Standards Association (IEEE SA). Although actors from the latter category are typically concerned with the development of standards that address terminology, ontology, and technical benchmarks that facilitate product interoperability and market access, they, too, address AI ethics.

But these discussions on AI ethics will be useless if they do not inform the development of concrete policies for how to govern the technology.
At international political fora, on the one hand, states shape the outputs, which are often limited to non-binding, soft AI principles. SDOs, on the other hand, tend to the private sector. They are characterized by consensus-based decision-making processes that facilitate the adoption of industry standards. These fora are generally not accessible to (foreign) policy makers, either because they cater exclusively to the private sector and bar policy makers from joining, or because active participation requires in-depth technical expertise as well as industry knowledge that may surpass diplomats’ skill sets. Nonetheless, as prominent standard-setting bodies such as ISO, IEC, and IEEE SA pursue industry standards in AI ethics, foreign policy makers need to take notice, as this will likely have consequences for their negotiations at international political fora.

The precondition for active engagement is to gain an overview of the AI Governance environment. Foreign policy practitioners need to understand the landscape of stakeholders, identify key actors, and start to strategically engage with questions relevant to AI governance. This is necessary to determine whether a given initiative on AI ethics is aligned with one’s own foreign policy goals and, therefore, worth engaging with. It is also helpful to assess industry dynamics that might affect geo-economic deliberations. Lastly, all of this is vital information to report back to government headquarters to inform policy making, as AI policy is a matter of domestic and foreign policy….(More)”.

The Oxford Handbook of Ethics of AI


Book edited by Markus D. Dubber, Frank Pasquale, and Sunit Das: “This volume tackles a quickly evolving field of inquiry, mapping the existing discourse as part of a general attempt to place current developments in historical context while, at the same time, breaking new ground by taking on novel subjects and pursuing fresh approaches.

The term “A.I.” is used to refer to a broad range of phenomena, from machine learning and data mining to artificial general intelligence. The recent advent of more sophisticated AI systems, which function with partial or full autonomy and are capable of tasks that require learning and ‘intelligence’, presents difficult ethical questions, and has drawn concerns from many quarters about individual and societal welfare, democratic decision-making, moral agency, and the prevention of harm. This work ranges from explorations of normative constraints on specific applications of machine learning algorithms today (in everyday medical practice, for instance) to reflections on the (potential) status of AI as a form of consciousness with attendant rights and duties and, more generally still, on the conceptual terms and frameworks necessary to understand tasks requiring intelligence, whether “human” or “A.I.”…(More)”.

The Broken Algorithm That Poisoned American Transportation


Aaron Gordon at Vice: “…The Louisville highway project is hardly the first time travel demand models have missed the mark. Although they are a legally required part of any transportation infrastructure project that gets federal dollars, it is one of urban planning’s worst-kept secrets that these models are error-prone at best and fundamentally flawed at worst.

Recently, I asked Renn how important those initial, rosy traffic forecasts of double-digit growth were to the boondoggle actually getting built.

“I think it was very important,” Renn said. “Because I don’t believe they could have gotten approval to build the project if they had not had traffic forecasts that said traffic across the river is going to increase substantially. If there isn’t going to be an increase in traffic, how do you justify building two bridges?”

Travel demand models come in different shapes and sizes. They can cover entire metro regions spanning across state lines or tackle a small stretch of a suburban roadway. And they have gotten more complicated over time. But they are rooted in what’s called the Four Step process, a rough approximation of how humans make decisions about getting from A to B. At the end, the model spits out numbers estimating how many trips there will be along certain routes.

As befits its name, the model goes through four steps in order to arrive at that number. First, it generates a kind of algorithmic map based on expected land use patterns (businesses will generate more trips than homes) and socio-economic factors (for example, high rates of employment will generate more trips than lower ones). Then it will estimate where people will generally be coming from and going to. The third step is to guess how they will get there, and the fourth is to then plot their actual routes, based mostly on travel time. The end result is a number of how many trips there will be in the project area and how long it will take to get around. Engineers and planners will then add a new highway, transit line, bridge, or other travel infrastructure to the model and see how things change. Or they will change the numbers in the first step to account for expected population or employment growth into the future. Often, these numbers are then used by policymakers to justify a given project, whether it’s a highway expansion or a light rail line…(More)”.
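
To give a sense of what such a model looks like in practice, here is a toy two-zone sketch of the Four Step process; every number in it (trip rates, travel times, the mode-choice coefficient) is an invented placeholder rather than a real calibration, and real regional models are far larger and iterate the later steps until congested travel times stabilize.

```python
# A toy sketch of the Four Step process for two zones. Every number here
# (trip rates, travel times, the logit coefficient) is an invented
# placeholder, not a real calibration.
import numpy as np

zones = ["A", "B"]
households = np.array([2_000, 1_000])   # step 1 inputs: land use / socio-economics
jobs = np.array([500, 3_000])

# Step 1: trip generation -- productions from households, attractions from jobs.
productions = households * 2.5          # assumed trips per household per day
attractions = jobs * 1.8                # assumed trips attracted per job

# Step 2: trip distribution -- a simple gravity model with travel-time decay.
travel_time = np.array([[5.0, 20.0],
                        [20.0, 5.0]])   # minutes between zones
friction = travel_time ** -2
weights = attractions * friction
trips = productions[:, None] * weights / weights.sum(axis=1, keepdims=True)

# Step 3: mode choice -- a binary logit split between car and transit.
car_utility = -0.05 * travel_time
transit_utility = -0.05 * (travel_time + 10)
car_share = np.exp(car_utility) / (np.exp(car_utility) + np.exp(transit_utility))
car_trips = trips * car_share

# Step 4: route assignment -- here, trivially, all car trips use the one road
# between the zones; real models iterate until congested times stabilize.
print("Daily car trips between zones:\n", car_trips.round())
```

Changing the inputs in step 1 (say, assuming faster population growth) flows straight through to the trip totals at the end, which is why optimistic assumptions can so easily produce the rosy forecasts described above.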

Too many AI researchers think real-world problems are not relevant


Essay by Hannah Kerner: “Any researcher who’s focused on applying machine learning to real-world problems has likely received a response like this one: “The authors present a solution for an original and highly motivating problem, but it is an application and the significance seems limited for the machine-learning community.”

These words are straight from a review I received for a paper I submitted to the NeurIPS (Neural Information Processing Systems) conference, a top venue for machine-learning research. I’ve seen the refrain time and again in reviews of papers where my coauthors and I presented a method motivated by an application, and I’ve heard similar stories from countless others.

This makes me wonder: If the community feels that aiming to solve high-impact real-world problems with machine learning is of limited significance, then what are we trying to achieve?

The goal of artificial intelligence (pdf) is to push forward the frontier of machine intelligence. In the field of machine learning, a novel development usually means a new algorithm or procedure, or—in the case of deep learning—a new network architecture. As others have pointed out, this hyperfocus on novel methods leads to a scourge of papers that report marginal or incremental improvements on benchmark data sets and exhibit flawed scholarship (pdf) as researchers race to top the leaderboard.

Meanwhile, many papers that describe new applications present both novel concepts and high-impact results. But even a hint of the word “application” seems to spoil the paper for reviewers. As a result, such research is marginalized at major conferences, and its authors’ only real hope is to have their papers accepted in workshops, which rarely get the same attention from the community.

This is a problem because machine learning holds great promise for advancing health, agriculture, scientific discovery, and more. The first image of a black hole was produced using machine learning. The most accurate predictions of protein structures, an important step for drug discovery, are made using machine learning. If others in the field had prioritized real-world applications, what other groundbreaking discoveries would we have made by now?

This is not a new revelation. To quote a classic paper titled “Machine Learning that Matters” (pdf), by NASA computer scientist Kiri Wagstaff: “Much of current machine learning research has lost its connection to problems of import to the larger world of science and society.” The same year that Wagstaff published her paper, a convolutional neural network called AlexNet won a high-profile competition for image recognition centered on the popular ImageNet data set, leading to an explosion of interest in deep learning. Unfortunately, the disconnect she described appears to have grown even worse since then….(More)”.