The Algorithmic Divide and Equality in the Age of Artificial Intelligence


Paper by Peter Yu: “In the age of artificial intelligence, highly sophisticated algorithms have been deployed to detect patterns, optimize solutions, facilitate self-learning, and foster improvements in technological products and services. Notwithstanding these tremendous benefits, algorithms and intelligent machines do not provide equal benefits to all. Just as the digital divide has separated those with access to the Internet, information technology, and digital content from those without, an emerging and ever-widening algorithmic divide now threatens to take away the many political, social, economic, cultural, educational, and career opportunities provided by machine learning and artificial intelligence.

Although policymakers, commentators, and the mass media have paid growing attention to algorithmic bias and the shortcomings of machine learning and artificial intelligence, the algorithmic divide has yet to attract much policy and scholarly attention. To fill this lacuna, this article draws on the digital divide literature to systematically analyze this new inequitable gap between the technology haves and have-nots. Utilizing the analytical framework that the Author developed in the early 2000s, the article discusses the five attributes of the algorithmic divide: awareness, access, affordability, availability, and adaptability.

This article then turns to three major problems precipitated by an emerging and fast-expanding algorithmic divide: (1) algorithmic deprivation; (2) algorithmic discrimination; and (3) algorithmic distortion. While the first two problems affect primarily those on the unfortunate side of the algorithmic divide, the third affects individuals on both sides of the divide. This article concludes by proposing seven nonexhaustive clusters of remedial actions to help bridge this emerging and ever-widening algorithmic divide. Combining law, communications policy, ethical principles, institutional mechanisms, and business practices, the article fashions a holistic response to help foster equality in the age of artificial intelligence….(More)”.

The Extended Corporate Mind: When Corporations Use AI to Break the Law


Paper by Mihailis Diamantis: “Algorithms may soon replace employees as the leading cause of corporate harm. For centuries, the law has defined corporate misconduct — anything from civil discrimination to criminal insider trading — in terms of employee misconduct. Today, however, breakthroughs in artificial intelligence and big data allow automated systems to make many corporate decisions, e.g., who gets a loan or what stocks to buy. These technologies introduce valuable efficiencies, but they do not remove (or even always reduce) the incidence of corporate harm. Unless the law adapts, corporations will become increasingly immune to civil and criminal liability as they transfer responsibility from employees to algorithms.

This Article is the first to tackle the full extent of the growing doctrinal gap left by algorithmic corporate misconduct. To hold corporations accountable, the law must sometimes treat them as if they “know” information stored on their servers and “intend” decisions reached by their automated systems. Cognitive science and the philosophy of mind offer a path forward. The “extended mind thesis” complicates traditional views about the physical boundaries of the mind. The thesis states that the mind encompasses any system that sufficiently assists thought, e.g., by facilitating recall or enhancing decision-making. For natural people, the thesis implies that minds can extend beyond the brain to include external cognitive aids, like Rolodexes and calculators. This Article adapts the thesis to corporate law. It motivates and proposes a doctrinal framework for extending the corporate mind to the algorithms that are increasingly integral to corporate thought. The law needs such an innovation if it is to hold future corporations to account for their most serious harms….(More)”.

Next generation disaster data infrastructure


Report by the IRDR Working Group on DATA and the CODATA Task Group on Linked Open Data for Global Disaster Risk Research: “Based on the targets of the Sendai Framework, this white paper proposes the next generation of disaster data infrastructure, comprising both the novel and the most essential information systems and services that a country or a region can depend on to gather, process, and display disaster data and so reduce the impact of natural hazards.

Fundamental requirements of disaster data infrastructure include: (1) effective multi-source big disaster data collection; (2) efficient big disaster data fusion, exchange, and query; (3) strict big disaster data quality control and standardization; (4) real-time big data analysis and decision making; and (5) user-friendly big data visualization.

The rest of the paper is organized as follows: first, several future scenarios of disaster management are developed based on existing disaster management systems and communication technology. Second, fundamental requirements of next generation disaster data infrastructure inspired by the proposed scenarios are discussed. Following that, research questions and issues are highlighted. Finally, policy recommendations and conclusions are provided at the end of the paper….(More)”.
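
The requirements listed above are framed at a high level. As a purely illustrative sketch (not drawn from the white paper), the first two requirements, multi-source collection and fusion, might look roughly like this in Python; the record fields, source names, and clustering thresholds are all assumptions:

```python
# Illustrative sketch only: a minimal multi-source disaster-report model and a
# naive fusion step (requirements 1 and 2 above). Field names and thresholds
# are assumptions, not taken from the white paper.
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class DisasterReport:
    source: str          # e.g. "seismic_sensor", "social_media", "satellite"
    hazard_type: str     # e.g. "earthquake", "flood"
    lat: float
    lon: float
    observed_at: datetime
    severity: float      # normalized 0-1 score assigned by the source

def fuse_reports(reports: List[DisasterReport],
                 max_km: float = 25.0,
                 max_hours: float = 6.0) -> List[List[DisasterReport]]:
    """Group reports that plausibly describe the same event.

    A real system would use proper geodesic distance, entity resolution, and
    source-reliability weighting; this just clusters greedily by hazard type,
    rough distance, and time proximity.
    """
    clusters: List[List[DisasterReport]] = []
    for r in sorted(reports, key=lambda x: x.observed_at):
        for cluster in clusters:
            ref = cluster[0]
            close_in_space = (abs(r.lat - ref.lat) + abs(r.lon - ref.lon)) * 111 <= max_km
            close_in_time = abs((r.observed_at - ref.observed_at).total_seconds()) <= max_hours * 3600
            if r.hazard_type == ref.hazard_type and close_in_space and close_in_time:
                cluster.append(r)
                break
        else:
            clusters.append([r])
    return clusters
```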

Insurance Discrimination and Fairness in Machine Learning: An Ethical Analysis


Paper by Michele Loi and Markus Christen: “Here we provide an ethical analysis of discrimination in private insurance to guide the application of non-discriminatory algorithms for risk prediction in the insurance context. This addresses the need for ethical guidance of data-science experts and business managers. The reference to private insurance as a business practice is essential in our approach, because the consequences of discrimination and predictive inaccuracy in underwriting are different from those of using predictive algorithms in other sectors (e.g., medical diagnosis, sentencing). Moreover, the computer science literature has demonstrated the existence of a trade-off in the extent to which one can pursue non-discrimination versus predictive accuracy. Again, the moral assessment of this trade-off is related to the context of application…(More)”
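
The accuracy-versus-non-discrimination trade-off the authors cite can be made concrete with a toy computation. The sketch below uses synthetic data and a simple score threshold (both our assumptions, not the paper's method) to measure overall accuracy alongside a demographic-parity gap:

```python
# Toy illustration of the accuracy / non-discrimination tension: synthetic data
# and a shared score threshold. Not the paper's model.
import random

random.seed(0)

def simulate(n=10000):
    """Return (group, risk_score, actual_claim) triples with different base rates."""
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        base = 0.2 if group == "A" else 0.4          # unequal underlying claim rates
        claim = 1 if random.random() < base else 0
        score = min(1.0, max(0.0, base + random.gauss(0.3 * claim, 0.15)))
        data.append((group, score, claim))
    return data

def evaluate(data, threshold=0.45):
    """Accuracy of 'flag if score >= threshold' plus the demographic-parity gap."""
    correct = 0
    flagged = {"A": [0, 0], "B": [0, 0]}             # [flagged_count, group_size]
    for group, score, claim in data:
        pred = 1 if score >= threshold else 0
        correct += int(pred == claim)
        flagged[group][0] += pred
        flagged[group][1] += 1
    rate = {g: f / n for g, (f, n) in flagged.items()}
    return correct / len(data), abs(rate["A"] - rate["B"])

acc, parity_gap = evaluate(simulate())
print(f"accuracy={acc:.3f}  demographic-parity gap={parity_gap:.3f}")
# Pushing the gap toward zero (e.g. via group-specific thresholds) typically costs accuracy.
```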

How does a computer ‘see’ gender?


Pew Research Center: “Machine vision tools like facial recognition are increasingly being used for law enforcement, advertising, and other purposes. Pew Research Center itself recently used a machine vision system to measure the prevalence of men and women in online image search results. This kind of system develops its own rules for identifying men and women after seeing thousands of example images, but these rules can be hard for humans to discern. To better understand how this works, we showed images of the Center’s staff members to a trained machine vision system similar to the one we used to classify image searches. We then systematically obscured sections of each image to see which parts of the face caused the system to change its decision about the gender of the person pictured. Some of the results seemed intuitive; others were baffling. In this interactive challenge, see if you can guess what makes the system change its decision.

Here’s how it works:…(More)”.
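
The probing approach Pew describes (systematically obscuring parts of an image and re-running the classifier) is commonly known as occlusion analysis. A minimal sketch of the idea, written against a generic classify(image) stand-in rather than Pew's actual system, might look like this:

```python
# Occlusion-sensitivity sketch: slide a gray patch over the image and record where
# covering a region weakens or flips the model's prediction.
# `classify` is a stand-in for any model returning P(image depicts a man).
import numpy as np
from typing import Callable

def occlusion_map(image: np.ndarray,
                  classify: Callable[[np.ndarray], float],
                  patch: int = 20,
                  stride: int = 10) -> np.ndarray:
    """Return a grid of score drops; large values mark regions the model relies on."""
    baseline = classify(image)
    h, w = image.shape[:2]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    sensitivity = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            occluded = image.copy()
            y, x = i * stride, j * stride
            occluded[y:y + patch, x:x + patch] = 127   # neutral gray patch
            sensitivity[i, j] = baseline - classify(occluded)
    return sensitivity

if __name__ == "__main__":
    # Dummy classifier that keys on the brightness of the top half of the image.
    def dummy(img: np.ndarray) -> float:
        return float(img[:img.shape[0] // 2].mean()) / 255.0

    img = np.random.randint(0, 256, size=(100, 100), dtype=np.uint8)
    print(occlusion_map(img, dummy).round(2))
```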

Traffic Data Is Good for More than Just Streets, Sidewalks


Skip Descant at Government Technology: “The availability of highly detailed daily traffic data is clearly an invaluable resource for traffic planners, but it can also help officials overseeing natural lands or public works understand how to better manage those facilities.

The Natural Communities Coalition, a conservation nonprofit in southern California, began working with the traffic analysis firm StreetLight Data in early 2018 to study the impacts from the thousands of annual visitors to 22 parks and natural lands. StreetLight Data’s use of de-identified cellphone data held promise for the project, which will continue into early 2020.

“You start to see these increases,” Milan Mitrovich, science director for the Natural Communities Coalition, said of the uptick in visitor activity the data showed. “So being able to have this information, and share it with our executive committee… these folks, they’re seeing it for the first time.”…

Officials with the Natural Communities Coalition were able to use the StreetLight data to gain insights into patterns of use not only per day, but at different times of the day. The data also told researchers where visitors were traveling from, a detail park officials found “jaw-dropping.”

“What we were able to see is, these resources, these natural areas, cast an incredible net across southern California,” said Mitrovich, noting visitors come from not only Orange County, but Los Angeles, San Bernardino and San Diego counties as well, a region of more than 20 million residents.

The data also allows officials to predict traffic levels during certain parts of the week, times of day or even holidays….(More)”.
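
The article does not show the underlying StreetLight queries, but the kind of aggregation it describes (visits by time of day and by visitors' home county) is straightforward once de-identified trip records are available. A hypothetical sketch with pandas, using invented column names and sample rows, might be:

```python
# Hypothetical aggregation of de-identified trip records into the summaries
# described above (visits by hour of day and by visitors' home county).
# Column names and the sample data are invented for illustration.
import pandas as pd

trips = pd.DataFrame({
    "park": ["Laguna Wilderness", "Laguna Wilderness", "Irvine Ranch", "Irvine Ranch"],
    "arrival_time": pd.to_datetime([
        "2019-07-04 09:15", "2019-07-04 17:40", "2019-07-06 08:05", "2019-07-06 11:30"]),
    "home_county": ["Orange", "Los Angeles", "San Bernardino", "San Diego"],
})

# Visits per park by hour of day
by_hour = (trips.assign(hour=trips["arrival_time"].dt.hour)
                .groupby(["park", "hour"]).size().rename("visits"))

# Where visitors travel from
by_origin = trips.groupby(["park", "home_county"]).size().rename("visits")

print(by_hour, by_origin, sep="\n\n")
```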

Urban Systems Design: From “science for design” to “design in science”


Introduction to Special Issue of Urban Analytics and City Science by Perry PJ Yang and Yoshiki Yamagata: “The direct design of cities is often regarded as impossible, owing to the fluidity, complexity, and uncertainty entailed in urban systems. And yet, we do design our cities, however imperfectly. Cities are objects of our own creation, they are intended landscapes, manageable, experienced and susceptible to analysis (Lynch, 1984). Urban design as a discipline has always focused on “design” in its professional practices. Urban designers tend to ask normative questions about how good city forms are designed, or how a city and its urban spaces ought to be made, thereby problematizing urban form-making and the values entailed. These design questions are analytically distinct from “science”-related research that tends to ask positive questions such as how cities function, or what properties emerge from interactive processes of urban systems. The latter questions require data, analytic techniques, and research methods to generate insight.

This theme issue “Urban Systems Design” is an attempt to outline a research agenda by connecting urban design and systems science, which is grounded in both normative and positive questions. It aims to contribute to the emerging field of urban analytics and city science that is central to this journal. Recent discussions of smart cities inspire urban design, planning and architectural professionals to address questions of how smart cities are shaped and what should be made. What are the impacts of information and communication technologies (ICT) on the questions of how built environments are designed and developed? How would the internet of things (IoT), big data analytics and urban automation influence how humans perceive, experience, use and interact with the urban environment? In short, what are the emerging new urban forms driven by the rapid move to ‘smart cities’?…(More)”.

Big Data, Political Campaigning and the Law


Book edited by Normann Witzleb, Moira Paterson, and Janice Richardson on “Democracy and Privacy in the Age of Micro-Targeting”…: “In this multidisciplinary book, experts from around the globe examine how data-driven political campaigning works, what challenges it poses for personal privacy and democracy, and how emerging practices should be regulated.

The rise of big data analytics in the political process has triggered official investigations in many countries around the world, and become the subject of broad and intense debate. Political parties increasingly rely on data analytics to profile the electorate and to target specific voter groups with individualised messages based on their demographic attributes. Political micro-targeting has become a major factor in modern campaigning, because of its potential to influence opinions, to mobilise supporters and to get out the vote. The book explores the legal, philosophical and political dimensions of big data analytics in the electoral process. It demonstrates that the unregulated use of big personal data for political purposes not only infringes voters’ privacy rights, but also has the potential to jeopardise the future of the democratic process, and proposes reforms to address the key regulatory and ethical questions arising from the mining, use and storage of massive amounts of voter data.

Providing an interdisciplinary assessment of the use and regulation of big data in the political process, this book will appeal to scholars from law, political science, political philosophy, and media studies, policy makers and anyone who cares about democracy in the age of data-driven political campaigning….(More)”.

AI Global Surveillance Technology


Carnegie Endowment: “Artificial intelligence (AI) technology is rapidly proliferating around the world. A growing number of states are deploying advanced AI surveillance tools to monitor, track, and surveil citizens to accomplish a range of policy objectives—some lawful, others that violate human rights, and many that fall into a murky middle ground.

In order to appropriately address the effects of this technology, it is important to first understand where these tools are being deployed and how they are being used.

To provide greater clarity, Carnegie presents an AI Global Surveillance (AIGS) Index—representing one of the first research efforts of its kind. The index compiles empirical data on AI surveillance use for 176 countries around the world. It does not distinguish between legitimate and unlawful uses of AI surveillance. Rather, the purpose of the research is to show how new surveillance capabilities are transforming the ability of governments to monitor and track individuals or systems. It specifically asks:

  • Which countries are adopting AI surveillance technology?
  • What specific types of AI surveillance are governments deploying?
  • Which countries and companies are supplying this technology?

Learn more about our findings and how AI surveillance technology is spreading rapidly around the globe….(More)”.

Real-time flu tracking. By scanning social media, scientists can monitor outbreaks as they happen.


Charles Schmidt at Nature: “Conventional influenza surveillance describes outbreaks of flu that have already happened. It is based on reports from doctors, and produces data that take weeks to process — often leaving the health authorities to chase the virus around, rather than get on top of it.

But every day, thousands of unwell people pour details of their symptoms and, perhaps unknowingly, locations into search engines and social media, creating a trove of real-time flu data. If such data could be used to monitor flu outbreaks as they happen and to make accurate predictions about its spread, that could transform public-health surveillance.

Powerful computational tools such as machine learning and a growing diversity of data streams — not just search queries and social media, but also cloud-based electronic health records and human mobility patterns inferred from census information — are making it increasingly possible to monitor the spread of flu through the population by following its digital signal. Now, models that track flu in real time and forecast flu trends are making inroads into public-health practice.

“We’re becoming much more comfortable with how these models perform,” says Matthew Biggerstaff, an epidemiologist who works on flu preparedness at the US Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia.

In 2013–14, the CDC launched the FluSight Network, a website informed by digital modelling that predicts the timing, peak and short-term intensity of the flu season in ten regions of the United States and across the whole country. According to Biggerstaff, flu forecasting helps responders to plan ahead, so they can be ready with vaccinations and communication strategies to limit the effects of the virus. Encouraged by progress in the field, the CDC announced in January 2019 that it will spend US$17.5 million to create a network of influenza-forecasting centres of excellence, each tasked with improving the accuracy and communication of real-time forecasts.

The CDC is leading the way on digital flu surveillance, but health agencies elsewhere are following suit. “We’ve been working to develop and apply these models with collaborators using a range of data sources,” says Richard Pebody, a consultant epidemiologist at Public Health England in London. The capacity to predict flu trajectories two to three weeks in advance, Pebody says, “will be very valuable for health-service planning.”…(More)”.
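
As a rough illustration of the nowcasting idea described in the excerpt above, the sketch below fits a plain least-squares model that estimates current flu activity from search-query volumes, using synthetic data; the signals, coefficients, and model choice are assumptions, not the CDC's or any published system's method:

```python
# Toy nowcasting sketch: regress weekly flu activity (e.g. % of doctor visits for
# influenza-like illness) on weekly search-query volumes. Synthetic data and a
# plain least-squares fit; real systems use richer features and models.
import numpy as np

rng = np.random.default_rng(42)
weeks = 52

# Synthetic "true" flu activity with a winter peak, plus two noisy query signals
t = np.arange(weeks)
ili = 2.0 + 3.0 * np.exp(-((t - 8) ** 2) / 30.0) + rng.normal(0, 0.2, weeks)
queries = np.column_stack([
    ili * 10 + rng.normal(0, 2.0, weeks),    # e.g. searches for "fever"
    ili * 4 + rng.normal(0, 1.5, weeks),     # e.g. searches for "flu medicine"
])

# Fit on the first 40 weeks, nowcast the remaining 12
X = np.column_stack([np.ones(weeks), queries])
coef, *_ = np.linalg.lstsq(X[:40], ili[:40], rcond=None)
nowcast = X[40:] @ coef

mae = np.abs(nowcast - ili[40:]).mean()
print(f"mean absolute nowcast error over held-out weeks: {mae:.2f} percentage points")
```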