An algorithm intended to reduce poverty in Jordan disqualifies people in need


Article by Tate Ryan-Mosley: “An algorithm funded by the World Bank to determine which families should get financial assistance in Jordan likely excludes people who should qualify, according to an investigation published this morning by Human Rights Watch. 

The algorithmic system, called Takaful, ranks families applying for aid from least poor to poorest using a secret calculus that assigns weights to 57 socioeconomic indicators. Applicants say, however, that the calculus does not reflect reality and oversimplifies people’s economic situations, sometimes ranking them inaccurately or unfairly. Takaful has cost over $1 billion, and the World Bank is funding similar projects in eight other countries in the Middle East and Africa.
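Takaful’s actual indicators, weights, and formula are secret; only the general mechanism, a weighted score over socioeconomic indicators used to rank applicants, is described. A minimal sketch of that mechanism, with entirely made-up indicator names, weights, and values, might look like this:

```python
# Illustrative sketch only: Takaful's real 57 indicators and weights are not
# public. Every indicator name, weight, and value below is hypothetical.

def wealth_score(applicant: dict, weights: dict) -> float:
    """Weighted sum of socioeconomic indicators; higher = less poor."""
    return sum(w * applicant.get(k, 0.0) for k, w in weights.items())

weights = {  # hypothetical weights for 3 of the 57 indicators
    "electricity_kwh_per_month": 0.4,
    "water_m3_per_month": 0.3,
    "owns_car": 10.0,
}

applicants = [
    {"id": "A", "electricity_kwh_per_month": 120, "water_m3_per_month": 8, "owns_car": 1},
    {"id": "B", "electricity_kwh_per_month": 60, "water_m3_per_month": 5, "owns_car": 0},
]

# Rank from least poor to poorest, as the report describes
ranked = sorted(applicants, key=lambda a: wealth_score(a, weights), reverse=True)
print([a["id"] for a in ranked])  # prints ['A', 'B']: A ranks as less poor
```

The sketch makes the HRW critique concrete: a fixed weight on `owns_car` raises a family’s score regardless of whether the car is old and needed to get to work, so a genuinely poor household can be ranked out of eligibility.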

Human Rights Watch identified several fundamental problems with the algorithmic system that resulted in bias and inaccuracies. Applicants are asked how much water and electricity they consume, for example, as two of the indicators that feed into the ranking system. The report’s authors conclude that these are not necessarily reliable indicators of poverty. Some families interviewed believed the fact that they owned a car affected their ranking, even if the car was old and necessary for transportation to work. 

The report reads, “This veneer of statistical objectivity masks a more complicated reality: the economic pressures that people endure and the ways they struggle to get by are frequently invisible to the algorithm.”…(More)”.

Understanding the relationship between informal public transport and economic vulnerability in Dar es Salaam


WhereIsMyTransport Case Study: “In most African cities, formal public transport—such as government-run or funded bus and rail networks—has limited coverage and fails to meet overall mobility demand. As African cities grow and densify, planners are questioning whether these networks can serve the economically vulnerable communities who benefit most from public transport access to opportunities and services.

In the absence of formal public transport or private vehicles, low-income commuters have long relied on informal public transport—think tro tros in Accra, boda bodas in Kampala, danfos in Lagos—to meet their mobility needs. Yet there is little reliable data on the relationship between informal public transport and economic vulnerability in and around Africa’s cities, making it challenging to understand:

  • Which communities are the most vulnerable?
  • What opportunities and services do people typically attempt to access?
  • What routes do informal public transport operators follow?
  • What are the occupation and gender-related impacts?

Addressing these questions benefits from combining data assets. For example, pairing data on informal public transport coverage with data on the socioeconomic characteristics of the communities that rely on this type of transport…(More)”.

Data for Environmentally Sustainable and Inclusive Urban Mobility


Report by Anusha Chitturi and Robert Puentes: “Data on passenger movements, vehicle fleets, fare payments, and transportation infrastructure has immense potential to help cities better plan, regulate, and enforce their urban mobility systems. This report specifically examines the opportunities that exist for U.S. cities to use mobility data – made available through adoption of new mobility services and data-based technologies – to improve transportation’s environmental sustainability, accessibility, and equity. Cities are advancing transportation sustainability in several ways, including making trips more efficient, minimizing the use of single-occupancy vehicles, prioritizing sustainable modes of transport, and enabling a transition to zero and low-emission fuels. They are improving accessibility and equity by planning for and offering a range of transportation services that serve all people, irrespective of their physical abilities, economic power, and geographic location.
Data sharing is an important instrument for furthering these mobility outcomes. Ridership data from ride-hailing companies, for example, can inform cities about whether ride-hailing trips are replacing sustainable transport trips, resulting in increased congestion and emissions; such data can further be used for designing targeted emission-reduction programs such as a congestion fee program, or for planning high-quality sustainable transport services to reduce car trips. Similarly, mobility data can be used to plan on-demand services in certain transit-poor neighborhoods, where fixed transit services don’t make financial sense due to low urban densities. Sharing mobility data, however, often comes with certain risks…(More)”.

Critical factors influencing information disclosure in public organisations


Paper by Francisca Tejedo-Romero & Joaquim Filipe Ferraz Esteves Araujo: “Open government initiatives around the world and the passage of freedom of information laws are opening public organisations through information disclosure to ensure transparency and encourage citizen participation and engagement. At the municipal level, social, economic, and political factors are found to account for this trend. However, the findings on this issue are inconclusive and may differ from country to country. This paper contributes to this discussion by analysing a unitary country where the same set of laws and rules governs the constituent municipalities. It seeks to identify critical factors that affect the disclosure of municipal information. For this purpose, a longitudinal study was carried out over a period of four years using panel data methodology. The main conclusions seem to point to municipalities increasing the dissemination of information in response to low levels of voter turnout, in order to increase civic involvement and political participation. Municipalities governed by leftist parties and those that have high indebtedness are most likely to disclose information. Additionally, internet access has created new opportunities for citizens to access information, which exerts pressure for greater dissemination of information by municipalities. These findings are important to practitioners because they indicate the need to improve citizens’ access to the Internet and maintain information disclosure strategies beyond election periods…(More)”.
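The paper’s actual model is not reproduced in this summary. As a rough illustration of the panel data methodology it mentions, here is the standard fixed-effects (“within”) estimator for a single regressor, applied to an invented municipal panel; variable names and numbers are hypothetical, not the authors’ data:

```python
from collections import defaultdict

def within_slope(y, x, groups):
    """Fixed-effects ('within') slope for one regressor: demean y and x
    within each entity, then run OLS on the demeaned data. This removes
    time-invariant differences between municipalities."""
    by_g = defaultdict(list)
    for i, g in enumerate(groups):
        by_g[g].append(i)
    xd, yd = [], []
    for idx in by_g.values():
        mx = sum(x[i] for i in idx) / len(idx)
        my = sum(y[i] for i in idx) / len(idx)
        for i in idx:
            xd.append(x[i] - mx)
            yd.append(y[i] - my)
    return sum(a * b for a, b in zip(xd, yd)) / sum(a * a for a in xd)

# Toy panel: 2 municipalities observed over 4 years (invented numbers)
groups = [0, 0, 0, 0, 1, 1, 1, 1]
internet = [0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8]   # internet access rate
disclosure = [1.0, 1.2, 1.4, 1.6, 2.0, 2.2, 2.4, 2.6]  # disclosure index

print(within_slope(disclosure, internet, groups))  # ≈ 2.0
```

The within transformation is what lets a four-year panel separate factors like internet access from fixed municipal characteristics.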

The A.I. Revolution Will Change Work. Nobody Agrees How.


Sarah Kessler in The New York Times: “In 2013, researchers at Oxford University published a startling number about the future of work: 47 percent of all United States jobs, they estimated, were “at risk” of automation “over some unspecified number of years, perhaps a decade or two.”

But a decade later, unemployment in the country is at record low levels. The tsunami of grim headlines back then — like “The Rich and Their Robots Are About to Make Half the World’s Jobs Disappear” — looks wildly off the mark.

But the study’s authors say they didn’t actually mean to suggest doomsday was near. Instead, they were trying to describe what technology was capable of.

It was the first stab at what has become a long-running thought experiment, with think tanks, corporate research groups and economists publishing paper after paper to pinpoint how much work is “affected by” or “exposed to” technology.

In other words: If the cost of the tools weren’t a factor, and the only goal was to automate as much human labor as possible, how much work could technology take over?

When the Oxford researchers, Carl Benedikt Frey and Michael A. Osborne, were conducting their study, IBM Watson, a question-answering system powered by artificial intelligence, had just shocked the world by winning “Jeopardy!” Test versions of autonomous vehicles were circling roads for the first time. Now, a new wave of studies follows the rise of tools that use generative A.I.

In March, Goldman Sachs estimated that the technology behind popular A.I. tools such as DALL-E and ChatGPT could automate the equivalent of 300 million full-time jobs. Researchers at OpenAI, the maker of those tools, and the University of Pennsylvania found that 80 percent of the U.S. work force could see an effect on at least 10 percent of their tasks.

“There’s tremendous uncertainty,” said David Autor, a professor of economics at the Massachusetts Institute of Technology, who has been studying technological change and the labor market for more than 20 years. “And people want to provide those answers.”

But what exactly does it mean to say that, for instance, the equivalent of 300 million full-time jobs could be affected by A.I.?

It depends, Mr. Autor said. “Affected could mean made better, made worse, disappeared, doubled.”…(More)”.

From Ethics to Law: Why, When, and How to Regulate AI


Paper by Simon Chesterman: “The past decade has seen a proliferation of guides, frameworks, and principles put forward by states, industry, inter- and non-governmental organizations to address matters of AI ethics. These diverse efforts have led to a broad consensus on what norms might govern AI. Far less energy has gone into determining how these might be implemented — or if they are even necessary. This chapter focuses on the intersection of ethics and law, in particular discussing why regulation is necessary, when regulatory changes should be made, and how it might work in practice. Two specific areas for law reform address the weaponization and victimization of AI. Regulations aimed at general AI are particularly difficult in that they confront many ‘unknown unknowns’, but the threat of uncontrollable or uncontainable AI became more widely discussed with the spread of large language models such as ChatGPT in 2023. Additionally, however, there will be a need to prohibit some conduct in which increasingly lifelike machines are the victims — comparable, perhaps, to animal cruelty laws…(More)”

Detecting Human Rights Violations on Social Media during Russia-Ukraine War


Paper by Poli Nemkova, et al: “The present-day Russia-Ukraine military conflict has exposed the pivotal role of social media in enabling the transparent and unbridled sharing of information directly from the frontlines. In conflict zones where freedom of expression is constrained and information warfare is pervasive, social media has emerged as an indispensable lifeline. Anonymous social media platforms, as publicly available sources for disseminating war-related information, have the potential to serve as effective instruments for monitoring and documenting Human Rights Violations (HRV). Our research focuses on the analysis of data from Telegram, the leading social media platform for reading independent news in post-Soviet regions. We gathered a dataset of posts sampled from 95 public Telegram channels that cover politics and war news, which we have utilized to identify potential occurrences of HRV. Employing an mBERT-based text classifier, we conducted an analysis to detect any mentions of HRV in the Telegram data. Our final approach yielded an F2 score of 0.71 for HRV detection, representing an improvement of 0.38 over the multilingual BERT base model. We release two datasets containing Telegram posts: (1) a large corpus of over 2.3 million posts, and (2) a dataset annotated at the sentence level to indicate HRVs. The Telegram posts are in the context of the Russia-Ukraine war. We posit that our findings hold significant implications for NGOs, governments, and researchers by providing a means to detect and document possible human rights violations…(More)” See also Data for Peace and Humanitarian Response? The Case of the Ukraine-Russia War
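For readers unfamiliar with the F2 score the paper reports: it is the F-beta measure with beta = 2, which weights recall twice as heavily as precision — a natural choice when missing a violation is costlier than a false alarm. This is the standard formula, not the authors’ code, and the precision/recall values below are illustrative, not taken from the paper:

```python
# Standard F-beta score; beta = 2 emphasizes recall over precision.

def f_beta(precision: float, recall: float, beta: float = 2.0) -> float:
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# e.g., a hypothetical classifier with precision 0.55 and recall 0.76
print(round(f_beta(0.55, 0.76), 2))  # prints 0.71
```

Note how the asymmetric weighting plays out: an F2 of 0.71 is compatible with middling precision as long as recall is high, which suits a monitoring tool whose output is reviewed by humans.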

“My sex-related data is more sensitive than my financial data and I want the same level of security and privacy”: User Risk Perceptions and Protective Actions in Female-oriented Technologies


Paper by Maryam Mehrnezhad, and Teresa Almeida: “The digitalization of the reproductive body has engaged a myriad of cutting-edge technologies in supporting people to know and tackle their intimate health. Generally understood as female technologies (aka female-oriented technologies or ‘FemTech’), these products and systems collect a wide range of intimate data which are processed, transferred, saved and shared with other parties. In this paper, we explore how the “data-hungry” nature of this industry and the lack of proper safeguarding mechanisms, standards, and regulations for vulnerable data can lead to complex harms or faint agentic potential. We adopted mixed methods in exploring users’ understanding of the security and privacy (SP) of these technologies. Our findings show that while users can speculate about the range of harms and risks associated with these technologies, they are not equipped with the technological skills to protect themselves against such risks. We discuss a number of approaches, including participatory threat modelling and SP by design, in the context of this work and conclude that such approaches are critical to protect users in these sensitive systems…(More)”.

Atlas of the Senseable City


Book by Antoine Picon and Carlo Ratti: “What have smart technologies taught us about cities? What lessons can we learn from today’s urbanites to make better places to live? Antoine Picon and Carlo Ratti argue that the answers are in the maps we make. For centuries, we have relied on maps to navigate the enormity of the city. Now, as the physical world combines with the digital world, we need a new generation of maps to navigate the city of tomorrow. Pervasive sensors allow anyone to visualize cities in entirely new ways—ebbs and flows of pollution, traffic, and internet connectivity.
 
This book explores how the growth of digital mapping, spurred by sensing technologies, is affecting cities and daily lives. It examines how new cartographic possibilities aid urban planners, technicians, politicians, and administrators; how digitally mapped cities could reveal ways to make cities smarter and more efficient; how monitoring urbanites has political and social repercussions; and how the proliferation of open-source maps and collaborative platforms can aid activists and vulnerable populations. With its beautiful, accessible presentation of cutting-edge research, this book makes it easy for readers to understand the stakes of the new information age—and appreciate the timeless power of the city…(More)”

Opportunities and Challenges in Reusing Public Genomics Data


Introduction to Special Issue by Mahmoud Ahmed and Deok Ryong Kim: “Genomics data is accumulating in public repositories at an ever-increasing rate. Large consortia and individual labs continue to probe animal and plant tissue and cell cultures, generating vast amounts of data using established and novel technologies. The Human Genome Project kickstarted the era of systems biology (1, 2). Ambitious projects followed to characterize non-coding regions, variations across species, and between populations (3, 4, 5). The cost reduction allowed individual labs to generate numerous smaller high-throughput datasets (6, 7, 8, 9). As a result, the scientific community should consider strategies to overcome the challenges and maximize the opportunities to use these resources for research and the public good. In this collection, we will elicit opinions and perspectives from researchers in the field on the opportunities and challenges of reusing public genomics data. The articles in this research topic converge on the need for data sharing while acknowledging the challenges that come with it. Two articles defined and highlighted the distinction between data and metadata. The characteristic of each should be considered when designing optimal sharing strategies. One article focuses on the specific issues surrounding the sharing of genomics interval data, and another on balancing the need to protect pediatric rights with the benefits of sharing.

The definition of what counts as data is itself a moving target. As technology advances, data can be produced in more ways and from novel sources. Events of recent years have highlighted this fact. “The pandemic has underscored the urgent need to recognize health data as a global public good with mechanisms to facilitate rapid data sharing and governance,” wrote Schwalbe and colleagues (2020). The challenges facing these mechanisms could be technical, economic, legal, or political. Defining what data is and its type, therefore, is necessary to overcome these barriers because “the mechanisms to facilitate data sharing are often specific to data types.” Unlike genomics data, which has established platforms, sharing clinical data “remains in a nascent phase.” The article by Patrinos and colleagues (2022) considers the strong ethical imperative for protecting pediatric data while acknowledging the need not to overprotect. The authors discuss a model of consent for pediatric research that can balance the need to protect participants and generate health benefits.

Xue et al. (2023) focus on reusing genomic interval data. Identifying and retrieving the relevant data can be difficult, given the state of the repositories and the size of these data. Similarly, integrating interval data in reference genomes can be hard. The authors call for standardized formats for the data and the metadata to facilitate reuse.

Sheffield and colleagues (2023) highlight the distinction between data and metadata. Metadata describes the characteristics of the sample, experiment, and analysis. The nature of this information differs from that of the primary data in size, source, and ways of use. Therefore, an optimal strategy should consider these specific attributes for sharing metadata. Challenges specific to sharing metadata include the need for standardized terms and formats to make it portable and easier to find.

We go beyond the reuse issue to highlight two other aspects that might increase the utility of available public data in Ahmed et al. (2023). These are curation and integration…(More)”.