Datafication, Identity, and the Reorganization of the Category Individual


Paper by Juan Ortiz Freuler: “A combination of political, sociocultural, and technological shifts suggests a change in the way we understand human rights. Undercurrents fueling this process are digitization and datafication. Through this process of change, categories that might have been cornerstones of our past and present might very well become outdated. A key category that is under pressure is that of the individual. Since datafication is typically accompanied by technologies and processes aimed at segmenting and grouping, such groupings become increasingly relevant at the expense of the notion of the individual. This concept might become but another collection of varied characteristics, a unit of analysis considered at times too broad—and at other times too narrow—to be relevant or useful by the systems driving our key economic, social, and political processes.

This Essay provides a literature review and a set of key definitions linking the processes of digitization, datafication, and the concept of the individual to existing conceptions of individual rights. It then presents a framework to dissect and showcase the ways in which current technological developments are putting pressure on our existing conceptions of the individual and individual rights…(More)”.

How Good Are Privacy Guarantees? Platform Architecture and Violation of User Privacy


Paper by Daron Acemoglu, Alireza Fallah, Ali Makhdoumi, Azarakhsh Malekian & Asuman Ozdaglar: “Many platforms deploy data collected from users for a multitude of purposes. While some are beneficial to users, others are costly to their privacy. The presence of these privacy costs means that platforms may need to provide guarantees about how and to what extent user data will be harvested for activities such as targeted ads, individualized pricing, and sales to third parties. In this paper, we build a multi-stage model in which users decide whether to share their data based on privacy guarantees. We first introduce a novel mask-shuffle mechanism and prove it is Pareto optimal—meaning that it leaks the least about the users’ data for any given leakage about the underlying common parameter. We then show that under any mask-shuffle mechanism, there exists a unique equilibrium in which privacy guarantees balance privacy costs and utility gains from the pooling of user data for purposes such as assessment of health risks or product development. Paradoxically, we show that as users’ value of pooled data increases, the equilibrium of the game leads to lower user welfare. This is because platforms take advantage of this change to reduce privacy guarantees so much that user utility declines (whereas it would have increased with a given mechanism). Even more strikingly, we show that platforms have incentives to choose data architectures that systematically differ from those that are optimal from the user’s point of view. In particular, we identify a class of pivot mechanisms, linking individual privacy to choices by others, which platforms prefer to implement and which make users significantly worse off…(More)”.
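
The mask-shuffle mechanism is described above only in terms of its guarantees. As a rough intuition, here is a minimal sketch of a mask-and-shuffle step, assuming pairwise-cancelling additive masks followed by a random permutation of the masked reports; the function names and parameters are illustrative and not the authors' construction.

```python
import random

def mask_shuffle(values, mask_scale=1.0, rng=None):
    """Toy mask-shuffle step: each user's value is perturbed by masks that
    cancel in aggregate, then the masked reports are randomly shuffled.
    The sum (and hence the mean) is preserved, while the link between a
    report and a specific user is obscured."""
    rng = rng or random.Random()
    n = len(values)
    masks = [0.0] * n
    # Pairwise-cancelling masks: add r to user i, subtract the same r from user j.
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.gauss(0.0, mask_scale)
            masks[i] += r
            masks[j] -= r
    masked = [v + m for v, m in zip(values, masks)]
    rng.shuffle(masked)  # the "shuffle" step drops the user ordering
    return masked

# The platform can still estimate the common parameter (here, a simple mean):
reports = mask_shuffle([0.2, 0.9, 0.4, 0.7], rng=random.Random(0))
print(round(sum(reports) / len(reports), 3))  # 0.55, the mean of the raw values
```

Because the masks cancel in aggregate, the pooled estimate survives while any single shuffled report says little about the user who produced it, which is the trade-off the paper's leakage comparison formalises.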

Barred From Grocery Stores by Facial Recognition


Article by Adam Satariano and Kashmir Hill: “Simon Mackenzie, a security officer at the discount retailer QD Stores outside London, was short of breath. He had just chased after three shoplifters who had taken off with several packages of laundry soap. Before the police arrived, he sat at a back-room desk to do something important: Capture the culprits’ faces.

On an aging desktop computer, he pulled up security camera footage, pausing to zoom in and save a photo of each thief. He then logged in to a facial recognition program, Facewatch, which his store uses to identify shoplifters. The next time those people enter any shop within a few miles that uses Facewatch, store staff will receive an alert.

“It’s like having somebody with you saying, ‘That person you bagged last week just came back in,’” Mr. Mackenzie said.

Use of facial recognition technology by the police has been heavily scrutinized in recent years, but its application by private businesses has received less attention. Now, as the technology improves and its cost falls, the systems are reaching further into people’s lives. No longer just the purview of government agencies, facial recognition is increasingly being deployed to identify shoplifters, problematic customers and legal adversaries.

Facewatch, a British company, is used by retailers across the country frustrated by petty crime. For as little as 250 pounds a month, or roughly $320, Facewatch offers access to a customized watchlist that stores near one another share. When Facewatch spots a flagged face, an alert is sent to a smartphone at the shop, where employees decide whether to keep a close eye on the person or ask the person to leave…(More)”.
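
Facewatch's internals are not public, but the alerting flow described above, matching a face captured in store against a shared watchlist and notifying staff, can be sketched generically. The snippet below is an assumed illustration using cosine similarity over face embeddings; the entry names, the 0.6 threshold, and the 128-dimensional vectors are hypothetical stand-ins for what a real face-recognition model would produce.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(probe_embedding, watchlist, threshold=0.6):
    """Return watchlist entries whose stored face embedding is close enough
    to the probe embedding to trigger an alert, best match first."""
    matches = [(entry_id, cosine_similarity(probe_embedding, stored))
               for entry_id, stored in watchlist.items()]
    return sorted([m for m in matches if m[1] >= threshold],
                  key=lambda m: m[1], reverse=True)

# Hypothetical usage: embeddings would come from a face-recognition model.
rng = np.random.default_rng(0)
watchlist = {"subject-17": rng.normal(size=128), "subject-42": rng.normal(size=128)}
probe = watchlist["subject-42"] + rng.normal(scale=0.1, size=128)  # a near-match
print(check_against_watchlist(probe, watchlist))  # alerts on "subject-42" only
```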

Rewiring The Web: The future of personal data


Paper by Jon Nash and Charlie Smith: “In this paper, we argue that the widespread use of personal information online represents a fundamental flaw in our digital infrastructure that enables staggeringly high levels of fraud, undermines our right to privacy, and limits competition.

To realise a web fit for the twenty-first century, we need to fundamentally rethink the ways in which we interact with organisations online.

If we are to preserve the founding values of an open, interoperable web in the face of such profound change, we must update the institutions, regulatory regimes, and technologies that make up this network of networks.

Many of the problems we face stem from the vast amounts of personal information that currently flow through the internet—and fixing this fundamental flaw would have a profound effect on the quality of our lives and the workings of the web…(More)”

Detecting Human Rights Violations on Social Media during Russia-Ukraine War


Paper by Poli Nemkova, et al: “The present-day Russia-Ukraine military conflict has exposed the pivotal role of social media in enabling the transparent and unbridled sharing of information directly from the frontlines. In conflict zones where freedom of expression is constrained and information warfare is pervasive, social media has emerged as an indispensable lifeline. Anonymous social media platforms, as publicly available sources for disseminating war-related information, have the potential to serve as effective instruments for monitoring and documenting Human Rights Violations (HRV). Our research focuses on the analysis of data from Telegram, the leading social media platform for reading independent news in post-Soviet regions. We gathered a dataset of posts sampled from 95 public Telegram channels that cover politics and war news, which we have utilized to identify potential occurrences of HRV. Employing an mBERT-based text classifier, we have conducted an analysis to detect any mentions of HRV in the Telegram data. Our final approach yielded an F2 score of 0.71 for HRV detection, representing an improvement of 0.38 over the multilingual BERT base model. We release two datasets containing Telegram posts: (1) a large corpus with over 2.3 million posts and (2) a sentence-level annotated dataset indicating HRVs. The Telegram posts are in the context of the Russia-Ukraine war. We posit that our findings hold significant implications for NGOs, governments, and researchers by providing a means to detect and document possible human rights violations…(More)” See also Data for Peace and Humanitarian Response? The Case of the Ukraine-Russia War
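
For readers who want a concrete starting point, the sketch below shows one way (not the authors' released code) to fine-tune a multilingual BERT checkpoint for sentence-level HRV-mention detection and to score it with F2, which weights recall more heavily than precision; the placeholder dataset, column names, and training settings are illustrative assumptions.

```python
import numpy as np
from datasets import Dataset
from sklearn.metrics import fbeta_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

# Placeholder sentence-level data: label 1 = mentions a possible HRV, 0 = does not.
train = Dataset.from_dict({"text": ["..."], "label": [1]}).map(tokenize, batched=True)

def f2_metric(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f2": fbeta_score(labels, preds, beta=2)}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="hrv-model", num_train_epochs=3),
    train_dataset=train,
    eval_dataset=train,  # placeholder; a held-out split would be used in practice
    compute_metrics=f2_metric,
)
trainer.train()
print(trainer.evaluate())
```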

The Prediction Society: Algorithms and the Problems of Forecasting the Future


Paper by Hideyuki Matsumi and Daniel J. Solove: “Predictions about the future have been made since the earliest days of humankind, but today, we are living in a brave new world of prediction. Today’s predictions are produced by machine learning algorithms that analyze massive quantities of personal data. Increasingly, important decisions about people are being made based on these predictions.

Algorithmic predictions are a type of inference. Many laws struggle to account for inferences, and even when they do, the laws lump all inferences together. But as we argue in this Article, predictions are different from other inferences. Predictions raise several unique problems that current law is ill-suited to address. First, algorithmic predictions create a fossilization problem because they reinforce patterns in past data and can further solidify bias and inequality from the past. Second, algorithmic predictions often raise an unfalsifiability problem. Predictions involve an assertion about future events. Until these events happen, predictions remain unverifiable, resulting in an inability for individuals to challenge them as false. Third, algorithmic predictions can involve a preemptive intervention problem, where decisions or interventions render it impossible to determine whether the predictions would have come true. Fourth, algorithmic predictions can lead to a self-fulfilling prophecy problem where they actively shape the future they aim to forecast.

More broadly, the rise of algorithmic predictions raises an overarching concern: Algorithmic predictions not only forecast the future but also have the power to create and control it. The increasing pervasiveness of decisions based on algorithmic predictions is leading to a prediction society where individuals’ ability to author their own future is diminished while the organizations developing and using predictive systems are gaining greater power to shape the future…(More)”
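
The fossilization and self-fulfilling prophecy problems are, at bottom, a feedback loop between a prediction and the data it is later retrained on. The toy simulation below (an illustration, not material from the Article) shows how allocating scrutiny in proportion to past records reproduces a historical disparity even when two groups behave identically.

```python
import random

random.seed(0)
true_rate = {"A": 0.05, "B": 0.05}   # both groups actually behave identically
recorded = {"A": 10, "B": 5}         # but the historical record is skewed

for year in range(10):
    total = sum(recorded.values())
    # Scrutiny (1,000 checks) is allocated in proportion to past records,
    # i.e. according to the "prediction" of where violations will be found.
    checks = {g: int(1000 * recorded[g] / total) for g in recorded}
    for g, n in checks.items():
        # Only people who are checked can generate new records.
        recorded[g] += sum(random.random() < true_rate[g] for _ in range(n))

print(recorded)  # group A accumulates roughly twice as many records as group B
```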

Digital Freedoms in French-Speaking African Countries


Report by AFD: “As digital penetration increases in countries across the African continent, its citizens face growing risks and challenges. Indeed, beyond facilitating access to knowledge (such as the online encyclopedia Wikipedia), to leisure tools (such as YouTube), and to sociability (such as social networks), digital technology offers an unprecedented space for democratic expression. 

However, these online civic spaces are under threat. Several governments have enacted vaguely defined laws that allow for arbitrary arrests.

Several countries have implemented repressive practices restricting freedom of expression and access to information. This is what is known as “digital authoritarianism”, which is on the rise in many countries.

This report takes stock of digital freedoms in 26 French-speaking African countries, and proposes concrete actions to improve citizen participation and democracy…(More)”

From LogFrames to Logarithms – A Travel Log


Article by Karl Steinacker and Michael Kubach: “..Today, authorities all over the world are experimenting with predictive algorithms. That sounds technical and innocent but as we dive deeper into the issue, we realise that the real meaning is rather specific: fraud detection systems in social welfare payment systems. In the meantime, the hitherto banned terminology has made its comeback: welfare or social safety nets have, for a couple of years now, been en vogue again. But in the centuries-old Western tradition, welfare recipients must be monitored and, if necessary, sanctioned, while those who work and contribute must be assured that there is no waste. So it comes as no surprise that even today’s algorithms focus on the prime suspect: the individual fraudster, the undeserving poor.

Fraud detection systems promise that the taxpayer will no longer fall victim to fraud and that efficiency gains can be redirected to serve more people. The true extent of welfare fraud is regularly exaggerated, while the costs of such systems are routinely underestimated. A comparison of the estimated losses and the investments rarely takes place. It is the principle of detecting and punishing fraudsters that prevails. Other issues don’t rank high either, for example how to distinguish between honest mistakes and deliberate fraud. And the more time caseworkers spend entering and analysing data in front of a computer screen, the less time and inclination they have to talk to real people and to understand the context of their lives at the margins of society.

Thus, it can be said that hundreds of thousands of people are routinely being scored. Take Denmark: here, a system called Udbetaling Danmark was created in 2012 to streamline the payment of welfare benefits. Its fraud control algorithms can access the personal data of millions of citizens, not all of whom receive welfare payments. In contrast to the hundreds of thousands affected by this data mining, the number of cases referred to the police for further investigation is minute. 

In the city of Rotterdam in the Netherlands, data on 30,000 welfare recipients are investigated every year in order to flag suspected welfare cheats. However, an analysis of its machine-learning-based scoring system showed systemic discrimination with regard to ethnicity, age, gender, and parenthood. It revealed evidence of other fundamental flaws that make the system both inaccurate and unfair. What might appear to a caseworker as a vulnerability is treated by the machine as grounds for suspicion. Despite the scale of data used to calculate risk scores, the output of the system is no better than random guesses. However, the consequences of being flagged by the “suspicion machine” can be drastic, with fraud controllers empowered to turn the lives of suspects inside out.
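
Here, "no better than random guesses" has a precise reading: the risk scores separate fraud from non-fraud about as well as a coin flip. A minimal sketch of that comparison on synthetic data, using ROC-AUC (this is not the Rotterdam evaluation itself), might look as follows.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
outcomes = rng.integers(0, 2, size=1000)   # who actually committed fraud (synthetic)
risk_scores = rng.random(1000)             # scores unrelated to the outcomes
print(round(roc_auc_score(outcomes, risk_scores), 3))  # ~0.5: no better than chance
```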

As reported by the World Bank, the recent Covid-19 pandemic provided a great push to implement digital social welfare systems in the global South. In fact, for the World Bank, so-called Digital Public Infrastructure (DPI), enabling “Digitizing Government to Person Payments (G2Px)”, is as fundamental for social and economic development today as physical infrastructure was for previous generations. Hence, the World Bank finances systems around the globe modelled after the Indian Aadhaar system, in which more than a billion persons have been registered biometrically. Aadhaar has become, for all intents and purposes, a precondition for 800 million Indian citizens to receive subsidised food and other assistance.

Important international aid organisations are not behaving differently from states. The World Food Programme alone holds data on more than 40 million people in its SCOPE database. Unfortunately, WFP, like other UN organisations, is not subject to data protection laws or the jurisdiction of courts. This makes the communities it works with particularly vulnerable.

In most places, the social will become the metric, where logarithms determine the operational conduit for delivering, controlling, and withholding assistance, especially welfare payments. In other places, the power of logarithms may go even further, as part of trust systems, creditworthiness scores, and social credit. These social credit systems for individuals are highly controversial, as they require mass surveillance: they aim to track behaviour beyond financial solvency. The social credit score of a citizen might suffer not only from incomplete or inaccurate data, but also from assessments of political loyalties and conformist social behaviour…(More)”.

Death Glitch: How Techno-Solutionism Fails Us in This Life and Beyond


Book by Tamara Kneese: “Since the internet’s earliest days, people have died and mourned online. In quiet corners of past iterations of the web, the dead linger. But attempts at preserving the data of the dead are often ill-fated, for websites and devices decay and die, just as people do. Death disrupts technologists’ plans for platforms. It reveals how digital production is always collaborative, undermining the entrepreneurial platform economy and highlighting the flaws of techno-solutionism.
 
Big Tech has authority not only over people’s lives but over their experiences of death as well. Ordinary users and workers, though, advocate for changes to tech companies’ policies around death. Drawing on internet histories along with interviews with founders of digital afterlife startups, caretakers of illness blogs, and transhumanist tinkerers, the technology scholar Tamara Kneese takes readers on a vibrant tour of the ways that platforms and people work together to care for digital remains. What happens when commercial platforms encounter the messiness of mortality?..(More)”.

China’s new AI rules protect people — and the Communist Party’s power


Article by Johanna M. Costigan: “In April, in an effort to regulate rapidly advancing artificial intelligence technologies, China’s internet watchdog introduced draft rules on generative AI. They cover a wide range of issues — from how training data is handled to how users interact with generative AI such as chatbots. 

Under the new regulations, companies are ultimately responsible for the “legality” of the data they use to train AI models. Additionally, generative AI providers must not share personal data without permission, and must guarantee the “veracity, accuracy, objectivity, and diversity” of their pre-training data. 

These strict requirements by the Cyberspace Administration of China (CAC) for AI service providers could benefit Chinese users, granting them greater protections from private companies than many of their global peers. Article 11 of the regulations, for instance, prohibits providers from “conducting profiling” on the basis of information gained from users. Any Instagram user who has received targeted ads after their smartphone tracked their activity would stand to benefit from this additional level of privacy.  

Another example is Article 10 — it requires providers to employ “appropriate measures to prevent users from excessive reliance on generated content,” which could help prevent addiction to new technologies and increase user safety in the long run. As companion chatbots such as Replika become more popular, companies should be responsible for managing software to ensure safe use. While some view social chatbots as a cure for loneliness, depression, and social anxiety, they also present real risks to users who become reliant on them…(More)”.