The Man Who Trapped Us in Databases


McKenzie Funk in The New York Times: “One of Asher’s innovations — or more precisely one of his companies’ innovations — was what is now known as the LexID. My LexID, I learned, is 000874529875. This unique string of digits is a kind of shadow Social Security number, one of many such “persistent identifiers,” as they are called, that have been issued not by the government but by data companies like Acxiom, Oracle, Thomson Reuters, TransUnion — or, in this case, LexisNexis.

My LexID was created sometime in the early 2000s in Asher’s computer room in South Florida, as many still are, and without my consent it began quietly stalking me. One early data point on me would have been my name; another, my parents’ address in Oregon. From my birth certificate or my driver’s license or my teenage fishing license — and from the fact that the three confirmed one another — it could get my sex and my date of birth. At the time, it would have been able to collect the address of the college I attended, Swarthmore, which was small and expensive, and it would have found my first full-time employer, the National Geographic Society, quickly amassing more than enough data to let someone — back then, a human someone — infer quite a bit more about me and my future prospects…(More)”
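The cross-confirmation Funk describes (several documents agreeing on a few fields, after which every other field they carry accrues to a single profile) can be sketched as a toy record-linkage routine. All field names and records below are illustrative; this is not LexisNexis’s actual method:

```python
from collections import defaultdict

def link_records(records, match_keys=("name", "address")):
    """Toy deterministic record linkage: records that agree on match_keys
    are treated as the same person, and their remaining fields are merged
    into one profile (first value seen wins)."""
    profiles = defaultdict(dict)
    for rec in records:
        key = tuple(rec.get(k) for k in match_keys)
        for field, value in rec.items():
            profiles[key].setdefault(field, value)
    return dict(profiles)

# Hypothetical documents that "confirm one another" on name and address:
documents = [
    {"name": "J. Doe", "address": "Oregon", "dob": "1980-01-01"},   # birth certificate
    {"name": "J. Doe", "address": "Oregon", "sex": "M"},            # driver's license
    {"name": "J. Doe", "address": "Oregon", "license": "fishing"},  # fishing license
]
profiles = link_records(documents)
```

Three separate documents collapse into a single profile carrying the union of their attributes, which is the basic move that makes a persistent identifier accumulate data without anyone’s consent.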

Peace by Design? Unlocking the Potential of Seven Technologies for a More Peaceful Future


Article by Stefaan G. Verhulst and Artur Kluz: “Technology has always played a crucial role in human history, both in winning wars and building peace. Even Leonardo da Vinci, the genius of the Renaissance, promised in his 1482 letter to Ludovico Il Moro Sforza, Duke of Milan, to invent new technologies of warfare for attack or defense. While serving top military and political leaders, he was working on technological advancements that could have a significant impact on geopolitics.

Today, we are living in exceptional times, where disruptive technologies such as AI, space-based technologies, quantum computing, and many others are leading to the reimagination of everything around us and transforming our lives, state interactions in the global arena, and wars. The next great industrial revolution may well be taking shape more than 250 miles above us, in outer space, putting our world into a new perspective. This is not just a technological transformation; this is a social and human transformation.

Perhaps to a greater extent than at any time since World War II, recent news has been dominated by talk of war, as well as of the destructive power of AI for human existence. The headlines are of missiles and offensives in Ukraine, of possible — and catastrophic — conflict over Taiwan, and of AI as humanity’s biggest existential threat.

A critical difference between this era and earlier times of conflict is the potential role of technology for peace. Along with traditional weaponry and armaments, it is clear that new space, data, and various other information and communication technologies will play an increasingly prominent role in 21st-century conflicts, especially when combined.

Much of the discussion today focuses on the potential offensive capabilities of technology. In a recent report titled “Seven Critical Technologies for Winning the Next War”, CSIS highlighted that “the next war will be fought on a high-tech battlefield….The consequences of failure on any of these technologies are tremendous — they could make the difference between victory and defeat.”

However, in the following discussion, we shift our focus to a distinctly different aspect of technology — its potential to cultivate peace and prevent conflicts. We present seven forms of PeaceTech, which encompass technologies that can actively avert or alleviate conflicts. These technologies are part of a broader range of innovations that contribute to the greater good of society and foster the overall well-being of humanity.

The application of frontier technologies can have swift, broad, and lasting effects in building peace. From preventing military conflict and disinformation, connecting people, and facilitating dialogue, to delivering humanitarian aid by drone, resolving water-access conflicts, and using satellite imagery to monitor human rights violations and peacekeeping efforts, technology has demonstrated a strong footprint in peacebuilding.

One important caveat is in order: readers may note the absence of data in the list below. We have chosen to include data as a cross-cutting category that applies across the seven technologies. This points to the ubiquity of data in today’s digital ecology. In an era of rapid datafication, data can no longer be classified as a single technology, but rather as an asset or tool embedded within virtually every other technology. (See our writings on the role of data for peace here)…(More)”.

The Urgent Need to Reimagine Data Consent


Article by Stefaan G. Verhulst, Laura Sandor & Julia Stamm: “Recognizing the significant benefits that can arise from the use and reuse of data to tackle contemporary challenges such as migration, it is worth exploring new approaches to collect and utilize data that empower individuals and communities, granting them the ability to determine how their data can be utilized for various personal, community, and societal causes. This need is not specific to migrants alone. It applies to various regions, populations, and fields, ranging from public health and education to urban mobility. There is a pressing demand to involve communities, often already vulnerable, to establish responsible access to their data that aligns with their expectations, while simultaneously serving the greater public good.

We believe the answer lies in a reimagination of the concept of consent. Traditionally, consent has been the tool of choice to secure agency and individual rights, but that concept, we would suggest, is no longer sufficient for today’s era of datafication. Instead, we should strive to establish a new standard of social license. Here, we’ll define what we mean by a social license and outline some of the limitations of consent (as it is typically defined and practiced today). Then we’ll describe one possible means of securing social license—through participatory decision-making…(More)”.

The Case Against AI Everything, Everywhere, All at Once


Essay by Judy Estrin: “The very fact that the evolution of technology feels so inevitable is evidence of an act of manipulation, an authoritarian use of narrative brilliantly described by historian Timothy Snyder. He calls out the politics of inevitability: “...a sense that the future is just more of the present, … that there are no alternatives, and therefore nothing really to be done.” There is no discussion of underlying values. Facts that don’t fit the narrative are disregarded.

Here in Silicon Valley, this top-down authoritarian technique is amplified by a bottom-up culture of inevitability. An orchestrated frenzy begins when the next big thing to fuel the Valley’s economic and innovation ecosystem is heralded by companies, investors, media, and influencers.

They surround us with language coopted from common values—democratization, creativity, open, safe. In behavioral psych classes, product designers are taught to eliminate friction—removing any resistance to acting on impulse.

The promise of short-term efficiency, convenience, and productivity lures us. Any semblance of pushback is decried as ignorance, or a threat to global competition. No one wants to be called a Luddite. Tech leaders, seeking to look concerned about the public interest, call for limited, friendly regulation, and the process moves forward until the tech is fully enmeshed in our society.

We bought into this narrative before, when social media, smartphones and cloud computing came on the scene. We didn’t question whether the only way to build community, find like-minded people, or be heard, was through one enormous “town square,” rife with behavioral manipulation, pernicious algorithmic feeds, amplification of pre-existing bias, and the pursuit of likes and follows.

It’s now obvious that it was a path towards polarization, toxicity of conversation, and societal disruption. Big Tech was the main beneficiary as industries and institutions jumped on board, accelerating their own disruption, and civic leaders were focused on how to use these new tools to grow their brands and not on helping us understand the risks.

We are at the same juncture now with AI…(More)”.

Datafication, Identity, and the Reorganization of the Category Individual


Paper by Juan Ortiz Freuler: “A combination of political, sociocultural, and technological shifts suggests a change in the way we understand human rights. Undercurrents fueling this process are digitization and datafication. Through this process of change, categories that might have been cornerstones of our past and present might very well become outdated. A key category that is under pressure is that of the individual. Since datafication is typically accompanied by technologies and processes aimed at segmenting and grouping, such groupings become increasingly relevant at the expense of the notion of the individual. This concept might become but another collection of varied characteristics, a unit of analysis that is considered at times too broad—and at other times too narrow—to be considered relevant or useful by the systems driving our key economic, social, and political processes.

This Essay provides a literature review and a set of key definitions linking the processes of digitization, datafication, and the concept of the individual to existing conceptions of individual rights. It then presents a framework to dissect and showcase the ways in which current technological developments are putting pressure on our existing conceptions of the individual and individual rights…(More)”.

How Good Are Privacy Guarantees? Platform Architecture and Violation of User Privacy


Paper by Daron Acemoglu, Alireza Fallah, Ali Makhdoumi, Azarakhsh Malekian & Asuman Ozdaglar: “Many platforms deploy data collected from users for a multitude of purposes. While some are beneficial to users, others are costly to their privacy. The presence of these privacy costs means that platforms may need to provide guarantees about how and to what extent user data will be harvested for activities such as targeted ads, individualized pricing, and sales to third parties. In this paper, we build a multi-stage model in which users decide whether to share their data based on privacy guarantees. We first introduce a novel mask-shuffle mechanism and prove it is Pareto optimal—meaning that it leaks the least about the users’ data for any given leakage about the underlying common parameter. We then show that under any mask-shuffle mechanism, there exists a unique equilibrium in which privacy guarantees balance privacy costs and utility gains from the pooling of user data for purposes such as assessment of health risks or product development. Paradoxically, we show that as users’ value of pooled data increases, the equilibrium of the game leads to lower user welfare. This is because platforms take advantage of this change to reduce privacy guarantees so much that user utility declines (whereas it would have increased with a given mechanism). Even more strikingly, we show that platforms have incentives to choose data architectures that systematically differ from those that are optimal from the user’s point of view. In particular, we identify a class of pivot mechanisms, linking individual privacy to choices by others, which platforms prefer to implement and which make users significantly worse off…(More)”.

Barred From Grocery Stores by Facial Recognition


Article by Adam Satariano and Kashmir Hill: “Simon Mackenzie, a security officer at the discount retailer QD Stores outside London, was short of breath. He had just chased after three shoplifters who had taken off with several packages of laundry soap. Before the police arrived, he sat at a back-room desk to do something important: Capture the culprits’ faces.

On an aging desktop computer, he pulled up security camera footage, pausing to zoom in and save a photo of each thief. He then logged in to a facial recognition program, Facewatch, which his store uses to identify shoplifters. The next time those people enter any shop within a few miles that uses Facewatch, store staff will receive an alert.

“It’s like having somebody with you saying, ‘That person you bagged last week just came back in,’” Mr. Mackenzie said.

Use of facial recognition technology by the police has been heavily scrutinized in recent years, but its application by private businesses has received less attention. Now, as the technology improves and its cost falls, the systems are reaching further into people’s lives. No longer just the purview of government agencies, facial recognition is increasingly being deployed to identify shoplifters, problematic customers and legal adversaries.

Facewatch, a British company, is used by retailers across the country frustrated by petty crime. For as little as 250 pounds a month, or roughly $320, Facewatch offers access to a customized watchlist that stores near one another share. When Facewatch spots a flagged face, an alert is sent to a smartphone at the shop, where employees decide whether to keep a close eye on the person or ask the person to leave…(More)”.
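Generically, the alert flow described above amounts to comparing a captured face’s embedding against a shared list of flagged embeddings. A minimal sketch, using cosine similarity and an arbitrary threshold (not Facewatch’s actual implementation, whose matching logic and thresholds are not public):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def check_watchlist(embedding, watchlist, threshold=0.8):
    """Return IDs of flagged faces whose stored embedding is at least
    `threshold`-similar to the captured embedding; each hit would
    trigger an alert to staff at the shop."""
    return [face_id for face_id, flagged in watchlist.items()
            if cosine(embedding, flagged) >= threshold]

# Hypothetical 2-D embeddings; real systems use hundreds of dimensions.
watchlist = {"suspect-A": [1.0, 0.0], "suspect-B": [0.0, 1.0]}
hits = check_watchlist([0.9, 0.1], watchlist)
```

The threshold is the whole ballgame: set too low, innocent shoppers are flagged; set too high, repeat offenders walk past unnoticed, which is why the human decision at the shop still matters.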

Rewiring The Web: The future of personal data


Paper by Jon Nash and Charlie Smith: “In this paper, we argue that the widespread use of personal information online represents a fundamental flaw in our digital infrastructure that enables staggeringly high levels of fraud, undermines our right to privacy, and limits competition.

To realise a web fit for the twenty-first century, we need to fundamentally rethink the ways in which we interact with organisations online.

If we are to preserve the founding values of an open, interoperable web in the face of such profound change, we must update the institutions, regulatory regimes, and technologies that make up this network of networks.

Many of the problems we face stem from the vast amounts of personal information that currently flow through the internet—and fixing this fundamental flaw would have a profound effect on the quality of our lives and the workings of the web…(More)”

Detecting Human Rights Violations on Social Media during Russia-Ukraine War


Paper by Poli Nemkova, et al: “The present-day Russia-Ukraine military conflict has exposed the pivotal role of social media in enabling the transparent and unbridled sharing of information directly from the frontlines. In conflict zones where freedom of expression is constrained and information warfare is pervasive, social media has emerged as an indispensable lifeline. Anonymous social media platforms, as publicly available sources for disseminating war-related information, have the potential to serve as effective instruments for monitoring and documenting Human Rights Violations (HRV). Our research focuses on the analysis of data from Telegram, the leading social media platform for reading independent news in post-Soviet regions. We gathered a dataset of posts sampled from 95 public Telegram channels that cover politics and war news, which we have utilized to identify potential occurrences of HRV. Employing an mBERT-based text classifier, we have conducted an analysis to detect any mentions of HRV in the Telegram data. Our final approach yielded an F2 score of 0.71 for HRV detection, an improvement of 0.38 over the multilingual BERT base model. We release two datasets of Telegram posts: (1) a large corpus of over 2.3 million posts and (2) a dataset annotated at the sentence level to indicate HRVs. The Telegram posts are in the context of the Russia-Ukraine war. We posit that our findings hold significant implications for NGOs, governments, and researchers by providing a means to detect and document possible human rights violations…(More)” See also Data for Peace and Humanitarian Response? The Case of the Ukraine-Russia War
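The F2 score reported above is the F-beta measure with beta = 2, which weights recall twice as heavily as precision, a reasonable choice when missing a possible violation costs more than a false alert. A minimal sketch from raw counts (the counts in the comment are illustrative, not the paper’s):

```python
def fbeta_score(tp, fp, fn, beta=2.0):
    """F-beta from raw counts; beta > 1 favors recall over precision.
    F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# e.g. a classifier that finds every true mention but also raises
# false alerts (tp=50, fp=50, fn=0) scores higher under F2 than one
# that is perfectly precise but misses half the mentions.
```

With beta = 2, a recall-heavy classifier outscores a precision-heavy one at the same number of true positives, matching the paper’s framing of HRV detection as a monitoring task.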

The Prediction Society: Algorithms and the Problems of Forecasting the Future


Paper by Hideyuki Matsumi and Daniel J. Solove: “Predictions about the future have been made since the earliest days of humankind, but today, we are living in a brave new world of prediction. Today’s predictions are produced by machine learning algorithms that analyze massive quantities of personal data. Increasingly, important decisions about people are being made based on these predictions.

Algorithmic predictions are a type of inference. Many laws struggle to account for inferences, and even when they do, the laws lump all inferences together. But as we argue in this Article, predictions are different from other inferences. Predictions raise several unique problems that current law is ill-suited to address. First, algorithmic predictions create a fossilization problem because they reinforce patterns in past data and can further solidify bias and inequality from the past. Second, algorithmic predictions often raise an unfalsifiability problem. Predictions involve an assertion about future events. Until these events happen, predictions remain unverifiable, leaving individuals unable to challenge them as false. Third, algorithmic predictions can involve a preemptive intervention problem, where decisions or interventions render it impossible to determine whether the predictions would have come true. Fourth, algorithmic predictions can lead to a self-fulfilling prophecy problem where they actively shape the future they aim to forecast.

More broadly, the rise of algorithmic predictions raises an overarching concern: Algorithmic predictions not only forecast the future but also have the power to create and control it. The increasing pervasiveness of decisions based on algorithmic predictions is leading to a prediction society where individuals’ ability to author their own future is diminished while the organizations developing and using predictive systems are gaining greater power to shape the future…(More)”