Responsible Data for Children Goes Polyglot: New Translations of Principles & Resources Available


Responsible Data for Children Blog: “In 2018, UNICEF and The GovLab launched the Responsible Data for Children (RD4C) initiative with the aim of supporting organisations and practitioners in ensuring that the interest of children is put at the centre of any work involving data for and about them.

Since its inception, the RD4C initiative has aimed to be field-oriented, driven by the needs of both children and practitioners across sectors and contexts. It has done so by ensuring that actors from the data responsibility sphere are informed and engaged on the RD4C work.

We want them to know what responsible data for and about children entails, why it is important, and how they can realize it in their own work.

In this spirit, the RD4C initiative has started translating its resources into different languages. We would like anyone willing to enhance their responsible data handling practices for and about children to be equipped with resources they can understand. As a global effort, we want to guarantee that anyone willing to share their expertise and contribute is given the opportunity to do so.

Importantly, we would like children around the world—including the most marginalised and vulnerable groups—to be aware of what they can expect from organisations handling data for and about them and to have the means to demand and enforce their rights.

Last month, we released the RD4C Video, which is now available in Arabic, French, and Spanish. Soon, the rest of the RD4C resources, such as our principles, tools, and case studies, will be translated as well.”

Democracy Disrupted: Governance in an Increasingly Virtual and Massively Distributed World


Essay by Eric B. Schnurer: “…In short, it is often difficult to see where new technologies actually will lead. The same technological development can, in different settings, have different effects: The use of horses in warfare, which led seemingly inexorably in China and Europe to more centralized and autocratic states, had the effect on the other side of the world of enabling Hernán Cortés, with an army of roughly five hundred Spaniards, to defeat the massed infantries of the highly centralized, autocratic Aztec regime. Cortés’s example demonstrates that a particular technology generally employed by a concentrated power to centralize and dominate can also be used by a small insurgent force to disperse and disrupt (although in Cortés’s case this was on behalf of the eventual imposition of an even more despotic rule).

Regardless of the lack of inherent ideological content in any given technology, however, our technological realities consistently give metaphorical shape to our ideological constructs. In ancient Egypt, the regularity of the Nile’s flood cycle, which formed the society’s economic basis, gave rise to a belief in recurrent cycles of life and death; in contrast, the comparatively harsh and static agricultural patterns of the more-or-less contemporaneous Mesopotamian world produced a society that conceived of gods who simply tormented humans and then relegated them after death to sit forever in a place of dust and silence; meanwhile, the pastoral societies of the Fertile Crescent have handed down to us the vision of God as shepherd of his flock. (The Bible also gives us, in the story of Cain and Abel, a parable of the deadly conflict that technologically driven economic changes wreak: Abel was a traditional pastoralist—he tended sheep—while Cain, who planted seeds in the ground, represented the disruptive “New Economy” of settled agriculture. Tellingly, after killing off the pastoralist, the sedentarian Cain exits to found the first city; see Genesis 4:17.)

As humans developed more advanced technologies, these in turn reshaped our conceptions of the world around us, including the proper social order. Those who possessed superior technological knowledge were invested with supernatural authority: The key to early Rome’s defense was the ability quickly to assemble and disassemble the bridges across the Tiber, so much so that the pontifex maximus—literally the “greatest bridge-builder”—became the high priest, from whose Latin title we derive the term pontiff. The most sophisticated—and arguably most crucial—technology in any town in medieval Europe was its public clock. The clock, in turn, became a metaphor for the mechanical working of the universe—God, in fact, was often conceived of as a clockmaker (a metaphor still frequently invoked to argue against evolution and for the necessity of an intelligent creator)—and for the proper form of social organization: All should know their place and move through time and space as predictably as the figurines making their regular appearances and performing their routinized interactions on the more elaborate and entertaining of these town-square timepieces.

In our own time, the leading technologies continue to provide the organizing concepts for our economic, political, and theological constructs. The factory became such a ubiquitous reflection of economic and social realities that, from the early nineteenth century onward, virtually every social and cultural institution—welfare (the poorhouse, or, as it was often called, the “workhouse”), public safety (the penitentiary), health care (the hospital), mental health (the insane asylum), “workforce” or public housing, even (as teachers often suggest to me) the education system—was consciously remodeled around it. Even when government finally tried to get ahead of the challenges posed by the Industrial Revolution by building the twentieth-century welfare state, it wound up constructing essentially a new capital of the Industrial Age in Washington, DC, with countless New Deal ministries along the Mall—resembling, as much as anything, the rows of factory buildings one can see in the steel and mill towns of the same era.

By the middle of the twentieth century, the atom and the computer came to dominate most intellectual constructs. First, the uncertainty of quantum mechanics upended mechanistic conceptions of social and economic relations, helping to foster conceptions of relativism in everything from moral philosophy to literary criticism. More recently, many scientists have come to the conclusion that the universe amounts to a massive information processor, and popular culture to the conviction that we all simply live inside a giant video game.

In sum, while technological developments are not deterministic—their outcomes being shaped, rather, by the uses we conceive to employ them—our conceptions are largely molded by these dominant technologies and the transformations they effect. (I should note that while this argument is not deterministic, like those of most current thinkers about political and economic development such as Francis Fukuyama, Jared Diamond, and Yuval Noah Harari, neither is it materialistic, like that of Karl Marx. Marx thoroughly rejected human ideas and thinking as movers of history, which he saw as simply shaped and dictated by the technology. I am suggesting instead a dialectic between the ideal and the material. To repeat the metaphor, technological change constitutes the plate tectonics on which human contingencies are then built.) To understand, then, the deeper movements of thought, economic arrangements, and political developments, both historical and contemporary, one must understand the nature of the technologies underlying and driving their unfolding…(More)”.

Crime Prediction Keeps Society Stuck in the Past


Article by Chris Gilliard: “…All of these policing systems operate on the assumption that the past determines the future. In Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition, digital media scholar Wendy Hui Kyong Chun argues that the most common methods used by technologies such as PredPol and Chicago’s heat list to make predictions do nothing of the sort. Rather than anticipating what might happen out of the myriad and unknowable possibilities on which the very idea of a future depends, machine learning and other AI-based methods of statistical correlation “restrict the future to the past.” In other words, these systems prevent the future in order to “predict” it—they ensure that the future will be just the same as the past was.

“If the captured and curated past is racist and sexist,” Chun writes, “these algorithms and models will only be verified as correct if they make sexist and racist predictions.” This is partly a description of the familiar garbage-in/garbage-out problem with all data analytics, but it’s something more: Ironically, the putatively “unbiased” technology sold to us by promoters is said to “work” precisely when it tells us that what is contingent in history is in fact inevitable and immutable. Rather than helping us to manage social problems like racism as we move forward, as the McDaniel case shows in microcosm, these systems demand that society not change, that things that we should try to fix instead must stay exactly as they are.
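
To make the feedback dynamic concrete, here is a toy sketch in Python. It is not PredPol’s, Chicago’s, or any vendor’s actual model; the rates, patrol shares, and allocation rule are invented for illustration. It simply shows how a forecast built only on recorded arrests, which themselves depend on where patrols were sent, keeps reproducing the original skew even when underlying crime rates are identical.

```python
# Toy illustration of "restricting the future to the past" (hypothetical model,
# not any deployed predictive-policing system).
import numpy as np

rng = np.random.default_rng(0)

n_neighborhoods = 4
true_crime_rate = np.full(n_neighborhoods, 0.10)        # identical underlying rates
past_patrol_share = np.array([0.55, 0.25, 0.15, 0.05])  # historically skewed patrols

patrols_per_round = 1000
arrests = np.zeros(n_neighborhoods)

for _ in range(20):
    # "Prediction": allocate patrols in proportion to past arrests (seeded by the
    # historical skew), i.e., the captured past fully determines the forecast.
    weights = arrests + past_patrol_share * patrols_per_round
    allocation = weights / weights.sum()
    patrols = rng.multinomial(patrols_per_round, allocation)
    # Arrests depend on where patrols go, not only on underlying crime.
    arrests += rng.binomial(patrols, true_crime_rate)

print("Final patrol allocation:", np.round(allocation, 2))
# Despite identical true crime rates, the allocation stays close to the original
# 55/25/15/5 split: the "predicted" future mirrors the recorded past.
```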

It’s a rather glaring observation that predictive policing tools are rarely if ever (with the possible exception of the parody “White Collar Crime Risk Zones” project) focused on wage theft or various white collar crimes, even though the dollar value of those offenses outstrips that of property crimes by several orders of magnitude. This gap exists because of how crime is framed in the popular imagination. For instance, news reports in recent weeks bludgeoned readers with reports of a so-called “crime wave” of shoplifting at high-end stores. Yet just this past February, Amazon agreed to pay regulators a whopping $61.7 million, the amount the FTC says the company shorted drivers in a two-and-a-half-year period. That story received a fraction of the coverage, and aside from the fine, there will be no additional charges.

The algorithmic crystal ball that promises to predict and forestall future crimes works from a fixed notion of what a criminal is, where crimes occur, and how they are prosecuted (if at all). Those parameters depend entirely on the power structure empowered to formulate them—and very often the explicit goal of those structures is to maintain existing racial and wealth hierarchies. This is the same set of carceral logics that allow the placement of children into gang databases, or the development of a computational tool to forecast which children will become criminals. The process of predicting the lives of children is about cementing existing realities rather than changing them. Entering children into a carceral ranking system is in itself an act of violence, but as in the case of McDaniel, it also nearly guarantees that the system that sees them as potential criminals will continue to enact violence on them throughout their lifetimes…(More)”.

The Privacy Elasticity of Behavior: Conceptualization and Application


Paper by Inbal Dekel, Rachel Cummings, Ori Heffetz & Katrina Ligett: “We propose and initiate the study of privacy elasticity—the responsiveness of economic variables to small changes in the level of privacy given to participants in an economic system. Individuals rarely experience either full privacy or a complete lack of privacy; we propose to use differential privacy—a computer-science theory increasingly adopted by industry and government—as a standardized means of quantifying continuous privacy changes. The resulting privacy measure implies a privacy-elasticity notion that is portable and comparable across contexts. We demonstrate the feasibility of this approach by estimating the privacy elasticity of public-good contributions in a lab experiment…(More)”.
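
For readers unfamiliar with the underlying formalism, here is a minimal sketch using standard definitions; the paper’s own notation and estimator may differ. ε-differential privacy bounds how much any one participant’s data can shift a mechanism’s output distribution (smaller ε means stronger privacy), and a privacy elasticity of some behavior y, such as public-good contributions, could take the familiar log-derivative form:

```latex
% A sketch with standard definitions, not necessarily the paper's formalization.
% \varepsilon-differential privacy: a mechanism M satisfies, for all neighboring
% datasets D, D' and all output sets S,
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S].
\]
% A privacy elasticity of a behavior y with respect to the privacy parameter
% \varepsilon (smaller \varepsilon = more privacy) could then be written as
\[
  \eta_{y,\varepsilon} \;=\; \frac{\partial \ln y}{\partial \ln \varepsilon}
  \;=\; \frac{\varepsilon}{y}\,\frac{\partial y}{\partial \varepsilon}.
\]
```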

Responsible by Design – Principles for the ethical use of behavioural science in government


OECD Report: “The use of behavioural insights (BI) in public policy has grown over the last decade, with the largest increase in new behavioural teams emerging in the last five years. More and more governments are turning to behavioural science – a multidisciplinary approach to policy making encompassing lessons from psychology, cognitive science, neuroscience, anthropology, economics and more. There is a wide variety of frameworks and resources currently available, such as the OECD BASIC framework, designed to help BI practitioners and government officials infuse behavioural science throughout the policy cycle.

Despite the availability of such frameworks, there are fewer resources whose primary purpose is to safeguard the responsible use of behavioural science in government. Oftentimes, teams are left to establish their own ethical standards and practices, which has resulted in an uncoordinated mosaic of procedures guiding the international community interested in upholding ethical behavioural practices. Until now, few attempts have been made to standardize ethical principles for behavioural science in public policy, and to concisely gather and present international best practices.

In light of this, we developed the first-of-its-kind Good Practice Principles for the Ethical Use of Behavioural Science in Public Policy to advance the responsible use of BI in government…(More)”.

Social Media, Freedom of Speech, and the Future of our Democracy


Book edited by Lee C. Bollinger and Geoffrey R. Stone: “One of the most fiercely debated issues of this era is what to do about “bad” speech (hate speech, disinformation and propaganda campaigns, and incitement of violence) on the internet, and in particular speech on social media platforms such as Facebook and Twitter. In Social Media, Freedom of Speech, and the Future of our Democracy, Lee C. Bollinger and Geoffrey R. Stone have gathered an eminent cast of contributors—including Hillary Clinton, Amy Klobuchar, Sheldon Whitehouse, Mark Warner, Newt Minow, Tim Wu, Cass Sunstein, Jack Balkin, Emily Bazelon, and others—to explore the various dimensions of this problem in the American context. They stress how difficult it is to develop remedies given that some of these forms of “bad” speech are ordinarily protected by the First Amendment. Bollinger and Stone argue that it is important to remember that the last time we encountered major new communications technology (television and radio), we established a federal agency to provide oversight and to issue regulations to protect and promote “the public interest.” Featuring a variety of perspectives from some of America’s leading experts on this hotly contested issue, this volume offers new insights for the future of free speech in the social media era…(More)”.

Roe’s overturn is tech’s privacy apocalypse


Scott Rosenberg at Axios: “America’s new abortion reality is turning tech firms’ data practices into an active field of conflict — a fight that privacy advocates have long predicted and company leaders have long feared.

Why it matters: A long legal siege in which abortion-banning states battle tech companies, abortion-friendly states and their own citizens to gather criminal evidence is now a near certainty.

  • The once-abstract privacy argument among policy experts has transformed overnight into a concrete real-world problem, superheated by partisan anger, affecting vast swaths of the U.S. population, with tangible and easily understood consequences.

Driving the news: Google announced Friday a new program to automatically delete the location data of users who visit “particularly personal” locations like “counseling centers, domestic violence shelters, abortion clinics, fertility centers, addiction treatment facilities, weight loss clinics, cosmetic surgery clinics, and others.”

  • Google tracks the location of any user who turns on its “location services” — a choice that’s required to make many of its programs, like Google Search and Maps, more useful.
  • That tracking happens even when you’re logged into non-location-related Google services like YouTube, since Google long ago unified all its accounts.

Between the lines: Google’s move won cautious applause but left plenty of open concerns.

  • It’s not clear how, and how reliably, Google will identify the locations that trigger automatic data deletion.
  • The company will not delete search requests automatically — users who want that protection will have to delete those searches themselves.
  • A sudden gap in location data could itself be used as evidence in court…(More)”.

How Does the Public Sector Identify Problems It Tries to Solve with AI?


Article by Maia Levy Daniel: “A correct analysis of the implementation of AI in a particular field or process needs to start by identifying if there actually is a problem to be solved. For instance, in the case of job matching, the problem would be related to the levels of unemployment in the country, and presumably addressing imbalances in specific fields. Then, would AI be the best way to address this specific problem? Are there any alternatives? Is there any evidence that shows that AI would be a better tool? Building AI systems is expensive and the funds being used by the public sector come from taxpayers. Are there any alternatives that could be less expensive? 

Moreover, governments must understand from the outset that these systems could involve potential risks for civil and human rights. Thus, it should be justified in detail why the government might be choosing a more expensive or riskier option. A potential guide to follow is the one developed by the UK’s Office for Artificial Intelligence on how to use AI in the public sector. This guide includes a section specifically devoted to how to assess whether AI is the right solution to a problem.

AI is such a buzzword that it has become appealing for governments to use as a solution to any public problem, without even starting to look for available alternatives. Although automation could accelerate decision-making processes, speed should not be prioritized over quality or over human rights protection. As Daniel Susser argues in his recent paper, the speed at which automated decisions are reached has normative implications. Incorporating digital technologies into decision-making processes affects the temporal norms and values that govern those processes, disrupting prior norms, re-calibrating balanced trade-offs, or displacing automation’s costs. As Susser suggests, speed is not necessarily bad; however, “using computational tools to speed up (or slow down) certain decisions is not a ‘neutral’ adjustment without further explanations.”

So, conducting a thorough diagnosis including the identification of the specific problem to address and the best way to address it is key to protecting citizens’ rights. And this is why transparency must be mandatory. As citizens, we have a right to know how these processes are being conceived and designed, the reasons governments choose to implement technologies, as well as the risks involved.

In addition, maybe a good way to ultimately approach the systemic problem and change the structure of incentives is to stop using the pretentious terms “artificial intelligence”, “AI”, and “machine learning”, as Emily Tucker, the Executive Director of the Center on Privacy & Technology at Georgetown Law Center, announced the Center would do. As Tucker explained, these terms are confusing for the average person, and the way they are typically employed makes us think it’s a machine rather than human beings making the decisions. By removing marketing terms from the equation and giving more visibility to the humans involved, these technologies may not ultimately seem so exotic…(More)”.

How China uses search engines to spread propaganda


Blog by Jessica Brandt and Valerie Wirtschafter: “Users come to search engines seeking honest answers to their queries. On a wide range of issues—from personal health, to finance, to news—search engines are often the first stop for those looking to get information online. But as authoritarian states like China increasingly use online platforms to disseminate narratives aimed at weakening their democratic competitors, these search engines represent a crucial battleground in their information war with rivals. For Beijing, search engines represent a key—and underappreciated—vector for spreading propaganda to audiences around the world.

On a range of topics of geopolitical importance, Beijing has exploited search engine results to disseminate state-backed media that amplify the Chinese Communist Party’s propaganda. As we demonstrate in our recent report, published by the Brookings Institution in collaboration with the German Marshall Fund’s Alliance for Securing Democracy, users turning to search engines for information on Xinjiang, the site of the CCP’s egregious human rights abuses of the region’s Uyghur minority, or the origins of the coronavirus pandemic are surprisingly likely to encounter articles on these topics published by Chinese state-media outlets. By prominently surfacing this type of content, search engines may play a key role in Beijing’s effort to shape external perceptions, which makes it crucial that platforms—along with authoritative outlets that syndicate state-backed content without clear labeling—do more to address their role in spreading these narratives…(More)“.
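
As an illustration of the kind of measurement involved, here is a hypothetical sketch, not the report’s actual methodology; the domain list and helper function are invented for the example. One way to gauge how prominently state-backed outlets surface is to compute their share among a set of collected result URLs:

```python
# Hypothetical sketch: share of known state-media domains among search result URLs.
from urllib.parse import urlparse

# Illustrative (incomplete) set of domains operated by Chinese state media.
STATE_MEDIA_DOMAINS = {"cgtn.com", "globaltimes.cn", "chinadaily.com.cn", "xinhuanet.com"}

def state_media_share(result_urls):
    """Return the fraction of result URLs whose host is a state-media domain."""
    if not result_urls:
        return 0.0
    hits = 0
    for url in result_urls:
        host = urlparse(url).netloc.lower()
        # Match the bare domain or any subdomain of it.
        if any(host == d or host.endswith("." + d) for d in STATE_MEDIA_DOMAINS):
            hits += 1
    return hits / len(result_urls)

# Example: top results collected (by whatever means) for a query about Xinjiang.
sample_results = [
    "https://www.globaltimes.cn/page/202105/example.html",
    "https://news.cgtn.com/news/example",
    "https://www.example-news.org/xinjiang-report",
]
print(f"State-media share: {state_media_share(sample_results):.0%}")  # prints 67%
```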

Moral Expansiveness Around the World: The Role of Societal Factors Across 36 Countries


Paper by Kelly Kirkland et al: “What are the things that we think matter morally, and how do societal factors influence this? To date, research has explored several individual-level and historical factors that influence the size of our ‘moral circles.’ There has, however, been less attention focused on which societal factors play a role. We present the first multi-national exploration of moral expansiveness—that is, the size of people’s moral circles—across countries. We found that low generalized trust, greater perceptions of a breakdown in the social fabric of society, and greater perceived economic inequality were associated with smaller moral circles. Generalized trust also helped explain the effects of perceived inequality on lower levels of moral inclusiveness. Other inequality indicators (i.e., Gini coefficients) were, however, unrelated to moral expansiveness. These findings suggest that societal factors, especially those associated with generalized trust, may influence the size of our moral circles…(More)”.