A Study of the Implications of Advanced Digital Technologies (Including AI Systems) for the Concept of Responsibility Within a Human Rights Framework


Report by Karen Yeung: “This study was commissioned by the Council of Europe’s Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT). It was prompted by concerns about the potential adverse consequences of advanced digital technologies (including artificial intelligence (‘AI’)), particularly their impact on the enjoyment of human rights and fundamental freedoms. This draft report seeks to examine the implications of these technologies for the concept of responsibility, and this includes investigating where responsibility should lie for their adverse consequences. In so doing, it seeks to understand (a) how human rights and fundamental freedoms protected under the ECHR may be adversely affected by the development of AI technologies and (b) how responsibility for those risks and consequences should be allocated. 

Its methodological approach is interdisciplinary, drawing on concepts and academic scholarship from the humanities, the social sciences and, to a more limited extent, from computer science. It concludes that, if we are to take human rights seriously in a hyperconnected digital age, we cannot allow the power of our advanced digital technologies and systems, and those who develop and implement them, to be accrued and exercised without responsibility. Nations committed to protecting human rights must therefore ensure that those who wield and derive benefits from developing and deploying these technologies are held responsible for their risks and consequences. This includes obligations to ensure that there are effective and legitimate mechanisms that will operate to prevent and forestall violations of human rights which these technologies may threaten, and to attend to the health of the larger collective and shared socio-technical environment in which human rights and the rule of law are anchored….(More)”.

Political Selection and Bureaucratic Productivity


Paper by James P. Habyarimana et al: “Economic theory of public bureaucracies as complex organizations predicts that bureaucratic productivity can be shaped by the selection of different types of agents, beyond their incentives. This theory applies to the institutions of local government in the developing world, where nationally appointed bureaucrats and locally elected politicians together manage the implementation of public policies and the delivery of services. Yet there is no evidence on whether (and which) selection traits of these bureaucrats and politicians matter for the productivity of local bureaucracies.

This paper addresses the empirical gap by gathering rich data in an institutional context of district governments in Uganda, which is typical of the local state in poor countries. The paper measures traits such as the integrity, altruism, personality, and public service motivation of bureaucrats and politicians. It finds robust evidence that higher integrity among locally elected politicians is associated with substantively better delivery of public health services by district bureaucracies. Together with the theory, this evidence suggests that policy makers seeking to build local state capacity in poor countries should take political selection seriously….(More)”.

Societal costs and benefits of high-value open government data: a case study in the Netherlands


Paper by F.M. Welle Donker and B. van Loenen: “Much research has emphasised the benefits of open government data, and especially high-value data. The G8 Open Data Charter defines high-value data as data that improve democracy and encourage the innovative reuse of the particular data. Thus, governments worldwide invest resources to identify potential high-value datasets and to publish these data as open data. However, while the benefits of open data are well researched, the costs of publishing data as open data are less researched. This research examines the relationship between the costs of making data suitable for publication as (linked) open data and the societal benefits thereof. A case study of five high-value datasets was carried out in the Netherlands to provide a societal cost-benefit analysis of open high-value data. Different options were investigated, ranging from not publishing the dataset at all to publishing the dataset as linked open data.

In general, it can be concluded that the societal benefits of (linked) open data are higher than the costs. The case studies show that there are differences between the datasets. In many cases, costs for open data are an integral part of general data management costs and hardly lead to additional costs. In certain cases, however, the costs to anonymize/aggregate the data are high compared to the potential value of an open data version of the dataset. Although, for these datasets, this leads to a less favourable relationship between costs and benefits, the societal benefits would still be higher than without an open data version….(More)”.

Defining subnational open government: does local context influence policy and practice?


M. Chatwin, G. Arku and E. Cleave in Policy Sciences: “What is open government? The contemporary conceptualization of open government remains rooted in transparency and accountability, but it is embedded within the political economy of policy, where forces of globalization through supranational organizations strongly influence the creation and dispersion of policy across the globe. Recognizing the direct impact of subnational governments on residents, in 2016 the Open Government Partnership (OGP) launched the Subnational Pioneer’s Pilot Project with 15 participating government authorities globally. Each subnational participant submitted an action plan for opening their government information and processes in 2017. The uniformity of the OGP action plan provides a unique opportunity to assess the conception of open government at the subnational level globally. This paper uses a document analysis to examine how open government is conceptualized at the subnational level, including the salience of various components, and how local context can influence the development of action plans that are responsive to the realities of each participating jurisdiction. This paper assesses whether being a part of the political economy of policy homogenizes the action plans of 15 subnational governments or allows for local context to influence the design of commitments still aligned within a general theme….(More)”.

Trusting Nudges: Toward A Bill of Rights for Nudging


Book by Cass R. Sunstein and Lucia A. Reisch: “Many “nudges” aim to make life simpler, safer, or easier for people to navigate, but what do members of the public really think about these policies? Drawing on surveys from numerous nations around the world, Sunstein and Reisch explore whether citizens approve of nudge policies. Their most important finding is simple and striking. In diverse countries, both democratic and nondemocratic, strong majorities approve of nudges designed to promote health, safety, and environmental protection—and their approval cuts across political divisions.

In recent years, many governments have implemented behaviorally informed policies, focusing on nudges—understood as interventions that preserve freedom of choice, but that also steer people in certain directions. In some circles, nudges have become controversial, with questions raised about whether they amount to forms of manipulation. This fascinating book carefully considers these criticisms and answers important questions. What do citizens actually think about behaviorally informed policies? Do citizens have identifiable principles in mind when they approve or disapprove of the policies? Do citizens of different nations agree with each other?

From the answers to these questions, the authors identify six principles of legitimacy—a “bill of rights” for nudging that build on strong public support for nudging policies around the world, while also recognizing what citizens disapprove of. Their bill of rights is designed to capture citizens’ central concerns, reflecting widespread commitments to freedom and welfare that transcend national boundaries….(More)”.

Agile research


Michael Twidale and Preben Hansen at First Monday: “Most of us struggle when starting a new research project, even if we have considerable prior experience. It is a new topic and we are unsure about what to do, how to do it and what it all means. We may not have reflected much on our research process. Furthermore the way that research is described in the literature can be rather disheartening. Those papers describe what seems to be a nice, clear, linear, logical, even inevitable progression through a series of stages. It seems like proper researchers carefully plan everything out in advance and then execute that plan. How very different from the mess, the bewilderment, the false starts, the dead ends, the reversions and changes that we make along the way. Are we just doing research wrong? If it feels like that to established researchers with decades of experience and a nice publication record, how much worse must it feel to a new researcher, such as a Ph.D. student? If they are lucky they may have a good mentoring experience, effectively serving an apprenticeship with a wise and nurturing adviser in a supportive group of fellow researchers. Even so, it can be all too easy to feel like an imposter who must be doing it all wrong because what you are doing is not at all like what you read about what others are doing.

In the light of these confusions, fears, doubts and mismatches with what you experience while doing research and what you think is the right and proper way as alluded to in all the papers you read, we want to explore ideas around a title, or at least a provocative metaphor of “agile research”. We want to ask the question: “how might we take the ideas, the methods and the underlying philosophy behind agile software development and explore how these might be applied in the context of doing research?” This paper is not about sharing a set of methods that we have developed but more about provoking a discussion about the issue: What might agile research be like? How might it work? When might it be useful? When might it be problematic? Is it worth trying? Are people doing it already?

We are not claiming that this idea is wholly new. Many people have been using small scale rapid iterative methods within the research process for a long time. Rather we think that it can be useful to consider all these and other possible methods in the light of the successful deployment of agile software development processes, and to contrast them with more conventional research processes that rely more on careful advance planning. That is not to say that the latter methods are bad, just that other methods that might be characterized as more agile can be useful in particular circumstances.

We believe that it is worth exploring this idea as a way of addressing the problems that arise in trying to do a new research project, especially where an exploratory approach is useful. This could be in a domain that is new to the researcher, or where the domain is new in some way, such as involving new use contexts, new ways of interacting, new technologies, novel technology combinations, or new appropriations of existing technologies. We suspect this may be especially useful in helping new researchers such as PhD students get a better understanding of the research process in a less daunting manner. This work builds on prior thinking about how agile may be applied in university teaching and administration (Twidale and Nichols, 2013)….(More)”.

Los Angeles Accuses Weather Channel App of Covertly Mining User Data


Jennifer Valentino-DeVries and Natasha Singer in The New York Times: “The Weather Channel app deceptively collected, shared and profited from the location information of millions of American consumers, the city attorney of Los Angeles said in a lawsuit filed on Thursday.

One of the most popular online weather services in the United States, the Weather Channel app has been downloaded more than 100 million times and has 45 million active users monthly.

The government said the Weather Company, the business behind the app, unfairly manipulated users into turning on location tracking by implying that the information would be used only to localize weather reports. Yet the company, which is owned by IBM, also used the data for unrelated commercial purposes, like targeted marketing and analysis for hedge funds, according to the lawsuit.

In the complaint, the city attorney excoriated the Weather Company, saying it unfairly took advantage of its app’s popularity and the fact that consumers were likely to give their location data to get local weather alerts. The city said that the company failed to sufficiently disclose its data practices when it got users’ permission to track their location and that it obscured other tracking details in its privacy policy.

“These issues certainly aren’t limited to our state,” said Mike Feuer, the Los Angeles city attorney. “Ideally this litigation will be the catalyst for other action — either litigation or legislative activity — to protect consumers’ ability to assure their private information remains just that, unless they speak clearly in advance.”…(More)”.

Can a set of equations keep U.S. census data private?


Jeffrey Mervis at Science: “The U.S. Census Bureau is making waves among social scientists with what it calls a “sea change” in how it plans to safeguard the confidentiality of data it releases from the decennial census.

The agency announced in September 2018 that it will apply a mathematical concept called differential privacy to its release of 2020 census data after conducting experiments that suggest current approaches can’t assure confidentiality. But critics of the new policy believe the Census Bureau is moving too quickly to fix a system that isn’t broken. They also fear the changes will degrade the quality of the information used by thousands of researchers, businesses, and government agencies.

The move has implications that extend far beyond the research community. Proponents of differential privacy say a fierce, ongoing legal battle over plans to add a citizenship question to the 2020 census has only underscored the need to assure people that the government will protect their privacy....

Differential privacy, first described in 2006, isn’t a substitute for swapping and other ways to perturb the data. Rather, it allows someone—in this case, the Census Bureau—to measure the likelihood that enough information will “leak” from a public data set to open the door to reconstruction.

“Any time you release a statistic, you’re leaking something,” explains Jerry Reiter, a professor of statistics at Duke University in Durham, North Carolina, who has worked on differential privacy as a consultant with the Census Bureau. “The only way to absolutely ensure confidentiality is to release no data. So the question is, how much risk is OK? Differential privacy allows you to put a boundary” on that risk....
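As a rough illustration of how that boundary works, the sketch below applies the textbook Laplace mechanism to a single count query. It is a minimal example under simplifying assumptions, not the Census Bureau’s production algorithm: the privacy parameter epsilon caps how much any one person’s record can shift the distribution of the published statistic, and smaller values of epsilon mean more noise and a tighter bound on leakage.

```python
import numpy as np

def dp_count(values, predicate, epsilon, rng=None):
    """Return an epsilon-differentially private count of records matching `predicate`.

    Adding or removing one record changes the true count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon suffices.
    Smaller epsilon -> more noise -> stronger privacy guarantee.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical block-level query: how many respondents are over 55?
ages = [23, 41, 55, 62, 70, 34, 58]
print(dp_count(ages, lambda a: a > 55, epsilon=0.5))
```

The trade-off Reiter describes is visible in the scale parameter: publishing more statistics, or publishing them with less noise, consumes more of the overall privacy budget.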

In the case of census data, however, the agency has already decided what information it will release, and the number of queries is unlimited. So its challenge is to calculate how much the data must be perturbed to prevent reconstruction....

A professor of labor economics at Cornell University, John Abowd first learned that traditional procedures to limit disclosure were vulnerable—and that algorithms existed to quantify the risk—at a 2005 conference on privacy attended mainly by cryptographers and computer scientists. “We were speaking different languages, and there was no Rosetta Stone,” he says.

He took on the challenge of finding common ground. In 2008, building on a long relationship with the Census Bureau, he and a team at Cornell created the first application of differential privacy to a census product. It is a web-based tool, called OnTheMap, that shows where people work and live….

The three-step process required substantial computing power. First, the researchers reconstructed records for individuals—say, a 55-year-old Hispanic woman—by mining the aggregated census tables. Then, they tried to match the reconstructed individuals to even more detailed census block records (that still lacked names or addresses); they found “putative matches” about half the time.

Finally, they compared the putative matches to commercially available credit databases in hopes of attaching a name to a particular record. Even if they could, however, the team didn’t know whether they had actually found the right person.
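To make the first step concrete, here is a toy sketch (hypothetical numbers, not the researchers’ method or data) of how individual records can be recovered from nothing but published aggregate tables when a block is small: the attacker enumerates every possible combination of records and keeps only those consistent with all the released counts.

```python
from itertools import combinations_with_replacement, product

# Hypothetical published tables for a tiny census block of 3 people
# (aggregate counts only, no individual records released).
age_counts = {"under_40": 1, "40_plus": 2}
ethnicity_counts = {"hispanic": 2, "non_hispanic": 1}
block_size = 3

ages = list(age_counts)
ethnicities = list(ethnicity_counts)

# Enumerate every multiset of (age, ethnicity) records of the right size
# and keep only those whose marginals reproduce both published tables.
matches = []
for records in combinations_with_replacement(product(ages, ethnicities), block_size):
    age_marginal = {a: sum(1 for r in records if r[0] == a) for a in ages}
    eth_marginal = {e: sum(1 for r in records if r[1] == e) for e in ethnicities}
    if age_marginal == age_counts and eth_marginal == ethnicity_counts:
        matches.append(records)

print(f"{len(matches)} candidate record set(s) are consistent with the aggregates:")
for m in matches:
    print(m)
```

Even this two-table example narrows twenty possible record sets down to a couple of candidates; real census releases publish many more cross-tabulations per block, which is what lets an attacker pin records down precisely enough to attempt the matching and re-identification steps described above.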

Abowd won’t say what proportion of the putative matches appeared to be correct. (He says a forthcoming paper will contain the ratio, which he calls “the amount of uncertainty an attacker would have once they claim to have reidentified a person from the public data.”) Although one of Abowd’s recent papers notes that “the risk of re-identification is small,” he believes the experiment proved reidentification “can be done.” And that, he says, “is a strong motivation for moving to differential privacy.”…

Such arguments haven’t convinced Steven Ruggles and other social scientists opposed to applying differential privacy to the 2020 census. They are circulating manuscripts that question the significance of the census reconstruction exercise and that call on the agency to delay and change its plan....

Ruggles, meanwhile, has spent a lot of time thinking about the kinds of problems differential privacy might create. His Minnesota institute, for instance, disseminates data from the Census Bureau and 105 other national statistical agencies to 176,000 users. And he fears differential privacy will put a serious crimp in that flow of information…

There are also questions of capacity and accessibility. The bureau’s secure research data centers require users to do all their work onsite, so researchers would have to travel, and the centers offer fewer than 300 workstations in total....

Abowd has said, “The deployment of differential privacy within the Census Bureau marks a sea change for the way that official statistics are produced and published.” And Ruggles agrees. But he says the agency hasn’t done enough to equip researchers with the maps and tools needed to navigate the uncharted waters….(More)”.

The Paradox of Police Data


Stacy Wood in KULA: knowledge creation, dissemination, and preservation studies: “This paper considers the history and politics of ‘police data.’ Police data, I contend, is a category of endangered data reliant on voluntary and inconsistent reporting by law enforcement agencies; it is also inconsistently described and routinely housed in systems that were not designed with long-term strategies for data preservation, curation or management in mind. Moreover, whereas US law enforcement agencies have, for over a century, produced and published a great deal of data about crime, data about the ways in which police officers spend their time and make decisions about resources—as well as information about patterns of individual officer behavior, use of force, and in-custody deaths—is difficult to find. This presents a paradoxical situation wherein vast stores of extant data are completely inaccessible to the public. This paradoxical state is not new, but the continuation of a long history co-constituted by technologies, epistemologies and context….(More)”.

Data Policy in the Fourth Industrial Revolution: Insights on personal data


Report by the World Economic Forum: “Development of comprehensive data policy necessarily involves trade-offs. Cross-border data flows are crucial to the digital economy. The use of data is critical to innovation and technology. However, to engender trust, we need to have appropriate levels of protection in place to ensure privacy, security and safety. Over 120 laws in effect across the globe today provide differing levels of protection for data, but few anticipated…

Data Policy in the Fourth Industrial Revolution: Insights on personal data, a paper by the World Economic Forum in collaboration with the Ministry of Cabinet Affairs and the Future, United Arab Emirates, examines the relationship between risk and benefit, recognizing the impact of culture, values and social norms. This work is a start toward developing a comprehensive data policy toolkit and knowledge repository of case studies for policy makers and data policy leaders globally….(More)”.