Using Data Sharing Agreements as Tools of Indigenous Data Governance: Current Uses and Future Options


Paper by Martinez, A. and Rainie, S. C.: “Indigenous communities and scholars have been influencing a shift in participation and inclusion in academic and agency research over the past two decades. As a response, Indigenous peoples are increasingly asking research questions and developing their own studies rooted in their cultural values. They use the study results to rebuild their communities and to protect their lands. This process of Indigenous-driven research has led to partnering with academic institutions, establishing research review boards, and entering into data sharing agreements to protect environmental data, community information, and local and traditional knowledges.

Data sharing agreements provide insight into how Indigenous nations are addressing the key areas of data collection, ownership, application, storage, and the potential for data reuse in the future. By understanding this mainstream data governance mechanism and how it has been applied in the past, we aim to describe how Indigenous nations and communities negotiate data protection and control with researchers.

The project described here reviewed publicly available data sharing agreements that focus on research with Indigenous nations and communities in the United States. We utilized qualitative analysis methods to identify specific areas of focus in the data sharing agreements, whether or not traditional or cultural values were included in the language of the data sharing agreements, and how the agreements defined data. The results detail how Indigenous peoples currently use data sharing agreements and potential areas of expansion for language to include in data sharing agreements as Indigenous peoples address the research needs of their communities and the protection of community and cultural data….(More)”.

Rescuing Human Rights: A Radically Moderate Approach


Book by Hurst Hannum: “The development of human rights norms is one of the most significant achievements in international relations and law since 1945, but the continuing influence of human rights is increasingly being questioned by authoritarian governments, nationalists, and pundits. Unfortunately, the proliferation of new rights, linking rights to other issues such as international crimes or the activities of business, and attempting to address every social problem from a human rights perspective risk undermining their credibility.

Rescuing Human Rights calls for understanding ‘human rights’ as international human rights law and maintaining the distinctions between binding legal obligations on governments and broader issues of ethics, politics, and social change. Resolving complex social problems requires more than simplistic appeals to rights, and adopting a ‘radically moderate’ approach that recognizes both the potential and the limits of international human rights law, offers the best hope of preserving the principle that we all have rights, simply because we are human….(More)”.

Shutting down the internet doesn’t work – but governments keep doing it


George Ogola in The Conversation: “As the internet continues to gain considerable power and agency around the world, many governments have moved to regulate it. And where regulation fails, some states resort to internet shutdowns or deliberate disruptions.

The statistics are staggering. In India alone, there were 154 internet shutdowns between January 2016 and May 2018. This is the most of any country in the world.

But similar shutdowns are becoming common on the African continent. Already in 2019 there have been shutdowns in Cameroon, the Democratic Republic of Congo, Republic of Congo, Chad, Sudan and Zimbabwe. Last year there were 21 such shutdowns on the continent. This was the case in Togo, Sierra Leone, Sudan and Ethiopia, among others.

The justifications for such shutdowns are usually relatively predictable. Governments often claim that internet access is blocked in the interest of public security and order. In some instances, however, their reasoning borders on the curious, if not the downright absurd, as in the case of Ethiopia in 2017 and Algeria in 2018, when the internet was shut down apparently to curb cheating in national examinations.

Whatever their reasons, governments have three general approaches to controlling citizens’ access to the web.

How they do it

Internet shutdowns or disruptions usually take three forms. The first and probably the most serious is where the state completely blocks access to the internet on all platforms. It’s arguably the most punitive, with significant socioeconomic and political costs.

The financial costs can run into millions of dollars for each day the internet is blocked. A Deloitte report on the issue estimates that a country with average connectivity could lose at least 1.9% of its daily GDP for each day all internet services are shut down.

For countries with average to medium level connectivity the loss is 1% of daily GDP, and for countries with average to low connectivity it’s 0.4%. It’s estimated that Ethiopia, for example, could lose up to US$500,000 a day whenever there is a shutdown. These shutdowns, then, damage businesses, discourage investments, and hinder economic growth.
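The Deloitte figures above amount to a simple per-day calculation: a country’s daily GDP multiplied by the loss rate for its connectivity tier. A minimal back-of-envelope sketch (the percentages are taken from the report as quoted above; the tier names and the GDP figure are illustrative assumptions, not from the report):

```python
# Back-of-envelope estimate of the cost of one day of total internet shutdown,
# using the per-day loss rates quoted from the Deloitte report above.
LOSS_RATE = {
    "high": 0.019,    # high connectivity: ~1.9% of daily GDP per shutdown day
    "medium": 0.010,  # medium connectivity: ~1.0%
    "low": 0.004,     # low connectivity: ~0.4%
}

def daily_shutdown_cost(annual_gdp_usd: float, tier: str) -> float:
    """Estimated loss, in US dollars, for each day all internet services are down."""
    daily_gdp = annual_gdp_usd / 365
    return daily_gdp * LOSS_RATE[tier]

# Hypothetical example: an $80 billion economy with low connectivity.
print(f"${daily_shutdown_cost(80e9, 'low'):,.0f} per day")
```

On these assumptions the hypothetical economy loses a little under a million dollars per shutdown day, which is the same order of magnitude as the Ethiopia estimate cited above.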

The second way that governments restrict internet access is by applying content blocking techniques. They restrict access to particular sites or applications. This is the most common strategy and it’s usually targeted at social media platforms. The idea is to stop or limit conversations on these platforms.

Online spaces have become the platform for various forms of political expression that many states, especially those with authoritarian leanings, consider subversive. Governments argue, for example, that social media platforms encourage the spread of rumours which can trigger public unrest.

This was the case in 2016 in Uganda during the country’s presidential elections. The government restricted access to social media, describing the shutdown as a “security measure to avert lies … intended to incite violence and illegal declaration of election results”.

In Zimbabwe, the government blocked social media following demonstrations over an increase in fuel prices. It argued that the January 2019 ban was because the platforms were being “used to coordinate the violence”.

The third strategy, done almost by stealth, is the use of what is generally known as “bandwidth throttling”. In this case telecom operators or internet service providers are forced to lower the quality of their cell signals or internet speed. This makes the internet too slow to use. “Throttling” can also target particular online destinations such as social media sites….(More)”

Mapping the challenges and opportunities of artificial intelligence for the conduct of diplomacy


DiploFoundation: “This report provides an overview of the evolution of diplomacy in the context of artificial intelligence (AI). AI has emerged as a very hot topic on the international agenda impacting numerous aspects of our political, social, and economic lives. It is clear that AI will remain a permanent feature of international debates and will continue to shape societies and international relations.

It is impossible to ignore the challenges – and opportunities – AI is bringing to the diplomatic realm. Its relevance as a topic for diplomats and others working in international relations will only increase….(More)”.

Research Handbook on Human Rights and Digital Technology


Book edited by Ben Wagner, Matthias C. Kettemann and Kilian Vieth: “In a digitally connected world, the question of how to respect, protect and implement human rights has become unavoidable. This contemporary Research Handbook offers new insights into well-established debates by framing them in terms of human rights. It examines the issues posed by the management of key Internet resources, the governance of its architecture, the role of different stakeholders, the legitimacy of rule making and rule-enforcement, and the exercise of international public authority over users. Highly interdisciplinary, its contributions draw on law, political science, international relations and even computer science and science and technology studies…(More)”.

Crowdsourced mapping in crisis zones: collaboration, organisation and impact


Amelia Hunt and Doug Specht in the Journal of International Humanitarian Action:  “Crowdsourced mapping has become an integral part of humanitarian response, with high-profile deployments of platforms following the Haiti and Nepal earthquakes, and the multiple projects initiated during the Ebola outbreak in West Africa in 2014, being prominent examples. There have also been hundreds of deployments of crowdsourced mapping projects across the globe that did not have a high profile.

This paper, through an analysis of 51 mapping deployments between 2010 and 2016, complemented with expert interviews, seeks to explore the organisational structures that create the conditions for effective mapping actions, and the relationship between the commissioning body, often a non-governmental organisation (NGO), and the volunteers who regularly make up the team charged with producing the map.

The research suggests that there are three distinct areas that need to be improved in order to provide appropriate assistance through mapping in humanitarian crises: regionalise, prepare and research. Based on the case studies, the paper shows how each of these areas can be handled more effectively, concluding that failure to implement any one area sufficiently can lead to overall project failure….(More)”

The Everyday Life of an Algorithm


Book by Daniel Neyland: “This open access book begins with an algorithm: a set of IF…THEN rules used in the development of a new, ethical, video surveillance architecture for transport hubs. Readers are invited to follow the algorithm over three years, charting its everyday life. Questions of ethics, transparency, accountability and market value must be grasped by the algorithm in a series of ever more demanding forms of experimentation. Here the algorithm must prove its ability to get a grip on everyday life if it is to become an ordinary feature of the settings where it is being put to work. Through investigating the everyday life of the algorithm, the book opens a conversation with existing social science research that tends to focus on the power and opacity of algorithms. In this book we have unique access to the algorithm’s design, development and testing, but can also bear witness to its fragility and dependency on others….(More)”.

To Reduce Privacy Risks, the Census Plans to Report Less Accurate Data


Mark Hansen at the New York Times: “When the Census Bureau gathered data in 2010, it made two promises. The form would be “quick and easy,” it said. And “your answers are protected by law.”

But mathematical breakthroughs, easy access to more powerful computing, and widespread availability of large and varied public data sets have made the bureau reconsider whether the protection it offers Americans is strong enough. To preserve confidentiality, the bureau’s directors have determined they need to adopt a “formal privacy” approach, one that adds uncertainty to census data before it is published and achieves privacy assurances that are provable mathematically.

The census has always added some uncertainty to its data, but a key innovation of this new framework, known as “differential privacy,” is a numerical value describing how much privacy loss a person will experience. It determines the amount of randomness — “noise” — that needs to be added to a data set before it is released, and sets up a balancing act between accuracy and privacy. Too much noise would mean the data would not be accurate enough to be useful — in redistricting, in enforcing the Voting Rights Act or in conducting academic research. But too little, and someone’s personal data could be revealed.
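The balancing act described above can be made concrete with the Laplace mechanism, the textbook differential-privacy technique: add random noise calibrated to the privacy-loss parameter epsilon before releasing a statistic. A generic sketch, not the Census Bureau’s actual algorithm (the counts and epsilon values are purely illustrative):

```python
import math
import random

def laplace_sample(scale: float) -> float:
    """Draw one sample from a zero-mean Laplace distribution via inverse-CDF sampling."""
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float) -> float:
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person changes
    it by at most 1), so Laplace noise with scale 1/epsilon suffices. Smaller
    epsilon means more noise: stronger privacy, less accurate published data.
    """
    return true_count + laplace_sample(1.0 / epsilon)

# Illustrative: the same block-level population count at two privacy budgets.
print(private_count(1000, epsilon=1.0))   # modest noise, weaker privacy
print(private_count(1000, epsilon=0.1))   # heavy noise, stronger privacy
```

This is exactly the accuracy-versus-privacy dial the article describes: the bureau’s choice of epsilon determines how noisy counts like these will be in the published tables.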

On Thursday, the bureau will announce the trade-off it has chosen for data publications from the 2018 End-to-End Census Test it conducted in Rhode Island, the only dress rehearsal before the actual census in 2020. The bureau has decided to enforce stronger privacy protections than companies like Apple or Google had when they each first took up differential privacy….

In presentation materials for Thursday’s announcement, special attention is paid to lessening any problems with redistricting: the potential complications of using noisy counts of voting-age people to draw district lines. (By contrast, in 2000 and 2010 the swapping mechanism produced exact counts of potential voters down to the block level.)

The Census Bureau has been an early adopter of differential privacy. Still, instituting the framework on such a large scale is not an easy task, and even some of the big technology firms have had difficulties. For example, shortly after Apple’s announcement in 2016 that it would use differential privacy for data collected from its macOS and iOS operating systems, it was revealed that the actual privacy loss of its systems was much higher than advertised.

Some scholars question the bureau’s abandonment of techniques like swapping in favor of differential privacy. Steven Ruggles, Regents Professor of history and population studies at the University of Minnesota, has relied on census data for decades. Through the Integrated Public Use Microdata Series, he and his team have regularized census data dating to 1850, providing consistency between questionnaires as the forms have changed, and enabling researchers to analyze data across years.

“All of the sudden, Title 13 gets equated with differential privacy — it’s not,” he said, adding that if you make a guess about someone’s identity from looking at census data, you are probably wrong. “That has been regarded in the past as protection of privacy. They want to make it so that you can’t even guess.”

“There is a trade-off between usability and risk,” he added. “I am concerned they may go far too far on privileging an absolutist standard of risk.”

In a working paper published Friday, he said that with the number of private services offering personal data, a prospective hacker would have little incentive to turn to public data such as the census “in an attempt to uncover uncertain, imprecise and outdated information about a particular individual.”…(More)”.

The Constitution of Knowledge


Jonathan Rauch at National Affairs: “America has faced many challenges to its political culture, but this is the first time we have seen a national-level epistemic attack: a systematic attack, emanating from the very highest reaches of power, on our collective ability to distinguish truth from falsehood. “These are truly uncharted waters for the country,” wrote Michael Hayden, former CIA director, in the Washington Post in April. “We have in the past argued over the values to be applied to objective reality, or occasionally over what constituted objective reality, but never the existence or relevance of objective reality itself.” To make the point another way: Trump and his troll armies seek to undermine the constitution of knowledge….

The attack, Hayden noted, is on “the existence or relevance of objective reality itself.” But what is objective reality?

In everyday vernacular, reality often refers to the world out there: things as they really are, independent of human perception and error. Reality also often describes those things that we feel certain about, things that we believe no amount of wishful thinking could change. But, of course, humans have no direct access to an objective world independent of our minds and senses, and subjective certainty is in no way a guarantee of truth. Philosophers have wrestled with these problems for centuries, and today they have a pretty good working definition of objective reality. It is a set of propositions: propositions that have been validated in some way, and have thereby been shown to be at least conditionally true — true, that is, unless debunked. Some of these propositions reflect the world as we perceive it (e.g., “The sky is blue”). Others, like claims made by quantum physicists and abstract mathematicians, appear completely removed from the world of everyday experience.

It is worth noting, however, that the locution “validated in some way” hides a cheat. In what way? Some Americans believe Elvis Presley is alive. Should we send him a Social Security check? Many people believe that vaccines cause autism, or that Barack Obama was born in Africa, or that the murder rate has risen. Who should decide who is right? And who should decide who gets to decide?

This is the problem of social epistemology, which concerns itself with how societies come to some kind of public understanding about truth. It is a fundamental problem for every culture and country, and the attempts to resolve it go back at least to Plato, who concluded that a philosopher king (presumably someone like Plato himself) should rule over reality. Traditional tribal communities frequently use oracles to settle questions about reality. Religious communities use holy texts as interpreted by priests. Totalitarian states put the government in charge of objectivity.

There are many other ways to settle questions about reality. Most of them are terrible because they rely on authoritarianism, violence, or, usually, both. As the great American philosopher Charles Sanders Peirce said in 1877, “When complete agreement could not otherwise be reached, a general massacre of all who have not thought in a certain way has proved a very effective means of settling opinion in a country.”

As Peirce implied, one way to avoid a massacre would be to attain unanimity, at least on certain core issues. No wonder we hanker for consensus. Something you often hear today is that, as Senator Ben Sasse put it in an interview on CNN, “[W]e have a risk of getting to a place where we don’t have shared public facts. A republic will not work if we don’t have shared facts.”

But that is not quite the right answer, either. Disagreement about core issues and even core facts is inherent in human nature and essential in a free society. If unanimity on core propositions is not possible or even desirable, what is necessary to have a functional social reality? The answer is that we need an elite consensus, and hopefully also something approaching a public consensus, on the method of validating propositions. We needn’t and can’t all agree that the same things are true, but a critical mass needs to agree on what it is we do that distinguishes truth from falsehood, and more important, on who does it.

Who can be trusted to resolve questions about objective truth? The best answer turns out to be no one in particular….(More)”.

What difference does data make? Data management and social change


Paper by Morgan E. Currie and Joan M. Donovan: “The purpose of this paper is to expand on emergent data activism literature to draw distinctions between different types of data management practices undertaken by groups of data activists.

The authors offer three case studies that illuminate the data management strategies of these groups. Each group discussed in the case studies is devoted to representing a contentious political issue through data, but their data management practices differ in meaningful ways. The project Making Sense produces their own data on pollution in Kosovo. Fatal Encounters collects “missing data” on police homicides in the USA. The Environmental Data Governance Initiative hopes to keep vulnerable US data on climate change and environmental injustices in the public domain.

In analysing our three case studies, the authors surface how temporal dimensions, geographic scale and sociotechnical politics influence their differing data management strategies….(More)”.