Stop Surveillance Humanitarianism


Mark Latonero at The New York Times: “A standoff between the United Nations World Food Program and Houthi rebels in control of the capital region is threatening the lives of hundreds of thousands of civilians in Yemen.

Alarmed by reports that food is being diverted to support the rebels, the aid program is demanding that Houthi officials allow them to deploy biometric technologies like iris scans and digital fingerprints to monitor suspected fraud during food distribution.

The Houthis have reportedly blocked food delivery, painting the biometric effort as an intelligence operation, and have demanded access to the personal data of aid beneficiaries. The impasse led the aid organization last month to a step once considered a last resort: suspending food aid to parts of the starving population unless the Houthis allow biometrics.

With program officials saying their staff is prevented from doing its essential jobs, turning to a technological solution is tempting. But biometrics deployed in crises can lead to a form of surveillance humanitarianism that can exacerbate risks to privacy and security.

By surveillance humanitarianism, I mean the enormous data collection systems deployed by aid organizations that inadvertently increase the vulnerability of people in urgent need….(More)”.

The Governance Turn in Information Privacy Law


Paper by Jane K. Winn: “The governance turn in information privacy law is a turn away from a model of bureaucratic administration of individual control rights and toward a model of collaborative governance of shared interests in information. Collaborative information governance has roots in the American pragmatic philosophy of Peirce, James and Dewey and the 1973 HEW Report that rejected unilateral individual control rights, recognizing instead the essential characteristic of mutuality of shared purposes that are mediated through information governance. America’s current information privacy law regime consists of market mechanisms supplemented by sector-specific, risk-based laws designed to foster a culture of compliance. Prior to the GDPR, data protection law compliance in Europe was more honored in the breach than the observance, so the EU’s strengthening of its bureaucratic individual control rights model reveals more about the EU’s democratic deficit than a commitment to compliance.

The conventional “Europe good, America bad” wisdom about information privacy law obscures a paradox: if the focus shifts from what “law in the books” says to what “law in action” does, it quickly becomes apparent that American businesses lead the world with their efforts to comply with information privacy law, so “America good, Europe bad” might be more accurate. Creating a federal legislative interface through which regulators and voluntary, consensus standards organizations can collaborate could break the current political stalemate triggered by California’s 2018 EU-style information privacy law. Such a pragmatic approach to information governance can safeguard Americans’ continued access to the benefits of innovation and economic growth as well as providing risk-based protection from harm. America can preserve its leadership of the global information economy by rejecting EU-style information privacy laws and building instead a flexible, dynamic framework of information governance capable of addressing both privacy and disclosure issues simultaneously….(More)”.

Betting on biometrics to boost child vaccination rates


Ben Parker at The New Humanitarian: “Thousands of children between the ages of one and five are due to be fingerprinted in Bangladesh and Tanzania in the largest biometric scheme of its kind ever attempted, the Geneva-based vaccine agency, Gavi, announced recently.

Although the scheme includes data protection safeguards – and its sponsors are cautious not to promise immediate benefits – it is emerging during a widening debate on data protection, technology ethics, and the risks and benefits of biometric ID in development and humanitarian aid.

Gavi, a global vaccine provider, is teaming up with Japanese and British partners in the venture. It is the first time such a trial has been done on this scale, according to Gavi spokesperson James Fulker.

Being able to track a child’s attendance at vaccination centres, and replace “very unreliable” paper-based records, can help target the 20 million children who are estimated to miss key vaccinations, most in poor or remote communities, Fulker said.

Up to 20,000 children will have their fingerprints taken and linked to their records in existing health projects. That collection effort will be managed by Simprints, a UK-based not-for-profit enterprise specialising in biometric technology in international development, according to Christine Kim, the company’s head of strategic partnerships….

Ethics and legal safeguards

Kim said Simprints would apply data protection standards equivalent to the EU’s General Data Protection Regulation (GDPR), even if national legislation did not demand it. Families could opt out without any penalties, and informed consent would apply to any data gathering. She added that the fieldwork would be approved by national governments, and oversight would also come from institutional review boards at universities in the two countries.

Fulker said Gavi had also commissioned a third-party review to verify Simprints’ data protection and security methods.

For critics of biometrics use in humanitarian settings, however, any such plan raises red flags….

Data protection analysts have long been arguing that gathering digital ID and biometric data carries particular risks for vulnerable groups who face conflict or oppression: their data could be shared or leaked to hostile parties who could use it to target them.

In a recent commentary on biometrics and aid, Linda Raftree told The New Humanitarian that “the greatest burden and risk lies with the most vulnerable, whereas the benefits accrue to [aid] agencies.”

And during a panel discussion on “Digital Do No Harm” held last year in Berlin, humanitarian professionals and data experts discussed a range of threats and unintended consequences of new technologies, noting that they are as yet hard to predict….(More)”.

Blockchain and Public Record Keeping: Of Temples, Prisons, and the (Re)Configuration of Power


Paper by Victoria L. Lemieux: “This paper discusses blockchain technology as a public record keeping system, linking record keeping to powers of authority, veneration (temples), and control (prisons) that configure and reconfigure social, economic, and political relations. It discusses blockchain technology as being constructed as a mechanism to counter institutions and social actors that currently hold power, but who are nowadays often viewed with mistrust. Using an archival theoretic analytic lens, it explores claims for blockchain as a record keeping force of resistance to those powers. The paper evaluates claims that blockchain technology can support the creation and preservation of trustworthy records able to serve as alternative sources of evidence of rights, entitlements and actions with the potential to unseat the institutional power of the nation-state….(More)”.

Secrecy, Privacy and Accountability: Challenges for Social Research


Book by Mike Sheaff: “Public mistrust of those in authority and failings of public organisations frame disputes over attribution of responsibility between individuals and systems. Illustrated with examples including the Aberfan disaster, the death of Baby P, and Mid Staffs Hospital, this book explores parallel conflicts over access to information and privacy.

The Freedom of Information Act (FOIA) allows access to information about public organisations but can be in conflict with the Data Protection Act, protecting personal information. Exploring the use of the FOIA as a research tool, Sheaff offers a unique contribution to the development of sociological research methods, and debates connected to privacy and secrecy in the information age. This book will provide sociologists and social scientists with a fresh perspective on contemporary issues of power and control….(More)”.

How can Indigenous Data Sovereignty (IDS) be promoted and mainstreamed within open data movements?


OD Mekong Blog: “Considering Indigenous rights in the open data and technology space is a relatively new concept. Called “Indigenous Data Sovereignty” (IDS), it is defined as “the right of Indigenous peoples to govern the collection, ownership, and application of data about Indigenous communities, peoples, lands, and resources”, regardless of where the data is held or by whom. By default, this broad and all-encompassing framework bucks fundamental concepts of open data, and asks traditional open data practitioners to critically consider how open data can be used as a tool of transparency that also upholds equal rights for all…

Four main areas of concern and relevant barriers identified by participants were:

Self-determination to identify their membership

  • National governments in many states, particularly across Asia and South America, still do not allow for self-determination under the law. Even when legislation offers some recognition, it is scarcely enforced, and mainstream discourse demonises Indigenous self-determination.
  • However, because Indigenous and ethnic minorities frequently face hardship and persecution, there were concerns about the applicability of data sovereignty at the local level.

Intellectual Property Protocols

  • It has become the norm for big tech companies to extract excessive amounts of data from people’s everyday lives. How can disenfranchised communities combat this?
  • Indigenous data is often misappropriated to the detriment of Indigenous peoples.
  • Intellectual property concepts, such as copyright, are not an ideal approach for protecting Indigenous knowledge and intellectual property rights because they are rooted in commercialistic ideals that are difficult to apply to Indigenous contexts. This is especially so because many groups do not practice commercialization in the globalized context. Also, as a concept based on exclusivity (i.e., when protections expire, knowledge passes into the public domain), it does not take into account the collectivist ideals of Indigenous peoples.

Data Governance

  • Ultimately, data protection is about protecting lives. Having the ability to use data to direct decisions on Indigenous development places greater control in the hands of Indigenous peoples.
  • National governments are barriers due to conflicts in sovereignty interests. Nation-state legal systems are often contradictory to customary laws, and thus don’t often reflect rights-based approaches.

Consent — Free Prior and Informed Consent (FPIC)

  • FPIC is a well-known phrase, referring to a set of principles that define the process and mechanisms applying specifically to Indigenous peoples in the exercise of their collective rights. These principles are intended to ensure that Indigenous peoples are treated as sovereign peoples with their own decision-making power, customary governance systems, and collective decision-making processes, but it is questionable to what extent true FPIC can be ensured in the Indigenous context.
  • It remains a question as to how effectively due diligence can be applied to research protocols, so as to ensure that the rights associated with FPIC and the UNDRIP framework are upheld….(More)”.

Beyond Open Data Hackathons: Exploring Digital Innovation Success


Paper by Fotis Kitsios and Maria Kamariotou: “Previous researchers have examined the motivations of developers to participate in hackathon events and the challenges of open data hackathons, but limited studies have focused on the preparation and evaluation of these contests. Thus, the purpose of this paper is to examine factors that lead to the effective implementation and success of open data hackathons and innovation contests.

Six case studies of open data hackathons and innovation contests held between 2014 and 2018 in Thessaloniki were studied in order to identify the factors leading to the success of hackathon contests, using criteria from the existing literature. The results show that the most significant factors were clear problem definition, mentors’ participation in the contest, the level of support mentors give participants in launching their applications to the market, jury members’ knowledge and experience, the entry requirements of the competition, and the participation of companies, data providers, and academics. Furthermore, organizers should take team members’ competences and skills, as well as post-launch support for applications, into consideration. This paper can be of interest to organizers of hackathon events because it identifies the factors they should take into consideration for the successful implementation of such events….(More)”.

Proposal for an International Taxonomy on the Various Forms of the ‘Right to Be Forgotten’: A Study on the Convergence of Norms


Paper by W. Gregory Voss and Céline Castets-Renard: “The term “right to be forgotten” is used today to represent a multitude of rights, and this fact causes difficulties in interpretation, analysis, and comprehension of such rights. These rights have become of utmost importance due to the increased risks to the privacy of individuals on the Internet, where social media, blogs, fora, and other outlets have entered into common use as part of human expression. Search engines, as Internet intermediaries, have been enlisted to assist in attempts to regulate the Internet and to enforce the rights falling under the moniker of the “right to be forgotten,” without the true extent of those related rights being known. In part to alleviate such problems, and focusing on digital technology and media, this paper proposes a taxonomy to identify various rights from different countries, which today are often regrouped under the banner “right to be forgotten,” and to do so in an understandable and coherent way. As an integral part of this exercise, this study aims to measure the extent to which there is a convergence of legal rules internationally in order to regulate private life on the Internet and to elucidate the impact that the important Google Spain “right to be forgotten” ruling of the Court of Justice of the European Union has had on law in other jurisdictions on this matter.

This paper will first introduce the definition and context of the “right to be forgotten.” Second, it will trace some of the sources of the rights discussed around the world to survey various forms of the “right to be forgotten” internationally and propose a taxonomy. This work will allow for a determination on whether there is a convergence of norms regarding the “right to be forgotten” and, more generally, with respect to privacy and personal data protection laws. Finally, this paper will provide certain criteria for the relevant rights and organize them into a proposed analytical grid to establish more precisely the proposed taxonomy of the “right to be forgotten” for the use of scholars, practitioners, policymakers, and students alike….(More)”.

How an AI Utopia Would Work


Sami Mahroum at Project Syndicate: “…It is more than 500 years since Sir Thomas More found inspiration for the “Kingdom of Utopia” while strolling the streets of Antwerp. So, when I traveled there from Dubai in May to speak about artificial intelligence (AI), I couldn’t help but draw parallels to Raphael Hythloday, the character in Utopia who regales the sixteenth-century English with tales of a better world.

As home to the world’s first Minister of AI, as well as museums, academies, and foundations dedicated to studying the future, Dubai is on its own Hythloday-esque voyage. Whereas Europe, in general, has grown increasingly anxious about technological threats to employment, the United Arab Emirates has enthusiastically embraced the labor-saving potential of AI and automation.

There are practical reasons for this. The ratio of indigenous-to-foreign labor in the Gulf states is highly imbalanced, ranging from a high of 67% in Saudi Arabia to a low of 11% in the UAE. And because the region’s desert environment cannot support further population growth, the prospect of replacing people with machines has become increasingly attractive.

But there is also a deeper cultural difference between the two regions. Unlike Western Europe, the birthplace of both the Industrial Revolution and the “Protestant work ethic,” Arab societies generally do not “live to work,” but rather “work to live,” placing a greater value on leisure time. Such attitudes are not particularly compatible with economic systems that require squeezing ever more productivity out of labor, but they are well suited for an age of AI and automation….

Fortunately, AI and data-driven innovation could offer a way forward. In what could be perceived as a kind of AI utopia, the paradox of a bigger state with a smaller budget could be reconciled, because the government would have the tools to expand public goods and services at a very small cost.

The biggest hurdle would be cultural: As early as 1948, the German philosopher Joseph Pieper warned against the “proletarianization” of people and called for leisure to be the basis for culture. Westerners would have to abandon their obsession with the work ethic, as well as their deep-seated resentment toward “free riders.” They would have to start differentiating between work that is necessary for a dignified existence, and work that is geared toward amassing wealth and achieving status. The former could potentially be all but eliminated.

With the right mindset, all societies could start to forge a new AI-driven social contract, wherein the state would capture a larger share of the return on assets, and distribute the surplus generated by AI and automation to residents. Publicly-owned machines would produce a wide range of goods and services, from generic drugs, food, clothes, and housing, to basic research, security, and transportation….(More)”.

How I Learned to Stop Worrying and Love the GDPR


Ariane Adam at DataStewards.net: “The General Data Protection Regulation (GDPR) was approved by the EU Parliament on 14 April 2016 and came into force on 25 May 2018….

The coming into force of this important regulation has created confusion and concern about penalties, particularly in the private sector….There is also apprehension about how the GDPR will affect the opening and sharing of valuable databases. At a time when open data is increasingly shaping the choices we make, from finding the fastest route home to choosing the best medical or education provider, misinformation about data protection principles leads to concerns that ‘privacy’ will be used as a smokescreen to not publish important information. Allaying the concerns of private organisations and businesses in this area is particularly important as often the datasets that most matter, and that could have the most impact if they were open, do not belong to governments.

Looking at the regulation and its effects about one year on, this paper advances a positive case for the GDPR and aims to demonstrate that a proper understanding of its underlying principles can not only assist in promoting consumer confidence and therefore business growth, but also enable organisations to safely open and share important and valuable datasets….(More)”.