Smart Cities in Application: Healthcare, Policy, and Innovation


Book edited by Stan McClellan: “This book explores categories of applications and driving factors surrounding the Smart City phenomenon. The contributing authors provide perspective on Smart Cities, covering numerous applications and classes of applications. The book takes a top-down approach to the driving factors in Smart Cities, organized around focal areas including “Smart Healthcare,” “Public Safety & Policy Issues,” and “Science, Technology, & Innovation.” Contributors have direct and substantive experience with important aspects of Smart Cities and discuss issues with technologies & standards, roadblocks to implementation, innovations that create new opportunities, and other factors relevant to emerging Smart City infrastructures….(More)”.

The Power of Global Performance Indicators


Introduction to Special Issue of International Organization by
Judith G. Kelley and Beth A. Simmons: “In recent decades, IGOs, NGOs, private firms and even states have begun to regularly package and distribute information on the relative performance of states. From the World Bank’s Ease of Doing Business Index to the Financial Action Task Force blacklist, global performance indicators (GPIs) are increasingly deployed to influence governance globally. We argue that GPIs derive influence from their ability to frame issues, extend the authority of the creator, and — most importantly — to invoke recurrent comparison that stimulates governments’ concerns for their own and their country’s reputation. Their public and ongoing ratings and rankings of states are particularly adept at capturing attention not only at elite policy levels but also among other domestic and transnational actors. GPIs thus raise new questions for research on politics and governance globally. What are the social and political effects of this form of information on discourse, policies and behavior? What types of actors can effectively wield GPIs and on what types of issues? In this symposium introduction, we define GPIs, describe their rise, and theorize and discuss these questions in light of the findings of the symposium contributions…(More)”.

Stop Surveillance Humanitarianism


Mark Latonero at The New York Times: “A standoff between the United Nations World Food Program and Houthi rebels in control of the capital region is threatening the lives of hundreds of thousands of civilians in Yemen.

Alarmed by reports that food is being diverted to support the rebels, the aid program is demanding that Houthi officials allow it to deploy biometric technologies like iris scans and digital fingerprints to monitor suspected fraud during food distribution.

The Houthis have reportedly blocked food delivery, painting the biometric effort as an intelligence operation, and have demanded access to the personal data on beneficiaries of the aid. The impasse led the aid organization to decide last month, in a step once thought of as a last resort, to suspend food aid to parts of the starving population unless the Houthis allow biometrics.

With program officials saying their staff is prevented from doing its essential jobs, turning to a technological solution is tempting. But biometrics deployed in crises can lead to a form of surveillance humanitarianism that can exacerbate risks to privacy and security.

By surveillance humanitarianism, I mean the enormous data collection systems deployed by aid organizations that inadvertently increase the vulnerability of people in urgent need….(More)”.

Betting on biometrics to boost child vaccination rates


Ben Parker at The New Humanitarian: “Thousands of children between the ages of one and five are due to be fingerprinted in Bangladesh and Tanzania in the largest biometric scheme of its kind ever attempted, the Geneva-based vaccine agency, Gavi, announced recently.

Although the scheme includes data protection safeguards – and its sponsors are cautious not to promise immediate benefits – it is emerging during a widening debate on data protection, technology ethics, and the risks and benefits of biometric ID in development and humanitarian aid.

Gavi, a global vaccine provider, is teaming up with Japanese and British partners in the venture. It is the first time such a trial has been done on this scale, according to Gavi spokesperson James Fulker.

Being able to track a child’s attendance at vaccination centres, and replace “very unreliable” paper-based records, can help target the 20 million children who are estimated to miss key vaccinations, most in poor or remote communities, Fulker said.

Up to 20,000 children will have their fingerprints taken and linked to their records in existing health projects. That collection effort will be managed by Simprints, a UK-based not-for-profit enterprise specialising in biometric technology in international development, according to Christine Kim, the company’s head of strategic partnerships….

Ethics and legal safeguards

Kim said Simprints would apply data protection standards equivalent to the EU’s General Data Protection Regulation (GDPR), even if national legislation did not demand it. Families could opt out without any penalties, and informed consent would apply to any data gathering. She added that the fieldwork would be approved by national governments, and oversight would also come from institutional review boards at universities in the two countries.
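To make those safeguards concrete, here is a minimal, purely illustrative sketch of what consent-aware linkage between a fingerprint and an existing health record might look like at the data layer. The class, field names, and flow are assumptions for illustration, not Simprints’ actual design: a fingerprint is reduced to a pseudonymous template reference, linkage happens only where informed consent is recorded, and opting out severs the biometric link without touching the underlying record.

```python
import hashlib
import uuid
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class Enrollment:
    child_record_id: str         # identifier in the existing health project (hypothetical)
    template_ref: Optional[str]  # pseudonymous reference to a stored fingerprint template
    consent_given: bool


class BiometricRegistry:
    """Illustrative consent-aware linkage of fingerprint templates to health records."""

    def __init__(self) -> None:
        self._by_record: Dict[str, Enrollment] = {}

    def enroll(self, child_record_id: str, fingerprint_template: bytes,
               consent_given: bool) -> Enrollment:
        # Without informed consent, no biometric link is stored at all.
        template_ref = None
        if consent_given:
            # Store only a salted, pseudonymous reference rather than raw
            # biometric data, in the spirit of data-minimisation rules.
            template_ref = hashlib.sha256(
                fingerprint_template + uuid.uuid4().bytes
            ).hexdigest()
        enrollment = Enrollment(child_record_id, template_ref, consent_given)
        self._by_record[child_record_id] = enrollment
        return enrollment

    def opt_out(self, child_record_id: str) -> None:
        # Opting out removes the biometric link but leaves the child's
        # vaccination record untouched, so withdrawal carries no penalty.
        enrollment = self._by_record.get(child_record_id)
        if enrollment:
            enrollment.template_ref = None
            enrollment.consent_given = False


registry = BiometricRegistry()
registry.enroll("HP-0001", b"<captured template bytes>", consent_given=True)
registry.opt_out("HP-0001")
```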

Fulker said Gavi had also commissioned a third-party review to verify Simprints’ data protection and security methods.

For critics of biometrics use in humanitarian settings, however, any such plan raises red flags….

Data protection analysts have long been arguing that gathering digital ID and biometric data carries particular risks for vulnerable groups who face conflict or oppression: their data could be shared or leaked to hostile parties who could use it to target them.

In a recent commentary on biometrics and aid, Linda Raftree told The New Humanitarian that “the greatest burden and risk lies with the most vulnerable, whereas the benefits accrue to [aid] agencies.”

And during a panel discussion on “Digital Do No Harm” held last year in Berlin, humanitarian professionals and data experts discussed a range of threats and unintended consequences of new technologies, noting that they are as yet hard to predict….(More)”.

Blockchain and Public Record Keeping: Of Temples, Prisons, and the (Re)Configuration of Power


Paper by Victoria L. Lemieux: “This paper discusses blockchain technology as a public record keeping system, linking record keeping to the powers of authority, veneration (temples), and control (prisons) that configure and reconfigure social, economic, and political relations. It discusses blockchain technology as being constructed as a mechanism to counter institutions and social actors that currently hold power, but who are nowadays often viewed with mistrust. It explores claims for blockchain as a record keeping force of resistance to those powers using an archival-theoretic analytic lens. The paper evaluates claims that blockchain technology can support the creation and preservation of trustworthy records able to serve as alternative sources of evidence of rights, entitlements and actions with the potential to unseat the institutional power of the nation-state….(More)”.
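The trust claims the paper interrogates rest on a familiar technical mechanism: records are batched into blocks, and each block’s hash covers the previous block’s hash, so retroactive alteration is detectable. The following is a minimal, self-contained sketch of that mechanism only; the record contents and field names are hypothetical and not drawn from the paper.

```python
import hashlib
import json
from dataclasses import dataclass
from typing import List


@dataclass
class Block:
    index: int
    records: List[str]   # e.g. serialized land-title entries (hypothetical)
    prev_hash: str
    hash: str = ""

    def compute_hash(self) -> str:
        # The hash covers the records and the previous block's hash,
        # so altering any earlier record changes every later hash.
        payload = json.dumps(
            {"index": self.index, "records": self.records, "prev": self.prev_hash},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()


def append_block(chain: List[Block], records: List[str]) -> None:
    prev_hash = chain[-1].hash if chain else "0" * 64
    block = Block(index=len(chain), records=records, prev_hash=prev_hash)
    block.hash = block.compute_hash()
    chain.append(block)


def verify(chain: List[Block]) -> bool:
    # Anyone can recompute the hashes; an edit to a past record
    # breaks the chain from that point onward.
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1].hash if i else "0" * 64
        if block.prev_hash != expected_prev or block.hash != block.compute_hash():
            return False
    return True


chain: List[Block] = []
append_block(chain, ["parcel 12 registered to A"])
append_block(chain, ["parcel 12 transferred to B"])
assert verify(chain)
chain[0].records[0] = "parcel 12 registered to C"   # tampering
assert not verify(chain)
```

Whether tamper-evidence of this kind by itself amounts to trustworthy record keeping in the archival sense is precisely the sort of claim the paper evaluates.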

In the Mood for Democracy? Democratic Support as Thermostatic Opinion


Paper by Christopher Claassen: “Public support has long been thought crucial for the survival of democracy. Existing research has argued, moreover, that democracy appears to create its own demand: the presence of a democratic system coupled with the passage of time produces a public that supports democracy. Using new panel measures of democratic mood varying over 135 countries and up to 30 years, this paper finds little evidence for such a positive feedback effect of democracy on support. Instead, it demonstrates a thermostatic effect: increases in democracy depress democratic mood, while decreases cheer it. Moreover, it is increases in the liberal, counter-majoritarian aspects of democracy, not the majoritarian, electoral aspects, that provoke this backlash from citizens. These novel results challenge existing research on support for democracy, but also reconcile this research with the literature on macro-opinion….(More)”.
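The thermostatic logic can be written compactly. The following is a stylized error-correction sketch of that general idea, in my own notation rather than the paper’s exact specification:

\[
\Delta M_{i,t} = \alpha + \beta \,\Delta D_{i,t} + \lambda \left( M_{i,t-1} - \gamma D_{i,t-1} \right) + \varepsilon_{i,t}
\]

Here \(M_{i,t}\) is democratic mood in country \(i\) at time \(t\) and \(D_{i,t}\) is the level of democracy. A thermostatic public implies \(\beta < 0\): upward movements in democracy dampen expressed support, while democratic backsliding boosts it, matching the finding that increases in democracy depress mood and decreases cheer it.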

Truth and Consequences


Sophia Rosenfeld at The Hedgehog Review: “Conventional wisdom has it that for democracy to work, it is essential that we—the citizens—agree in some minimal way about what reality looks like. We are not, of course, all required to think the same way about big questions, or believe the same things, or hold the same values; in fact, it is expected that we won’t. But somehow or other, we need to have acquired some very basic, shared understanding about what causes what, what’s broadly desirable, what’s dangerous, and how to characterize what’s already happened.

Some social scientists call this “public knowledge.” Some, more cynically, call it “serviceable truth” to emphasize its contingent, socially constructed quality. Either way, it is the foundation on which democratic politics—in which no one person or institution has sole authority to determine what’s what and all claims are ultimately revisable—is supposed to rest. It is also imagined to be one of the most exalted products of the democratic process. And to a certain degree, this peculiar, messy version of truth has held its own in modern liberal democracies, including the United States, for most of their history.

Lately, though, even this low-level kind of consensus has come to seem elusive. The issue is not just professional spinners talking about “alternative facts” or the current US president bending the truth and spreading conspiracy theories at every turn, from mass rallies to Twitter rants. The deeper problem stems from the growing sense we all have that, today, even hard evidence of the kind that used to settle arguments about factual questions won’t persuade people whose political commitments have already led them to the opposite conclusion. Rather, citizens now belong to “epistemic tribes”: One person’s truth is another’s hoax or lie. Just look at how differently those of different political leanings interpret the evidence of global warming or the conclusions of the Mueller Report on Russian involvement in the 2016 Trump presidential campaign. Moreover, many of those same people are also now convinced that the boundaries between truth and untruth are, in the end, as subjective as everything else. It is all a matter of perception and spin; nothing is immune, and it doesn’t really matter.

Headed for a Cliff

So what’s happened? Why has assent on even basic factual claims (beyond logically demonstrable ones, like 2 + 2 = 4) become so hard to achieve? Or, to put it slightly differently, why are we—meaning people of varied political persuasions—having so much trouble lately arriving at any broadly shared sense of the world beyond ourselves, and, even more, any consensus on which institutions, methods, or people to trust to get us there? And why, ultimately, do so many of us seem simply to have given up on the possibility of finding some truths in common?

These are questions that seem especially loaded precisely because of the traditionally close conceptual and historical relationship between truth and democracy as social values….(More)”.

Proposal for an International Taxonomy on the Various Forms of the ‘Right to Be Forgotten’: A Study on the Convergence of Norms


Paper by W. Gregory Voss and Céline Castets-Renard: “The term “right to be forgotten” is used today to represent a multitude of rights, and this fact causes difficulties in interpretation, analysis, and comprehension of such rights. These rights have become of utmost importance due to the increased risks to the privacy of individuals on the Internet, where social media, blogs, fora, and other outlets have entered into common use as part of human expression. Search engines, as Internet intermediaries, have been enlisted to assist in the attempt to regulate the Internet and to give effect to the rights falling under the moniker of the “right to be forgotten,” without the full extent of the related rights being truly known. In part to alleviate such problems, and focusing on digital technology and media, this paper proposes a taxonomy to identify various rights from different countries, which today are often grouped under the banner “right to be forgotten,” and to do so in an understandable and coherent way. As an integral part of this exercise, this study aims to measure the extent to which there is a convergence of legal rules internationally in order to regulate private life on the Internet and to elucidate the impact that the important Google Spain “right to be forgotten” ruling of the Court of Justice of the European Union has had on law in other jurisdictions on this matter.

This paper will first introduce the definition and context of the “right to be forgotten.” Second, it will trace some of the sources of the rights discussed around the world to survey various forms of the “right to be forgotten” internationally and propose a taxonomy. This work will allow for a determination on whether there is a convergence of norms regarding the “right to be forgotten” and, more generally, with respect to privacy and personal data protection laws. Finally, this paper will provide certain criteria for the relevant rights and organize them into a proposed analytical grid to establish more precisely the proposed taxonomy of the “right to be forgotten” for the use of scholars, practitioners, policymakers, and students alike….(More)”.

How an AI Utopia Would Work


Sami Mahroum at Project Syndicate: “…It is more than 500 years since Sir Thomas More found inspiration for the “Kingdom of Utopia” while strolling the streets of Antwerp. So, when I traveled there from Dubai in May to speak about artificial intelligence (AI), I couldn’t help but draw parallels to Raphael Hythloday, the character in Utopia who regales sixteenth-century Englanders with tales of a better world.

As home to the world’s first Minister of AI, as well as museums, academies, and foundations dedicated to studying the future, Dubai is on its own Hythloday-esque voyage. Whereas Europe, in general, has grown increasingly anxious about technological threats to employment, the United Arab Emirates has enthusiastically embraced the labor-saving potential of AI and automation.

There are practical reasons for this. The ratio of indigenous-to-foreign labor in the Gulf states is highly imbalanced, ranging from a high of 67% in Saudi Arabia to a low of 11% in the UAE. And because the region’s desert environment cannot support further population growth, the prospect of replacing people with machines has become increasingly attractive.

But there is also a deeper cultural difference between the two regions. Unlike Western Europe, the birthplace of both the Industrial Revolution and the “Protestant work ethic,” Arab societies generally do not “live to work,” but rather “work to live,” placing a greater value on leisure time. Such attitudes are not particularly compatible with economic systems that require squeezing ever more productivity out of labor, but they are well suited for an age of AI and automation….

Fortunately, AI and data-driven innovation could offer a way forward. In what could be perceived as a kind of AI utopia, the paradox of a bigger state with a smaller budget could be reconciled, because the government would have the tools to expand public goods and services at a very small cost.

The biggest hurdle would be cultural: As early as 1948, the German philosopher Joseph Pieper warned against the “proletarianization” of people and called for leisure to be the basis for culture. Westerners would have to abandon their obsession with the work ethic, as well as their deep-seated resentment toward “free riders.” They would have to start differentiating between work that is necessary for a dignified existence, and work that is geared toward amassing wealth and achieving status. The former could potentially be all but eliminated.

With the right mindset, all societies could start to forge a new AI-driven social contract, wherein the state would capture a larger share of the return on assets, and distribute the surplus generated by AI and automation to residents. Publicly-owned machines would produce a wide range of goods and services, from generic drugs, food, clothes, and housing, to basic research, security, and transportation….(More)”.

How I Learned to Stop Worrying and Love the GDPR


Ariane Adam at DataStewards.net: “The General Data Protection Regulation (GDPR) was approved by the EU Parliament on 14 April 2016 and came into force on 25 May 2018….

The coming into force of this important regulation has created confusion and concern about penalties, particularly in the private sector….There is also apprehension about how the GDPR will affect the opening and sharing of valuable databases. At a time when open data is increasingly shaping the choices we make, from finding the fastest route home to choosing the best medical or education provider, misinformation about data protection principles leads to concerns that ‘privacy’ will be used as a smokescreen to not publish important information. Allaying the concerns of private organisations and businesses in this area is particularly important as often the datasets that most matter, and that could have the most impact if they were open, do not belong to governments.

Looking at the regulation and its effects about one year on, this paper advances a positive case for the GDPR and aims to demonstrate that a proper understanding of its underlying principles can not only assist in promoting consumer confidence and therefore business growth, but also enable organisations to safely open and share important and valuable datasets….(More)”.