Betting on biometrics to boost child vaccination rates


Ben Parker at The New Humanitarian: “Thousands of children between the ages of one and five are due to be fingerprinted in Bangladesh and Tanzania in the largest biometric scheme of its kind ever attempted, the Geneva-based vaccine agency, Gavi, announced recently.

Although the scheme includes data protection safeguards – and its sponsors are cautious not to promise immediate benefits – it is emerging during a widening debate on data protection, technology ethics, and the risks and benefits of biometric ID in development and humanitarian aid.

Gavi, a global vaccine provider, is teaming up with Japanese and British partners in the venture. It is the first time such a trial has been done on this scale, according to Gavi spokesperson James Fulker.

Being able to track a child’s attendance at vaccination centres, and replace “very unreliable” paper-based records, can help target the 20 million children who are estimated to miss key vaccinations, most in poor or remote communities, Fulker said.

Up to 20,000 children will have their fingerprints taken and linked to their records in existing health projects. That collection effort will be managed by Simprints, a UK-based not-for-profit enterprise specialising in biometric technology in international development, according to Christine Kim, the company’s head of strategic partnerships….

Ethics and legal safeguards

Kim said Simprints would apply data protection standards equivalent to the EU’s General Data Protection Regulation (GDPR), even if national legislation did not demand it. Families could opt out without penalty, and informed consent would apply to any data gathering. She added that the fieldwork would be approved by national governments, and oversight would also come from institutional review boards at universities in the two countries.

Fulker said Gavi had also commissioned a third-party review to verify Simprints’ data protection and security methods.

For critics of biometrics use in humanitarian settings, however, any such plan raises red flags….

Data protection analysts have long argued that gathering digital ID and biometric data carries particular risks for vulnerable groups facing conflict or oppression: their data could be shared or leaked to hostile parties who could use it to target them.

In a recent commentary on biometrics and aid, Linda Raftree told The New Humanitarian that “the greatest burden and risk lies with the most vulnerable, whereas the benefits accrue to [aid] agencies.”

And during a panel discussion on “Digital Do No Harm” held last year in Berlin, humanitarian professionals and data experts discussed a range of threats and unintended consequences of new technologies, noting that they are as yet hard to predict….(More)”.

New App Uses Crowdsourcing to Find You an EpiPen in an Emergency


Article by Shaunacy Ferro: “Many people at risk for severe allergic reactions to things like peanuts and bee stings carry EpiPens. These tools inject the medication epinephrine into one’s bloodstream to control immune responses immediately. But exposure can turn into life-threatening situations in a flash: Without EpiPens, people could suffer anaphylactic shock in less than 15 minutes as they wait for an ambulance. Being without an EpiPen or other auto-injector can have deadly consequences.

EPIMADA, a new app created by researchers at Israel’s Bar-Ilan University, is designed to save the lives of people who go into anaphylactic shock when they don’t have EpiPens handy. The app uses the same type of algorithms that ride-hailing services use to match drivers and riders by location—in this case, EPIMADA matches people in distress with nearby strangers carrying EpiPens. David Schwartz, director of the university’s Social Intelligence Lab and one of the app’s co-creators, told The Jerusalem Post that the app currently has hundreds of users….
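
The article does not spell out EPIMADA's matching logic, so the sketch below only illustrates the general technique it references: a nearest-neighbour search that ranks registered EpiPen carriers by great-circle (haversine) distance from the person in distress, much as ride-hailing services rank nearby drivers. All names, field layouts, coordinates, and the 2 km radius are assumptions for the example, not details of the actual app.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def nearest_carriers(request, carriers, max_km=2.0, limit=3):
    """Rank EpiPen carriers by distance to the person in distress,
    keeping only those within a plausibly reachable radius."""
    ranked = sorted(
        ((haversine_km(request["lat"], request["lon"], c["lat"], c["lon"]), c)
         for c in carriers),
        key=lambda pair: pair[0],
    )
    return [(c["id"], round(d, 2)) for d, c in ranked if d <= max_km][:limit]

# Illustrative data: one request and three registered carriers.
request = {"lat": 32.0853, "lon": 34.7818}
carriers = [
    {"id": "A", "lat": 32.0860, "lon": 34.7810},
    {"id": "B", "lat": 32.1000, "lon": 34.8000},
    {"id": "C", "lat": 32.0750, "lon": 34.7750},
]
print(nearest_carriers(request, carriers))
```

In practice a system like this would also need live location updates, push notifications to the matched carriers, and confirmation that a responder has accepted, but the ranking step itself reduces to the distance sort shown here.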

EPIMADA serves as a way to crowdsource medication from fellow patients who might be close by and able to help. While it may seem unlikely that people would rush to give up their own expensive life-saving tool for a stranger, EPIMADA co-creator Michal Gaziel Yablowitz, a doctoral student in the Social Intelligence Lab, explained in a press release that “preliminary research results show that allergy patients are highly motivated to give their personal EpiPen to patient-peers in immediate need.”…(More)”.

How can Indigenous Data Sovereignty (IDS) be promoted and mainstreamed within open data movements?


OD Mekong Blog: “Considering Indigenous rights in the open data and technology space is a relatively new concept. Called “Indigenous Data Sovereignty” (IDS), it is defined as “the right of Indigenous peoples to govern the collection, ownership, and application of data about Indigenous communities, peoples, lands, and resources”, regardless of where the data is held or by whom. By default, this broad and all-encompassing framework bucks fundamental concepts of open data, and asks traditional open data practitioners to critically consider how open data can be used as a tool of transparency that also upholds equal rights for all…

Four main areas of concern and relevant barriers identified by participants were:

Self-determination to identify their membership

  • National governments in many states, particularly across Asia and South America, still do not allow for self-determination under the law. Even where legislation offers some recognition, it is scarcely enforced, and mainstream discourse demonises Indigenous self-determination.
  • However, because Indigenous and ethnic minorities frequently face hardship and persecution, there were concerns about the applicability of data sovereignty at the local level.

Intellectual Property Protocols

  • It has become the norm for big tech companies to extract excessive amounts of data from people’s everyday lives. How do disenfranchised communities combat this?
  • Indigenous data is often misappropriated to the detriment of Indigenous peoples.
  • Intellectual property concepts, such as copyright, are not an ideal approach to protecting Indigenous knowledge and intellectual property rights because they are rooted in commercial ideals that are difficult to apply in Indigenous contexts, especially since many groups do not practice commercialization in the globalized context. Also, as a concept based on exclusivity (i.e., when licenses expire, knowledge passes into the public domain), copyright does not take into account the collectivist ideals of Indigenous peoples.

Data Governance

  • Ultimately, data protection is about protecting lives. Having the ability to use data to direct decisions on Indigenous development places greater control in the hands of Indigenous peoples.
  • National governments are a barrier because of conflicting sovereignty interests. Nation-state legal systems often contradict customary laws, and thus rarely reflect rights-based approaches.

Consent — Free Prior and Informed Consent (FPIC)

  • FPIC refers to a set of principles defining the process and mechanisms that apply specifically to Indigenous peoples in relation to the exercise of their collective rights. These principles are intended to ensure that Indigenous peoples are treated as sovereign peoples with their own decision-making power, customary governance systems, and collective decision-making processes, but it is questionable to what extent true FPIC can be ensured in the Indigenous context.
  • It also remains a question as to how effectively due diligence can be applied to research protocols, so as to ensure that the rights associated with FPIC and the UNDRIP framework are upheld….(More)”.

How an AI Utopia Would Work


Sami Mahroum at Project Syndicate: “…It is more than 500 years since Sir Thomas More found inspiration for the “Kingdom of Utopia” while strolling the streets of Antwerp. So, when I traveled there from Dubai in May to speak about artificial intelligence (AI), I couldn’t help but draw parallels to Raphael Hythloday, the character in Utopia who regales sixteenth-century Englanders with tales of a better world.

As home to the world’s first Minister of AI, as well as museums, academies, and foundations dedicated to studying the future, Dubai is on its own Hythloday-esque voyage. Whereas Europe, in general, has grown increasingly anxious about technological threats to employment, the United Arab Emirates has enthusiastically embraced the labor-saving potential of AI and automation.

There are practical reasons for this. The ratio of indigenous-to-foreign labor in the Gulf states is highly imbalanced, ranging from a high of 67% in Saudi Arabia to a low of 11% in the UAE. And because the region’s desert environment cannot support further population growth, the prospect of replacing people with machines has become increasingly attractive.

But there is also a deeper cultural difference between the two regions. Unlike Western Europe, the birthplace of both the Industrial Revolution and the “Protestant work ethic,” Arab societies generally do not “live to work,” but rather “work to live,” placing a greater value on leisure time. Such attitudes are not particularly compatible with economic systems that require squeezing ever more productivity out of labor, but they are well suited for an age of AI and automation….

Fortunately, AI and data-driven innovation could offer a way forward. In what could be perceived as a kind of AI utopia, the paradox of a bigger state with a smaller budget could be reconciled, because the government would have the tools to expand public goods and services at a very small cost.

The biggest hurdle would be cultural: As early as 1948, the German philosopher Joseph Pieper warned against the “proletarianization” of people and called for leisure to be the basis for culture. Westerners would have to abandon their obsession with the work ethic, as well as their deep-seated resentment toward “free riders.” They would have to start differentiating between work that is necessary for a dignified existence, and work that is geared toward amassing wealth and achieving status. The former could potentially be all but eliminated.

With the right mindset, all societies could start to forge a new AI-driven social contract, wherein the state would capture a larger share of the return on assets, and distribute the surplus generated by AI and automation to residents. Publicly-owned machines would produce a wide range of goods and services, from generic drugs, food, clothes, and housing, to basic research, security, and transportation….(More)”.

Open Verification


Article by Eyal Weizman: “More than a decade ago, I would have found the idea of a forensic institute to be rather abhorrent. Coming from the field of left activism and critical spatial practice, I felt instinctively oriented against the authority of established truths. Forensics relies on technical expertise in normative and legal frameworks, and smacks of institutional authority. It is, after all, one of the fundamental arts of the state, the privilege of its agencies: the police, the secret services, or the military. Today, counter-intuitively perhaps, I find myself running Forensic Architecture, a group of architects, filmmakers, coders, and journalists which operates as a forensic agency and makes evidence public in different forums such as the media, courts, truth commissions, and cultural venues.

This reorientation of my thought practice was a response to changes in the texture of our present and to the nature of contemporary conflict. An evolving information and media environment enables authoritarian states to manipulate and distort facts about their crimes, but it also offers new techniques with which civil society groups can invert the forensic gaze and monitor them. This is what we call counter-forensics.

We do not yet have a satisfactory name for the new reactionary forces—a combination of digital racism, ultra-nationalism, self-victimhood, and conspiracism—that have taken hold across the world and become manifest in countries such as Russia, Poland, Hungary, Britain, Italy, Brazil, the US, and Israel, where I most closely experienced them. These forces have made the obscuring, blurring, manipulation, and distortion of facts their trademark. Whatever form of reality-denial “post truth” is, it is not simply about lying. Lying in politics is sometimes necessary. Deception, after all, has always been part of the toolbox of statecraft, and there might not be more of it now than in previous times. The defining characteristic of our era might thus be not an extraordinary dissemination of untruths but, rather, ongoing attacks against the institutional authorities that buttress facts: government experts, universities, science laboratories, mainstream media, and the judiciary.

Because questioning the authority of state institutions is also what counter-forensics is about—we seek to expose police and military cover-ups, government lies, and instances in which the legal system has been aligned against state victims—we must distinguish it from the tactics of those political forces mentioned above.

Dark Epistemology

While “post truth” is a seemingly new phenomenon, for those working to expose state crimes at the frontiers of contemporary conflicts, it has long been the constant condition of our work. As a set of operations, this form of denial compounds the traditional roles of propaganda and censorship. It is propaganda because it is concerned with statements released by states to affect the thoughts and conduct of publics. It is not the traditional form of propaganda, though, framed in the context of a confrontation between blocs and ideologies. It does not aim to persuade or tell you anything, nor does it seek to promote the assumed merits of one system over the other—equality vs. freedom or east vs. west—but rather to blur perception so that nobody knows what is real anymore. The aim is that when people no longer know what to think, how to establish facts, or when to trust them, those in power can fill the void with whatever they want.

“Post truth” also functions as a new form of censorship because it blocks one’s ability to evaluate and debate facts. In the face of governments’ increasing difficulties in cutting data out of circulation and in suppressing political discourse, it adds rather than subtracts, augmenting the level of noise in a deliberate maneuver to divert attention….(More)”.

An open platform centric approach for scalable government service delivery to the poor: The Aadhaar case


Paper by Sandip Mukhopadhyay, Harry Bouwman and Mahadeo Prasad Jaiswal: “The efficient delivery of government services to the poor, or Bottom of the Pyramid (BOP), faces many challenges. A core problem, the lack of scalability, could be solved by the rapid proliferation of platforms and associated ecosystems. Existing research on platforms focuses on modularity, openness, ecosystem leadership and governance, as well as on their impact on innovation, scale and agility. However, existing studies fail to explore the role of platforms in scalable e-government service delivery on an empirical level. Based on an in-depth case study of the world’s largest biometric identity platform, used by millions of the poor in India, we develop a set of propositions connecting the attributes of a digital platform ecosystem to different indicators for the scalability of government service delivery. We found that modular architecture, combined with limited functionality in core modules, and open standards combined with controlled access and ecosystem governance enabled by keystone behaviour, have a positive impact on scalability. The research provides insights to policy-makers and government officials alike, particularly those in nations struggling to provide basic services to the poor and marginalised. …(More)”.

Public Entrepreneurship: How to train 21st century leaders


Beth Noveck at apolitical: “So how do we develop these better ways of working in government? How do we create a more effective public service?

Governments, universities and philanthropies are beginning to invest in training those inside and outside of government in new kinds of public entrepreneurial skills. They are also innovating in how they teach.

Canada has created a new Digital Academy to teach digital literacy to all 250,000 public servants. Among other approaches, it has created a 15-minute podcast series called “Bus Rides” to enable public servants to learn on their commute.

The better programs, like Canada’s, combine online and face-to-face methods. This is what Israel does in its Digital Leaders program: the nine-month program alternates between web-based and live meetings, while also connecting learners to a global, online network of digital innovators.

Many countries have started to teach human-centred design to public servants, instructing officials in how to design services with, not simply for, the public, as WeGov does in Brazil. In Chile, the UAI University has just begun teaching quantitative skills, offering three-day intensives in data science for public servants.

The GovLab also offers a nifty, free online program called Solving Public Problems with Data.

Public sector learning

To ensure that learning translates into practice, Australia’s BizLab Academy turns students into teachers by using alumni of its human-centred design training as mentors for new students.

The cities of Orlando and Sao Paulo go beyond training public servants. Orlando includes members of the public in its training program for city officials: because officials are learning to redesign services with citizens, the public participates in the training.

The Sao Paulo Abierta program uses citizens as trainers for the city’s public servants. More than 23,000 of them have studied with these lay trainers, who possess innovation skills that are in short supply in government. In fact, public officials are prohibited from teaching in the program altogether.


Recognising that it is not enough to train only a lone innovator or data scientist in a unit, governments are scaling their programs across the public sector.

Argentina’s LabGob has already trained 30,000 people since 2016 in its Design Academy for Public Policy with plans to expand. For every class taken, a public servant earns points, which are a prerequisite for promotions and pay raises in the Argentinian civil service.

Rather than going broad, some training programs are going deep by teaching sector-specific innovation skills. The NHS Digital Academy, a collaboration with Imperial College, is a series of six online and four live sessions designed to produce leaders in health innovation.

Innovating in a bureaucracy

In my own work at the GovLab at New York University, we are helping public entrepreneurs take their public interest projects from idea to implementation using coaching, rather than training.

Training classes may be wonderful but leave people feeling abandoned when they return to their desks to face the challenge of innovating within a bureaucracy.

With hands-on mentoring from global leaders and peer-to-peer support, the GovLab Academy coaching programs try to ensure that public servants are getting the help they need to advance innovative projects.

Knowing what innovation skills to teach, and how to teach them, should depend on asking people what they want. That’s why the Australia and New Zealand School of Government is administering a survey asking public servants there exactly these questions….(More)”.

Virtuous and vicious circles in the data life-cycle


Paper by Elizabeth Yakel, Ixchel M. Faniel, and Zachary J. Maiorana: “In June 2014, ‘Data sharing reveals complexity in the westward spread of domestic animals across Neolithic Turkey’, was published in PLoS One (Arbuckle et al. 2014). In this article, twenty-three authors, all zooarchaeologists, representing seventeen different archaeological sites in Turkey investigated the domestication of animals across Neolithic southwest Asia, a pivotal era of change in the region’s economy. The PLoS One article originated in a unique data sharing, curation, and reuse project in which a majority of the authors agreed to share their data and perform analyses across the aggregated datasets. The extent of data sharing and the breadth of data reuse and collaboration were previously unprecedented in archaeology. In the present article, we conduct a case study of the collaboration leading to the development of the PLoS One article. In particular, we focus on the data sharing, data curation, and data reuse practices exercised during the project in order to investigate how different phases in the data life-cycle affected each other.

Studies of data practices have generally engaged issues from the singular perspective of data producers, sharers, curators, or reusers. Furthermore, past studies have tended to focus on one aspect of the life-cycle (production, sharing, curation, reuse, etc.). A notable exception is Carlson and Anderson’s (2007) comparative case study of four research projects which discusses the life-cycle of data from production through sharing with an eye towards reuse. However, that study primarily addresses the process of data sharing. While we see from their research that data producers’ and curators’ decisions and actions regarding data are tightly coupled and have future consequences, those consequences are not fully explicated since the authors do not discuss reuse in depth.

Taking a perspective that captures the trajectory of data, our case study discusses actions and their consequences throughout the data life-cycle. Our research theme explores how different stakeholders and their work practices positively and/or negatively affected other phases of the life-cycle. More specifically, we focus on data production practices and data selection decisions made during data sharing as these have frequent and diverse consequences for other life-cycle phases in our case study. We address the following research questions:

  1. How do different aspects of data production positively and negatively impact other phases in the life-cycle?
  2. How do data selection decisions during sharing positively and negatively impact other phases in the life-cycle?
  3. How can the work of data curators intervene to reinforce positive actions or mitigate negative actions?…(More)”

Bringing Truth to the Internet


Article by Karen Kornbluh and Ellen P. Goodman: “The first volume of Special Counsel Robert Mueller’s report notes that “sweeping” and “systemic” social media disinformation was a key element of Russian interference in the 2016 election. No sooner were Mueller’s findings public than Twitter suspended a host of bots that had been promoting a “Russiagate hoax.”

Since at least 2016, conspiracy theories like Pizzagate and QAnon have flourished online and bled into mainstream debate. Earlier this year, a British member of Parliament called social media companies “accessories to radicalization” for their role in hosting and amplifying radical hate groups, after the New Zealand mosque shooter cited such groups and attempted to fuel more of them. In Myanmar, anti-Rohingya forces used Facebook to spread rumors that spurred ethnic cleansing, according to a UN special rapporteur. These platforms are vulnerable to those who aim to prey on intolerance, peer pressure, and social disaffection. Our democracies are being compromised. They work only if the information ecosystem has integrity—if it privileges truth and channels difference into nonviolent discourse. But the ecosystem is increasingly polluted.

Around the world, a growing sense of urgency about the need to address online radicalization is leading countries to embrace ever more draconian solutions: After the Easter bombings in Sri Lanka, the government shut down access to Facebook, WhatsApp, and other social media platforms. And a number of countries are considering adopting laws requiring social media companies to remove unlawful hate speech or face hefty penalties. According to Freedom House, “In the past year, at least 17 countries approved or proposed laws that would restrict online media in the name of fighting ‘fake news’ and online manipulation.”

The flaw with these censorious remedies is this: They focus on the content that the user sees—hate speech, violent videos, conspiracy theories—and not on the structural characteristics of social media design that create vulnerabilities. Content moderation requirements that cannot scale are not only doomed to be ineffective exercises in whack-a-mole, but they also create free expression concerns, by turning either governments or platforms into arbiters of acceptable speech. In some countries, such as Saudi Arabia, content moderation has become justification for shutting down dissident speech.

When countries pressure platforms to root out vaguely defined harmful content and disregard the design vulnerabilities that promote that content’s amplification, they are treating a symptom and ignoring the disease. The question isn’t “How do we moderate?” Instead, it is “How do we promote design change that optimizes for citizen control, transparency, and privacy online?”—exactly the values that the early Internet promised to embody….(More)”.

Number of fact-checking outlets surges to 188 in more than 60 countries


Mark Stencel at Poynter: “The number of fact-checking outlets around the world has grown to 188 in more than 60 countries amid global concerns about the spread of misinformation, according to the latest tally by the Duke Reporters’ Lab.

Since the last annual fact-checking census in February 2018, we’ve added 39 more outlets that actively assess claims from politicians and social media, a 26% increase. The new total is also more than four times the 44 fact-checkers we counted when we launched our global database and map in 2014.

Globally, the largest growth came in Asia, which went from 22 to 35 outlets in the past year. Nine of the 27 fact-checking outlets that launched since the start of 2018 were in Asia, including six in India. Latin American fact-checking also saw a growth spurt in that same period, with two new outlets in Costa Rica, and others in Mexico, Panama and Venezuela.

The actual worldwide total is likely much higher than our current tally. That’s because more than a half-dozen of the fact-checkers we’ve added to the database since the start of 2018 began as election-related partnerships involving the collaboration of multiple organizations. And some of those election partners are discussing ways to continue or reactivate that work — either together or on their own.

Over the past 12 months, five separate multimedia partnerships enlisted more than 60 different fact-checking organizations and other news companies to help debunk claims and verify information for voters in Mexico, Brazil, Sweden, Nigeria and the Philippines. And the Poynter Institute’s International Fact-Checking Network assembled a separate team of 19 media outlets from 13 countries to consolidate and share their reporting during the run-up to last month’s elections for the European Parliament. Our database includes each of these partnerships, along with several others — but not each of the individual partners. And because they were intentionally short-run projects, three of these big partnerships appear among the 74 inactive projects we also document in our database.

Politics isn’t the only driver for fact-checkers. Many outlets in our database are concentrating efforts on viral hoaxes and other forms of online misinformation — often in coordination with the big digital platforms on which that misinformation spreads.

We also continue to see new topic-specific fact-checkers, such as Metafact in Australia and Health Feedback in France — both of which launched in 2018 to focus on claims about health and medicine for a worldwide audience….(More)”.