The Signal Code


The Signal Code: “Humanitarian action adheres to the core humanitarian principles of impartiality, neutrality, independence, and humanity, as well as respect for international humanitarian and human rights law. These foundational principles are enshrined within core humanitarian doctrine, particularly the Red Cross/NGO Code of Conduct and the Humanitarian Charter. Together, these principles establish a duty of care for populations affected by the actions of humanitarian actors and impose adherence to a standard of reasonable care for those engaged in humanitarian action.

Engagement in HIAs, including the use of data and ICTs, must be consistent with these foundational principles and respect the human rights of crisis-affected people to be considered “humanitarian.” In addition to offering potential benefits to those affected by crisis, HIAs, including the use of ICTs, can cause harm to the safety, wellbeing, and the realization of the human rights of crisis-affected people. Absent a clear understanding of which rights apply to this context, the utilization of new technologies, and in particular experimental applications of these technologies, may be more likely to harm communities and violate the fundamental human rights of individuals.

The Signal Code is based on the application of the UDHR, the Nuremberg Code, the Geneva Convention, and other instruments of customary international law related to HIAs and the use of ICTs by crisis-affected populations and by humanitarians on their behalf. The fundamental human rights undergirding this Code are the rights to life, liberty, and security; the protection of privacy; freedom of expression; and the right to share in scientific advancement and its benefits as expressed in Articles 3, 12, 19, and 27 of the UDHR.

The Signal Code asserts that all people have fundamental rights to access, transmit, and benefit from information as a basic humanitarian need; to be protected from harms that may result from the provision of information during crisis; to have a reasonable expectation of privacy and data security; to have agency over how their data is collected and used; and to seek redress and rectification when data pertaining to them causes harm or is inaccurate.

These rights are found to apply specifically to the access, collection, generation, processing, use, treatment, and transmission of information, including data, during humanitarian crises. These rights are also found herein to be interrelated and interdependent. To realize any of these rights individually requires realization of all of these rights in concert.

These rights are found to apply to all phases of the data lifecycle—before, during, and after the collection, processing, transmission, storage, or release of data. These rights are also found to be elastic, meaning that they apply to new technologies and scenarios that have not yet been identified or encountered by current practice and theory.

Data is, formally, a collection of symbols that functions as a representation of information or knowledge. The term raw data is often used with two different meanings: the first is uncleaned data, that is, data collected in an uncontrolled environment; the second is unprocessed data, that is, collected data that has not yet been processed in a way that makes it suitable for decision making. Colloquially, and in the humanitarian context, data is usually thought of solely in the machine-readable or digital sense. For the purposes of the Signal Code, we use the term data to encompass information in both its analog and digital representations. Where it is necessary to address data solely in its digital representation, we refer to it as digital data.

No right herein may be used to abridge any other right. Nothing in this code may be interpreted as giving any state, group, or person the right to engage in any activity or perform any act that destroys the rights described herein.

The five human rights specific to information and HIAs during humanitarian crises are the following:

The Right to Information
The Right to Protection
The Right to Data Security and Privacy
The Right to Data Agency
The Right to Redress and Rectification…(More)”

Technology tools in human rights


Engine Room: “Over the past few years, we have been witnessing a wave of new technology tools for human rights documentation. With the arrival of these tools, human rights defenders face new possibilities, new challenges, and new expectations of human rights documentation initiatives.

Produced with support from the Oak Foundation, this report is designed as a first attempt to detail available technologies that are designed for human rights documentation, understand the various perspectives on the challenges human rights documentation initiatives face when adopting new tools and practices, and analyse what is working and what is not for human rights documentation initiatives seeking to integrate new tools in their work….

Primary takeaways:

  • Traditional methods still apply: The environment in which HRDs work has not changed dramatically due to technology and data.
  • Unreliability and unknown risks are huge barriers to engagement with technology: In high-pressure situations such as those HRDs face, the methodologies used need to be concrete and reliable.
  • Priorities of HRDs centre around their particular issue: Digital technologies often come as an afterthought, rather than integrated into established strategies for communication or campaigning.
  • The lifespan of technology tools is a big barrier to long-term use: Sustainability and maintenance of tools are big barriers to engaging with them and can cause fatigue among users who must change their practices often.
  • Past failed attempts at using tools make future attempts more difficult: After investing time and energy in changing a workflow or process only for it not to work, people are often reluctant to do so again.
  • HRDs understand their context best: Tool recommendations from external parties sometimes do more harm than good.
  • There is a lack of technical capacity within HRD initiatives: As a result, when tools are introduced, groups become reliant on external parties for technical troubleshooting and support.

(Download the report)

 

Open or Closed? Open Licensing of Real-Time Public Sector Transit Data


Teresa Scassa and Alexandra Diebel in Journal of e-Democracy: “This paper explores how real-time data are made available as “open data” using municipal transit data as a case study. Many transit authorities in North America and elsewhere have installed technology to gather GPS data in real-time from transit vehicles. These data are in high demand in app developer communities because of their use in communicating predicted, rather than scheduled, transit vehicle arrival times. While many municipalities have chosen to treat real-time GPS data as “open data,” the particular nature of real-time GPS data requires a different mode of access for developers than what is needed for static data files. This, in turn, has created a conflict between the “openness” of the underlying data and the sometimes restrictive terms of use which govern access to the real-time data through transit authority Application Program Interfaces (APIs). This paper explores the implications of these terms of use and considers whether real-time data require a separate standard for openness. While the focus is on the transit data context, the lessons from this area will have broader implications, particularly for open real-time data in the emerging smart cities environment….(More)”
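The core computation the excerpt alludes to – turning a real-time GPS feed into a predicted rather than scheduled arrival time – can be sketched in a few lines. The vehicle fields and the naive ETA formula below are illustrative assumptions, not any transit authority's actual API or prediction model:

```python
from dataclasses import dataclass


@dataclass
class VehiclePosition:
    """A single real-time observation from a transit vehicle (assumed fields)."""
    route_id: str
    meters_to_stop: float  # remaining distance along the route to the stop
    speed_mps: float       # recent average speed, metres per second


def predicted_arrival_s(pos: VehiclePosition) -> float:
    """Naive ETA: remaining distance divided by recent average speed."""
    if pos.speed_mps <= 0:
        raise ValueError("vehicle is stationary; no prediction possible")
    return pos.meters_to_stop / pos.speed_mps


# A bus 900 m from the stop travelling at 6 m/s is predicted to arrive
# in 150 seconds, regardless of what the static timetable says.
eta = predicted_arrival_s(VehiclePosition("95", 900.0, 6.0))
```

Production systems blend live positions with schedule and historical data, but even this toy version shows why developers need continuous API access rather than a static data file.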

Why We Misjudge the Nudge


Paper by Adam Hill: “Critics frequently argue that nudges are more covert, less transparent, and more difficult to monitor than traditional regulatory tools. Edward Glaeser, for example, argues that “[p]ublic monitoring of soft paternalism is much more difficult than public monitoring of hard paternalism.” As one of the leading proponents of soft paternalism, Cass Sunstein, acknowledges, while “[m]andates and commands are highly visible,” soft paternalism, “and some nudges in particular[,] may be invisible.” In response to this challenge, proponents of nudging argue that invisibility for any given individual in a particular choice environment is compatible with “careful public scrutiny” of the nudge. This paper offers first-of-its-kind experimental evidence testing whether nudges are, in fact, compatible with careful public scrutiny. Using two sets of experiments, the paper argues that, even when made visible, nudges attract less scrutiny than their “hard law” counterparts….(More)”

Rethinking how we collect, share, and use development results data


Development Gateway: “The international development community spends a great deal of time, effort, and money gathering data on thousands of indicators embedded in various levels of Results Frameworks. These data comprise outputs (school enrollment, immunization figures), program outcomes (educational attainment, disease prevalence), and, in some cases, impacts (changes in key outcomes over time).

Ostensibly, we use results data to allocate resources to the places, partners, and programs most likely to achieve lasting success. But is this data good enough – and is it used well enough – to genuinely increase development impact in priority areas?

Experience suggests that decision-makers at all levels may often face inadequate, incorrect, late, or incomplete results data. At the same time, a figurative “Tower of Babel” of both project-level M&E and program-level outcome data can make it difficult for agencies and organizations to share and use data effectively. Further, potential users may not have the skills, resources, or enabling environment to meaningfully analyze and apply results data to decisions. With these challenges in mind, the development community needs to re-think its investments in results data, making sure that the right users are able to collect, share, and use this information to maximum effect.

Our Initiative

To this end, Development Gateway (DG), with the support of the Bill & Melinda Gates Foundation, aims to “diagnose” the results data ecosystem in three countries, identifying ways to improve data quality, sharing, and use in the health and agriculture sectors. Some of our important questions include:

  • Quality: Who collects data and how? Is data quality adequate? Does the data meet actual needs? How much time does data collection demand? How can data collection, quality, and reporting be improved?
  • Sharing: How can we compare results data from different donors, governments, and implementers? Is there demand for comparability? Should data be shared more freely? If so, how?
  • Use: How is results data analyzed and used to inform actual policies and plans? Does (or can) access to results data improve decision-making? Do the right people have the right data? How else can (or should) we promote data use?…(More)”

Open Government: The Global Context and the Way Forward


Report by the OECD: “…provides an in-depth, evidence-based analysis of open government initiatives and the challenges countries face in implementing and co-ordinating them. It also explores new trends in OECD member countries as well as a selection of countries from Latin America, MENA and South East Asia regions. Based on the 2015 Survey on Open Government and Citizen Participation in the Policy Cycle, the report identifies future areas of work, including the effort to mobilise and engage all branches and all levels of government in order to move from open governments to open states; how open government principles and practices can help achieve the UN Sustainable Development Goals; the role of the Media to create an enabling environment for open government initiatives to thrive; and the growing importance of subnational institutions to implement successful open government reforms….(More)”

Towards Scalable Governance: Sensemaking and Cooperation in the Age of Social Media


Iyad Rahwan in Philosophy & Technology: “Cybernetics, or self-governance of animal and machine, requires the ability to sense the world and to act on it in an appropriate manner. Likewise, self-governance of a human society requires groups of people to collectively sense and act on their environment. I argue that the evolution of political systems is characterized by a series of innovations that attempt to solve (among others) two ‘scalability’ problems: scaling up a group’s ability to make sense of an increasingly complex world, and to cooperate in increasingly larger groups. I then explore some recent efforts toward using the Internet and social media to provide alternative means for addressing these scalability challenges, under the banners of crowdsourcing and computer-supported argumentation. I present some lessons from those efforts about the limits of technology, and the research directions more likely to bear fruit….(More)”

Radical thinking reveals the secrets of making change happen


Extract from Duncan Green’s new book in The Guardian, where he explores how change actually occurs – and what that means: “Political and economic earthquakes are often sudden and unforeseeable, despite the pundits who pop up later to claim they predicted them all along – take the fall of the Berlin Wall, the 2008 global financial crisis, or the Arab Spring (and ensuing winter). Even at a personal level, change is largely unpredictable: how many of us can say our lives have gone according to the plans we had as 16-year-olds?

The essential mystery of the future poses a huge challenge to activists. If change is only explicable in the rear-view mirror, how can we accurately envision the future changes we seek, let alone achieve them? How can we be sure our proposals will make things better, and not fall victim to unintended consequences? People employ many concepts to grapple with such questions. I find “systems” and “complexity” two of the most helpful.

A “system” is an interconnected set of elements coherently organised in a way that achieves something. It is more than the sum of its parts: a body is more than an aggregate of individual cells; a university is not merely an agglomeration of individual students, professors, and buildings; an ecosystem is not just a set of individual plants and animals.

A defining property of human systems is complexity: because of the sheer number of relationships and feedback loops among their many elements, they cannot be reduced to simple chains of cause and effect. Think of a crowd on a city street, or a flock of starlings wheeling in the sky at dusk. Even with supercomputers, it is impossible to predict the movement of any given person or starling, but there is order; amazingly few collisions occur even on the most crowded streets.

In complex systems, change results from the interplay of many diverse and apparently unrelated factors. Those of us engaged in seeking change need to identify which elements are important and understand how they interact.

My interest in systems thinking began when collecting stories for my book From Poverty to Power. The light-bulb moment came on a visit to India’s Bundelkhand region, where the poor fishing communities of Tikamgarh had won rights to more than 150 large ponds. In that struggle numerous factors interacted to create change. First, a technological shift triggered changes in behaviour: the introduction of new varieties of fish, which made the ponds more profitable, induced landlords to seize ponds that had been communal. Conflict then built pressure for government action: a group of 12 brave young fishers in one village fought back, prompting a series of violent clashes that radicalized and inspired other communities; women’s groups were organized for the first time, taking control of nine ponds. Enlightened politicians and non-governmental organizations (NGOs) helped pass new laws and the police amazed everyone by enforcing them.

The fishing communities were the real heroes of the story. They tenaciously faced down a violent campaign of intimidation, moved from direct action to advocacy, and ended up winning not only access to the ponds but a series of legal and policy changes that benefited all fishing families.

The neat narrative sequence of cause and effect I’ve just written, of course, is only possible in hindsight. In the thick of the action, no-one could have said why the various actors acted as they did, or what transformed the relative power of each. Tikamgarh’s experience highlights how unpredictable the interaction is between structures (such as state institutions), agency (by communities and individuals), and the broader context (characterized by shifts in technology, environment, demography, or norms).

Unfortunately, the way we commonly think about change projects onto the future the neat narratives we draw from the past. Many of the mental models we use are linear plans – “if A, then B” – with profound consequences in terms of failure, frustration, and missed opportunities. As Mike Tyson memorably said, “Everyone has a plan ’til they get punched in the mouth”….(More)

See also http://how-change-happens.com/

Teaching an Algorithm to Understand Right and Wrong


Greg Satell at Harvard Business Review: “In his Nicomachean Ethics, Aristotle states that it is a fact that “all knowledge and every pursuit aims at some good,” but then continues, “What then do we mean by the good?” That, in essence, encapsulates the ethical dilemma. We all agree that we should be good and just, but it’s much harder to decide what that entails.

Since Aristotle’s time, the questions he raised have been continually discussed and debated. From the works of great philosophers like Kant, Bentham, and Rawls to modern-day cocktail parties and late-night dorm room bull sessions, the issues are endlessly mulled over and argued about but never reach a satisfying conclusion.

Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining newfound importance. If we find it so difficult to denote the principles by which a person should act justly and wisely, then how are we to encode them within the artificial intelligences we are creating? It is a question that we need to come up with answers for soon.

Designing a Learning Environment

Every parent worries about what influences their children are exposed to. What TV shows are they watching? What video games are they playing? Are they hanging out with the wrong crowd at school? We try not to overly shelter our kids because we want them to learn about the world, but we don’t want to expose them to too much before they have the maturity to process it.

In artificial intelligence, these influences are called a “machine learning corpus.” For example, if you want to teach an algorithm to recognize cats, you expose it to thousands of pictures of cats and things that are not cats. Eventually, it figures out how to tell the difference between, say, a cat and a dog. Much as with human beings, it is through learning from these experiences that algorithms become useful.
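As a rough sketch of what "learning from a corpus" means, the toy nearest-centroid classifier below learns two labels from labelled examples; the two-dimensional features and the examples themselves are invented for illustration and bear no resemblance to a real image model trained on thousands of pictures:

```python
# Illustrative only: a toy nearest-centroid classifier standing in for
# "learning from a labelled corpus". All features and labels are made up.
from statistics import mean


def train(corpus):
    """corpus: list of (features, label) pairs. Returns per-label centroids."""
    by_label = {}
    for features, label in corpus:
        by_label.setdefault(label, []).append(features)
    # The "learning" step: average the examples seen for each label.
    return {label: tuple(mean(dim) for dim in zip(*rows))
            for label, rows in by_label.items()}


def classify(centroids, features):
    """Assign the label whose centroid is closest to the new example."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], features))


corpus = [((0.9, 0.1), "cat"), ((0.8, 0.2), "cat"),
          ((0.1, 0.9), "dog"), ((0.2, 0.8), "dog")]
centroids = train(corpus)
```

The point of the sketch is that the model is entirely a product of its corpus: feed it skewed or poisoned examples, as happened with Tay, and it will faithfully learn the skew.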

However, the process can go horribly awry, as in the case of Microsoft’s Tay, a Twitter bot that the company unleashed on the microblogging platform. In under a day, Tay went from being friendly and casual (“Humans are super cool”) to downright scary (“Hitler was right and I hate Jews”). It was profoundly disturbing.

Francesca Rossi, an AI researcher at IBM, points out that we often encode principles regarding influences into societal norms, such as what age a child needs to be to watch an R-rated movie or whether they should learn evolution in school. “We need to decide to what extent the legal principles that we use to regulate humans can be used for machines,” she told me.

However, in some cases algorithms can alert us to bias in our society that we might not have been aware of, such as when we Google “grandma” and see only white faces. “There is a great potential for machines to alert us to bias,” Rossi notes. “We need to not only train our algorithms but also be open to the possibility that they can teach us about ourselves.”…

Another issue that we will have to contend with is that we will have to decide not only what ethical principles to encode in artificial intelligences but also how they are coded. As noted above, for the most part, “Thou shalt not kill” is a strict principle. Other than in a few rare cases, such as the Secret Service or a soldier, it’s more like a preference that is greatly affected by context….

As pervasive as artificial intelligence is set to become in the near future, the responsibility rests with society as a whole. Put simply, we need to take the standards by which artificial intelligences will operate just as seriously as those that govern how our political systems operate and how our children are educated.

It is a responsibility that we cannot shirk….(More)

Crowd-sourcing pollution control in India


Springwise: “Following orders by the national government to improve the air quality of the New Delhi region by reducing air pollution, the Environment Pollution (Prevention and Control) Authority created the Hawa Badlo app. Designed for citizens to report cases of air pollution, each complaint is sent to the appropriate official for resolution.

Free to use, the app is available for both iOS and Android. Complaints are geo-tagged, and there are two different versions available – one for citizens and one for government officials. Officials must provide photographic evidence to close a case. The app itself produces weekly reports listing the number and status of complaints, along with any actions taken to resolve the problem. Currently focusing on pollution from construction, unpaved roads and the burning of garbage, the team behind the app plans to expand its use to cover other types of pollution as well.
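The weekly report described above is, at heart, an aggregation of geo-tagged complaint records by status. A minimal sketch of that aggregation (the field names are assumptions for illustration, not the actual Hawa Badlo data model):

```python
# Hypothetical sketch of the weekly report workflow: counting geo-tagged
# complaints by status. Record fields are invented, not the real app's schema.
from collections import Counter


def weekly_report(complaints):
    """complaints: list of dicts, each with at least a 'status' key."""
    totals = Counter(c["status"] for c in complaints)
    return {"total": len(complaints), "by_status": dict(totals)}


complaints = [
    {"id": 1, "lat": 28.61, "lon": 77.21, "status": "open"},
    {"id": 2, "lat": 28.63, "lon": 77.22, "status": "resolved"},
    {"id": 3, "lat": 28.60, "lon": 77.20, "status": "open"},
]
report = weekly_report(complaints)
```

Publishing such a tally each week is what lets citizens see not just how many complaints were filed, but how many were actually closed.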

From providing free wi-fi when the air is clean enough to mapping air-quality in real-time, air pollution solutions are increasingly involving citizens….(More)”