‘Collective intelligence’ is not necessarily present in virtual groups


Jordan B. Barlow and Alan R. Dennis at LSE: “Do groups of smart people perform better than groups of less intelligent people?

Research published in Science magazine in 2010 reported that groups, like individuals, have a certain level of “collective intelligence,” such that some groups perform consistently well across many different types of tasks, while other groups perform consistently poorly. Collective intelligence is similar to individual intelligence, but at the group level.

Interestingly, the Science study found that collective intelligence was not related to the individual intelligence of group members; groups of people with higher intelligence did not perform better than groups with lower intelligence. Instead, the study found that high performing teams had members with higher social sensitivity – the ability to read the emotions of others using visual facial cues.

Social sensitivity is important when we sit across a table from each other. But what about online, when we exchange emails or text messages? Does social sensitivity matter when I can’t see your face?

We examined collective intelligence in an online environment in which groups used text-based computer-mediated communication. We followed the same procedures as the original Science study, which adopted the approach typically used to measure individual intelligence. In individual intelligence tests, a person completes several small “tasks” or problems. An analysis typically demonstrates that task scores are correlated, meaning that if a person does well on one problem, it is likely that they did well on other problems….

The results were not what we expected. The correlations between our groups’ performance scores were either not statistically significant or significantly negative, as shown in Table 1. The average correlation between any two tasks was -0.05, indicating that performance on one task was not correlated with performance on other tasks. In other words, groups who performed well on one of the tasks were unlikely to perform well on the other tasks…
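
To make the analysis concrete, here is a minimal sketch of the correlation check described above, using synthetic scores rather than the study’s data (the group counts, task counts, and values are our own illustration):

```python
# Illustrative sketch with synthetic data (not the study's code or data).
import numpy as np

rng = np.random.default_rng(42)
n_groups, n_tasks = 60, 5
scores = rng.normal(size=(n_groups, n_tasks))  # rows: groups, columns: tasks

corr = np.corrcoef(scores, rowvar=False)       # task-by-task correlation matrix
pairs = corr[np.triu_indices(n_tasks, k=1)]    # unique task pairs only
print(f"Mean pairwise task correlation: {pairs.mean():.2f}")
# A collective intelligence factor would show up as consistently positive
# pairwise correlations; values near zero, like the -0.05 the authors
# report, indicate no such factor.
```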

Our findings challenge the conclusion reported in Science that groups have a general collective intelligence analogous to individual intelligence. Our study shows that no collective intelligence factor emerged when groups used a popular commercial text-based online tool. That is, when using tools with limited visual cues, groups that performed well on one task were no more likely to perform well on a different task. Thus the “collective intelligence” factor related to social sensitivity that was reported in Science is not collective intelligence; it is instead a factor associated with the ability to work well using face-to-face communication, and does not transcend media….(More)”

Selected Readings on Algorithmic Scrutiny


By Prianka Srinivasan, Andrew Young and Stefaan Verhulst

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of algorithmic scrutiny was originally published in 2017.

Introduction

From government policy to criminal justice to our news feeds, and from business to consumer practices, the processes that shape our lives both online and off are increasingly driven by data and the complex algorithms used to make rulings or predictions. In most cases, these algorithms have created “black boxes” of decision making, where models remain inscrutable and inaccessible. It should therefore come as no surprise that several observers and policymakers are calling for more scrutiny of how algorithms are designed and how they work, particularly when their outcomes convey intrinsic biases or defy existing ethical standards.

While the concern about values in technology design is not new, recent developments in machine learning, artificial intelligence and the Internet of Things have increased the urgency to establish processes and develop tools to scrutinize algorithms.

In what follows, we have curated several readings covering the impact of algorithms on:

  • Information Intermediaries
  • Governance
  • Finance
  • Justice

In addition, we have selected a few readings that provide insight into possible processes and tools to establish algorithmic scrutiny.

Selected Reading List

Information Intermediaries

Governance

Consumer Finance

Justice

Tools & Processes Toward Algorithmic Scrutiny

Annotated Selected Reading List

Information Intermediaries

Diakopoulos, Nicholas. “Algorithmic accountability: Journalistic investigation of computational power structures.” Digital Journalism 3.3 (2015): 398-415. http://bit.ly/.

  • This paper attempts to substantiate the notion of accountability for algorithms, particularly as it applies to media and journalism. It puts forward the notion of “algorithmic power,” analyzing the framework of influence such systems exert, and also introduces some of the challenges in the practice of algorithmic accountability, particularly for computational journalists.
  • Offers a basis for analyzing algorithms, built around the types of decisions they make in prioritizing, classifying, associating, and filtering information.

Diakopoulos, Nicholas, and Michael Koliska. “Algorithmic transparency in the news media.” Digital Journalism (2016): 1-20. http://bit.ly/2hMvXdE.

  • This paper analyzes the increased use of “computational journalism,” and argues that though transparency remains a key tenet of journalism, the use of algorithms in gathering, producing and disseminating news undermines this principle.
  • It first analyzes what the ethical principle of transparency means to journalists and the media. It then highlights the findings from a focus-group study, where 50 participants from the news media and academia were invited to discuss three different case studies related to the use of algorithms in journalism.
  • They find two key barriers to algorithmic transparency in the media: “(1) a lack of business incentives for disclosure, and (2) the concern of overwhelming end-users with too much information.”
  • The study also finds a variety of opportunities for transparency across the “data, model, inference, and interface” components of an algorithmic system.

Napoli, Philip M. “The algorithm as institution: Toward a theoretical framework for automated media production and consumption.” Fordham University Schools of Business Research Paper (2013). http://bit.ly/2hKBHqo.

  • This paper puts forward an analytical framework for discussing algorithmic content creation in media and journalism, in an attempt to “close the gap” in theory related to automated media production.
  • Borrowing concepts from institutional theory, the paper argues that algorithms are distinct forms of media institutions, and considers the cultural and political implications of this interpretation.
  • It urges further study in the field of “media sociology” to unpack the influence of algorithms and their role in institutionalizing certain norms, cultures and ways of thinking.

Introna, Lucas D., and Helen Nissenbaum. “Shaping the Web: Why the politics of search engines matters.” The Information Society 16.3 (2000): 169-185. http://bit.ly/2ijzsrg.

  • This paper, published 16 years ago, provides an in-depth account of some of the risks introduced by the design and optimization of search engines, and the biases and harms these can create, particularly for politics.
  • Suggests search engines can be designed to account for these political dimensions and to better align with the ideal of the World Wide Web as an open, accessible and democratic space.
  • According to the paper, policy (and not the free market) is the only way to spur change in this field, though the technical solutions currently available introduce further challenges.

Gillespie, Tarleton. “The Relevance of Algorithms.” Media technologies: Essays on communication, materiality, and society (2014): 167. http://bit.ly/2h6ASEu.

  • This paper suggests that algorithms, now used so extensively that they cut across many aspects of our lives (Gillespie calls these “public relevance algorithms”), are fundamentally “producing and certifying knowledge.” In this ability to create a particular “knowledge logic,” algorithms are a primary feature of our information ecosystem.
  • The paper goes on to map six dimensions of these public relevance algorithms:
    • Patterns of inclusion
    • Cycles of anticipation
    • The evaluation of relevance
    • The promise of algorithmic objectivity
    • Entanglement with practice
    • The production of calculated publics
  • The paper concludes by highlighting the need for a sociological inquiry into the function, implications and contexts of algorithms, and to “soberly recognize their flaws and fragilities,” despite the fact that much of their inner workings remain hidden.

Rainie, Lee and Janna Anderson. “Code-Dependent: Pros and Cons of the Algorithm Age.” Pew Research Center. February 8, 2017. http://bit.ly/2kwnvCo.

  • This Pew Research Center report examines the benefits and negative impacts of algorithms as they become more influential in different sectors and aspects of daily life.
  • Through a scan of the research and practice, with a particular focus on the research of experts in the field, Rainie and Anderson identify seven key themes of the burgeoning Algorithm Age:
    • Algorithms will continue to spread everywhere
    • Good things lie ahead
    • Humanity and human judgment are lost when data and predictive modeling become paramount
    • Biases exist in algorithmically-organized systems
    • Algorithmic categorizations deepen divides
    • Unemployment will rise
    • The need grows for algorithmic literacy, transparency and oversight

Tufekci, Zeynep. “Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency.” Journal on Telecommunications & High Technology Law 13 (2015): 203. http://bit.ly/1JdvCGo.

  • This paper establishes some of the risks and harms of algorithmic computation, particularly the filtering abilities seen in Facebook and other social media algorithms.
  • Suggests that the editorial decisions performed by algorithms can have significant influence on our political and cultural realms, and categorizes the types of harms that algorithms may have on individuals and their society.
  • Takes two case studies–one from the social media coverage of the Ferguson protests, the other on how social media can influence election turnouts–to analyze the influence of algorithms. In doing so, this paper lays out the “tip of the iceberg” in terms of some of the challenges and ethical concerns introduced by algorithmic computing.

Mittelstadt, Brent, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. “The Ethics of Algorithms: Mapping the Debate.” Big Data & Society 3.2 (2016). http://bit.ly/2kWNwL6.

  • This paper provides significant background and analysis of the ethical context of algorithmic decision-making. It primarily seeks to map the ethical consequences of algorithms, which have adopted the role of a mediator between data and action within societies.
  • Develops a conceptual map of six ethical concerns:
      • Inconclusive Evidence
      • Inscrutable Evidence
      • Misguided Evidence
      • Unfair Outcomes
      • Transformative Effects
      • Traceability
  • The paper then reviews existing literature, which together with the map creates a structure to inform future debate.

Governance

Janssen, Marijn, and George Kuk. “The challenges and limits of big data algorithms in technocratic governance.” Government Information Quarterly 33.3 (2016): 371-377. http://bit.ly/2hMq4z6.

  • Noting the centrality of algorithms in enforcing policy and extending governance, this paper analyzes the “technocratic governance” that has emerged from the removal of humans from decision-making processes and the inclusion of algorithmic automation.
  • The paper argues that the belief that technocratic governance produces neutral and unbiased results, because its decision-making processes are uninfluenced by human thought processes, is at odds with studies that reveal the inherent discriminatory practices that exist within algorithms.
  • Suggests that algorithms are still bound by the biases of designers and policy-makers, and that accountability is needed to improve the functioning of an algorithm. In order to do so, we must acknowledge the “intersecting dynamics of algorithm as a sociotechnical materiality system involving technologies, data and people using code to shape opinion and make certain actions more likely than others.”

Just, Natascha, and Michael Latzer. “Governance by algorithms: reality construction by algorithmic selection on the Internet.” Media, Culture & Society (2016): 0163443716643157. http://bit.ly/2h6B1Yv.

  • This paper provides a conceptual framework on how to assess the governance potential of algorithms, asking how technology and software governs individuals and societies.
  • By understanding algorithms as institutions, the paper suggests that algorithmic governance puts in place more evidence-based and data-driven systems than traditional governance methods. The result is a form of governance that cares more about effects than causes.
  • The paper concludes by suggesting that algorithmic selection on the Internet tends to shape individuals’ realities and social orders by “increasing individualization, commercialization, inequalities, deterritorialization, and decreasing transparency, controllability, predictability.”

Consumer Finance

Hildebrandt, Mireille. “The dawn of a critical transparency right for the profiling era.” Digital Enlightenment Yearbook 2012 (2012): 41-56. http://bit.ly/2igJcGM.

  • Analyzes the use of consumer profiling by online businesses to target marketing and services to consumers’ needs. By establishing how this profiling relates to identification, the author also outlines some of the threats to democracy and the right to autonomy posed by these profiling algorithms.
  • The paper concludes by suggesting that cross-disciplinary transparency is necessary to design more accountable profiling techniques that can match the extension of “smart environments” that capture ever more data and information from users.

Reddix-Smalls, Brenda. “Credit Scoring and Trade Secrecy: An Algorithmic Quagmire or How the Lack of Transparency in Complex Financial Models Scuttled the Finance Market.” UC Davis Business Law Journal 12 (2011): 87. http://bit.ly/2he52ch.

  • Analyzes the creation of predictive risk models in financial markets through algorithmic systems, particularly in regard to credit scoring. It suggests that these models were corrupted in order to maintain a competitive market advantage: “The lack of transparency and the legal environment led to the use of these risk models as predatory credit pricing instruments as opposed to accurate credit scoring predictive instruments.”
  • The paper suggests that without greater transparency of these financial risk models, and greater regulation over their abuse, another financial crisis like that of 2008 is highly likely.

Justice

Aas, Katja Franko. “Sentencing Transparency in the Information Age.” Journal of Scandinavian Studies in Criminology and Crime Prevention 5.1 (2004): 48-61. http://bit.ly/2igGssK.

  • This paper questions the use of predetermined sentencing in the US judicial system through the application of computer technology and sentencing information systems (SIS). Comparing the use of these systems in the English-speaking world and in Norway, the author suggests that such technological approaches to sentencing attempt to address the mistrust, uncertainty and arbitrariness often attributed to the judicial system.
  • However, in their attempt to rebuild trust, such technological solutions can be seen as an attempt to remedy a flawed view of judges by the public. Therefore, the political and social climate must be taken into account when trying to reform these sentencing systems: “The use of the various sentencing technologies is not only, and not primarily, a matter of technological development. It is a matter of a political and cultural climate and the relations of trust in a society.”

Cui, Gregory. “Evidence-Based Sentencing and the Taint of Dangerousness.” Yale Law Journal Forum 125 (2016): 315. http://bit.ly/1XLAvhL.

  • This short essay, published in the Yale Law Journal Forum, calls for greater scrutiny of “evidence-based sentencing,” in which past data are computed and used to predict a defendant’s future criminal behavior. The author suggests that these risk models may undermine the Constitution’s prohibition of bills of attainder by unlawfully inflicting punishment without a judicial trial.

Tools & Processes Toward Algorithmic Scrutiny

Ananny, Mike, and Kate Crawford. “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability.” New Media & Society (2016). http://bit.ly/2hvKc5x.

  • This paper critically analyzes calls to improve the transparency of algorithms, asking how we can confront the historical limitations of the transparency ideal in computing.
  • Establishing “transparency as an ideal,” the paper traces the philosophical and historical lineage of this principle, examining what laws and provisions have been put in place across the world to keep up with and enforce it.
  • The paper goes on to detail the limits of transparency as an ideal, arguing, amongst other things, that it does not necessarily build trust, it privileges a certain function (seeing) over others (say, understanding) and that it has numerous technical limitations.
  • The paper concludes that transparency is an inadequate way to govern algorithmic systems, and that accountability must instead address the ability to govern across entire systems.

Datta, Anupam, Shayak Sen, and Yair Zick. “Algorithmic Transparency via Quantitative Input Influence.” Proceedings of the 37th IEEE Symposium on Security and Privacy. 2016. http://bit.ly/2hgyLTp.

  • This paper develops a family of Quantitative Input Influence (QII) measures “that capture the degree of influence of inputs on outputs of systems.” The aim is to generate a transparency report to accompany algorithmic decisions, in order to explain those decisions and detect algorithmic discrimination.
  • QII works by breaking “correlations between inputs to allow causal reasoning, and computes the marginal influence of inputs in situations where inputs cannot affect outcomes alone.”
  • Finds that these QII measures are useful in scrutinizing algorithms when “black box” access is available.
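
To illustrate the intuition (not the authors’ implementation), the sketch below approximates QII’s “breaking correlations” step with a simple permutation intervention: one input is resampled from its marginal distribution while the others stay fixed, and influence is read off as how often the model’s decision flips. The model, feature names, and data are invented for illustration:

```python
# Rough, simplified sketch of the QII idea: intervene on one input at a
# time and measure how often the model's decision changes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
income = rng.normal(50, 15, n)
zipcode_score = 0.8 * income + rng.normal(0, 5, n)  # correlated proxy input
X = np.column_stack([income, zipcode_score])
y = (income + rng.normal(0, 5, n) > 50).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
baseline = model.predict(X)

for i, name in enumerate(["income", "zipcode_score"]):
    X_int = X.copy()
    X_int[:, i] = rng.permutation(X[:, i])  # resample from the marginal,
                                            # breaking correlation with the rest
    flipped = (model.predict(X_int) != baseline).mean()
    print(f"QII-style influence of {name}: {flipped:.2f}")
```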

Goodman, Bryce, and Seth Flaxman. “European Union regulations on algorithmic decision-making and a ‘right to explanation’.” arXiv preprint arXiv:1606.08813 (2016). http://bit.ly/2h6xpWi.

  • This paper analyzes the implications of a new EU law, to be enacted in 2018, that calls to “restrict automated individual decision-making (that is, algorithms that make decisions based on user level predictors) which ‘significantly affect’ users.” The law will also provide a “right to explanation,” whereby users can ask for an explanation of automated decisions made about them.
  • The paper, while acknowledging the challenges in implementing such laws, suggests that such regulations can spur computer scientists to create algorithms and decision making systems that are more accountable, can provide explanations, and do not produce discriminatory results.
  • The paper concludes by stating algorithms and computer systems should not aim to be simply efficient, but also fair and accountable. It is optimistic about the ability to put in place interventions to account for and correct discrimination.

Kizilcec, René F. “How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface.” Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 2016. http://bit.ly/2hMjFUR.

  • This paper studies how the transparency of algorithms affects our impression of trust through an online field experiment, in which participants enrolled in a MOOC were given different explanations of the computer-generated grades they received in their class.
  • The study found that “Individuals whose expectations were violated (by receiving a lower grade than expected) trusted the system less, unless the grading algorithm was made more transparent through explanation. However, providing too much information eroded this trust.”
  • In conclusion, the study found that a balance of transparency was needed to maintain trust amongst the participants, suggesting that pure transparency of algorithmic processes and results may not correlate with high feelings of trust amongst users.

Kroll, Joshua A., et al. “Accountable Algorithms.” University of Pennsylvania Law Review 165 (2016). http://bit.ly/2i6ipcO.

  • This paper suggests that policy and legal standards need to be updated given the increased use of algorithms to perform tasks and make decisions that people once made, since an “accountability mechanism” is lacking in many of these automated decision-making processes.
  • The paper argues that mere transparency through the disclosure of source code is inadequate when confronting questions of accountability. Rather, technology itself provides a key to creating algorithms and decision-making apparatuses more in line with our existing political and legal frameworks.
  • The paper assesses some computational techniques that may make it possible to create accountable software and reform specific cases of automated decision-making. For example, diversity and anti-discrimination requirements can be built into technology to ensure fidelity to policy choices.

Open-Sourcing Google Earth Enterprise


Geo Developers Blog: “We are excited to announce that we are open-sourcing Google Earth Enterprise (GEE), the enterprise product that allows developers to build and host their own private maps and 3D globes. With this release, GEE Fusion, GEE Server, and GEE Portable Server source code (all 470,000+ lines!) will be published on GitHub under the Apache2 license in March.

Originally launched in 2006, Google Earth Enterprise provides customers the ability to build and host private, on-premise versions of Google Earth and Google Maps. In March 2015, we announced the deprecation of the product and the end of all sales. To provide ample time for customers to transition, we have provided a two-year maintenance period ending on March 22, 2017. During this maintenance period, product updates have been regularly shipped and technical support has been available to licensed customers….

GCP is increasingly used as a source for geospatial data. Google’s Earth Engine has made available over a petabyte of raster datasets which are readily accessible and available to the public on Google Cloud Storage. Additionally, Google uses Cloud Storage to provide data to customers who purchase Google Imagery today. Having access to massive amounts of geospatial data, on the same platform as your flexible compute and storage, makes generating high quality Google Earth Enterprise Databases and Portables easier and faster than ever.
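
As a hedged illustration of what “readily accessible on Google Cloud Storage” looks like in practice, the sketch below anonymously lists a few objects from Google’s public Landsat bucket using the google-cloud-storage Python client; the bucket name and prefix layout are our assumptions, not details from the post:

```python
# Hedged sketch: anonymously listing public Landsat objects on Google
# Cloud Storage. Bucket name and prefix layout are assumptions here.
from google.cloud import storage

client = storage.Client.create_anonymous_client()
# gcp-public-data-landsat is assumed to be Google's public Landsat bucket;
# the prefix follows a sensor/collection/path/row layout.
blobs = client.list_blobs("gcp-public-data-landsat",
                          prefix="LC08/01/044/034/", max_results=5)
for blob in blobs:
    print(blob.name, blob.size)
```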

We will be sharing a series of white papers and other technical resources to make it as frictionless as possible to get open source GEE up and running on Google Cloud Platform. We are excited about the possibilities that open-sourcing enables, and we trust this is good news for our community. We will be sharing more information when we launch the code in March on GitHub. For general product information, visit the Google Earth Enterprise Help Center. Review the essential and advanced training for how to use Google Earth Enterprise, or learn more about the benefits of Google Cloud Platform….(More)”

What Does Big Data Mean For Sustainability?


Saurabh Tyagi at Sustainable Brands: “Everything around us is impacted by big data today. The phenomenon took shape earlier in this decade and there are now a growing number of compelling ways in which big data analytics is being applied to solve real-world problems….Out of the many promises of big data, environmental sustainability is one of the most important to implement and maintain. Why so?

Climate change has moved to the top of the list of global risks, affecting every country and disrupting economies. While a major part of this damage is irreversible, it is still possible, with a wide range of technological measures, to control the global increase in temperature. Big data can generate insights as relevant to fostering environmental sustainability as they have been to other sectors such as healthcare.

Understanding operations

Big data’s usefulness lies in its ability to help businesses understand and act on the environmental impacts of their operations. Some of these are within their boundaries, while others are outside their direct control. Previously, this information was dispersed across different formats, locations and sites. Now, however, businesses are trying to work out the end-to-end impact of their operations throughout the value chain, including factors outside of their direct control such as raw material sourcing, employee travel, and product disposal.

Assessing environmental risks

Big data is also useful in assessing environmental risks. For example, Aqueduct is an interactive water-risk mapping tool from the World Resources Institute that monitors and calculates water risk anywhere in the world based on various parameters related to the water’s quantity, quality and other changing regulatory issues in that area. With this free online tool, users can choose the factors on which they want to focus and zoom in on a particular location.

Big data is also enabling environmental sustainability by helping us to understand the demand for energy and food as the world population increases and climate change reduces these resources by every passing year.

Optimizing resource usage

Another big contribution of big data to the corporate world is its ability to help companies optimize their usage of resources. At the Initiative for Global Environmental Leadership (IGEL) conference in 2014, David Parker, VP of Big Data for SAP, discussed how Italian tire company Pirelli uses SAP’s big data management system, HANA, to optimize its inventory. The company uses data generated by sensors in its tires globally to reduce waste, increase profits and reduce the number of defective tires going to landfills, thus doing its bit for the environment. Similarly, Dutch energy company Alliander uses HANA to maintain the grid’s peak efficiency, which in turn increases profits and reduces environmental impact. Optimizing the grid once took the company 10 weeks; now it takes only three days, so a task Alliander used to perform once a year can now be done every month….

Big data helps better regulation

Big data can also be integrated into government policies to ensure better environmental regulation. Governments can now implement the latest sensor technology and adopt real-time reporting of environmental quality data. This data can be used to monitor the emissions of large utility facilities and, if required, to put a regulatory framework in place to control those emissions. Firms are given complete freedom to experiment and choose the best possible means of achieving the required result….(More)”

Quantifying scenic areas using crowdsourced data


Chanuki Illushka Seresinhe, Helen Susannah Moat and Tobias Preis in Environment and Planning B: Urban Analytics and City Science: “For centuries, philosophers, policy-makers and urban planners have debated whether aesthetically pleasing surroundings can improve our wellbeing. To date, quantifying how scenic an area is has proved challenging, due to the difficulty of gathering large-scale measurements of scenicness. In this study we ask whether images uploaded to the website Flickr, combined with crowdsourced geographic data from OpenStreetMap, can help us estimate how scenic people consider an area to be. We validate our findings using crowdsourced data from Scenic-Or-Not, a website where users rate the scenicness of photos from all around Great Britain. We find that models including crowdsourced data from Flickr and OpenStreetMap can generate more accurate estimates of scenicness than models that consider only basic census measurements such as population density or whether an area is urban or rural. Our results provide evidence that by exploiting the vast quantity of data generated on the Internet, scientists and policy-makers may be able to develop a better understanding of people’s subjective experience of the environment in which they live….(More)”
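
A toy version of the comparison the authors describe might look like the following. This is our own sketch with synthetic data, not the paper’s models or features: it fits one regressor on census-style measurements alone and another that adds Flickr/OpenStreetMap-style features, then compares cross-validated R²:

```python
# Illustrative sketch (not the paper's code): does adding crowdsourced
# Flickr/OSM features improve scenicness predictions over census features?
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-area features
population_density = rng.lognormal(3, 1, n)
is_urban = (population_density > np.median(population_density)).astype(float)
flickr_nature_tags = rng.poisson(5, n).astype(float)  # e.g. photos tagged "mountain"
osm_water_fraction = rng.beta(2, 8, n)                # share of area covered by water

# Synthetic scenicness ratings (1-10), driven partly by crowdsourced signals
scenicness = (5 + 0.8 * np.log1p(flickr_nature_tags) + 6 * osm_water_fraction
              - 1.5 * is_urban + rng.normal(0, 1, n)).clip(1, 10)

census_only = np.column_stack([population_density, is_urban])
with_crowd = np.column_stack([population_density, is_urban,
                              flickr_nature_tags, osm_water_fraction])

for name, X in [("census only", census_only), ("census + Flickr/OSM", with_crowd)]:
    r2 = cross_val_score(RandomForestRegressor(random_state=0), X, scenicness,
                         cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {r2.mean():.2f}")
```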

Forged Through Fire


Book by John Ferejohn and Frances McCall Rosenbluth: “Peace, many would agree, is a goal that democratic nations should strive to achieve. But is democracy, in fact, dependent on war to survive?

Having spent their celebrated careers exploring this provocative question, John Ferejohn and Frances McCall Rosenbluth trace the surprising ways in which governments have mobilized armies since antiquity, discovering that our modern form of democracy not only evolved in a brutally competitive environment but also quickly disintegrated when the powerful elite no longer needed their citizenry to defend against existential threats.

Bringing to vivid life the major battles that shaped our current political landscape, the authors begin with the fierce warrior states of Athens and the Roman Republic. While these experiments in “mixed government” would serve as a basis for the bargain between politics and protection at the heart of modern democracy, Ferejohn and Rosenbluth brilliantly chronicle the generations of bloodshed that it would take for the world’s dominant states to hand over power to the people. In fact, for over a thousand years, even as medieval empires gave way to feudal Europe, the king still ruled. Not even the advancements of gunpowder—which decisively tipped the balance away from the cavalry-dominated militaries and in favor of mass armies—could threaten the reign of monarchs and “landed elites” of yore.

The incredibly wealthy, however, were not well equipped to handle the massive labor classes produced by industrialization. As we learn, the Napoleonic Wars stoked genuine, bottom-up nationalism and pulled splintered societies back together as “commoners” stepped up to fight for their freedom. Soon after, Hitler and Stalin perfectly illustrated the military limitations of dictatorships, a style of governance that might be effective for mobilizing an army but not for winning a world war. This was a lesson quickly heeded by the American military, who would begin to reinforce their ranks with minorities in exchange for greater civil liberties at home.

Like Francis Fukuyama and Jared Diamond’s most acclaimed works, Forged Through Fire concludes in the modern world, where the “tug of war” between the powerful and the powerless continues to play out in profound ways. Indeed, in the covert battlefields of today, drones have begun to erode the need for manpower, giving politicians even less incentive than before to listen to the demands of their constituency. With American democracy’s flanks now exposed, this urgent examination explores the conditions under which war has promoted one of the most cherished human inventions: a government of the people, by the people, for the people. The result promises to become one of the most important history books to emerge in our time….(More)”

Urban Exposures: How Cell Phone Data Helps Us Better Understand Human Exposure To Air Pollution


Senseable City Lab: “Global urbanization has led to one of the world’s most pressing environmental health concerns: the increasing number of people contributing to and being affected by air pollution, leading to 7 million early deaths each year. The key issue is human exposure to pollution within cities and the consequential effects on human health.

With new research conducted at MIT’s Senseable City Lab, human exposure to air pollution can now be accurately quantified at an unprecedented scale. Researchers mapped the movements of several million people using ubiquitous cell phone data, and intersected this information with neighborhood air pollution measures. Covering the expanse of New York City and its 8.5 million inhabitants, the study reveals where and when New Yorkers are most at risk of exposure to air pollution – with major implications for environment and public health policy… (More)”
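
A minimal sketch of the underlying exposure calculation (our assumption of the general approach, not the Lab’s actual method) weights neighborhood pollution levels by the time a person’s phone traces place them in each neighborhood:

```python
# Minimal sketch, assuming a time-weighted exposure model; all zone names,
# pollution values, and trace hours below are hypothetical.
pm25_by_zone = {"midtown": 14.2, "brooklyn": 9.8, "queens": 11.5}  # ug/m3

# Hypothetical daily trace: (zone, hours spent) inferred from phone records
trace = [("brooklyn", 10), ("midtown", 9), ("queens", 5)]

total_hours = sum(hours for _, hours in trace)
exposure = sum(pm25_by_zone[zone] * hours for zone, hours in trace) / total_hours
print(f"Time-weighted PM2.5 exposure: {exposure:.1f} ug/m3")
```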

The Hackable City: Citymaking in a Platform Society


Martijn de Waal, Michiel de Lange, and Matthijs Bouw in “4D Hyperlocal: A Cultural Toolkit for the Open-Source City,” a special issue of Architectural Design: “Can computer hacking have positive parallels in the shaping of the built environment? The Hackable City research project was set up with this question in mind, to investigate the potential of digital platforms to open up the citymaking process. Its cofounders Martijn de Waal, Michiel de Lange and Matthijs Bouw here outline the tendencies that their studies of collaborative urban development initiatives around the world have revealed, and ask whether knowledge sharing and incremental change might be a better way forward than top-down masterplans….(More)”

The Signal Code


The Signal Code: “Humanitarian action adheres to the core humanitarian principles of impartiality, neutrality, independence, and humanity, as well as respect for international humanitarian and human rights law. These foundational principles are enshrined within core humanitarian doctrine, particularly the Red Cross/NGO Code of Conduct and the Humanitarian Charter. Together, these principles establish a duty of care for populations affected by the actions of humanitarian actors and impose adherence to a standard of reasonable care for those engaged in humanitarian action.

Engagement in HIAs, including the use of data and ICTs, must be consistent with these foundational principles and respect the human rights of crisis-affected people to be considered “humanitarian.” In addition to offering potential benefits to those affected by crisis, HIAs, including the use of ICTs, can cause harm to the safety, wellbeing, and the realization of the human rights of crisis-affected people. Absent a clear understanding of which rights apply to this context, the utilization of new technologies, and in particular experimental applications of these technologies, may be more likely to harm communities and violate the fundamental human rights of individuals.

The Signal Code is based on the application of the UDHR, the Nuremberg Code, the Geneva Convention, and other instruments of customary international law related to HIAs and the use of ICTs by crisis-affected populations and by humanitarians on their behalf. The fundamental human rights undergirding this Code are the rights to life, liberty, and security; the protection of privacy; freedom of expression; and the right to share in scientific advancement and its benefits as expressed in Articles 3, 12, 19, and 27 of the UDHR.

The Signal Code asserts that all people have fundamental rights to access, transmit, and benefit from information as a basic humanitarian need; to be protected from harms that may result from the provision of information during crisis; to have a reasonable expectation of privacy and data security; to have agency over how their data is collected and used; and to seek redress and rectification when data pertaining to them causes harm or is inaccurate.

These rights are found to apply specifically to the access, collection, generation, processing, use, treatment, and transmission of information, including data, during humanitarian crises. These rights are also found herein to be interrelated and interdependent. To realize any of these rights individually requires realization of all of these rights in concert.

These rights are found to apply to all phases of the data lifecycle—before, during, and after the collection, processing, transmission, storage, or release of data. These rights are also found to be elastic, meaning that they apply to new technologies and scenarios that have not yet been identified or encountered by current practice and theory.

Data is, formally, a collection of symbols which function as a representation of information or knowledge. The term raw data is often used with two different meanings: the first is uncleaned data, that is, data that has been collected in an uncontrolled environment; the second is unprocessed data, which is collected data that has not been processed in such a way as to make it suitable for decision making. Colloquially, and in the humanitarian context, data is usually thought of solely in the machine-readable or digital sense. For the purposes of the Signal Code, we use the term data to encompass information both in its analog and digital representations. Where it is necessary to address data solely in its digital representation, we refer to it as digital data.

No right herein may be used to abridge any other right. Nothing in this code may be interpreted as giving any state, group, or person the right to engage in any activity or perform any act that destroys the rights described herein.

The five human rights that exist specific to information and HIAs during humanitarian crises are the following:

The Right to Information
The Right to Protection
The Right to Data Security and Privacy
The Right to Data Agency
The Right to Redress and Rectification…(More)”

Technology tools in human rights


Engine Room: “Over the past few years, we have been witnessing a wave of new technology tools for human rights documentation. Along with these new tools, human rights defenders are facing new possibilities, new challenges, and new expectations of human rights documentation initiatives.

Produced with support from the Oak Foundation, this report is a first attempt to detail the available technologies designed for human rights documentation, understand the various perspectives on the challenges documentation initiatives face when adopting new tools and practices, and analyse what is and is not working for initiatives seeking to integrate new tools into their work….

Primary takeaways:

  • Traditional methods still apply: The environment in which HRDs are working has not fundamentally changed due to technology and data.
  • Unreliability and unknown risks are huge barriers to engagement with technology: In high-pressure situations such as those HRDs face, the methodologies used need to be concrete and reliable.
  • Priorities of HRDs centre on their particular issue: Digital technologies often come as an afterthought, rather than being integrated into established strategies for communication or campaigning.
  • The lifespan of technology tools is a big barrier to long-term use: Poor sustainability and maintenance of tools discourage engagement and can cause fatigue among users who must change their practices often.
  • Past failed attempts at using tools make future attempts more difficult: After having invested time and energy into changing a workflow or process only for it not to work, people are often reluctant to do the same again.
  • HRDs understand their context best: Tool recommendations coming from external parties sometimes do more harm than good.
  • There is a lack of technical capacity within HRD initiatives: As a result, when tools are introduced, groups become reliant on external parties for technical troubleshooting and support.

(Download the report)