Selected Readings on Data and Humanitarian Response


By Prianka Srinivasan and Stefaan G. Verhulst *

The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of data and humanitarian response was originally published in 2016.

Data, when used well and in a trusted manner, allows humanitarian organizations to innovate how they respond to emergency events: better coordination of post-disaster relief efforts, local knowledge harnessed into more targeted relief strategies, and tools to predict and monitor disasters in real time. Consequently, in recent years both multinational groups and community-based advocates have begun to integrate data collection and evaluation strategies into their humanitarian operations, in order to respond to emergencies better and more quickly. This movement, however, poses a number of challenges. Compared to the private sector, humanitarian organizations are often less equipped to analyze and manage big data successfully, which creates risks for the security of victims’ data. Furthermore, the complex power dynamics that exist within humanitarian spaces may be further exacerbated by the introduction of new technologies and big data collection mechanisms. Below we share:

  • Selected Reading List (summaries and hyperlinks)
  • Annotated Selected Reading List
  • Additional Readings

Selected Reading List (summaries in alphabetical order)

Data and Humanitarian Response

Risks of Using Big Data in Humanitarian Context

Annotated Selected Reading List (in alphabetical order)

Karlsrud, John. “Peacekeeping 4.0: Harnessing the Potential of Big Data, Social Media, and Cyber Technologies.” Cyberspace and International Relations, 2013. http://bit.ly/235Qb3e

  • This chapter from the book “Cyberspace and International Relations” suggests that advances in big data give humanitarian organizations unprecedented opportunities to prevent and mitigate natural disasters and humanitarian crises. However, the sheer amount of unstructured data necessitates effective “data mining” strategies for multinational organizations to make the most use of this data.
  • By profiling civil-society organizations that use big data in their peacekeeping efforts, Karlsrud suggests that these community-focused initiatives are leading the movement toward analyzing and using big data in countries vulnerable to crisis.
  • The chapter concludes by offering ten recommendations to UN peacekeeping forces to best realize the potential of big data and new technology in supporting their operations.

Mancini, Francesco. “New Technology and the Prevention of Violence and Conflict.” International Peace Institute, 2013. http://bit.ly/1ltLfNV

  • This report from the International Peace Institute looks at five case studies to assess how information and communications technologies (ICTs) can help prevent humanitarian conflicts and violence. Its findings suggest that context significantly shapes the ability of these ICTs to prevent conflict, and that any strategy must take into account the specific contingencies of the region to be successful.
  • The report suggests seven lessons gleaned from the five case studies:
    • New technologies are just one in a variety of tools to combat violence. Consequently, organizations must investigate a variety of complementary strategies to prevent conflicts, and not simply rely on ICTs.
    • Not every community or social group will have the same relationship to technology, and their ability to adopt new technologies is similarly influenced by their context. Therefore, a detailed needs assessment must take place before any new technologies are implemented.
    • New technologies may be co-opted by violent groups seeking to maintain conflict in the region. Consequently, humanitarian groups must be sensitive to existing political actors and be aware of possible negative consequences these new technologies may spark.
    • Local input is integral to supporting conflict prevention measures, and there is a need for collaboration and awareness-raising with communities to ensure new technologies are sustainable and effective.
    • Information shared among civil-society groups has greater potential to support early-warning systems. This horizontal distribution of information can also help communities hold local leaders accountable.

Meier, Patrick. “Digital Humanitarians: How Big Data Is Changing the Face of Humanitarian Response.” CRC Press, 2015. http://amzn.to/1RQ4ozc

  • This book traces the emergence of “Digital Humanitarians”—people who harness new digital tools and technologies to support humanitarian action. Meier suggests that this has created a “nervous system” to connect people from disparate parts of the world, revolutionizing the way we respond to humanitarian crises.
  • Meier argues that such technology is reconfiguring the structure of the humanitarian space: victims are no longer simply passive recipients of aid but can contribute alongside other global citizens. This, in turn, makes us more humane and engaged people.

Robertson, Andrew and Olson, Steve. “Using Data Sharing to Improve Coordination in Peacebuilding.” United States Institute of Peace, 2012. http://bit.ly/235QuLm

  • This report functions as an overview of a roundtable workshop on Technology, Science and Peace Building held at the United States Institute of Peace. The workshop aimed to investigate how data-sharing techniques can be developed for use in peace building or conflict management.
  • Four main themes emerged from discussions during the workshop:
    • “Data sharing requires working across a technology-culture divide”—Data sharing needs the foundation of a strong relationship, which can depend on sociocultural, rather than technological, factors.
    • “Information sharing requires building and maintaining trust”—These relationships are often built on trust, which can include both technological and social perspectives.
    • “Information sharing requires linking civilian-military policy discussions to technology”—Even when sophisticated data-sharing technologies exist, continuous engagement between different stakeholders is necessary. Therefore, procedures used to maintain civil-military engagement should be broadened to include technology.
    • “Collaboration software needs to be aligned with user needs”—Technology providers need to keep in mind the needs of their users, in this case peacebuilders, in order to ensure sustainability.

United Nations Independent Expert Advisory Group on a Data Revolution for Sustainable Development. “A World That Counts: Mobilising the Data Revolution for Sustainable Development.” 2014. https://bit.ly/2Cb3lXq

  • This report focuses on the potential benefits and risks data holds for sustainable development. Included in this is a strategic framework for using and managing data for humanitarian purposes. It describes a need for a multinational consensus to be developed to ensure data is shared effectively and efficiently.
  • It suggests that “people who are counted”—i.e., those who are included in data collection processes—have better development outcomes and a better chance for humanitarian response in emergency or conflict situations.

Whipkey, Katie and Verity, Andrej. “Guidance for Incorporating Big Data into Humanitarian Operations.” Digital Humanitarian Network, 2015. http://bit.ly/1Y2BMkQ

  • This report, produced by the Digital Humanitarian Network, provides an overview of big data and how humanitarian organizations can integrate it into their response work. It functions primarily as a guide for organizations, offering concise outlines of what big data is and how it can benefit humanitarian groups.
  • The report puts forward four main benefits acquired through the use of big data by humanitarian organizations: 1) the ability to leverage real-time information; 2) the ability to make more informed decisions; 3) the ability to learn new insights; 4) the ability for organizations to be more prepared.
  • It goes on to assess seven challenges big data poses for humanitarian organizations, among them: 1) geography, and the unequal access to technology across regions; 2) the potential for user error when processing data; 3) limited technology; 4) questionable validity of data; 5) underdeveloped policies and ethics relating to data management; and 6) limitations relating to staff knowledge.

Risks of Using Big Data in Humanitarian Context
Crawford, Kate, and Megan Finn. “The limits of crisis data: analytical and ethical challenges of using social and mobile data to understand disasters.” GeoJournal 80.4, 2015. http://bit.ly/1X0F7AI

  • Crawford & Finn present a critical analysis of the use of big data in disaster management, taking a more skeptical tone toward the data revolution in humanitarian response.
  • They argue that though social and mobile data analysis can yield important insights and tools in crisis events, it also has a number of limitations that can lead to oversights by researchers or humanitarian response teams.
  • Crawford & Finn explore the ethical concerns the use of big data in disaster events introduces, including issues of power, privacy, and consent.
  • The paper concludes by recommending that critical data studies, such as those presented in the paper, be integrated into crisis event research in order to analyze some of the assumptions which underlie mobile and social data.

Jacobsen, Katja Lindskov. “Making Design Safe for Citizens: A Hidden History of Humanitarian Experimentation.” Citizenship Studies 14.1: 89-103, 2010. http://bit.ly/1YaRTwG

  • This paper explores the phenomenon of “humanitarian experimentation,” where victims of disaster or conflict are the subjects of experiments to test the application of technologies before they are administered in greater civilian populations.
  • By analyzing the use of iris-recognition technology during the repatriation of Afghan refugees from Pakistan between 2002 and 2007, Jacobsen suggests that this “humanitarian experimentation” compromises the security of already vulnerable refugees in order to better deliver biometric products to the rest of the world.

Responsible Data Forum. “Responsible Data Reflection Stories: An Overview.” http://bit.ly/1Rszrz1

  • This piece from the Responsible Data Forum is primarily a compilation of “war stories” that recount some of the challenges of using big data for social good. Drawing on these crowdsourced cases, the Forum also presents an overview with key recommendations for overcoming some of the challenges big data poses for humanitarian organizations.
  • It finds that most of these challenges occur when organizations are ill-equipped to manage data and new technologies, or are unaware of how different groups interact in digital spaces in different ways.

Sandvik, Kristin Bergtora. “The humanitarian cyberspace: shrinking space or an expanding frontier?” Third World Quarterly 37:1, 17-32, 2016. http://bit.ly/1PIiACK

  • This paper analyzes the shift toward technology-driven humanitarianism, in which aid work increasingly takes place online, in cyberspace, reshaping the definition and application of aid. This has occurred alongside what many suggest is a shrinking of the humanitarian space.
  • Sandvik provides three interpretations of this phenomenon:
    • First, traditional threats remain in the humanitarian space, which are both modified and reinforced by technology.
    • Second, new threats are introduced by the increasing use of technology in humanitarianism, and consequently the humanitarian space may be broadening, not shrinking.
    • Finally, if the shrinking humanitarian space theory holds, cyberspace offers one example of this, where the increasing use of digital technology to manage disasters leads to a contraction of space through the proliferation of remote services.

Additional Readings on Data and Humanitarian Response

* Thanks to: Kristen B. Sandvik; Zara Rahman; Jennifer Schulte; Sean McDonald; Paul Currion; Dinorah Cantú-Pedraza and the Responsible Data Listserve for valuable input.

Evaluating e-Participation: Frameworks, Practice, Evidence


Book edited by Georg Aichholzer, Herbert Kubicek and Lourdes Torres: “There is a widely acknowledged evaluation gap in the field of e-participation practice and research, a lack of systematic evaluation with regard to process organization, outcome and impacts. This book addresses the state of the art of e-participation research and the existing evaluation gap by reviewing various evaluation approaches and providing a multidisciplinary concept for evaluating the output, outcome and impact of citizen participation via the Internet as well as via traditional media. It offers new knowledge based on empirical results of its application (tailored to different forms and levels of e-participation) in an international comparative perspective. The book will advance the academic study and practical application of e-participation through fresh insights, largely drawing on theoretical arguments and empirical research results gained in the European collaborative project “e2democracy”. It applies the same research instruments to a set of similar citizen participation processes in seven local communities in three countries (Austria, Germany and Spain). The generic evaluation framework has been tailored to a tested toolset, and the presentation and discussion of related evaluation results aims at clarifying to what extent these tools can be applied to other consultation and collaboration processes, making the book of interest to policymakers and scholars alike….(More)”

Mapping a flood of new data


Rebecca Lipman at Economist Intelligence Unit Perspectives on “One city tweets to stay dry: From drones to old-fashioned phone calls, data come from many unlikely sources. In a disaster, such as a flood or earthquake, responders will take whatever information they can get to visualise the crisis and best direct their resources. Increasingly, cities prone to natural disasters are learning to better aid their citizens by empowering their local agencies and responders with sophisticated tools to cut through the large volume and velocity of disaster-related data and synthesise actionable information.

Consider the plight of the metro area of Jakarta, Indonesia, home to some 28m people, 13 rivers and 1,100 km of canals. With 40% of the city below sea level (and sinking), and regularly subject to extreme weather events including torrential downpours in monsoon season, Jakarta’s residents face far-too-frequent, life-threatening floods. Despite the unpredictability of flooding conditions, citizens have long taken a passive approach that depended on government entities to manage the response. But the information Jakarta’s responders had on the flooding conditions was patchy at best. So in the last few years, the government began to turn to the local population for help. It helped.

Today, Jakarta’s municipal government is relying on the web-based PetaJakarta.org project and a handful of other crowdsourcing mobile apps such as Qlue and CROP to collect data and respond to floods and other disasters. Through these programmes, crowdsourced, time-sensitive data derived from citizens’ social-media inputs have made it possible for city agencies to more precisely map the locations of rising floods and help the residents at risk. In January 2015, for example, the web-based Peta Jakarta received 5,209 reports on floods via tweets with detailed text and photos. Anytime there’s a flood, Peta Jakarta’s data from the tweets are mapped and updated every minute, and often cross-checked by Jakarta Disaster Management Agency (BPBD) officials through calls with community leaders to assess the information and guide responders.
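The keyword-filter-and-map pipeline described above can be sketched in a few lines. This is a hedged illustration only: the report records, keywords, and grid size are invented for the example and are not PetaJakarta's actual code or data.

```python
from collections import Counter

# Hypothetical geotagged citizen reports (e.g. parsed from tweets).
# Each report: (latitude, longitude, text); coordinates roughly span Jakarta.
reports = [
    (-6.21, 106.85, "banjir! street flooded near the canal"),
    (-6.21, 106.84, "flood rising fast, need help"),
    (-6.30, 106.80, "sunny afternoon at the park"),
    (-6.21, 106.85, "banjir again in our neighbourhood"),
]

FLOOD_KEYWORDS = ("banjir", "flood")  # "banjir" is Indonesian for flood

def grid_cell(lat, lon, cell_size=0.01):
    """Snap a coordinate to a coarse grid cell (~1 km at the equator)."""
    return (round(lat / cell_size), round(lon / cell_size))

def map_flood_reports(reports):
    """Count keyword-matched reports per grid cell, as a crude flood map."""
    cells = Counter()
    for lat, lon, text in reports:
        if any(kw in text.lower() for kw in FLOOD_KEYWORDS):
            cells[grid_cell(lat, lon)] += 1
    return cells

flood_map = map_flood_reports(reports)
```

A real deployment would re-run this every minute over the incoming tweet stream and hand the densest cells to officials for the kind of cross-checking with community leaders described above.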

But in any city Twitter is only one piece of a very large puzzle. …

Even with such life-and-death examples, government agencies remain deeply protective of data because of issues of security, data ownership and citizen privacy. They are also concerned about liability issues if incorrect data lead to an activity that has unsuccessful outcomes. These concerns encumber the combination of crowdsourced data with operational systems of record, and impede the fast progress needed in disaster situations….Download the case study .”

The creative citizen unbound



Book by Ian Hargreaves and John Hartley on “How social media and DIY culture contribute to democracy, communities and the creative economy”: “The creative citizen unbound introduces the concept of ‘creative citizenship’ to explore the potential of civic-minded creative individuals in the era of social media and in the context of an expanding creative economy. Drawing on the findings of a 30-month study of communities supported by the UK research funding councils, multidisciplinary contributors examine the value and nature of creative citizenship, not only in terms of its contribution to civic life and social capital but also to more contested notions of value, both economic and cultural. This original book will be beneficial to researchers and students across a range of disciplines including media and communication, political science, economics, planning and economic geography, and the creative and performing arts….(More)”

Governance by Algorithms: Reality Construction by Algorithmic Selection on the Internet


Paper by Natascha Just & Michael Latzer in Media, Culture & Society (forthcoming): “This paper explores governance by algorithms in information societies. Theoretically, it builds on (co-)evolutionary innovation studies in order to adequately grasp the interplay of technological and societal change, and combines these with institutional approaches to incorporate governance by technology or rather software as institutions. Methodologically it draws from an empirical survey of Internet-based services that rely on automated algorithmic selection, a functional typology derived from it, and an analysis of associated potential social risks. It shows how algorithmic selection has become a growing source of social order, of a shared social reality in information societies. It argues that – similar to the construction of realities by traditional mass media – automated algorithmic selection applications shape daily lives and realities, affect the perception of the world, and influence behavior. However, the co-evolutionary perspective on algorithms as institutions, ideologies, intermediaries and actors highlights differences that are to be found first in the growing personalization of constructed realities, and second in the constellation of involved actors. Altogether, compared to reality construction by traditional mass media, algorithmic reality construction tends to increase individualization, commercialization, inequalities and deterritorialization, and to decrease transparency, controllability and predictability…(Full Paper)”

Accountable machines: bureaucratic cybernetics?


Alison Powell at LSE Media Policy Project Blog: “Algorithms are everywhere, or so we are told, and the black boxes of algorithmic decision-making make oversight of processes that regulators and activists argue ought to be transparent more difficult than in the past. But when, and where, and which machines do we wish to make accountable, and for what purpose? In this post I discuss how algorithms discussed by scholars are most commonly those at work on media platforms whose main products are the social networks and attention of individuals. Algorithms, in this case, construct individual identities through patterns of behaviour, and provide the opportunity for finely targeted products and services. While there are serious concerns about, for instance, price discrimination, algorithmic systems for communicating and consuming are, in my view, less inherently problematic than processes that impact on our collective participation and belonging as citizenship. In this second sphere, algorithmic processes – especially machine learning – combine with processes of governance that focus on individual identity performance to profoundly transform how citizenship is understood and undertaken.

Communicating and consuming

In the communications sphere, algorithms are what make it possible to make money from the web, for example through advertising brokerage platforms that help companies bid for ads on major newspaper websites. IP-address monitoring, which tracks clicks and web activity, creates detailed consumer profiles and transforms the everyday experience of communication into a constantly updated production of consumer information. This process of personal profiling is at the heart of many of the concerns about algorithmic accountability. The perpetual production of data by individuals, and the increasing capacity to analyse it even when it doesn’t appear to relate, has certainly revolutionised advertising by allowing more precise targeting, but what has it done for areas of public interest?

John Cheney-Lippold identifies how the categories of identity are now developed algorithmically, since a category like gender is not based on self-disclosure, but instead on patterns of behaviour that fit with expectations set by previous alignment to a norm. In assessing ‘algorithmic identities’, he notes that these produce identity profiles which are narrower and more behaviour-based than the identities that we perform. This is a result of the fact that many of the systems that inspired the design of algorithmic systems were based on using behaviour and other markers to optimise consumption. Algorithmic identity construction has spread from the world of marketing to the broader world of citizenship – as evidenced by the Citizen Ex experiment shown at the Web We Want Festival in 2015.

Individual consumer-citizens

What’s really at stake is that the expansion of algorithmic assessment of commercially derived big data has extended the frame of the individual consumer into all kinds of other areas of experience. In a supposed ‘age of austerity’ when governments believe it’s important to cut costs, this connects with the view of citizens as primarily consumers of services, and furthermore, with the idea that a citizen is an individual subject whose relation to a state can be disintermediated given enough technology. So, with sensors on your garbage bins you don’t need to even remember to take them out. With pothole reporting platforms like FixMyStreet, a city government can be responsive to an aggregate of individual reports. But what aspects of our citizenship are collective? When, in the algorithmic state, can we expect to be together?

Put another way, is there any algorithmic process to value the long term education, inclusion, and sustenance of a whole community for example through library services?…

Seeing algorithms – machine learning in particular – as supporting decision-making for broad collective benefit rather than as part of ever more specific individual targeting and segmentation might make them more accountable. But more importantly, this would help algorithms support society – not just individual consumers….(More)”

Data Mining Reveals the Four Urban Conditions That Create Vibrant City Life


Emerging Technology from the arXiv: “A lack of evidence in city planning has ruined cities all over the world. But data-mining techniques are finally revealing the rules that make cities successful, vibrant places to live. …Back in 1961, the gradual decline of many city centers in the U.S. began to puzzle urban planners and activists alike. One of them, the urban sociologist Jane Jacobs, began a widespread and detailed investigation of the causes and published her conclusions in The Death and Life of Great American Cities, a controversial book that proposed four conditions essential for vibrant city life.

Jacobs’s conclusions have become hugely influential. Her ideas have had a significant impact on the development of many modern cities such as Toronto and New York City’s Greenwich Village. However, her ideas have also attracted criticism because of the lack of empirical evidence to back them up, a problem that is widespread in urban planning.
Today, that looks set to change thanks to the work of Marco De Nadai at the University of Trento and a few pals, who have developed a way to gather urban data that they use to test Jacobs’s conditions and how they relate to the vitality of city life. The new approach heralds a new age of city planning in which planners have an objective way of assessing city life and working out how it can be improved.
In her book, Jacobs argues that vibrant activity can only flourish in cities when the physical environment is diverse. This diversity, she says, requires four conditions. The first is that city districts must serve more than two functions so that they attract people with different purposes at different times of the day and night. Second, city blocks must be small with dense intersections that give pedestrians many opportunities to interact. The third condition is that buildings must be diverse in terms of age and form to support a mix of low-rent and high-rent tenants. By contrast, an area with exclusively new buildings can only attract businesses and tenants wealthy enough to support the cost of new building. Finally, a district must have a sufficient density of people and buildings.

While Jacobs’s arguments are persuasive, her critics say there is little evidence to show that these factors are linked with vibrant city life. That changed last year when urban scientists in Seoul, South Korea, published the result of a 10-year study of pedestrian activity in the city at unprecedented resolution. This work successfully tested Jacobs’s ideas for the first time.
However, the data was gathered largely through pedestrian surveys, a process that is time-consuming, costly, and generally impractical for use in most modern cities.
De Nadai and co have come up with a much cheaper and quicker alternative using a new generation of city databases and the way people use social media and mobile phones. The new databases include OpenStreetMap, the collaborative mapping tool; census data, which records populations and building use; land use data, which uses satellite images to classify land use according to various categories; Foursquare data, which records geographic details about personal activity; and mobile-phone records showing the number and frequency of calls in an area.
De Nadai and co gathered this data for six cities in Italy—Rome, Naples, Florence, Bologna, Milan, and Palermo.
Their analysis is straightforward. The team used mobile-phone activity as a measure of urban vitality and land-use records, census data, and Foursquare activity as a measure of urban diversity. Their goal was to see how vitality and diversity are correlated in the cities they studied. The results make for interesting reading….(More)
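The correlation at the heart of that analysis can be illustrated with a toy computation. The district figures below are synthetic stand-ins for the mobile-phone and diversity measures the team actually used; only the method (a plain correlation across districts) reflects the description above.

```python
import math

# Synthetic per-district figures: mobile-phone activity as a proxy for
# vitality, and a composite diversity score (land use, buildings, Foursquare).
districts = {
    "A": {"vitality": 120.0, "diversity": 0.82},
    "B": {"vitality": 95.0,  "diversity": 0.70},
    "C": {"vitality": 40.0,  "diversity": 0.35},
    "D": {"vitality": 60.0,  "diversity": 0.50},
}

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

vit = [d["vitality"] for d in districts.values()]
div = [d["diversity"] for d in districts.values()]
r = pearson(vit, div)  # close to 1.0 for these synthetic numbers
```

A strongly positive r across real districts would be the kind of evidence for Jacobs's diversity conditions that her critics found missing.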

How to Crowdsource the Syrian Cease-Fire


Colum Lynch at Foreign Policy: “Can the wizards of Silicon Valley develop a set of killer apps to monitor the fragile Syria cease-fire without putting foreign boots on the ground in one of the world’s most dangerous countries?

They’re certainly going to try. The “cessation of hostilities” in Syria brokered by the United States and Russia last month has sharply reduced the levels of violence in the war-torn country and sparked a rare burst of optimism that it could lead to a broader cease-fire. But if the two sides lay down their weapons, the international community will face the challenge of monitoring the battlefield to ensure compliance without deploying peacekeepers or foreign troops. The emerging solution: using crowdsourcing, drones, satellite imaging, and other high-tech tools.

The high-level interest in finding a technological solution to the monitoring challenge was on full display last month at a closed-door meeting convened by the White House that brought together U.N. officials, diplomats, digital cartographers, and representatives of Google, DigitalGlobe, and other technology companies. Their assignment was to brainstorm ways of using high-tech tools to keep track of any future cease-fires from Syria to Libya and Yemen.

The off-the-record event came as the United States, the U.N., and other key powers struggle to find ways of enforcing cease-fires from Syria at a time when there is little political will to run the risk of sending foreign forces or monitors to such dangerous places. The United States has turned to high-tech weapons like armed drones as weapons of war; it now wants to use similar systems to help enforce peace.

Take the Syria Conflict Mapping Project, a geomapping program developed by the Atlanta-based Carter Center, a nonprofit founded by former U.S. President Jimmy Carter and his wife, Rosalynn, to resolve conflict and promote human rights. The project has developed an interactive digital map that tracks military formations by government forces, Islamist extremists, and more moderate armed rebels in virtually every disputed Syrian town. It is now updating its technology to monitor cease-fires.

The project began in January 2012 because of a single 25-year-old intern, Christopher McNaboe. McNaboe realized it was possible to track the state of the conflict by compiling disparate strands of publicly available information — including the shelling and aerial bombardment of towns and rebel positions — from YouTube, Twitter, and other social media sites. It has since developed a mapping program using software provided by Palantir Technologies, a Palo Alto-based big data company that does contract work for U.S. intelligence and defense agencies, from the CIA to the FBI….

Walter Dorn, an expert on technology in U.N. peace operations who attended the White House event, said he had promoted what he calls a “coalition of the connected.”

The U.N. or other outside powers could start by tracking social media sites, including Twitter and YouTube, for reports of possible cease-fire violations. That information could then be verified by “seeded crowdsourcing” — that is, reaching out to networks of known advocates on the ground — and technological monitoring through satellite imagery or drones.
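Dorn's two-stage idea, keyword monitoring followed by verification through a seeded network, might be sketched as follows. Everything here (the reports, the contact names, the keyword list, the two-confirmation threshold) is hypothetical and for illustration only.

```python
# Hypothetical incident reports scraped from public social media, plus
# confirmations from a "seeded" network of known on-the-ground contacts.
reports = [
    {"id": 1, "text": "shelling heard near the northern checkpoint"},
    {"id": 2, "text": "market reopened, streets calm today"},
    {"id": 3, "text": "airstrike reported on the eastern district"},
]
confirmations = {1: {"contact_a", "contact_b"}, 3: {"contact_c"}}

VIOLATION_TERMS = ("shelling", "airstrike", "gunfire")

def flag_violations(reports, confirmations, min_confirmations=2):
    """Keyword-match reports, then keep only those corroborated by at
    least `min_confirmations` trusted contacts (seeded crowdsourcing)."""
    flagged = []
    for r in reports:
        if not any(t in r["text"].lower() for t in VIOLATION_TERMS):
            continue
        if len(confirmations.get(r["id"], set())) >= min_confirmations:
            flagged.append(r["id"])
    return flagged

verified = flag_violations(reports, confirmations)  # report 3 stays unverified
```

In practice the second stage would be satellite imagery or drone footage rather than (or in addition to) human contacts, but the corroboration logic is the same.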

Matthew McNabb, the founder of First Mile Geo, a start-up which develops geolocation technology that can be used to gather data in conflict zones, has another idea. McNabb, who also attended the White House event, believes “on-demand” technologies like SurveyMonkey, which provides users a form to create their own surveys, can be applied in conflict zones to collect data on cease-fire violations….(More)

Technology and politics: The signal and the noise


Special Issue of The Economist: “…The way these candidates are fighting their campaigns, each in his own way, is proof that politics as usual is no longer an option. The internet and the availability of huge piles of data on everyone and everything are transforming the democratic process, just as they are upending many industries. They are becoming a force in all kinds of things, from running election campaigns and organising protest movements to improving public policy and the delivery of services. This special report will argue that, as a result, the relationship between citizens and those who govern them is changing fundamentally.

Incongruous though it may seem, the forces that are now powering the campaign of Mr Trump—as well as that of Bernie Sanders, the surprise candidate on the Democratic side (Hillary Clinton is less of a success online)—were first seen in full cry during the Arab spring in 2011. The revolution in Egypt and other Arab countries was not instigated by Twitter, Facebook and other social-media services, but they certainly helped it gain momentum. “The internet is an intensifier,” says Marc Lynch of George Washington University, a noted scholar of the protest movements in the region…..

However, this special report will argue that, in the longer term, online crusading and organising will turn out to matter less to politics in the digital age than harnessing those ever-growing piles of data. The internet and related technologies, such as smart phones and cloud computing, make it cheap and easy not only to communicate but also to collect, store and analyse immense quantities of information. This is becoming ever more important in influencing political outcomes.

America’s elections are a case in point. Mr Cruz with his data savvy is merely following in the footsteps of Barack Obama, who won his first presidential term with the clever application of digital know-how. Campaigners are hoovering up more and more digital information about every voting-age citizen and stashing it away in enormous databases. With the aid of complex algorithms, these data allow campaigners to decide, say, who needs to be reminded to make the trip to the polling station and who may be persuaded to vote for a particular candidate.

No hiding place

In the case of protest movements, the waves of collective action leave a big digital footprint. Using ever more sophisticated algorithms, governments can mine these data. That is changing the balance of power. In the event of another Arab spring, autocrats would not be caught off guard again because they are now able to monitor protests and intervene when they consider it necessary. They can also identify and neutralise the most influential activists. Governments that were digitally blind when the internet first took off in the mid-1990s now have both a telescope and a microscope.

But data are not just changing campaigns and political movements; they affect how policy is made and public services are offered. This is most visible at local-government level. Cities have begun to use them for everything from smoothing traffic flows to identifying fire hazards. Having all this information at their fingertips is bound to change the way these bureaucracies work, and how they interact with citizens. This will not only make cities more efficient, but provide them with data and tools that could help them involve their citizens more.

This report will look at electoral campaigns, protest movements and local government in turn. Readers will note that most of the examples quoted are American and that most of the people quoted are academics. That is because the study of the interrelationship between data and politics is relatively new and most developed in America. But it is beginning to spill out from the ivory towers, and is gradually spreading to other countries.

The growing role of technology in politics raises many questions. How much of a difference, for instance, do digitally enabled protest surges really make? Many seem to emerge from nowhere, then crash almost as suddenly, defeated by hard political realities and entrenched institutions. The Arab spring uprising in Egypt is one example. Once the incumbent president, Hosni Mubarak, was toppled, the coalition that brought him down fell apart, leaving the stage to the old powers, first the Muslim Brotherhood and then the armed forces.

In party politics, some worry that the digital targeting of voters might end up reducing the democratic process to a marketing exercise. Ever more data and better algorithms, they fret, could lead politicians to ignore those unlikely to vote for them. And in cities it is not clear that more data will ensure that citizens become more engaged….(More)

See also:

The Social Intranet: Insights on Managing and Sharing Knowledge Internally


Paper by Ines Mergel for IBM Center for the Business of Government: “While much of the federal government lags behind, some agencies are pioneers in the internal use of social media tools.  What lessons and effective practices do they have to offer other agencies?

“Social intranets,” Dr. Mergel writes, “are in-house social networks that use technologies – such as automated newsfeeds, wikis, chats, or blogs – to create engagement opportunities among employees.”  They also include the use of internal profile pages that help people identify expertise and interests (similar to Facebook or LinkedIn profiles), and that are used in combination with other social intranet tools such as online communities or newsfeeds.

The report documents four case studies of government use of social intranets – two federal government agencies (the Department of State and the National Aeronautics and Space Administration) and two cross-agency networks (the U.S. Intelligence Community and the Government of Canada).

The author observes: “Most enterprise social networking platforms fail,” but identifies what causes these failures and how successful social intranet initiatives can avoid that fate and thrive.  She offers a series of insights for successfully implementing social intranets in the public sector, based on her observations and case studies. …(More)”