Off the map


The Economist: “Rich countries are deluged with data; developing ones are suffering from drought…
AFRICA is the continent of missing data. Fewer than half of births are recorded; some countries have not taken a census in several decades. On maps only big cities and main streets are identified; the rest looks as empty as the Sahara. Lack of data afflicts other developing regions, too. The self-built slums that ring many Latin American cities are poorly mapped, and even estimates of their population are vague. Afghanistan is still using census figures from 1979—and that count was cut short after census-takers were killed by mujahideen.
As rich countries collect and analyse data from as many objects and activities as possible—including thermostats, fitness trackers and location-based services such as Foursquare—a data divide has opened up. The lack of reliable data in poor countries thwarts both development and disaster-relief. When Médecins Sans Frontières (MSF), a charity, moved into Liberia to combat Ebola earlier this year, maps of the capital, Monrovia, fell far short of what was needed to provide aid or track the disease’s spread. Major roads were marked, but not minor ones or individual buildings.
Poor data afflict even the highest-profile international development effort: the Millennium Development Goals (MDGs). The targets, which include ending extreme poverty, cutting infant mortality and getting all children into primary school, were set by UN members in 2000, to be achieved by 2015. But, according to a report by an independent UN advisory group published on November 6th, as the deadline approaches, the figures used to track progress are shaky. The availability of data on 55 core indicators for 157 countries has never exceeded 70%, it found (see chart)….
Some of the data gaps are now starting to be filled from non-government sources. A volunteer effort called Humanitarian OpenStreetMap Team (HOT) improves maps with information from locals and hosts “mapathons” to identify objects shown in satellite images. Spurred by pleas from those fighting Ebola, the group has intensified its efforts in Monrovia since August; most of the city’s roads and many buildings have now been filled in (see maps). Identifying individual buildings is essential, since in dense slums without formal roads they are the landmarks by which outbreaks can be tracked and assistance targeted.
On November 7th a group of charities including MSF, Red Cross and HOT unveiled MissingMaps.org, a joint initiative to produce free, detailed maps of cities across the developing world—before humanitarian crises erupt, not during them. The co-ordinated effort is needed, says Ivan Gayton of MSF: aid workers will not use a map with too little detail, and are unlikely, without a reason, to put work into improving a map they do not use. The hope is that the backing of large charities means the locals they work with will help.
In Kenya and Namibia mobile-phone operators have made call-data records available to researchers, who have used them to combat malaria. By comparing users’ movements with data on outbreaks, epidemiologists are better able to predict where the disease might spread. mTrac, a Ugandan programme that replaces paper reports from health workers with texts sent from their mobile phones, has made data on medical cases and supplies more complete and timely. The share of facilities that have run out of malaria treatments has fallen from 80% to 15% since it was introduced.
Private-sector data are also being used to spot trends before official sources become aware of them. Premise, a startup in Silicon Valley that compiles economics data in emerging markets, has found that as the number of cases of Ebola rose in Liberia, the price of staple foods soared: a health crisis risked becoming a hunger crisis. In recent weeks, as the number of new cases fell, prices did, too. The authorities already knew that travel restrictions and closed borders would push up food prices; they now have a way to measure and track price shifts as they happen….”

Stories of Innovative Democracy at Local Level


Special Issue of Field Actions Science Reports published in partnership with CIVICUS, coordinated by Dorothée Guénéheux, Clara Bosco, Agnès Chamayou and Henri Rouillé d’Orfeuil: “This special issue presents many and varied field actions, such as the promotion of the rights of young people, the resolution of conflicts over agropastoral activities, or the process of participatory decision-making on community budgetary allocations, among many others. It addresses projects developed all over the world, on five continents, and covering both the northern and southern hemispheres. The legitimate initial queries and doubts that assailed those who started this publication as regards its feasibility have been swept away by the enthusiasm and the large number of papers that have been sent in….”

 

Logged On: Smart Government Solutions from South Asia


Book by , and : “Logged On looks at mobile and smart phone technology through the lens of good government management. How will developing governments deliver goods and services that citizens care about? How will government in these countries leapfrog over traditional public management reforms to help reach out to and collaborate directly with the citizen? This book provides example after example where this has happened and how mobile technology has helped provide solutions to old problems. The astounding revelation is that mobile technology is helping to fight corruption in Pakistan, improve health delivery in Bangladesh, provide access to government for the ordinary citizen in India, and help monitor elections in Afghanistan. If this is possible in some of the poorest countries of South Asia, then how can these examples be spread further within these countries, or to other countries? Logged On provides a look back at conventional solutions that have mostly not worked and explains why mobile solutions are taking hold. The book offers a model called Smart Proactive Government, based on a Feedback model being used in Punjab, Pakistan. The book also offers five solutions that are present in every successful mobile and smart phone example that the authors reviewed. (Book PDF (20.18 MB) | View Chapters).”

Ebola and big data: Call for help


The Economist: “WITH at least 4,500 people dead, public-health authorities in west Africa and worldwide are struggling to contain Ebola. Borders have been closed, air passengers screened, schools suspended. But a promising tool for epidemiologists lies unused: mobile-phone data.
When people make mobile-phone calls, the network generates a call data record (CDR) containing such information as the phone numbers of the caller and receiver, the time of the call and the tower that handled it—which gives a rough indication of the device’s location. This information provides researchers with an insight into mobility patterns. Indeed phone companies use these data to decide where to build base stations and thus improve their networks, and city planners use them to identify places to extend public transport.
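To make the mechanics concrete, the sketch below (not from The Economist piece) shows how anonymised CDRs carrying only a caller hash, a timestamp and a tower identifier can be reduced to the coarse tower-to-tower movement counts that planners and epidemiologists work with. All field names and sample records are invented for illustration.

```python
# Illustrative sketch only: reduce anonymised call data records (CDRs) to
# coarse tower-to-tower movement counts. Field names and sample data are
# invented; they are not from the article or any operator's schema.
from collections import Counter, defaultdict
from dataclasses import dataclass

@dataclass
class CDR:
    caller_id: str   # assumed to be an anonymised identifier, never a raw phone number
    timestamp: int   # Unix time of the call
    tower_id: str    # tower that handled the call -- a rough location

def mobility_counts(cdrs):
    """Count how often a subscriber is seen at one tower and later at another."""
    by_caller = defaultdict(list)
    for r in cdrs:
        by_caller[r.caller_id].append(r)

    transitions = Counter()
    for records in by_caller.values():
        records.sort(key=lambda r: r.timestamp)
        for prev, nxt in zip(records, records[1:]):
            if prev.tower_id != nxt.tower_id:
                transitions[(prev.tower_id, nxt.tower_id)] += 1
    return transitions

# Toy usage with made-up records:
sample = [
    CDR("u1", 1000, "tower_A"), CDR("u1", 2000, "tower_B"),
    CDR("u2", 1500, "tower_A"), CDR("u2", 3000, "tower_C"),
]
print(mobility_counts(sample))
```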
But perhaps the most exciting use of CDRs is in the field of epidemiology. Until recently the standard way to model the spread of a disease relied on extrapolating trends from census data and surveys. CDRs, by contrast, are empirical, immediate and updated in real time. You do not have to guess where people will flee to or move. Researchers have used them to map malaria outbreaks in Kenya and Namibia and to monitor the public response to government health warnings during Mexico’s swine-flu epidemic in 2009. Models of population movements during a cholera outbreak in Haiti following the earthquake in 2010 used CDRs and provided the best estimates of where aid was most needed.
Doing the same with Ebola would be hard: in west Africa most people do not own a phone. But CDRs are nevertheless better than simulations based on stale, unreliable statistics. If researchers could track population flows from an area where an outbreak had occurred, they could see where it would be likeliest to break out next—and therefore where they should deploy their limited resources. Yet despite months of talks, and the efforts of the mobile-network operators’ trade association and several smaller UN agencies, telecoms firms have not let researchers use the data (see article).
One excuse is privacy, which is certainly a legitimate worry, particularly in countries fresh from civil war, or where tribal tensions exist. But the phone data can be anonymised and aggregated in a way that alleviates these concerns. A bigger problem is institutional inertia. Big data is a new field. The people who grasp the benefits of examining mobile-phone usage tend to be young, and lack the clout to free the data for research use.”
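As a rough illustration of the anonymisation and aggregation the article mentions, the sketch below rolls identifier-free call records up to counts per tower per hour and suppresses small cells, a k-anonymity-style rule. The threshold and record layout are assumptions, not any operator's actual procedure.

```python
# Illustrative sketch only: aggregate identifier-free call records to counts
# per tower per hour and drop small cells. The threshold is an assumption.
from collections import Counter

K_MIN = 10  # assumed minimum count before a (tower, hour) cell is released

def aggregate(records, k_min=K_MIN):
    """records: iterable of (tower_id, unix_timestamp) pairs, identifiers already removed."""
    counts = Counter()
    for tower_id, ts in records:
        counts[(tower_id, ts // 3600)] += 1   # coarsen time to the hour
    # suppress cells so small they might single out individuals
    return {cell: n for cell, n in counts.items() if n >= k_min}
```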

Ebola: Can big data analytics help contain its spread?


at BBC News: “While emergency response teams, medical charities and non-governmental organisations struggle to contain the virus, could big data analytics help?
A growing number of data scientists believe so….
Mobile phones, widely owned in even the poorest countries in Africa, are proving to be a rich source of data in a region where other reliable sources are sorely lacking.
Orange Telecom in Senegal handed over anonymised voice and text data from 150,000 mobile phones to Flowminder, a Swedish non-profit organisation, which was then able to draw up detailed maps of typical population movements in the region.
Authorities could then see where the best places were to set up treatment centres and, more controversially, the most effective ways to restrict travel in an attempt to contain the disease.
The drawback with this data was that it was historic, when authorities really need to be able to map movements in real time. People’s movements tend to change during an epidemic.
This is why the US Centers for Disease Control and Prevention (CDC) is also collecting mobile phone mast activity data from mobile operators and mapping where calls to helplines are mostly coming from.

[Image: population movement map of West Africa. Mobile phone data from West Africa is being used to map population movements and predict how the Ebola virus might spread.]

A sharp increase in calls to a helpline from one particular area would suggest an outbreak and alert authorities to direct more resources there.
Mapping software company Esri is helping CDC to visualise this data and overlay other existing sources of data from censuses to build up a richer picture.
The level of activity at each mobile phone mast also gives a kind of heatmap of where people are and crucially, where and how far they are moving.
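A minimal sketch of the kind of signal described here: compare each area's current helpline call volume with its own recent baseline and flag sharp jumps. The window length and threshold are arbitrary choices for illustration, not the CDC's method.

```python
# Illustrative sketch only: flag areas whose helpline call volume jumps well
# above their own recent baseline. Window and factor are arbitrary choices.
from statistics import mean

def flag_spikes(daily_calls_by_area, window=7, factor=3.0):
    """daily_calls_by_area: {area_id: [calls on day 1, day 2, ...]}"""
    alerts = []
    for area, series in daily_calls_by_area.items():
        if len(series) <= window:
            continue  # not enough history for a baseline
        baseline = mean(series[-window - 1:-1])   # average of the preceding `window` days
        if baseline > 0 and series[-1] > factor * baseline:
            alerts.append((area, series[-1], round(baseline, 1)))
    return alerts

# Toy usage with made-up numbers:
calls = {
    "district_A": [4, 5, 3, 6, 4, 5, 4, 30],   # sudden jump on the last day
    "district_B": [2, 3, 2, 2, 3, 2, 3, 3],
}
print(flag_spikes(calls))   # [('district_A', 30, 4.4)]
```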

“We’ve never had this large-scale, anonymised mobile phone data before as a species,” says Nuria Oliver, a scientific director at mobile phone company Telefonica.

“The most positive impact we can have is to help emergency relief organisations and governments anticipate how a disease is likely to spread.
“Until now they had to rely on anecdotal information, on-the-ground surveys, police and hospital reports.”…

Killer Apps in the Gigabit Age


New Pew report by , and : “The age of gigabit connectivity is dawning and will advance in coming years. The only question is how quickly it might become widespread. A gigabit connection can deliver 1,000 megabits per second (Mbps) of information. Globally, cloud service provider Akamai reports that the average global connection speed in the first quarter of 2014 was 3.9 Mbps, with South Korea reporting the highest average connection speed (23.6 Mbps) and the US at 10.5 Mbps.
In some respects, gigabit connectivity is not a new development. The US scientific community has been using hyper-fast networks for several years, changing the pace of data sharing and enabling levels of collaboration in scientific disciplines that were unimaginable a generation ago.
Gigabit speeds for the “average Internet user” are just arriving in select areas of the world. In the US, Google ran a competition in 2010 for communities to pitch themselves for the construction of the first Google Fiber network running at 1 gigabit per second—Internet speeds 50-100 times faster than the majority of Americans now enjoy. Kansas City was chosen among 1,100 entrants and residents are now signing up for the service. The firm has announced plans to build a gigabit network in Austin, Texas, and perhaps 34 other communities. In response, AT&T has said it expects to begin building gigabit networks in up to 100 US cities. The cities of Chattanooga, Tennessee; Lafayette, Louisiana; and Bristol, Virginia, have super speedy networks, and pockets of gigabit connectivity are in use in parts of Las Vegas, Omaha, Santa Monica, and several Vermont communities. There are also other regional efforts: Falcon Broadband in Colorado Springs, Colorado; Brooklyn Fiber in New York; Monkey Brains in San Francisco; MINET Fiber in Oregon; Wicked Fiber in Lawrence, Kansas; and Sonic.net in California, among others. NewWave expects to launch gigabit connections in 2015 in Poplar Bluff, Missouri, and in Monroe, Rayville, Delhi, and Tallulah, Louisiana, and Suddenlink Communications has launched Operation GigaSpeed.
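The “50-100 times faster” figure quoted above follows from Akamai's averages; a quick back-of-envelope check:

```python
# Back-of-envelope check of the speed comparisons quoted in the report.
gigabit_mbps = 1000            # 1 Gbps expressed in Mbps
us_avg_mbps = 10.5             # Akamai average for the US, Q1 2014
global_avg_mbps = 3.9          # Akamai global average, Q1 2014

print(round(gigabit_mbps / us_avg_mbps))      # ~95x the US average
print(round(gigabit_mbps / global_avg_mbps))  # ~256x the global average
```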
In 2014, Google and Verizon were among the innovators announcing that they are testing the capabilities for currently installed fiber networks to carry data even more efficiently—at 10 gigabits per second—to businesses that handle large amounts of Internet traffic.
To explore the possibilities of the next leap in connectivity we asked thousands of experts and Internet builders to share their thoughts about likely new Internet activities and applications that might emerge in the gigabit age. We call this a canvassing because it is not a representative, randomized survey. Its findings emerge from an “opt in” invitation to experts, many of whom play active roles in Internet evolution as technology builders, researchers, managers, policymakers, marketers, and analysts. We also invited comments from those who have made insightful predictions to our previous queries about the future of the Internet. (For more details, please see the section “About this Canvassing of Experts.”)…”

New Technology and the Prevention of Violence and Conflict


Report edited by Francesco Mancini for the International Peace Institute: “In an era of unprecedented interconnectivity, this report explores the ways in which new technologies can assist international actors, governments, and civil society organizations to more effectively prevent violence and conflict. It examines the contributions that cell phones, social media, crowdsourcing, crisis mapping, blogging, and big data analytics can make to short-term efforts to forestall crises and to long-term initiatives to address the root causes of violence.
Five case studies assess the use of such tools in a variety of regions (Africa, Asia, Latin America) experiencing different types of violence (criminal violence, election-related violence, armed conflict, short-term crisis) in different political contexts (restrictive and collaborative governments).
Drawing on lessons and insights from across the cases, the authors outline a how-to guide for leveraging new technology in conflict-prevention efforts:
1. Examine all tools.
2. Consider the context.
3. Do no harm.
4. Integrate local input.
5. Help information flow horizontally.
6. Establish consensus regarding data use.
7. Foster partnerships for better results.”

Francis Fukuyama’s ‘Political Order and Political Decay’


Book Review by David Runciman of “Political Order and Political Decay: From the Industrial Revolution to the Globalisation of Democracy”, by Francis Fukuyama in the Financial Times: “It is not often that a 600-page work of political science ends with a cliffhanger. But the first volume of Francis Fukuyama’s epic two-part account of what makes political societies work, published three years ago, left the big question unanswered. That book took the story of political order from prehistoric times to the dawn of modern democracy in the aftermath of the French Revolution. Fukuyama is still best known as the man who announced in 1989 that the birth of liberal democracy represented the end of history: there were simply no better ideas available. But here he hinted that liberal democracies were not immune to the pattern of stagnation and decay that afflicted all other political societies. They too might need to be replaced by something better. So which was it: are our current political arrangements part of the solution, or part of the problem?
Political Order and Political Decay is his answer. He squares the circle by insisting that democratic institutions are only ever one component of political stability. In the wrong circumstances they can be a destabilising force as well. His core argument is that three building blocks are required for a well-ordered society: you need a strong state, the rule of law and democratic accountability. And you need them all together. The arrival of democracy at the end of the 18th century opened up that possibility but by no means guaranteed it. The mere fact of modernity does not solve anything in the domain of politics (which is why Fukuyama is disdainful of the easy mantra that failing states just need to “modernise”).
The explosive growth in industrial capacity and wealth that the world has experienced in the past 200 years has vastly expanded the range of political possibilities available, for better and for worse (just look at the terrifying gap between the world’s best functioning societies – such as Denmark – and the worst – such as the Democratic Republic of Congo). There are now multiple different ways state capacity, legal systems and forms of government can interact with each other, and in an age of globalisation multiple different ways states can interact with each other as well. Modernity has speeded up the process of political development and it has complicated it. It has just not made it any easier. What matters most of all is getting the sequence right. Democracy doesn’t come first. A strong state does. …”

Forget GMOs. The Future of Food Is Data—Mountains of It


Cade Metz at Wired: “… Led by Dan Zigmond—who previously served as chief data scientist for YouTube, then Google Maps—this ambitious project aims to accelerate the work of all the biochemists, food scientists, and chefs on the first floor, providing a computer-generated shortcut to what Hampton Creek sees as the future of food. “We’re looking at the whole process,” Zigmond says of his data team, “trying to figure out what it all means and make better predictions about what is going to happen next.”

The project highlights a movement, spreading through many industries, that seeks to supercharge research and development using the kind of data analysis and manipulation pioneered in the world of computer science, particularly at places like Google and Facebook. Several projects already are using such techniques to feed the development of new industrial materials and medicines. Others hope the latest data analytics and machine learning techniques can help diagnose disease. “This kind of approach is going to allow a whole new type of scientific experimentation,” says Jeremy Howard, who as the president of Kaggle once oversaw the leading online community of data scientists and is now applying tricks of the data trade to healthcare as the founder of Enlitic.
Zigmond’s project is the first major effort to apply “big data” to the development of food, and though it’s only just getting started—with some experts questioning how effective it will be—it could spur additional research in the field. The company may license its database to others, and Hampton Creek founder and CEO Josh Tetrick says it may even open source the data, so to speak, freely sharing it with everyone. “We’ll see,” says Tetrick, a former college football linebacker who founded Hampton Creek after working on economic and social campaigns in Liberia and Kenya. “That would be in line with who we are as a company.”…
Initially, Zigmond and his team will model protein interactions on individual machines, using tools like the R programming language (a common means of crunching data) and machine learning algorithms much like those that recommend products on Amazon.com. As the database expands, they plan to arrange for much larger and more complex models that run across enormous clusters of computer servers, using the sort of sweeping data-analysis software systems employed by the likes of Google. “Even as we start to get into the tens and hundreds of thousands and millions of proteins,” Zigmond says, “it starts to be more than you can handle with traditional database techniques.”
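The article names R and Amazon-style recommendation algorithms but gives no specifics; purely as a hypothetical illustration of the workflow Zigmond describes (features measured per protein, a model trained to predict a functional property), here is a small Python/scikit-learn sketch with invented features and labels.

```python
# Hypothetical sketch: predict a functional property of a protein (say, how
# well it gels) from measured features. Feature names, data and the choice of
# model are all invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Pretend each row is a protein with three measured properties
# (e.g. molecular weight, solubility, isoelectric point), scaled to [0, 1].
X = rng.random((200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)   # fabricated "gels well" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```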
In particular, Zigmond is exploring the use of deep learning, a form of artificial intelligence that goes beyond ordinary machine learning. Google is using deep learning to drive the speech recognition system in Android phones. Microsoft is using it to translate Skype calls from one language to another. Zigmond believes it can help model the creation of new foods….”

Mapping the Next Frontier of Open Data: Corporate Data Sharing


Stefaan Verhulst at the GovLab (cross-posted at the UN Global Pulse Blog): “When it comes to data, we are living in the Cambrian Age. About ninety percent of the data that exists today has been generated within the last two years. We create 2.5 quintillion bytes of data on a daily basis—equivalent to a “new Google every four days.”
All of this means that we are certain to witness a rapid intensification in the process of “datafication” – already well underway. Use of data will grow increasingly critical. Data will confer strategic advantages; it will become essential to addressing many of our most important social, economic and political challenges.
This explains–at least in large part–why the Open Data movement has grown so rapidly in recent years. More and more, it has become evident that questions surrounding data access and use are emerging as one of the transformational opportunities of our time.
Today, it is estimated that over one million datasets have been made open or public. The vast majority of this open data is government data—information collected by agencies and departments in countries as varied as India, Uganda and the United States. But what of the terabyte after terabyte of data that is collected and stored by corporations? This data is also quite valuable, but it has been harder to access.
The topic of private sector data sharing was the focus of a recent conference organized by the Responsible Data Forum, Data and Society Research Institute and Global Pulse (see event summary). Participants at the conference, which was hosted by The Rockefeller Foundation in New York City, included representatives from a variety of sectors who converged to discuss ways to improve access to private data: the data held by private entities and corporations. The purpose for that access was rooted in a broad recognition that private data has the potential to foster much public good. At the same time, a variety of constraints—notably privacy and security, but also proprietary interests and data protectionism on the part of some companies—hold back this potential.
The framing for issues surrounding sharing private data has been broadly referred to under the rubric of “corporate data philanthropy.” The term refers to an emerging trend whereby companies have started sharing anonymized and aggregated data with third-party users who can then look for patterns or otherwise analyze the data in ways that lead to policy insights and other public good. The term was coined at the World Economic Forum meeting in Davos, in 2011, and has gained wider currency through Global Pulse, a United Nations data project that has popularized the notion of a global “data commons.”
Although still far from prevalent, some examples of corporate data sharing exist….

Help us map the field

A more comprehensive mapping of the field of corporate data sharing would draw on a wide range of case studies and examples to identify opportunities and gaps, and to inspire more corporations to allow access to their data (consider, for instance, the GovLab Open Data 500 mapping for open government data). From a research point of view, the following questions would be important to ask:

  • What types of data sharing have proven most successful, and which ones least?
  • Who are the users of corporate shared data, and for what purposes?
  • What conditions encourage companies to share, and what are the concerns that prevent sharing?
  • What incentives can be created (economic, regulatory, etc.) to encourage corporate data philanthropy?
  • What differences (if any) exist between shared government data and shared private sector data?
  • What steps need to be taken to minimize potential harms (e.g., to privacy and security) when sharing data?
  • What’s the value created from using shared private data?

We (the GovLab; Global Pulse; and Data & Society) welcome your input to add to this list of questions, or to help us answer them by providing case studies and examples of corporate data philanthropy. Please add your examples below, use our Google Form or email them to us at corporatedata@thegovlab.org”