Measuring the Impact of Public Innovation in the Wild


Beth Noveck at Governing: “With complex, seemingly intractable problems such as inequality, climate change and affordable access to health care plaguing contemporary society, traditional institutions such as government agencies and nonprofit organizations often lack strategies for tackling them effectively and legitimately. For this reason, this year the MacArthur Foundation launched its Research Network on Opening Governance.
The Network, which I chair and which also is supported by Google.org, is what MacArthur calls a “research institution without walls.” It brings together a dozen researchers across universities and disciplines, with an advisory network of academics, technologists, and current and former government officials, to study new ways of addressing public problems using advances in science and technology.
Through regular meetings and collaborative projects, the Network is exploring, for example, the latest techniques for more open and transparent decision-making, the uses of data to transform how we govern, and the identification of an individual’s skills and experiences to improve collaborative problem-solving between government and citizen.
One of the central questions we are grappling with is how to accelerate the pace of research so we can learn better and faster when an innovation in governance works — for whom, in which contexts and under which conditions. With better methods for doing fast-cycle research in collaboration with government — in the wild, not in the lab — our hope is to be able to predict with accuracy, not just know after the fact, whether innovations such as opening up an agency’s data or consulting with citizens using a crowdsourcing platform are likely to result in real improvements in people’s lives.
An example of such an experiment is the work that members of the Network are undertaking with the Food and Drug Administration. As one of its duties, the FDA manages the process of pre-market approval of medical devices to ensure that patients and providers have timely access to safe, effective and high-quality technology, as well as the post-market review of medical devices to ensure that unsafe ones are identified and recalled from the market. In both of these contexts, the FDA seeks to provide the medical-device industry with productive, consistent, transparent and efficient regulatory pathways.
With thousands of devices, many of them employing cutting-edge technology, to examine each year, the FDA is faced with the challenge of finding the right internal and external expertise to help it quickly study a device’s safety and efficacy. Done right, lives can be saved and companies can prosper from bringing innovations quickly to market. Done wrong, bad devices can kill…”

Cities Find Rewards in Cheap Technologies


Nanette Byrnes at MIT Technology Review: “Cities around the globe, whether rich or poor, are in the midst of a technology experiment. Urban planners are pulling data from inexpensive sensors mounted on traffic lights and park benches, and from mobile apps on citizens’ smartphones, to analyze how their cities really operate. They hope the data will reveal how to run their cities better and improve urban life. City leaders and technology experts say that managing the growing challenges of cities well and affordably will be close to impossible without smart technology.
Fifty-four percent of humanity lives in urban centers, and almost all of the world’s projected population growth over the next three decades will take place in cities, including many very poor cities. Because of their density and often strained infrastructure, cities have an outsize impact on the environment, consuming two-thirds of the globe’s energy and contributing 70 percent of its greenhouse-gas emissions. Urban water systems are leaky. Pollution levels are often extreme.
But cities also contribute most of the world’s economic production. Thirty percent of the world’s economy and most of its innovation are concentrated in just 100 cities. Can technology help manage rapid population expansion while also nurturing cities’ all-important role as an economic driver? That’s the big question at the heart of this Business Report.
Selling answers to that question has become a big business. IBM, Cisco, Hitachi, Siemens, and others have taken aim at this market, publicizing successful examples of cities that have used their technology to tackle the challenges of parking, traffic, transportation, weather, energy use, water management, and policing. Cities already spend a billion dollars a year on these systems, and that’s expected to grow to $12 billion a year or more in the next 10 years.
To justify this kind of outlay, urban technologists will have to move past the test projects that dominate discussions today. Instead, they’ll have to solve some of the profound and growing problems of urban living. Cities leaning in that direction are using various technologies to ease parking, measure traffic, and save water (see “Sensing Santander”), reduce rates of violent crime (see “Data-Toting Cops”), and prepare for ever more severe weather patterns.
There are lessons to be learned, too, from cities whose grandiose technological ideas have fallen short, like the eco-city initiative of Tianjin, China (see “China’s Future City”), which has few residents despite great technology and deep government support.
The streets are similarly largely empty in the experimental high-tech cities of Songdo, South Korea; Masdar City, Abu Dhabi; and Paredes, Portugal, which are being designed to have minimal impact on the environment and offer high-tech conveniences such as solar-powered air-conditioning and pneumatic waste disposal systems instead of garbage trucks. Meanwhile, established cities are taking a much more incremental, less ambitious, and perhaps more workable approach, often benefiting from relatively inexpensive and flexible digital technologies….”

The Plan to Map Illegal Fishing From Space


W. Wayt Gibbs at Wired: “Illicit fishing goes on every day at an industrial scale. But large commercial fishers are about to get a new set of overseers: conservationists—and soon the general public—armed with space-based reconnaissance of the global fleet.
Crews on big fishing boats deploy an impressive arsenal of technology—from advanced sonars to GPS navigation and mapping systems—as they chase down prey and trawl the seabed. These tools are so effective that roughly a third of the world’s fisheries are now overharvested, and more than three-quarters of the stocks that remain have hit their sustainable limits, according to the FAO. For some species, most of the catch is unreported, unregulated, or flat-out illegal.
But now environmentalists are using sophisticated technology of their own to peel away that cloak of invisibility. With satellite data from SpaceQuest and financial and engineering support from Google, two environmental activist groups have built the first global surveillance system that can track large fishing vessels anywhere in the world.
A prototype of the system, called Global Fishing Watch, was unveiled today at the IUCN World Parks Congress in Sydney. The tool makes use of Google’s mapping software and servers to display the tracks followed in 2012 and 2013 by some 25,000 ships that were either registered as large commercial fishers or were moving in ways that strongly suggest fishing activity.
The project was led by Oceana, a marine conservation advocacy group, and the software was developed by SkyTruth, a small non-profit that specializes in using remote sensing technologies to map environmentally sensitive activities such as fracking and flaring from oil and gas fields. Although the system currently displays voyages from nearly a year ago, “the plan is that we will build out a public release version that will have near-real-time data,” said Jackie Savitz, Oceana’s VP for U.S. oceans. “Then you’ll actually be able to see someone out there fishing within hours to days,” fast enough to act on the information if the fishing is happening illegally, such as in a marine protected area.
The effort got its start at a conference in February, when Savitz sat down with Paul Woods of SkyTruth and Brian Sullivan of Google’s Ocean and Earth Outreach program, and the three discovered they had all been thinking along the same lines: that the pieces were in hand to put eyes on the global fishing fleet, or at least the bigger boats out there. SpaceQuest now has four satellites in orbit that continually pick up radio transmissions that large ships send out as part of their automatic identification system (AIS), broadcasts that include a unique ID number and the vessel’s current position, speed, and heading. Each packet of data is relatively small, but the total AIS data stream is massive because it captures all kinds of boats: naval warships, supertankers, barges, even some yachts. To AIS, a boat is a boat; there’s no easy way to tell which ones are fishing.
So the group turned to Analyze Corp., where data scientists teamed up with a former NOAA agent who worked for many years as an official fishery observer to develop a heuristic algorithm that synthesizes input such as rapid changes in trajectory, distances covered over the past 24 hours, long-term movements and port visits over months, and the self-declared identity and class of the boat. “It combines all that and spits out a weighted classification—essentially a probability that this vessel is fishing at this particular spot and time,” Woods said….”
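The article stops short of publishing the algorithm itself, but the weighted-scoring approach it describes can be sketched in a few lines of Python. The feature names, weights, and thresholds below are invented for illustration and are not Analyze Corp.’s actual model:

```python
# Illustrative sketch of a weighted "is this vessel fishing?" score built from
# AIS-derived movement features, loosely following the description above.
# Feature names, weights, and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Track:
    heading_changes_per_hour: float  # rapid changes in trajectory
    km_last_24h: float               # distance covered over the past 24 hours
    days_since_port_visit: float     # long-term movements and port history
    declared_fishing_vessel: bool    # self-declared AIS identity and class

def fishing_probability(t: Track) -> float:
    """Combine movement cues into a rough 0-1 fishing likelihood."""
    score = 0.0
    # Fishing vessels tend to loiter and turn often rather than transit in straight lines.
    score += 0.4 * min(t.heading_changes_per_hour / 10.0, 1.0)
    # Short daily distances suggest working a fishing ground, not steaming between ports.
    score += 0.3 * (1.0 - min(t.km_last_24h / 400.0, 1.0))
    # Long stretches away from port are consistent with extended fishing trips.
    score += 0.2 * min(t.days_since_port_visit / 30.0, 1.0)
    # The vessel's self-declared class serves as a weak prior.
    score += 0.1 * (1.0 if t.declared_fishing_vessel else 0.0)
    return score

print(fishing_probability(Track(8.0, 120.0, 14.0, True)))  # roughly 0.72
```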

The Reliability of Tweets as a Supplementary Method of Seasonal Influenza Surveillance


New Paper by Ming-Hsiang Tsou et al in the Journal of Medical Internet Research: “Existing influenza surveillance in the United States is focused on the collection of data from sentinel physicians and hospitals; however, the compilation and distribution of reports are usually delayed by up to 2 weeks. With the popularity of social media growing, the Internet is a source for syndromic surveillance due to the availability of large amounts of data. In this study, tweets, or posts of 140 characters or less, from the website Twitter were collected and analyzed for their potential as surveillance for seasonal influenza.
Objective: There were three aims: (1) to improve the correlation of tweets to sentinel-provided influenza-like illness (ILI) rates by city through filtering and a machine-learning classifier, (2) to observe correlations of tweets for emergency department ILI rates by city, and (3) to explore correlations for tweets to laboratory-confirmed influenza cases in San Diego.
Methods: Tweets containing the keyword “flu” were collected within a 17-mile radius from 11 US cities selected for population and availability of ILI data. At the end of the collection period, 159,802 tweets were used for correlation analyses with sentinel-provided ILI and emergency department ILI rates as reported by the corresponding city or county health department. Two separate methods were used to observe correlations between tweets and ILI rates: filtering the tweets by type (non-retweets, retweets, tweets with a URL, tweets without a URL), and the use of a machine-learning classifier that determined whether a tweet was “valid”, or from a user who was likely ill with the flu.
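A minimal sketch of those two steps, filtering tweets by type and correlating weekly counts with ILI rates, might look like the following; the sample tweets and ILI figures are invented and are not the study’s data:

```python
# Minimal sketch of the two steps described above: (1) filter collected "flu"
# tweets by type, (2) correlate weekly counts with ILI rates. All data invented.
from scipy.stats import pearsonr

# Hypothetical collected tweets: (ISO week, is_retweet, has_url)
tweets = [
    ("2013-W48", False, False), ("2013-W48", True, False), ("2013-W48", False, True),
    ("2013-W49", False, False), ("2013-W49", False, False), ("2013-W49", True, True),
    ("2013-W50", False, False), ("2013-W50", False, False), ("2013-W50", False, False),
    ("2013-W51", False, False), ("2013-W51", False, False), ("2013-W51", False, True),
]
weeks = ["2013-W48", "2013-W49", "2013-W50", "2013-W51"]

# Keep only non-retweets without a URL, the category the study found most informative.
counts = [sum(1 for wk, rt, url in tweets if wk == w and not rt and not url) for w in weeks]

# Hypothetical sentinel-provided ILI rates (% of visits) for the same weeks.
ili_rates = [1.2, 1.7, 2.3, 2.9]

r, p = pearsonr(counts, ili_rates)
print(f"Pearson r = {r:.2f}, P = {p:.3f}")
```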
Results: Correlations varied by city but general trends were observed. Non-retweets and tweets without a URL had higher and more significant (P<.05) correlations than retweets and tweets with a URL. Correlations of tweets to emergency department ILI rates were higher than the correlations observed for sentinel-provided ILI for most of the cities. The machine-learning classifier yielded the highest correlations for many of the cities when using the sentinel-provided or emergency department ILI as well as the number of laboratory-confirmed influenza cases in San Diego. High correlation values (r=.93) with significance at P<.001 were observed for laboratory-confirmed influenza cases for most categories and tweets determined to be valid by the classifier.
Conclusions: Compared to tweet analyses in the previous influenza season, this study demonstrated increased accuracy in using Twitter as a supplementary surveillance tool for influenza: better filtering and classification methods yielded higher correlations for the 2013-2014 influenza season than those found for the previous season, in which emergency department ILI rates were better correlated with tweets than sentinel-provided ILI rates. Further investigations in the field would require expanding the locations from which tweets are collected, as well as the availability of more ILI data…”

Off the map


The Economist: “Rich countries are deluged with data; developing ones are suffering from drought…
Africa is the continent of missing data. Fewer than half of births are recorded; some countries have not taken a census in several decades. On maps only big cities and main streets are identified; the rest looks as empty as the Sahara. Lack of data afflicts other developing regions, too. The self-built slums that ring many Latin American cities are poorly mapped, and even estimates of their population are vague. Afghanistan is still using census figures from 1979—and that count was cut short after census-takers were killed by mujahideen.
As rich countries collect and analyse data from as many objects and activities as possible—including thermostats, fitness trackers and location-based services such as Foursquare—a data divide has opened up. The lack of reliable data in poor countries thwarts both development and disaster-relief. When Médecins Sans Frontières (MSF), a charity, moved into Liberia to combat Ebola earlier this year, maps of the capital, Monrovia, fell far short of what was needed to provide aid or track the disease’s spread. Major roads were marked, but not minor ones or individual buildings.
Poor data afflict even the highest-profile international development effort: the Millennium Development Goals (MDGs). The targets, which include ending extreme poverty, cutting infant mortality and getting all children into primary school, were set by UN members in 2000, to be achieved by 2015. But, according to a report by an independent UN advisory group published on November 6th, as the deadline approaches, the figures used to track progress are shaky. The availability of data on 55 core indicators for 157 countries has never exceeded 70%, it found (see chart)….
Some of the data gaps are now starting to be filled from non-government sources. A volunteer effort called Humanitarian OpenStreetMap Team (HOT) improves maps with information from locals and hosts “mapathons” to identify objects shown in satellite images. Spurred by pleas from those fighting Ebola, the group has intensified its efforts in Monrovia since August; most of the city’s roads and many buildings have now been filled in (see maps). Identifying individual buildings is essential, since in dense slums without formal roads they are the landmarks by which outbreaks can be tracked and assistance targeted.
On November 7th a group of charities including MSF, Red Cross and HOT unveiled MissingMaps.org, a joint initiative to produce free, detailed maps of cities across the developing world—before humanitarian crises erupt, not during them. The co-ordinated effort is needed, says Ivan Gayton of MSF: aid workers will not use a map with too little detail, and are unlikely, without a reason, to put work into improving a map they do not use. The hope is that the backing of large charities means the locals they work with will help.
In Kenya and Namibia mobile-phone operators have made call-data records available to researchers, who have used them to combat malaria. By comparing users’ movements with data on outbreaks, epidemiologists are better able to predict where the disease might spread. mTrac, a Ugandan programme that replaces paper reports from health workers with texts sent from their mobile phones, has made data on medical cases and supplies more complete and timely. The share of facilities that have run out of malaria treatments has fallen from 80% to 15% since it was introduced.
Private-sector data are also being used to spot trends before official sources become aware of them. Premise, a startup in Silicon Valley that compiles economics data in emerging markets, has found that as the number of cases of Ebola rose in Liberia, the price of staple foods soared: a health crisis risked becoming a hunger crisis. In recent weeks, as the number of new cases fell, prices did, too. The authorities already knew that travel restrictions and closed borders would push up food prices; they now have a way to measure and track price shifts as they happen….”

A New Ebola Crisis Page Built with Open Data


HDX team: “We are introducing a new Ebola crisis page that provides an overview of the data available in HDX. The page includes an interactive map of the worst-affected countries, the top-line figures for the crisis, a graph of cumulative Ebola cases and deaths, and over 40 datasets.
We have been working closely with UNMEER and WHO to make Ebola data available for public use. We have also received important contributions from the British Red Cross, InterAction, MapAction, the Standby Task Force, the US Department of Defense, and WFP, among others.

How we built it

The process to create this page started a couple of months ago by simply linking to existing data sites, such as OpenStreetMap’s geospatial data or OCHA’s common operational datasets. We then created a service by extracting the data on Ebola cases and deaths from the bi-weekly WHO situation report and making the raw files available for analysts and developers.
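For a sense of what such raw files enable, here is a short, hedged sketch in Python that loads a case/death file and computes running totals per country; the file name and column names are assumptions for illustration, not the actual HDX schema:

```python
# Illustrative sketch: load a raw case/death file of the kind described above and
# compute running totals per country. The file name and column names
# ("ebola_cases.csv", "country", "date", "cases", "deaths") are assumptions for
# illustration, not the actual HDX schema.
import pandas as pd

df = pd.read_csv("ebola_cases.csv", parse_dates=["date"]).sort_values("date")

# Cumulative cases and deaths reported so far, per country.
running = df.groupby("country")[["cases", "deaths"]].cumsum()
df["cumulative_cases"] = running["cases"]
df["cumulative_deaths"] = running["deaths"]

print(df.tail())
```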
The OCHA Regional Office in Dakar contributed a dataset that included Ebola cases by district, which they had been collecting from reports by the national Ministries of Health since March 2014. This data was picked up by The New York Times graphics team and by Gapminder, which partnered with Google Crisis Response to add the data to the Google Public Data Explorer.

As more organizations shared Ebola datasets through HDX, users started to transform the data into useful graphs and maps. These visuals were then shared back with the wider community through the HDX gallery. We have incorporated many of these user-generated visual elements into the design of our new Ebola crisis page….”
See also Hacking Ebola.

The Governance Of Socio-Technical Systems


New book edited by Susana Borrás and Jakob Edler: “Why are so few electric cars in our streets today? Why is it difficult to introduce electronic patient records in our hospitals? To answer these questions we need to understand how state and non-state actors interact with the purpose of transforming socio-technical systems.
Examining the “who” (agents), “how” (policy instruments) and “why” (societal legitimacy) of the governance process, this book presents a conceptual framework for the governance of change in socio-technical systems. Bridging the gap between disciplinary fields, expert contributions provide innovative empirical cases of different modes of governing change. The Governance of Socio-Technical Systems offers a stepping-stone towards building a theory of governance of change and presents a new research agenda on the interaction between science, technology and society.”

A New Taxonomy of Smart City Projects


New paper by Guido Perboli et al: “City logistics proposes an integrated vision of freight transportation systems within urban areas, aiming to optimize them as a whole in terms of efficiency, security, safety, viability and environmental sustainability. Recently, this perspective has been extended by the Smart City concept to include other aspects of city management: building, energy, environment, government, living, mobility, education, health and so on. To the best of our knowledge, a classification of Smart City projects has not yet been created. This paper introduces such a classification, highlighting success factors and analyzing new trends in Smart Cities.”

Principles for 21st Century Government


Dan Hon at Code for America: “I’m proud to share the beta of our principles for 21st century government. In this update, we’ve incorporated feedback we received from the 2014 Summit, as well as work from the U.S. Digital Service and Gov.UK that we think applies to the problems faced by local governments.
In the last few decades, the combination of agile and lean ways of working with digital technology and the internet has allowed businesses to serve people’s needs better than ever before. When people interact with their government, though, it’s clear that their expectations aren’t being met.
Part of our work at Code for America is to make building digital government easy to understand and easy to copy.
We believe these seven principles help governments understand the values required to build digital government. They are critical for governments of any size or structure to deliver more effective, efficient, and inclusive services to their community. We’ve seen their importance over the last four years, in 32 Fellowship cities big and small across America, and in conversation with those around the world who have been transforming government.
In the past, we’ve described these concepts as “capabilities” — the abilities of governments to work or act in a certain way. But we have realised that there is something more fundamental than just the ability to work or act in a certain way.
We call these principles because it is only when governments agree to, follow, and adopt them at every level that governments genuinely change and improve the way they work. Together, they provide a clear sense of direction that can then be acted upon….”

Code of Conduct: Cyber Crowdsourcing for Good


Patrick Meier at iRevolution: “There is currently no unified code of conduct for digital crowdsourcing efforts in the development, humanitarian or human rights space. As such, we propose the following principles (displayed below) as a way to catalyze a conversation on these issues and to improve and/or expand this Code of Conduct as appropriate.
This initial draft was put together by Kate Chapman, Brooke Simons and myself. The link above points to this open, editable Google Doc. So please feel free to contribute your thoughts by inserting comments where appropriate. Thank you.
An organization that launches a digital crowdsourcing project must:

  • Provide clear volunteer guidelines on how to participate in the project so that volunteers are able to contribute meaningfully.
  • Test their crowdsourcing platform prior to any project or pilot to ensure that the system will not crash due to obvious bugs.
  • Disclose the purpose of the project, exactly which entities will be using and/or have access to the resulting data, to what end exactly, over what period of time and what the expected impact of the project is likely to be.
  • Disclose whether volunteer contributions to the project will or may be used as training data in subsequent machine learning research.
  • ….

An organization that launches a digital crowdsourcing project should:

  • Share as much of the resulting data with volunteers as possible without violating data privacy or the principle of Do No Harm.
  • Enable volunteers to opt out of having their tasks contribute to subsequent machine learning research, i.e. provide digital volunteers with the option of having their contributions withheld from such studies.
  • … “