Building the Learning City


Daniel Castro at GovTech: “…Like other technologies, smart cities will evolve and mature over time. The earliest will provide basic insights from data and enable local leaders to engage in evidence-based governance. These efforts will be important, but they will represent only incremental change from what cities have already been doing. For example, Baltimore created its CitiStat program in 1999 to measure all municipal functions and improve oversight and accountability of city agencies. Early smart cities will have substantially more data at their disposal, but they will not necessarily use this data in fundamentally new ways.

The second stage of smart cities will use predictive analytics to identify patterns and forecast trends. These types of insights will be especially valuable to city planners and local officials responsible for improving municipal services and responding to changing demands. These cities will reduce downtime on critical municipal infrastructure by performing preventive maintenance on vehicles, bridges and buildings, and more quickly intervene when public health and safety issues arise. This stage will rely on powerful data-driven technologies, such as the systems that enable Netflix to offer movie recommendations and Amazon to suggest additional products for customers.
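
A minimal sketch of this second-stage pattern, in Python: fit a trend to invented sensor readings from a city vehicle and estimate when a wear metric will cross an assumed maintenance threshold. The data, metric, and service limit are all hypothetical, standing in for the far richer systems the article alludes to.

```python
# Stage-two sketch: forecast when a wear metric will cross a service
# threshold. Data, metric, and threshold are hypothetical.
import numpy as np

# Weekly brake-pad thickness readings (mm) from one city vehicle (invented).
weeks = np.arange(10)
pad_mm = np.array([12.0, 11.6, 11.3, 10.9, 10.4, 10.1, 9.7, 9.2, 8.9, 8.5])

# Fit a linear trend: pad_mm ~ slope * week + intercept.
slope, intercept = np.polyfit(weeks, pad_mm, 1)

SERVICE_LIMIT_MM = 4.0  # assumed maintenance threshold
weeks_until_service = (SERVICE_LIMIT_MM - intercept) / slope

print(f"Wear rate: {slope:.2f} mm/week")
print(f"Schedule preventive maintenance around week {weeks_until_service:.0f}")
```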

The third stage of smart cities will focus on “prescriptive analytics”: using data to optimize processes automatically. Whereas the second stage of smart cities will be primarily about using data to supply insights about the future that allow city leaders to evaluate different choices, this third stage will be about relying on algorithms to make many of these decisions independently. Much like a system of smart traffic signals uses real-time data to optimize traffic flow, these algorithms will help to automate more government functions and increase the productivity of municipal employees.
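
A minimal sketch of that prescriptive loop, assuming a controller that splits a fixed signal cycle across approaches in proportion to observed queue lengths; the function, queue counts, and timing parameters are all hypothetical, not a real traffic system.

```python
# Stage-three sketch: split a fixed signal cycle across intersection
# approaches in proportion to observed queue lengths. All parameters
# and counts are hypothetical.
def allocate_green_time(queues: dict[str, int],
                        cycle_seconds: int = 90,
                        min_green: int = 10) -> dict[str, int]:
    """Give every approach a minimum green, then split the rest by demand."""
    total = sum(queues.values()) or 1  # avoid division by zero on empty roads
    flexible = cycle_seconds - min_green * len(queues)
    return {
        approach: min_green + round(flexible * count / total)
        for approach, count in queues.items()
    }

# Example: queue counts as they might arrive from loop detectors (invented).
print(allocate_green_time({"north": 12, "south": 8, "east": 3, "west": 1}))
# {'north': 35, 'south': 27, 'east': 16, 'west': 12}
```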

At all three stages of smart city development, there is an opportunity for city leaders to look beyond local needs and consider how they can design a smart city that will be part of a larger network of cities that share and learn from one another. On its own, a smart city can use data to track local trends, but as part of a network, a smart city can benchmark itself against a set of similar peers. For example, water and waste management departments can compare metrics to assess their relative performance and identify opportunities for change.
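
A minimal sketch of such peer benchmarking: express one city's metric as standard deviations from the network mean. The cities and water-loss figures below are invented for illustration.

```python
# Benchmarking sketch: score one city's metric against a peer network
# using a z-score. Cities and figures are invented.
from statistics import mean, stdev

# Water lost to leaks, as a percentage of supply (hypothetical peer network).
peer_water_loss = {"City A": 18.0, "City B": 22.5, "City C": 15.0,
                   "City D": 30.0, "City E": 20.5}

def benchmark(city: str, metrics: dict[str, float]) -> float:
    """Return the city's z-score relative to the whole peer group."""
    values = list(metrics.values())
    return (metrics[city] - mean(values)) / stdev(values)

print(f"City D water-loss z-score: {benchmark('City D', peer_water_loss):+.2f}")
# City D sits well above its peers: a candidate for targeted leak repair.
```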

Cities that hope to develop into learning cities can begin working jointly with their peers by participating in forums such as the Global City Teams Challenge, an initiative that brings together government and industry stakeholders working on common smart city problems. But longer-term change will require city leaders to reorient their planning to consider not only the needs of their own city, but also how it fits into the larger network….(More)”

A framework for the free flow of non-personal data in the EU


European Commission Press Release: “To unlock the full potential of the EU data economy, the Commission is proposing a new set of rules to govern the free flow of non-personal data in the EU. Together with the already existing rules for personal data, the new measures will enable the storage and processing of non-personal data across the Union to boost the competitiveness of European businesses and to modernise public services in an effective EU single market for data services. Removing data localisation restrictions is considered the most important factor for the data economy to double its value to 4% of GDP in 2020….

The framework proposes:

  1. The principle of free flow of non-personal data across borders: Member States can no longer oblige organisations to locate the storage or processing of data within their borders. Restrictions will only be justified for reasons of public security. Member States will have to notify the Commission of new or existing data localisation requirements. The free flow of non-personal data will make it easier and cheaper for businesses to operate across borders without having to duplicate IT systems or to save the same data in different places.
  2. The principle of data availability for regulatory control: Competent authorities will be able to exercise their rights of access to data wherever it is stored or processed in the EU. The free flow of non-personal data will not affect the obligations for businesses and other organisations to provide certain data for regulatory control purposes.
  3. The development of EU codes of conduct to remove obstacles to switching between service providers of cloud storage and to porting data back to users’ own IT systems…. (Full press release and all documents related to the package)”

Data Sharing Vital in Fight Against Childhood Obesity


Digit: “The Data Lab is teaming up with UNICEF in a bid to encourage data sharing between public and private organisations to help solve pressing public problems. The first collaborative project aims to tackle childhood obesity in Scotland, where between 29% and 33% of children aged 2-15 are at risk of serious obesity-related health complications.

According to UNICEF, solving some of the most complex problems affecting children around the world will require access to different data sets and expertise from diverse sectors. The rapid rise in the availability of quality data offers a wealth of information to address complex problems affecting children. The charity has identified an opportunity to tap into this potential through collaborative working, prompting the development of DataCollaboratives.org in partnership with The Governance Lab at the NYU Tandon School of Engineering and the Omidyar Network. The aim of DataCollaboratives is to encourage organisations from different sectors, including private companies, research institutions, government agencies and others, to exchange and share data to help solve public problems.

The initiative is now being promoted in Scotland through UNICEF’s partnership with The Data Lab; the two organisations will work together to deliver a Data Collaboratives hub in Scotland where data scientists and strategists will work on some of the most important issues facing children around the world. Finding solutions to these problems has the potential to transform the lives of some of the most disadvantaged children in Scotland, the UK, and around the globe….(More)”.

Internet of Things tackles global animal poaching


Springwise: “ZSL (Zoological Society of London), the conservation charity that runs London Zoo, has teamed up with non-profit technology company Digital Catapult to support the development of anti-poaching technology. The partnership will use the Internet of Things (IoT) and Low Power Wide Area Network (LPWAN) technologies to create a sensor and satellite-enabled network that will help conservationists monitor wildlife and respond to poaching threats on land and sea in some of the world’s most remote national parks.

Up to 35,000 African elephants were killed by poachers in 2016, and black rhino and mountain gorilla populations continue to be at high risk. LPWAN could help prevent poaching in game reserves by enabling remote sensors to communicate with one another over long distances while using only a small amount of power. These connected sensors can detect nearby activity and determine whether it originates from wildlife or poachers, creating immediate alerts for those monitoring the area.
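
A minimal sketch of that detect-and-alert flow: a hand-set rule stands in for the trained classifier a real deployment would run, and printing stands in for LPWAN or satellite messaging. The features and thresholds are hypothetical.

```python
# Detect-and-alert sketch: classify a sensor event as wildlife or human
# activity and raise an alert. Features and thresholds are invented.
from dataclasses import dataclass

@dataclass
class SensorEvent:
    sensor_id: str
    vibration_hz: float  # dominant ground-vibration frequency
    acoustic_db: float   # sound level near the sensor

def classify(event: SensorEvent) -> str:
    """Crude stand-in for an on-device wildlife-vs-human classifier."""
    # Assumption: engines and footsteps produce higher-frequency, louder
    # signatures than large-animal movement. Illustrative only.
    if event.vibration_hz > 40 and event.acoustic_db > 60:
        return "possible_poacher"
    return "wildlife"

def dispatch(event: SensorEvent) -> None:
    """Alert rangers immediately when human activity is suspected."""
    if classify(event) == "possible_poacher":
        print(f"ALERT: suspected human activity at sensor {event.sensor_id}")

dispatch(SensorEvent("reserve-07", vibration_hz=55.0, acoustic_db=72.0))
```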

Digital Catapult has installed a LPWAN base station at the ZSL headquarters at London Zoo, which will enable prototypes to be tested on site. This technology will build on the revolutionary work already underway in areas including Kenya, Nepal, Australia, the Chagos Archipelago, and Antarctica.

The practice of poaching has been the target of many technology companies, with a similar project using artificial intelligence to monitor poachers recently coming to light. One of the many devastating impacts of poaching is the potential to drive some animals to extinction, and one startup has tackled this threat to rhinos by producing a 3D-printed horn that could help the species avoid being a target….(More)”.

How are Italian Companies Embracing Open Data?


Are companies embracing the use of open government data? How, why and what data is being leveraged? To answer these questions, the GovLab started a project three years ago, Open Data 500, to map and assess — in a comparative manner, across sectors and countries — the private sector’s use of open data to develop new products and services, and create social value.

Today we are launching Open Data 200 Italy, in partnership with Fondazione Bruno Kessler, which seeks to showcase the breadth and depth of companies using open data in Italy.

OD200 Italy is the first and only platform to map the use of open data by companies in Italy. 

Our findings show there is a growing ecosystem around open data in Italy that goes beyond traditional open data advocates. …

The OD200 Italy project shows the diversity of data being used, which makes it necessary to keep the supply of open data broad and sustained.

“The merits and use of open data for businesses are often praised but not supported by evidence. OD200 Italy is a great contribution to the evidence base of who, how and why corporations are leveraging open data,” said Stefaan Verhulst, Co-Founder of The GovLab and Chief Research and Development Officer. “Policy makers, practitioners and researchers can leverage the data generated by this initiative to improve the supply and use of open data, or to generate new insights. As such, OD200 Italy is a new open data set on open data.”…(More)”.

Artificial Intelligence and Public Policy


Paper by Adam D. Thierer, Andrea Castillo and Raymond Russell: “There is growing interest in the market potential of artificial intelligence (AI) technologies and applications as well as in the potential risks that these technologies might pose. As a result, questions are being raised about the legal and regulatory governance of AI, machine learning, “autonomous” systems, and related robotic and data technologies. Citing concerns about labor market effects, social inequality, and even physical harm, some have called for precautionary regulations that could have the effect of limiting AI development and deployment. In this paper, we recommend a different policy framework for AI technologies. At this nascent stage of AI technology development, we think a better case can be made for prudence, patience, and a continuing embrace of “permissionless innovation” as it pertains to modern digital technologies. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated, and problems, if they develop at all, can be addressed later…(More)”.

Chatbot helps asylum seekers prepare for their interviews


Springwise: “MarHub is a new chatbot developed by students at the University of California-Berkeley’s Haas School of Business to help asylum seekers through the complicated process of applying to become an official refugee – which can take up to 18 months – and to avoid using smugglers.

Finding the right information for the asylum process isn’t easy, and although most asylum seekers are in possession of a smartphone, a lot of the information is either missing or out of date. MarHub is designed to help with that, as it will walk the user through what they can expect and also how to present their case. MarHub is also expandable, so that new information or regulations can be quickly added to make it a hub of useful information.
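
A minimal sketch of that walk-through pattern, modeled as a scripted state machine; the steps and wording below are invented for illustration and are not MarHub's actual guidance.

```python
# Walk-through sketch: a scripted state machine steps the user through
# questions and guidance. All step content is invented, not MarHub's.
STEPS = {
    "start": ("Have you already registered your asylum claim? (yes/no)",
              {"yes": "interview", "no": "register"}),
    "register": ("You can register at the nearest reception office; bring "
                 "any identity documents you have. Registered? (yes)",
                 {"yes": "interview"}),
    "interview": ("At the interview, explain in your own words why you left "
                  "your country. End of this sketch.", {}),
}

state = "start"
while True:
    prompt, transitions = STEPS[state]
    print(prompt)
    if not transitions:
        break
    answer = input("> ").strip().lower()
    state = transitions.get(answer, state)  # unrecognized input re-asks
```

Because the script lives in a data table rather than in the control loop, new steps or updated rules can be added by extending `STEPS`, which is one plausible reading of the “expandable” design described above.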

The concept of MarHub was born in late 2016, in response to the Hult Prize social enterprise challenge, which was focusing on refugees for 2017. The development team quickly realized that there was a gap in the market which they felt they could fill. MarHub will initially be made available through Facebook, and then later on WhatsApp and text messaging….(More)”.

Harnessing the Data Revolution to Achieve the Sustainable Development Goals


Erol Yayboke et al. at CSIS: “Functioning societies collect accurate data and utilize the evidence to inform policy. The use of evidence derived from data in policymaking requires the capability to collect and analyze accurate data, clear administrative channels through which timely evidence is made available to decisionmakers, and the political will to rely on—and ideally share—the evidence. The collection of accurate and timely data, especially in the developing world, is often logistically difficult, not politically expedient, and/or expensive.

Before launching its second round of global goals—the Sustainable Development Goals (SDGs)—the United Nations convened a High-Level Panel of Eminent Persons on the Post-2015 Development Agenda. As part of its final report, the Panel called for a “data revolution” and recommended the formation of an independent body to lead the charge. The report resulted in the creation of the Global Partnership for Sustainable Development Data (GPSDD)—an independent group of countries, companies, data communities, and NGOs—and the SDG Data Labs, a private initiative partnered with the GPSDD. In doing so the United Nations and its partners signaled broad interest in data and evidence-based policymaking at a high level. In fact, the GPSDD calls for the “revolution in data” by addressing the “crisis of non-existent, inaccessible or unreliable data.” As this report shows, this is easier said than done.

This report defines the data revolution as an unprecedented increase in the volume and types of data—and the subsequent demand for them—thanks to the ongoing yet uneven proliferation of new technologies. This revolution is allowing governments, companies, researchers, and citizens to monitor progress and drive action, often with real-time, dynamic, disaggregated data. Much work will be needed to make sure the data revolution reaches developing countries facing difficult challenges (i.e., before the data revolution fully becomes the data revolution for sustainable development). It is important to think of the revolution as a multistep process, beginning with building basic knowledge and awareness of the value of data. This is followed by a more specific focus on public-private partnerships, opportunities, and constraints regarding collection and utilization of data for evidence-based policy decisions….

This report provides the following recommendations to the international community to play a constructive role in the data revolution:

  • Don’t fixate on big data alone. Focus on the foundation necessary to facilitate leapfrogs around all types of data: small, big, and everywhere in between.
  • Increase funding for capacity building as part of an expansion of broader educational development priorities.
  • Highlight, share, and support enlightened government-driven approaches to data.
  • Increase funding for the data revolution and coordinate donor efforts.
  • Coordinate UN data revolution-related activities closely with an expanded GPSDD.
  • Secure consensus on data sharing, ownership, and privacy-related international standards….(More)”.

These 3 barriers make it hard for policymakers to use the evidence that development researchers produce


Michael Callen, Adnan Khan, Asim I. Khwaja, Asad Liaqat and Emily Myers at the Monkey Cage/Washington Post: “In international development, the “evidence revolution” has generated a surge in policy research over the past two decades. We now have a clearer idea of what works and what doesn’t. In India, performance pay for teachers works: students in schools where bonuses were on offer got significantly higher test scores. In Kenya, charging small fees for malaria bed nets doesn’t work — and is actually less cost-effective than free distribution. The American Economic Association’s registry for randomized controlled trials now lists 1,287 studies in 106 countries, many of which are testing policies that very well may be expanded.

But can policymakers put this evidence to use?

Here’s how we did our research

We assessed the constraints that keep policymakers from acting on evidence. We surveyed a total of 1,509 civil servants in Pakistan and 108 in India as part of a program called Building Capacity to Use Research Evidence (BCURE), carried out by Evidence for Policy Design (EPoD) at Harvard Kennedy School and funded by the British government. We found that simply presenting evidence to policymakers doesn’t necessarily improve their decision-making. The link between evidence and policy is complicated by several factors.

1. There are serious constraints in policymakers’ ability to interpret evidence….

2. Organizational and structural barriers get in the way of using evidence….

3. When presented with quantitative vs. qualitative evidence, policymakers update their beliefs in unexpected ways….(More)”

Data-Driven Policy Making: The Policy Lab Approach


Paper by Anne Fleur van Veenstra and Bas Kotterink: “Societal challenges such as migration, poverty, and climate change can be considered ‘wicked problems’ for which no optimal solution exists. To address such problems, public administrations increasingly aim for data-driven policy making. Data-driven policy making aims to make optimal use of sensor data and to collaborate with citizens to co-create policy. However, few public administrations have realized this so far. Therefore, in this paper an approach for data-driven policy making is developed that can be used in the setting of a Policy Lab. A Policy Lab is an experimental environment in which stakeholders collaborate to develop and test policy. Based on literature, we first identify innovations in data-driven policy making. Subsequently, we map these innovations to the stages of the policy cycle. We found that most innovations are concerned with using new data sources in traditional statistics and that methodologies capturing the benefits of data-driven policy making are still under development. Further research should focus on policy experimentation while developing new methodologies for data-driven policy making at the same time….(More)”.