Smart Cities – International Case Studies


“These case studies were developed by the Inter-American Development Bank (IDB), in association with the Korea Research Institute for Human Settlements (KRIHS).

Anyang, Korea Anyang, a city of 600,000 near Seoul, is gaining international recognition for its smart city project, which has been implemented incrementally since 2003. The initiative began with a Bus Information System to improve convenience for citizens and has since expanded into a wider Intelligent Transport System as well as integrated crime and disaster prevention. Anyang is considered a smart city benchmark, winning a 2012 Presidential Award in Korea and receiving a large number of international visits. Anyang’s Integrated Operation and Control Center (IOCC) acts as the platform that gathers, analyzes, and distributes information on mobility, disaster management, and crime. The city is now using big data for policy development and continues to expand its smart city services into areas such as waste and air quality management. Download Anyang case study

Medellín, Colombia Medellín is a city that went from being known for its security problems to being an international reference point for technological and social innovation, urban transformation, equity, and citizen participation. This report shows how Medellín has implemented a series of strategies that have made it a smart city, building capacity and organizational structure in the entities that manage mobility, the environment, and security. In addition, these initiatives have created mechanisms to communicate and interact with citizens in order to promote continuous improvement of smart services.

Through the Program “MDE: Medellín Smart City,” the city is implementing projects to create free Internet access zones, community centers, the Mi-Medellín co-creation portal, open data, online transactions, and other services. Another strategy is the creation of the Smart Mobility System, which, through the use of technology, has reduced the number of accidents, improved mobility, and shortened incident response times. Download Medellin case study

Namyangju, Korea

Orlando, U.S.

Pangyo, Korea

Rio de Janeiro, Brazil… 

Santander, Spain

Singapore

Songdo, Korea

Tel Aviv, Israel…(More)”

Bridging data gaps for policymaking: crowdsourcing and big data for development


From the DevPolicyBlog: “…By far the biggest innovation in data collection is the ability to access and analyse (in a meaningful way) user-generated data. This is data generated from forums, blogs, and social networking sites, where users purposefully contribute information and content in a public way, but also from everyday activities that inadvertently or passively provide data to those who are able to collect it.

User-generated data can help identify user views and behaviour to inform policy in a timely way rather than just relying on traditional data collection techniques (census, household surveys, stakeholder forums, focus groups, etc.), which are often cumbersome, very costly, untimely, and in many cases require some form of approval or support by government.

It might seem at first that user-generated data has limited usefulness in a development context, given its reliance on the internet and the limited internet availability in many places. However, U-Report is one example of accessing user-generated data independently of the internet.

U-Report was initiated by UNICEF Uganda in 2011 and is a free SMS-based platform where Ugandans can register as “U-Reporters” and, on a weekly basis, give their views on topical issues (mostly related to health, education, and access to social services) or participate in opinion polls. As an example, Figure 1 shows the results from a U-Report poll on whether polio vaccinators came to U-Reporters’ houses to immunise all children under 5 in Uganda, broken down by district. Presently, there are more than 300,000 U-Reporters in Uganda and more than one million across the 24 countries that now have U-Report. As an indication of its potential impact on policymaking, UNICEF claims that every Member of Parliament in Uganda is signed up to receive U-Report statistics.

Figure 1: U-Report Uganda poll results


U-Report and other platforms such as Ushahidi (which supports, for example, I PAID A BRIBE, Watertracker, election monitoring, and crowdmapping) facilitate crowdsourcing of data where users contribute data for a specific purpose. In contrast, “big data” is a broader concept because the purpose of using the data is generally independent of the reasons why the data was generated in the first place.

Big data for development is a new phrase that we will probably hear a lot more of (see here [pdf] and here). The United Nations Global Pulse, for example, supports a number of innovation labs which work on projects that aim to discover new ways in which data can help better decision-making. Many forms of “big data” are unstructured (free-form and text-based rather than table- or spreadsheet-based), so a number of analytical techniques are required to make sense of the data before it can be used.

Measures of Twitter activity, for example, can be a real-time indicator of food price crises in Indonesia [pdf] (see Figure 2 below which shows the relationship between food-related tweet volume and food inflation: note that the large volume of tweets in the grey highlighted area is associated with policy debate on cutting the fuel subsidy rate) or provide a better understanding of the drivers of immunisation awareness. In these examples, researchers “text-mine” Twitter feeds by extracting tweets related to topics of interest and categorising text based on measures of sentiment (positive, negative, anger, joy, confusion, etc.) to better understand opinions and how they relate to the topic of interest. For example, Figure 3 shows the sentiment of tweets related to vaccination in Kenya over time and the dates of important vaccination related events.
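As a rough, minimal sketch of the kind of keyword filtering and lexicon-based sentiment scoring described above, the snippet below filters tweets by topic terms and tallies sentiment categories. The keyword list, sentiment lexicon, and sample tweets are invented for illustration and are not the researchers’ actual pipeline or data.

```python
# Illustrative sketch only: a toy keyword filter plus lexicon-based sentiment
# tally for tweets, loosely mirroring the "text-mining" approach described
# above. Keywords, lexicon, and sample tweets are assumptions, not real data.
from collections import Counter

FOOD_KEYWORDS = {"rice", "food", "price", "harga", "beras"}   # assumed topic terms
SENTIMENT_LEXICON = {                                         # assumed tiny lexicon
    "expensive": "negative", "cheap": "positive",
    "angry": "anger", "happy": "joy", "confused": "confusion",
}

def topic_tweets(tweets, keywords=FOOD_KEYWORDS):
    """Keep only tweets mentioning at least one topic keyword."""
    return [t for t in tweets if keywords & set(t.lower().split())]

def sentiment_counts(tweets, lexicon=SENTIMENT_LEXICON):
    """Count sentiment categories across the filtered tweets."""
    counts = Counter()
    for t in tweets:
        for word in t.lower().split():
            if word in lexicon:
                counts[lexicon[word]] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "Rice is so expensive this month",        # invented example tweets
        "Happy the food price dropped today",
        "Watching the football match tonight",
    ]
    relevant = topic_tweets(sample)
    print(len(relevant), "food-related tweets")
    print(sentiment_counts(relevant))
```

In practice researchers would use far larger lexicons (or trained classifiers) and aggregate the counts by month to compare tweet volume and sentiment against official statistics, as in Figures 2 and 3.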

Figure 2: Plot of monthly food-related tweet volume and official food price statistics



Figure 3: Sentiment of vaccine-related tweets in Kenya

Another big data example is the use of mobile phone usage data to monitor the movement of populations in Senegal in 2013. The data can help identify changes in the mobility patterns of vulnerable population groups and thereby provide an early warning system to inform humanitarian response efforts.
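As an illustration of how such mobility data might feed an early-warning signal, the sketch below compares current movement counts between areas against a historical baseline and flags large deviations. The place names, counts, and threshold are assumptions for illustration, not figures from the Senegal project.

```python
# Illustrative sketch: flag unusual changes in movement counts between areas,
# the kind of signal an early-warning system built on mobile phone records
# might look for. All figures and the threshold are invented for illustration.

def flag_mobility_anomalies(baseline, current, threshold=0.5):
    """Return origin-destination pairs whose volume changed by more than
    `threshold` (as a fraction of the baseline count)."""
    anomalies = {}
    for route, base_count in baseline.items():
        change = (current.get(route, 0) - base_count) / base_count
        if abs(change) > threshold:
            anomalies[route] = change
    return anomalies

if __name__ == "__main__":
    baseline = {("Dakar", "Thies"): 1200, ("Thies", "Touba"): 800}   # assumed counts
    current = {("Dakar", "Thies"): 1150, ("Thies", "Touba"): 250}
    for route, change in flag_mobility_anomalies(baseline, current).items():
        print(f"{route[0]} -> {route[1]}: {change:+.0%} vs baseline")
```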

The development of mobile banking, too, offers the potential to generate a staggering amount of data relevant for development research and for informing policy decisions. However, it also highlights the public-good nature of data collected by public- and private-sector institutions and the reliance that researchers have on those institutions to access the data. Building trust and a reputation for being able to manage privacy and commercial issues will be a major challenge for researchers in this regard….(More)”

Visual Rulemaking


New York University Law Review Paper by Elizabeth G. Porter and Kathryn A. Watts: “Federal rulemaking has traditionally been understood as a text-bound, technocratic process. However, as this Article is the first to uncover, rulemaking stakeholders — including agencies, the President and members of the public — are now deploying politically tinged visuals to push their agendas at every stage of high-stakes, often virulently controversial, rulemakings. Rarely do these visual contributions appear in the official rulemaking record, which remains defined by dense text, lengthy cost-benefit analyses, and expert reports. Perhaps as a result, scholars have overlooked the phenomenon we identify here: the emergence of a visual rulemaking universe that is splashing images, GIFs, and videos across social media channels. While this new universe, which we call “visual rulemaking,” might appear to be wholly distinct from the textual rulemaking universe on which administrative law has long focused, the two are not in fact distinct. Visual politics are seeping into the technocracy.

This Article argues that visual rulemaking is a good thing. It furthers fundamental regulatory values, including transparency and political accountability. It may also facilitate participation by more diverse stakeholders — not merely regulatory insiders who are well-equipped to navigate dense text. Yet we recognize that visual rulemaking poses risks. Visual appeals may undermine the expert-driven foundation of the regulatory state, and some uses may threaten or outright violate key legal doctrines, including the Administrative Procedure Act and longstanding prohibitions on agency lobbying and propaganda. Nonetheless, we conclude that administrative law theory and doctrine ultimately can and should welcome this robust new visual rulemaking culture….(More)”

Mapping and Comparing Responsible Data Approaches


New report by Jos Berens, Ulrich Mans and Stefaan Verhulst: “Recent years have witnessed something of a sea change in the way humanitarian organizations consider and use data. Growing awareness of the potential of data has led to new enthusiasm and new, innovative applications that seek to respond to and mitigate crises in fresh ways. At the same time, it has become apparent that the potential benefits are accompanied by risks. A new framework is needed that can help balance the benefits and risks, and that can help humanitarian organizations and others (e.g., policymakers) develop a more responsible approach to data collection and use in their efforts to combat natural and man-made crises around the world. …

The report we are releasing today, “Mapping and Comparing Responsible Data Approaches”, attempts to guide the first steps toward such a framework by learning from current approaches and principles. It is the outcome of a joint research project commissioned by UNOCHA and conducted in collaboration between the GovLab at NYU and Leiden University. In an effort to better understand the landscape, we have considered existing data use policies and principles from 17 organizations. These include 7 UN agencies, 7 international organizations, 2 government agencies and 1 research institute. Our study of these organizations’ policies allowed us to extract a number of key takeaways that, together, amount to something like a roadmap for responsible data use for any humanitarian organization considering using data in new ways.

We began our research by closely mapping the existing responsible data use policies. To do this, we developed a template with eight broad themes that capture the key ingredients of a responsible data framework. Using a consistent template across organizations permits us to study and compare the 17 data use policies in a structured and systematic manner. Based on this template, we were able to extract 7 key takeaways for what works best when using data in a humanitarian context – presented in the conclusion to the paper being released today. They are designed to be general enough to be broadly applicable, yet specific enough to be operational and actually usable….(More)”

Building a Democracy Machine: Toward an Integrated and Empowered Form of Civic Engagement


Essay by John Gastil: “Dozens—and possibly hundreds—of online platforms have been built in the past decade to facilitate specific forms of civic engagement. Unconnected to each other, let alone an integrated system easy for citizens to use, these platforms cannot begin to realize their full potential. The author proposes a massive collaborative project to build an integrated platform called, tongue squarely in cheek, “The Democracy Machine.” The Machine draws on public energy and ideas, mixing those into concrete policy advice, influencing government decision making, and creating a feedback loop that helps officials and citizens track progress together as they continuously turn the policymaking crank. This online system could help to harmonize civic leaders, vocal and marginalized citizens, and government. Democracy’s need for ongoing public consultation would fuel the Machine, which would, in turn, generate the empowered deliberation and public legitimacy that government needs to make tough policy decisions….(More)”

Priorities for the National Privacy Research Strategy


James Kurose and Keith Marzullo at the White House: “Vast improvements in computing and communications are creating new opportunities for improving life and health, eliminating barriers to education and employment, and enabling advances in many sectors of the economy. The promise of these new applications frequently comes from their ability to create, collect, process, and archive information on a massive scale.

However, the rapid increase in the quantity of personal information that is being collected and retained, combined with our increased ability to analyze and combine it with other information, is creating concerns about privacy. When information about people and their activities can be collected, analyzed, and repurposed in so many ways, it can create new opportunities for crime, discrimination, inadvertent disclosure, embarrassment, and harassment.

This Administration has been a strong champion of initiatives to improve the state of privacy, such as the “Consumer Privacy Bill of Rights” proposal and the creation of the Federal Privacy Council. Similarly, the White House report Big Data: Seizing Opportunities, Preserving Values highlights the need for large-scale privacy research, stating: “We should dramatically increase investment for research and development in privacy-enhancing technologies, encouraging cross-cutting research that involves not only computer science and mathematics, but also social science, communications and legal disciplines.”

Today, we are pleased to release the National Privacy Research Strategy. Research agencies across government participated in the development of the strategy, reviewing existing Federal research activities in privacy-enhancing technologies, soliciting inputs from the private sector, and identifying priorities for privacy research funded by the Federal Government. The National Privacy Research Strategy calls for research along a continuum of challenges, from how people understand privacy in different situations and how their privacy needs can be formally specified, to how these needs can be addressed, to how to mitigate and remediate the effects when privacy expectations are violated. This strategy proposes the following priorities for privacy research:

  • Foster a multidisciplinary approach to privacy research and solutions;
  • Understand and measure privacy desires and impacts;
  • Develop system design methods that incorporate privacy desires, requirements, and controls;
  • Increase transparency of data collection, sharing, use, and retention;
  • Assure that information flows and use are consistent with privacy rules;
  • Develop approaches for remediation and recovery; and
  • Reduce privacy risks of analytical algorithms.

With this strategy, our goal is to produce knowledge and technology that will enable individuals, commercial entities, and the Federal Government to benefit from technological advancements and data use while proactively identifying and mitigating privacy risks. Following the release of this strategy, we are also launching a Federal Privacy R&D Interagency Working Group, which will lead the coordination of the Federal Government’s privacy research efforts. Among the group’s first public activities will be to host a workshop to discuss the strategic plan and explore directions of follow-on research. It is our hope that this strategy will also inspire parallel efforts in the private sector….(More)”

OpenData.Innovation: an international journey to discover innovative uses of open government data


Nesta: “This paper by Mor Rubinstein (Open Knowledge International), Josh Cowls, and Corinne Cath (Oxford Internet Institute) explores the methods and motivations behind innovative uses of open government data in five specific country contexts – Chile, Argentina, Uruguay, Israel, and Denmark – and considers how the insights it uncovers might be adopted in a UK context.

Through a series of interviews with ‘social hackers’ and open data practitioners and experts in countries with recognised open government data ‘hubs’, the authors encountered a diverse range of practices and approaches in how actors in different sectors of society make innovative uses of open government data. This diversity also demonstrated how contextual factors shape the opportunities and challenges for impactful open government data use.

Based on insights from these international case studies, the paper offers a number of recommendations – around community engagement, data literacy and practices of opening data – which aim to support governments and citizens unlock greater knowledge exchange and social impact through open government data….(More)”

Open Data in Southeast Asia


Book by Manuel Stagars: “This book explores the power of greater openness, accountability, and transparency in digital information and government data for the nations of Southeast Asia. The author demonstrates that, although the term “open data” seems to be self-explanatory, it involves an evolving ecosystem of complex domains. Through empirical case studies, this book explains how governments in ASEAN may harvest the benefits of open data to maximize their productivity, efficiency and innovation. The book also investigates how increasing digital divides in the population, boundaries to civil society, and shortfalls in civil and political rights threaten to arrest open data in its early development, which may hamper post-2015 development agendas in the region. With robust open data policies and clear roadmaps, member states of ASEAN can harvest the promising opportunities of open data in their particular developmental, institutional and legal settings. Governments, policy makers, entrepreneurs and academics will gain from this timely research a clearer understanding of the factors that enable open data….(More)”

Reforms to improve U.S. government accountability


Alexander B. Howard and Patrice McDermott in Science: “Five decades after the United States first enacted the Freedom of Information Act (FOIA), Congress has voted to make the first major reforms to the statute since 2007. President Lyndon Johnson signed the first FOIA on 4 July 1966, enshrining in law the public’s right to access to information from executive branch government agencies. Scientists and others around the world can use the FOIA to learn what the U.S. government has done in its policies and practices. Proposed reforms should be a net benefit to public understanding of the scientific process and knowledge, by increasing the access of scientists to archival materials and reducing the likelihood of science and scientists being suppressed by official secrecy or bureaucracy.

Although the FOIA has been important for accountability, reform is sorely needed. An analysis of the 15 federal government agencies that received the most FOIA requests found poor to abysmal compliance rates (1, 2). In 2016, the Associated Press found that the Obama Administration had set a new record for unfulfilled FOIA requests (3). Although that has to be considered in the context of a rise in request volume without commensurate increases in resources to address them, researchers have found that most agencies simply ignore routine requests for travel schedules (4). An audit of 165 federal government agencies found that only 40% complied with the E-FOIA Act of 1996; just 67 of them had online libraries that were regularly updated with a substantial number of documents released under FOIA (5).

In the face of growing concerns about compliance, FOIA reform was one of the few recent instances of bicameral bipartisanship in Congress, with the House and Senate each passing bills this spring with broad support. Now that Congress has moved to send the Senate bill on to the president to sign into law, implementation of specific provisions will bear close scrutiny, including the potential impact of disclosure upon scientists who work in or with government agencies (6). Proposed revisions to the FOIA statute would improve how government discloses information to the public, while leaving intact exemptions for privacy, proprietary information, deliberative documents, and national security.

Features of Reforms

One of the major reforms in the House and Senate bills was to codify the “presumption of openness” outlined by President Obama the day after he took office in January 2009 when he declared that FOIA should be administered with a clear presumption: In the face of doubt, “openness” would prevail. This presumption of openness was affirmed by U.S. Attorney General Holder in March 2009. Although these declarations have had limited effect in the agencies (as described above), codifying these reforms into law is crucial not only to ensure that this remains executive branch policy after this president leaves office but also to provide requesters with legal force beyond an executive order….(More)”

Privacy concerns in smart cities


Liesbet van Zoonen in Government Information Quarterly: “In this paper a framework is constructed to hypothesize if and how smart city technologies and urban big data produce privacy concerns among the people in these cities (as inhabitants, workers, visitors, and otherwise). The framework is built on two recurring dimensions in research about people’s privacy concerns: one dimension captures that people perceive particular data as more personal and sensitive than others; the other captures that people’s privacy concerns differ according to the purpose for which data is collected, with the contrast between service and surveillance purposes being the most salient. These two dimensions produce a 2 × 2 framework that hypothesizes which technologies and data applications in smart cities are likely to raise people’s privacy concerns, ranging from raising hardly any concern (impersonal data, service purpose) to raising controversy (personal data, surveillance purpose). Specific examples from the city of Rotterdam are used to further explore and illustrate the academic and practical usefulness of the framework. It is argued that the general hypothesis of the framework offers clear directions for further empirical research and theory building about privacy concerns in smart cities, and that it provides a sensitizing instrument for local governments to identify the absence, presence, or emergence of privacy concerns among their citizens….(More)”
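By way of illustration, the 2 × 2 logic of the framework can be expressed as a simple lookup from (data sensitivity, collection purpose) to a hypothesized level of concern. The intermediate quadrant labels and the examples below are one reading of the framework added for illustration, not the author’s own operationalization.

```python
# Illustrative sketch of the paper's 2x2 framework: data sensitivity crossed
# with collection purpose yields a hypothesized level of privacy concern.
# The intermediate labels and examples are assumptions made for illustration.

CONCERN_MATRIX = {
    ("impersonal", "service"):      "hardly any concern",
    ("impersonal", "surveillance"): "some concern",        # assumed label
    ("personal", "service"):        "moderate concern",    # assumed label
    ("personal", "surveillance"):   "controversy",
}

def hypothesized_concern(data_kind: str, purpose: str) -> str:
    """Look up the concern level the framework would predict."""
    return CONCERN_MATRIX[(data_kind, purpose)]

if __name__ == "__main__":
    # e.g. an air-quality sensor network vs. camera-based crowd tracking
    print(hypothesized_concern("impersonal", "service"))
    print(hypothesized_concern("personal", "surveillance"))
```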