Smart or dumb? The real impact of India’s proposal to build 100 smart cities


In The Conversation: “In 2014, the new Indian government declared its intention to achieve 100 smart cities.

In promoting this objective, it gave the example of a large development in the island city of Mumbai, Bhendi Bazaar. There, 3-5 storey housing would be replaced with towers of between 40 and 60 storeys to increase density. This has come to be known as “vertical with a vengeance”.

We have obtained details of the proposed project from the developer and the municipal authorities. Using an extended urban metabolism model, which measures the impacts of the built environment, we have assessed its overall impact. We determined how the flows of materials and energy will change as a result of the redevelopment.

Our research shows that the proposal is neither smart nor sustainable.

Measuring impacts

The Indian government clearly defined what it meant by “smart”. Over half of the 11 objectives were environmental and concern the main components of a city’s metabolism. These include adequate water and sanitation, assured electricity, efficient transport, reduced air pollution and resource depletion, and sustainability.

We collected data from various primary and secondary sources. This included physical surveys during site visits, local government agencies, non-governmental organisations, the construction industry and research.

We then made three-dimensional models of the existing and proposed developments to establish morphological changes, including building heights, street widths, parking provision, roof areas, open space, landscaping and other aspects of built form.

Demographic changes (population density, total population) were based on census data, the developer’s calculations and an assessment of available space. Such information about the magnitude of the development and the associated population changes allowed us to analyse the additional resources required as well as the environmental impact….
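The arithmetic behind this kind of assessment can be sketched in a few lines. The following is a minimal illustration, not the study’s actual model: the per-capita intensities and the high-rise multiplier are invented numbers, but the structure shows how totalling resource flows before and after redevelopment lets the growth in the development’s metabolism be compared with the growth in its population.

```python
# A minimal sketch of an urban-metabolism comparison (not the study's model).
# All per-capita intensities and the high-rise multiplier below are hypothetical;
# the study derived its inputs from site surveys, census data and 3D models.

def total_flows(population: int, intensity: dict, multiplier: float = 1.0) -> dict:
    """Annual resource flows for a settlement, scaled by a built-form multiplier
    (e.g. high-rise towers add pumping, lift and cooling loads)."""
    return {flow: population * per_capita * multiplier
            for flow, per_capita in intensity.items()}

# Hypothetical per-capita intensities (units per person per year).
intensity = {"water_kl": 50, "electricity_kwh": 900, "waste_kg": 300}

existing = total_flows(population=15_000, intensity=intensity, multiplier=1.0)
proposed = total_flows(population=25_000, intensity=intensity, multiplier=1.4)

pop_growth = 25_000 / 15_000 - 1
for flow in intensity:
    flow_growth = proposed[flow] / existing[flow] - 1
    print(f"{flow}: population +{pop_growth:.0%}, flow +{flow_growth:.0%}")
```

With these illustrative numbers, every flow grows about twice as fast as the population, which is the kind of pattern the study reports for Bhendi Bazaar.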

Case studies such as Bhendi Bazaar provide an example of plans for increased density and urban regeneration. However, they do not offer an answer to the challenge of limited infrastructure to support the resource requirements of such developments.

The results of our research indicate significant adverse impacts on the environment. They show that the metabolism increases at a greater rate than the population grows. On this basis, neither this proposed development for Mumbai nor the other 99 cities should be called smart or sustainable.

With policies that aim to prevent urban sprawl, cities will inevitably grow vertically. But with high-rise housing comes dependence on centralised flows of energy, water supplies and waste disposal. Dependency in turn leads to vulnerability and insecurity….(More)”.

The hidden costs of open data


Sara Friedman at GCN: “As more local governments open their data for public use, the emphasis is often on “free” — using open source tools to freely share already-created government datasets, often with pro bono help from outside groups. But according to a new report, there are unforeseen costs when it comes to pushing government datasets onto public-facing platforms — especially when geospatial data is involved.

The research, led by University of Waterloo professor Peter A. Johnson and McGill University professor Renee Sieber, was conducted as part of the Geothink.ca partnership research grant and explores the direct and indirect costs of open data.

Costs related to data collection, publishing, data sharing, maintenance and updates are increasingly driving governments to third-party providers to help with hosting, standardization and analytical tools for data inspection, the researchers found. GIS implementation also has associated costs to train staff, develop standards, create valuations for geospatial data, connect data to various user communities and get feedback on challenges.

Due to these direct costs, some governments are more likely to avoid opening datasets that need complex assessment or anonymization techniques for GIS concerns. Johnson and Sieber identified four areas where the benefits of open geospatial data can generate unexpected costs.

First, open data can create a “smoke and mirrors” situation in which insufficient resources are put toward deploying open data for government use. Users then face “transaction costs” when working with specialist data formats that require additional skills, training and software.

Second, the level of investment and quality of open data can lead to “material benefits and social privilege” for communities that devote resources to providing more comprehensive platforms.

While there are some open source data platforms, the majority of solutions are proprietary and charged on a pro-rata basis, which can present a challenge for cities with larger, poorer populations compared with smaller, wealthier ones. Issues also arise when governments try to combine their datasets, leading to increased costs to reconcile problems.

The third problem revolves around the private sector pushing for the release of datasets that can benefit its business objectives. Companies could push for the release of high-value sets, such as real-time transit data, to help with their product development goals. This can divert attention from low-value sets, such as those detailing municipal services or installations, that could have a bigger impact on residents “from a civil society perspective.”

If communities decide to release the low-value sets first, Johnson and Sieber think the focus can then be shifted to high-value sets that can help recoup the costs of developing the platforms.

Lastly, the report finds that inadvertent consequences could result from tying open data resources to private-sector companies. Public-private open data partnerships could lead to infrastructure problems that prevent data from being widely shared, and could help private companies in developing their bids for public services….

Johnson and Sieber encourage communities to ask the following questions before investing in open data:

  1. Who are the intended constituents for this open data?
  2. What is the purpose behind the structure for providing this data set?
  3. Does this data enable the intended users to meet their goals?
  4. How are privacy concerns addressed?
  5. Who sets the priorities for release and updates?…(More)”

Read the full report here.

Data Africa


Data Africa is an open data platform designed to provide information on key themes for research and development, such as agriculture, climate, poverty and child health, across Sub-Saharan Africa at the sub-national level. The main goal of the online tool is to present these themes to a wide, even non-technical audience through easily accessible visual narratives.

In its first stage, the platform is focused on national and sub-national level data for 13 countries:

  • Burkina Faso
  • Ethiopia
  • Ghana
  • Kenya
  • Malawi
  • Mali
  • Mozambique
  • Nigeria
  • Rwanda
  • Senegal
  • Tanzania
  • Uganda
  • Zambia

Over time, we anticipate expanding coverage to additional countries and increasing the amount of data available through the platform….

The data contained in the online tool draws from a variety of sources….(More)

The Implementation of Open Data in Indonesia


Paper by Dani Gunawan and Amalia Amalia: “Nowadays, the public demands easy access to non-confidential government data, such as public digital information on health, industry, and culture that can be accessed on the Internet. This will push government departments to become more efficient and transparent. As a result, rapid development of applications will solve citizens’ problems in many sectors. The One Data Initiative is proof that the Government of Indonesia supports data transparency. This research investigates the implementation of open data in Indonesia based on Tim Berners-Lee’s five-star rating and the open stage model by Kalampokis. The results show that most data in Indonesia is freely available on the Internet, but much of it is not machine-readable and is not published in non-proprietary formats. The drawback of Indonesia’s open data is the lack of ability to link the existing data with other data sources. Therefore, Indonesia is still taking initial steps with data inventories and beginning to publish key datasets of public interest…(More)”
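As a concrete illustration of the assessment criteria, the sketch below scores a dataset against Tim Berners-Lee’s five-star scheme. It is not the paper’s code; the field names and the example dataset are hypothetical.

```python
# Minimal sketch of Tim Berners-Lee's five-star open data rating (not the paper's code).
# Each star builds on the previous one, so the rating stops at the first failed check.

def five_star_rating(ds: dict) -> int:
    """Return the number of stars a dataset reaches under the cumulative scheme."""
    checks = [
        ds["on_web_open_licence"],   # 1 star: on the web under an open licence (any format)
        ds["machine_readable"],      # 2 stars: structured, machine-readable (not a scan)
        ds["non_proprietary"],       # 3 stars: non-proprietary format (e.g. CSV)
        ds["uses_uris"],             # 4 stars: open standards / URIs to identify things
        ds["linked_to_other_data"],  # 5 stars: linked to other datasets for context
    ]
    stars = 0
    for passed in checks:
        if not passed:
            break
        stars += 1
    return stars

# Hypothetical dataset matching the paper's typical finding: freely available online,
# but not machine-readable, proprietary in format, and never linked to other sources.
example = {
    "on_web_open_licence": True,
    "machine_readable": False,
    "non_proprietary": False,
    "uses_uris": False,
    "linked_to_other_data": False,
}
print(five_star_rating(example))  # -> 1
```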

The Cost(s) of Open Geospatial Data


Johnson PA, Sieber RE, Scassa T, Stephens M, Robinson PJ in Transactions in GIS: “The provision of open data by governments at all levels has rapidly increased over recent years. Given that one of the dominant motivations for the provision of open data is to generate ‘value’, both economic and civic, there are valid concerns over the costs incurred in this pursuit. Typically, costs of open data are framed as costs that are internal to the data providing government. Building on the strong history of GIScience research on data provision via spatial data infrastructures, this paper considers both the direct and indirect costs of open data provision, framing four main areas of indirect costs: citizen participation challenges, uneven provision across geography and user types, subsidy of private sector activities, and the creation of inroads for corporate influence on government. These areas of indirect cost lead to the development of critical questions, including constituency, purpose, enablement, protection, and priorities. These questions are proposed as a guide to governments that provide open data in addressing the indirect costs of open data….(More)”.

Innovation@DFID: Crowdsourcing New Ideas at the UK’s Department for International Development


Paper by Anke Schwittay and Paul Braund: “Over the last decade, traditional development institutions have joined market-based actors in embracing inclusive innovation to ensure the sector’s relevance and impacts. In 2014, the UK’s Department for International Development’s (DFID) Innovation Hub launched Amplify as its own flagship initiative. The programme, which is managed by IDEO, a Silicon Valley-based design consultancy, aims to crowdsource new ideas to various development challenges from a broad and diverse group of actors, including poor people themselves. By examining the direction, diversity and distribution of Amplify’s work, we argue that while development innovation can generate more inclusive practices, its transformative potential is constrained by broader developmental logics and policy regimes….(More)”

The accuracy of farmer-generated data in an agricultural citizen science methodology


Jonathan Steinke, Jacob van Etten and Pablo Mejía Zelan in Agronomy for Sustainable Development: “Over the last decades, participatory approaches involving on-farm experimentation have become more prevalent in agricultural research. Nevertheless, these approaches remain difficult to scale because they usually require close attention from well-trained professionals. Novel large-N participatory trials, building on recent advances in citizen science and crowdsourcing methodologies, involve large numbers of participants and little researcher supervision. Reduced supervision may affect data quality, but the “Wisdom of Crowds” principle implies that many independent observations from a diverse group of people often lead to highly accurate results when taken together. In this study, we test whether farmer-generated data in agricultural citizen science are good enough to generate valid statements about the research topic. We experimentally assess the accuracy of farmer observations in trials of crowdsourced crop variety selection that use triadic comparisons of technologies (tricot). At five sites in Honduras, 35 farmers (women and men) participated in tricot experiments. They ranked three varieties of common bean (Phaseolus vulgaris L.) for plant vigor, plant architecture, pest resistance, and disease resistance. Furthermore, with a simulation approach using the empirical data, we did an order-of-magnitude estimation of the sample size of participants needed to produce relevant results. Reliability of farmers’ experimental observations was generally low (Kendall’s W 0.174 to 0.676). But aggregated observations contained information and had sufficient validity (Kendall’s tau coefficient 0.33 to 0.76) to identify the correct ranking orders of varieties by fitting Mallows-Bradley-Terry models to the data. Our sample size simulation shows that low reliability can be compensated by engaging higher numbers of observers to generate statistically meaningful results, demonstrating the usefulness of the Wisdom of Crowds principle in agricultural research. In this first study on data quality from a farmer citizen science methodology, we show that realistic numbers of less than 200 participants can produce meaningful results for agricultural research by tricot-style trials….(More)”.
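The aggregation idea behind this result can be illustrated with a short simulation. This is not the authors’ code: the rankings below are synthetic, and the sketch computes only Kendall’s W and a simple mean-rank aggregation rather than the Mallows-Bradley-Terry models fitted in the paper.

```python
# Illustrative simulation of the "Wisdom of Crowds" effect in tricot-style rankings.
# Synthetic data only; not the study's dataset or analysis pipeline.
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    """Kendall's coefficient of concordance W for a (raters x items) rank matrix."""
    m, n = ranks.shape                       # m farmers, n varieties (here 3)
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

rng = np.random.default_rng(0)
true_order = np.array([1, 2, 3])             # assumed underlying quality of varieties A, B, C

# 35 farmers rank the three varieties; some rank at random to mimic observation noise.
farmers = np.array([rng.permutation(true_order) if rng.random() < 0.4 else true_order
                    for _ in range(35)])

print("Kendall's W:", round(kendalls_w(farmers), 3))   # individual agreement is imperfect
print("Mean ranks:", farmers.mean(axis=0))             # yet the aggregate order matches the assumed one
```

With more noise, individual concordance drops further, but adding observers keeps the aggregated ranking stable, which is the compensation effect the paper’s sample-size simulation demonstrates.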

Crowdsourcing Expertise to Increase Congressional Capacity


Austin Seaborn at Beeck Center: “Members of Congress have close connections with their districts, and information arising from local organizations, such as professional groups, academia and industry, as well as from constituents with relevant expertise (like retirees, veterans or students), is highly valuable to them. Today, congressional staff capacity is at a historic low, while at the same time, constituents in districts are often well equipped to address the underlying policy questions that Congress seeks to solve….

In meetings we have had with House and Senate staffers, they repeatedly express both the difficulty of managing their substantial, area-specific workloads and their interest in finding ways to substantively engage constituents to find good nuggets of information to help them in their roles as policymakers. At the same time, constituents are demanding more transparency and dialogue from their elected representatives. In many cases, our project brings these two together. It allows Members to tap the expertise in their districts while creating an avenue for constituents to contribute their knowledge and area expertise to the legislative process. It’s a win for constituents and a win for Members of Congress and their staffs.

It is important to note that the United States lags behind other democracies in experimenting with more inclusive methods in the policymaking process. In the United Kingdom, for example, the UK Parliament has experimented with a variety of new digital tools to engage with constituents. These methods range from Twitter hashtags, which are now quite common given the rise in social media use by governments and elected officials, to web forums hosted on a range of platforms. Since June of 2015, it has also been holding digital debates, where questions from the general public are crowdsourced and later integrated into a parliamentary debate by the Member of Parliament leading the debate. Estonia, South Africa, Taiwan, France also…notable examples.

One promising new development we hope to explore more thoroughly is the U.S. Library of Congress’s recently announced legislative data App Challenge. This competition is distinct from the many hackathons that have been held on behalf of Congress in the past, in that this challenge seeks new methods not only to innovate, but also to integrate and legislate. In his announcement, the Library’s Chief Information Officer, Bernard A. Barton, Jr., stated, “An informed citizenry is better able to participate in our democracy, and this is a very real opportunity to contribute to a better understanding of the work being done in Washington.  It may even provide insights for the people doing the work around the clock, both on the Hill, and in state and district offices.  Your innovation and integration may ultimately benefit the way our elected officials legislate for our future.” We believe these sorts of new methods will play a crucial role in the future of engaging citizens in their democracies….(More)”.

Smart Technologies for Smart Governments: Transparency, Efficiency and Organizational Issues


Book edited by Manuel Pedro Rodríguez Bolívar: “…examines the introduction of smart technologies into public administrations and the organizational issues these implementations raise, as well as the potential of information and communication technologies (ICTs) to rationalize and improve government, transform governance, and address economic, social, and environmental challenges. Cities are increasingly using new technologies in the delivery of public sector services and in the improvement of government transparency, business-led urban development, and urban sustainability. The book examines specific smart projects that cities are embracing to improve transparency, efficiency, sustainability and mobility, and asks whether all cities are prepared to implement smart technologies and what incentives exist for promoting implementation. This focus on smart technologies applied to public sector entities will be of interest to academics, researchers, policy-makers, public managers, international organizations and technical experts involved in and responsible for the governance, development and design of Smart Cities….(More)”.

How open data can help the Global South, from disaster relief to voter turnout


Stefaan Verhulst and Andrew Young in The Conversation Global: “The modern era is marked by growing faith in the power of data. “Big data”, “open data”, and “evidence-based decision-making” have become buzzwords, touted as solutions to the world’s most complex and persistent problems, from corruption and famine to the refugee crisis.

While perhaps most pronounced in higher income countries, this trend is now emerging globally. In Africa, Latin America, Asia and beyond, hopes are high that access to data can help developing economies by increasing transparency, fostering sustainable development, building climate resiliency and the like.

This is an exciting prospect, but can opening up data actually make a difference in people’s lives?

Getting data-driven about data

The GovLab at New York University spent the last year trying to answer that question….

Our conclusion: the enthusiasm is justified – as long as it’s tempered with a good measure of realism, too. Here are our six major takeaways:

1. We need a framework – Overall, there is still little evidence to substantiate the enthusiastic claims that open data can foster sustainable development and transform governance. That’s not surprising given the early stage of most open data initiatives.

It may be early for impact evaluation, but it’s not too soon to develop a model that will eventually allow us to assess the impact of opening up data over time.

To that end, the GovLab has created an evidence-based framework that aims to better capture the role of open data in developing countries. The Open Data Logic Framework below focuses on various points in the open data value cycle, from data supply to demand, use and impact.

[Figure: Logic model of open data. The GovLab]
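As a rough illustration only (and not the GovLab’s own tooling), the framework can be read as a checklist over successive stages of the value cycle. The stage names below follow the text above; the indicator questions and the example project are invented.

```python
# Hypothetical sketch of assessing a project against the open data value cycle
# (supply -> demand -> use -> impact) described above. Not the GovLab's framework code.

VALUE_CYCLE = {
    "supply": "Is relevant data published, timely and usable?",
    "demand": "Do intermediaries and users actually request and need this data?",
    "use":    "Is the data being analysed, combined or built into services?",
    "impact": "Is there evidence of changed decisions or improved outcomes?",
}

def assess(evidence: dict) -> None:
    """Print how far along the value cycle a project's evidence currently reaches."""
    for stage, question in VALUE_CYCLE.items():
        status = "evidence found" if evidence.get(stage) else "no evidence yet"
        print(f"{stage:>6}: {question} -> {status}")

# Example: an initiative with published data and documented reuse,
# but no measured downstream impact so far.
assess({"supply": True, "demand": True, "use": True, "impact": False})
```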

2. Open data has real promise – Based on this framework and the underlying evidence that fed into it, we can guardedly conclude that open data does in fact spur development – but only under certain conditions and within the right supporting ecosystem.

One well-known success took place after Nepal’s 2015 earthquake when open data helped NGOs map important landmarks such as health facilities and road networks, among other uses.

And in Colombia, the International Centre for Tropical Agriculture launched Aclímate Colombia, a tool that gives smallholder farmers data-driven insight into planting strategies that make them more resilient to climate change….

3. Open data can improve people’s lives – Examining projects in a number of sectors critical to development, including health, humanitarian aid, agriculture, poverty alleviation, energy and education, we found four main ways that data can have an impact….

4. Data can be an asset in development – While these impacts are apparent in both developed and developing countries, we believe that open data can have a particularly powerful role in developing economies.

Where data is scarce, as it often is in poorer countries, open data can lead to an inherently more equitable and democratic distribution of information and knowledge. This, in turn, may activate a wider range of expertise to address complex problems; it’s what we in the field call “open innovation”.

This quality can allow resource-starved developing economies to access and leverage the best minds around.

And because trust in government is quite low in many developing economies, the transparency bred of releasing data can have after-effects that go well beyond the immediate impact of the data itself…

5. The ingredients matter – To better understand why some open data projects fail while others succeed, we created a “periodic table” of open data (below), which includes 27 enabling factors divided into five broad categories….

6. We can plan for impact – Our report ends by identifying how development organisations can catalyse the release and use of open data to make a difference on the ground.

Recommendations include:

· Define the problem, understand the user, and be aware of local conditions;

· Focus on readiness, responsiveness and change management;

· Nurture an open data ecosystem through collaboration and partnerships;

· Have a risk mitigation strategy;

· Secure resources and focus on sustainability; and

· Build a strong evidence base and support more research.

Next steps

In short, while it may still be too early to fully capture open data’s as-yet muted impact on developing economies, there are certainly reasons for optimism.

Much like blockchain, drones and other much-hyped technical advances, it’s time to start substantiating the excitement over open data with real, hard evidence.

The next step is to get systematic, using the kind of analytical framework we present here to gain comparative and actionable insight into if, when and how open data works. Only by getting data-driven about open data can we help it live up to its potential….(More)