Building Digital Government Strategies


Book by Rodrigo Sandoval-Almazan et al: “This book provides key strategic principles and best practices to guide the design and implementation of digital government strategies. It offers a series of recommendations and findings for thinking about IT applications in government as a platform for information, services and collaboration, and strategies to avoid identified pitfalls. Digital government research suggests that information technologies have the potential to generate immense public value and transform the relationships between governments, citizens, businesses and other stakeholders. However, developing innovative and high-impact solutions for citizens hinges on the development of strategic institutional, organizational and technical capabilities.

Thus far, the particular characteristics and problems of public sector organizations have promoted the development of poorly integrated, difficult-to-maintain applications. For example, governments maintain separate applications for open data, transparency, and public services, leading to duplicated effort and wasted resources. The costs of maintaining such sets of poorly integrated systems may limit the resources available for future projects and innovation.

This book provides best practices and recommendations, based on extensive research in both Mexico and the United States, on how governments can develop a digital government strategy for creating public value, how to finance digital innovation in the public sector, how to build successful collaboration networks and foster citizen engagement, and how to correctly implement open government projects and open data. It will be of interest to researchers, practitioners, students, and public sector IT professionals who work in the design and implementation of technology-based projects and programs….(More)”.

How data can heal our oceans


Nishan Degnarain and Steve Adler at WEF: “We have collected more data on our oceans in the past two years than in the history of the planet.

There has been a proliferation of remote and near sensors above, on, and beneath the oceans. New low-cost microsatellites ring the earth and can record daily what happens below. Thousands of tidal buoys follow currents, transmitting ocean temperature, salinity, acidity and current speed every minute. Undersea autonomous drones photograph and map the continental shelf and seabed, explore deep-sea volcanic vents, and can help discover mineral and rare earth deposits.

The volume, diversity and frequency of data are increasing as the cost of sensors falls, new low-cost satellites are launched, and an emerging drone sector begins to offer new insights into our oceans. In addition, new processing capabilities are enhancing the value we receive from such data on the biological, physical and chemical properties of our oceans.

Yet it is not enough.

We need much more data at higher frequency, quality, and variety to understand our oceans to the degree we already understand the land. Less than 5% of the oceans are comprehensively monitored. We need more data collection capacity to unlock the sustainable development potential of the oceans and protect critical ecosystems.

More data from satellites will help identify illegal fishing activity, track plastic pollution, and detect whales and prevent vessel collisions. More data will help speed the placement of offshore wind and tide farms, improve vessel telematics, develop smart aquaculture, protect urban coastal zones, and enhance coastal tourism.

Unlocking the ocean data market

But we’re not there yet.

This new wave of data innovation is constrained by inadequate data supply, demand, and governance. The supply of existing ocean data is locked by paper records, old formats, proprietary archives, inadequate infrastructure, and scarce ocean data skills and capacity.

The market for ocean observation is driven by science, and science isn’t adequately funded.

To unlock future commercial potential, new financing mechanisms are needed to create market demand that will stimulate greater investments in new ocean data collection, innovation and capacity.

Efforts such as the Financial Stability Board’s Task Force on Climate-related Financial Disclosures have gone some way toward raising awareness and creating demand for such ocean-related climate risk data.

Much of the data produced is collected by nations, universities and research organizations, NGOs, and the private sector, but only a small percentage is Open Data and widely available.

Data creates more value when it is widely utilized and well governed. Organizing to improve data infrastructure, quality, integrity, and availability is a prerequisite for achieving new ocean data-driven business models and markets. New Ocean Data Governance models, standards, platforms, and skills are urgently needed to stimulate new market demand for innovation and sustainable development….(More)”.

Smart or dumb? The real impact of India’s proposal to build 100 smart cities


In The Conversation: “In 2014, the new Indian government declared its intention to achieve 100 smart cities.

In promoting this objective, it gave the example of a large development in the island city of Mumbai, Bhendi Bazaar. There, 3-5-storey housing would be replaced with towers of between 40 and 60 storeys to increase density. This has come to be known as “vertical with a vengeance”.

We have obtained details of the proposed project from the developer and the municipal authorities. Using an extended urban metabolism model, which measures the resource flows of the built environment, we have assessed the project’s overall impact. We determined how the flows of materials and energy will change as a result of the redevelopment.

Our research shows that the proposal is neither smart nor sustainable.

Measuring impacts

The Indian government clearly defined what it meant by “smart”. Over half of the 11 objectives were environmental and core components of the metabolism of a city. These include adequate water and sanitation, assured electricity, efficient transport, reduced air pollution and resource depletion, and sustainability.

We collected data from various primary and secondary sources, including physical surveys during site visits, local government agencies, non-governmental organisations, the construction industry and published research.

We then made three-dimensional models of the existing and proposed developments to establish morphological changes, including building heights, street widths, parking provision, roof areas, open space, landscaping and other aspects of built form.

Demographic changes (population density, total population) were based on census data, the developer’s calculations and an assessment of available space. Such information about the magnitude of the development and the associated population changes allowed us to analyse the additional resources required as well as the environmental impact….

Case studies such as Bhendi Bazaar provide an example of plans for increased density and urban regeneration. However, they do not offer an answer to the challenge of limited infrastructure to support the resource requirements of such developments.

The results of our research indicate significant adverse impacts on the environment. They show that the metabolism increases at a greater rate than the population grows. On this basis, this proposed development for Mumbai, or the other 99 cities, should not be called smart or sustainable.
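
As a back-of-the-envelope illustration of that finding, consider the minimal sketch below. All figures are invented for illustration and are not the study’s data; the point is only the arithmetic: when total resource flows grow faster than population, per-capita metabolism rises.

```python
# Illustrative sketch of the "metabolism grows faster than population"
# result. All figures are invented, not taken from the study.
before = {"population": 10_000, "water_m3_per_day": 1_500, "energy_mwh_per_day": 40}
after = {"population": 20_000, "water_m3_per_day": 4_500, "energy_mwh_per_day": 130}

pop_growth = after["population"] / before["population"]
for flow in ("water_m3_per_day", "energy_mwh_per_day"):
    flow_growth = after[flow] / before[flow]
    print(f"{flow}: flows grew x{flow_growth:.2f} vs population x{pop_growth:.2f} "
          f"-> per-capita demand x{flow_growth / pop_growth:.2f}")
```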

With policies that aim to prevent urban sprawl, cities will inevitably grow vertically. But with high-rise housing comes dependence on centralised flows of energy, water supplies and waste disposal. Dependency in turn leads to vulnerability and insecurity….(More)”.

The hidden costs of open data


Sara Friedman at GCN: “As more local governments open their data for public use, the emphasis is often on “free” — using open source tools to freely share already-created government datasets, often with pro bono help from outside groups. But according to a new report, there are unforeseen costs when it comes to pushing government datasets out on public-facing platforms — especially when geospatial data is involved.

The research, led by University of Waterloo professor Peter A. Johnson and McGill University professor Renee Sieber, was based on work conducted as part of the Geothink.ca partnership research grant and explores the direct and indirect costs of open data.

Costs related to data collection, publishing, data sharing, maintenance and updates are increasingly driving governments to third-party providers to help with hosting, standardization and analytical tools for data inspection, the researchers found. GIS implementation also has associated costs to train staff, develop standards, create valuations for geospatial data, connect data to various user communities and get feedback on challenges.

Due to these direct costs, some governments are more likely to avoid opening datasets that need complex assessment or anonymization techniques for GIS concerns. Johnson and Sieber identified four areas where the benefits of open geospatial data can generate unexpected costs.

First, open data can create a “smoke and mirrors” situation where insufficient resources are put toward deploying open data for government use. Users then face “transaction costs” when working with specialist data formats that require additional skills, training and software.

Second, the level of investment and quality of open data can lead to “material benefits and social privilege” for communities that devote resources to providing more comprehensive platforms.

While there are some open source data platforms, the majority of solutions are proprietary and charged on a pro-rata basis, which can present a challenge for larger, poorer cities compared to smaller, wealthier ones. Issues also arise when governments try to combine their datasets, leading to increased costs to reconcile inconsistencies.

The third problem revolves around the private sector pushing for the release of datasets that can benefit its business objectives. Companies could push for the release of high-value sets, such as real-time transit data, to help with their product development goals. This can divert attention from low-value sets, such as those detailing municipal services or installations, that could have a bigger impact on residents “from a civil society perspective.”

If communities decide to release the low-value sets first, Johnson and Sieber think the focus can then be shifted to high-value sets that can help recoup the costs of developing the platforms.

Lastly, the report finds inadvertent consequences could result from tying open data resources to private-sector companies. Public-private open data partnerships could lead to infrastructure problems that prevent data from being widely shared, and help private companies in developing their bids for public services….

Johnson and Sieber encourage communities to ask the following questions before investing in open data:

  1. Who are the intended constituents for this open data?
  2. What is the purpose behind the structure for providing this data set?
  3. Does this data enable the intended users to meet their goals?
  4. How are privacy concerns addressed?
  5. Who sets the priorities for release and updates?…(More)”

Read the full report here.

Data Africa


Data Africa is an open data platform designed to provide information on key themes for research and development, such as agriculture, climate, poverty and child health, across Sub-Saharan Africa at the sub-national level. The main goal of the online tool is to present these themes to a wide, even non-technical audience through easily accessible visual narratives.

In its first stage, the platform is focused on national and sub-national level data for 13 countries:

  • Burkina Faso
  • Ethiopia
  • Ghana
  • Kenya
  • Malawi
  • Mali
  • Mozambique
  • Nigeria
  • Rwanda
  • Senegal
  • Tanzania
  • Uganda
  • Zambia

Over time, we anticipate expanding the platform’s coverage with additional countries and increasing the amount of data available through it….

The data contained in the online tool draws from a variety of sources….(More)”.

The Implementation of Open Data in Indonesia


Paper by Dani Gunawan and Amalia Amalia: “Nowadays, the public demands easy access to non-confidential government data, such as public digital information on health, industry, and culture, that can be accessed on the Internet. This will push government departments to be efficient and more transparent. As a result, rapid application development will solve citizens’ problems in many sectors. The One Data Initiative is proof that the Government of Indonesia supports data transparency. This research investigates the implementation of open data in Indonesia based on Tim Berners-Lee’s five-star rating and the open stage model by Kalampokis. The results show that most data in Indonesia is freely available on the Internet, but much of it is not machine-readable and is not in non-proprietary formats. A drawback of Indonesia’s open data is the inability to link existing data with other data sources. Therefore, Indonesia is still making initial steps with data inventories and beginning to publish key datasets of public interest…(More)”
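
For readers unfamiliar with the five-star scheme the paper applies, the sketch below shows the basic idea in Python. The format-to-star mapping is a simplified assumption for illustration, not the paper’s actual assessment method.

```python
# A minimal sketch of Tim Berners-Lee's five-star open data scheme.
# The format-to-star mapping is a simplified assumption, not the
# paper's methodology.
STARS_BY_FORMAT = {
    "pdf": 1,  # 1 star: on the web under an open licence, any format
    "xls": 2,  # 2 stars: structured and machine-readable, but proprietary
    "csv": 3,  # 3 stars: machine-readable and non-proprietary
    "rdf": 4,  # 4 stars: open standards (RDF, URIs) to identify things
}

def star_rating(file_format: str, links_to_other_data: bool = False) -> int:
    """Return the star rating for a dataset published in the given format."""
    stars = STARS_BY_FORMAT.get(file_format.lower(), 1)
    if stars == 4 and links_to_other_data:
        return 5  # 5 stars: linked to other data to provide context
    return stars

# The paper's finding, restated in these terms: most Indonesian datasets
# sit at 1-2 stars, and linking (5 stars) is the biggest gap.
print(star_rating("pdf"))        # 1
print(star_rating("csv"))        # 3
print(star_rating("rdf", True))  # 5
```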

The Cost(s) of Open Geospatial Data


Paper by Johnson PA, Sieber RE, Scassa T, Stephens M and Robinson PJ in Transactions in GIS: “The provision of open data by governments at all levels has rapidly increased over recent years. Given that one of the dominant motivations for the provision of open data is to generate ‘value’, both economic and civic, there are valid concerns over the costs incurred in this pursuit. Typically, costs of open data are framed as costs internal to the data-providing government. Building on the strong history of GIScience research on data provision via spatial data infrastructures, this paper considers both the direct and indirect costs of open data provision, framing four main areas of indirect costs: citizen participation challenges, uneven provision across geography and user types, subsidy of private sector activities, and the creation of inroads for corporate influence on government. These areas of indirect cost lead to the development of critical questions, including constituency, purpose, enablement, protection, and priorities. These questions are proposed as a guide to governments that provide open data in addressing its indirect costs….(More)”.

Innovation@DFID: Crowdsourcing New Ideas at the UK’s Department for International Development


Paper by Anke Schwittay and Paul Braund: “Over the last decade, traditional development institutions have joined market-based actors in embracing inclusive innovation to ensure the sector’s relevance and impacts. In 2014, the UK Department for International Development’s (DFID) Innovation Hub launched Amplify as its own flagship initiative. The programme, which is managed by IDEO, a Silicon Valley-based design consultancy, aims to crowdsource new ideas for various development challenges from a broad and diverse group of actors, including poor people themselves. By examining the direction, diversity and distribution of Amplify’s work, we argue that while development innovation can generate more inclusive practices, its transformative potential is constrained by broader developmental logics and policy regimes….(More)”

The accuracy of farmer-generated data in an agricultural citizen science methodology


Jonathan Steinke, Jacob van Etten and Pablo Mejía Zelan in Agronomy for Sustainable Development: “Over the last decades, participatory approaches involving on-farm experimentation have become more prevalent in agricultural research. Nevertheless, these approaches remain difficult to scale because they usually require close attention from well-trained professionals. Novel large-N participatory trials, building on recent advances in citizen science and crowdsourcing methodologies, involve large numbers of participants and little researcher supervision. Reduced supervision may affect data quality, but the “Wisdom of Crowds” principle implies that many independent observations from a diverse group of people often lead to highly accurate results when taken together. In this study, we test whether farmer-generated data in agricultural citizen science are good enough to generate valid statements about the research topic. We experimentally assess the accuracy of farmer observations in trials of crowdsourced crop variety selection that use triadic comparisons of technologies (tricot). At five sites in Honduras, 35 farmers (women and men) participated in tricot experiments. They ranked three varieties of common bean (Phaseolus vulgaris L.) for plant vigor, plant architecture, pest resistance, and disease resistance. Furthermore, with a simulation approach using the empirical data, we made an order-of-magnitude estimate of the sample size of participants needed to produce relevant results. Reliability of farmers’ experimental observations was generally low (Kendall’s W 0.174 to 0.676). But aggregated observations contained information and had sufficient validity (Kendall’s tau coefficient 0.33 to 0.76) to identify the correct ranking orders of varieties by fitting Mallows-Bradley-Terry models to the data. Our sample size simulation shows that low reliability can be compensated for by engaging higher numbers of observers to generate statistically meaningful results, demonstrating the usefulness of the Wisdom of Crowds principle in agricultural research. In this first study on data quality from a farmer citizen science methodology, we show that realistic numbers of less than 200 participants can produce meaningful results for agricultural research by tricot-style trials….(More)”.
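
To make the reported reliability figures concrete, here is a minimal sketch, in Python with made-up ranks rather than the study’s data, of how Kendall’s coefficient of concordance W is computed from a raters-by-items matrix of rankings. It assumes no tied ranks.

```python
import numpy as np

def kendalls_w(ranks: np.ndarray) -> float:
    """Kendall's coefficient of concordance for an (m raters x n items)
    matrix of ranks, assuming no ties. 0 = no agreement, 1 = full agreement."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)  # total rank received by each item
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical ranks (1 = best, 3 = worst): five farmers each rank the
# same three bean varieties for a single trait such as plant vigor.
ranks = np.array([
    [1, 2, 3],
    [1, 3, 2],
    [2, 1, 3],
    [1, 2, 3],
    [2, 3, 1],
])
print(f"Kendall's W = {kendalls_w(ranks):.3f}")  # 0.280, weak agreement
```

W values in the study’s reported range (0.174 to 0.676) mean individual farmers agreed only weakly with one another, which is why the authors aggregate many independent rankings (and fit Mallows-Bradley-Terry models) before drawing conclusions.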

Crowdsourcing Expertise to Increase Congressional Capacity


Austin Seaborn at Beeck Center: “Members of Congress have close connections with their districts, and information from local organizations (such as professional groups, academia and industry), as well as from constituents with relevant expertise (like retirees, veterans or students), is highly valuable to them. Today, congressional staff capacity is at a historic low, while at the same time, constituents in districts are often well equipped to address the underlying policy questions that Congress seeks to solve….

In meetings we have had with House and Senate staffers, they repeatedly express both the difficulty of managing their substantial area-specific workloads and their interest in finding ways to substantively engage constituents to find good nuggets of information to help them in their roles as policymakers. At the same time, constituents are demanding more transparency and dialogue from their elected representatives. In many cases, our project brings these two together. It allows Members to tap the expertise in their districts while at the same time creating an avenue for constituents to contribute their knowledge and area expertise to the legislative process. It’s a win for constituents and a win for Members of Congress and their staffs.

It is important to note that the United States lags behind other democracies in experimenting with more inclusive methods during the policymaking process. In the United Kingdom, for example, the UK Parliament has experimented with a variety of new digital tools to engage with constituents. These methods range from Twitter hashtags, which are now quite common given the rise in social media use by governments and elected officials, to web forums on a variety of platforms. Since June of 2015, it has also held digital debates, in which questions from the general public are crowdsourced and later integrated into a parliamentary debate by the Member of Parliament leading the debate. Estonia, South Africa, Taiwan, France also…notable examples.

One promising new development we hope to explore more thoroughly is the U.S. Library of Congress’s recently announced legislative data App Challenge. This competition is distinct from the many hackathons that have been held on behalf of Congress in the past, in that this challenge seeks new methods not only to innovate, but also to integrate and legislate. In his announcement, the Library’s Chief Information Officer, Bernard A. Barton, Jr., stated, “An informed citizenry is better able to participate in our democracy, and this is a very real opportunity to contribute to a better understanding of the work being done in Washington.  It may even provide insights for the people doing the work around the clock, both on the Hill, and in state and district offices.  Your innovation and integration may ultimately benefit the way our elected officials legislate for our future.” We believe these sorts of new methods will play a crucial role in the future of engaging citizens in their democracies….(More)”.