Opportunities and Challenges of Policy Informatics: Tackling Complex Problems through the Combination of Open Data, Technology and Analytics


Gabriel Puron-Cid et al. in the International Journal on Public Administration in the Digital Age: “Contemporary societies face complex problems that challenge the sustainability of their social and economic systems. Such problems may require joint efforts from the public and private sectors as well as from society at large in order to find innovative solutions. In addition, the open government movement constitutes a revitalized wave of access to data to promote innovation through transparency, participation and collaboration. This paper argues that there is currently an opportunity to combine emergent information technologies, new analytical methods, and open data in order to develop innovative solutions to some of the pressing problems in modern societies. Therefore, the objective is to propose a conceptual model to better understand policy innovations based on three pillars: data, information technologies, and analytical methods and techniques. The potential benefits generated from the creation of organizations with advanced analytical capabilities within governments, universities, and non-governmental organizations are numerous, and the expected positive impacts on society are significant. However, this paper also discusses some important political, organizational, and technical challenges…(More)”

 

Do Universities, Research Institutions Hold the Key to Open Data’s Next Chapter?


Ben Miller at Government Technology: “Government produces a lot of data — reams of it, roomfuls of it, rivers of it. It comes in from citizen-submitted forms, fleet vehicles, roadway sensors and traffic lights. It comes from utilities, body cameras and smartphones. It fills up servers and spills into the cloud. It’s everywhere.

And often, all that data sits there not doing much. A governing entity might have robust data collection and it might have an open data policy, but that doesn’t mean it has the computing power, expertise or human capital to turn those efforts into value.

The amount of data available to government and the computing public promises to continue to multiply — the growing smart cities trend, for example, installs networks of sensors on everything from utility poles to garbage bins.

As all this happens, a movement — a new spin on an old concept — has begun to take root: partnerships between government and research institutes. Usually housed within universities and laboratories, these partnerships aim to match strength with strength. Where government has raw data, professors and researchers have expertise and analytics programs.

Several leaders in such partnerships, spanning some of the most tech-savvy cities in the country, see increasing momentum toward the concept. For instance, the John D. and Catherine T. MacArthur Foundation in September helped launch the MetroLab Network, an organization of more than 20 cities that have partnered with local universities and research institutes for smart-city-oriented projects….

Two recurring themes in projects that universities and research organizations take on in cooperation with government are project evaluation and impact analysis. That’s at least partially driven by the very nature of the open data movement: One reason to open data is to get a better idea of how well the government is operating….

Open data may have been part of the impetus for city-university partnerships, in that the availability of more data lured researchers wanting to work with it and extract value. But those partnerships have, in turn, led to government officials opening more data than ever before for useful applications.

Sort of.

“I think what you’re seeing is not just open data, but kind of shades of open — the desire to make the data open to university researchers, but not necessarily the broader public,” said Beth Noveck, co-founder of New York University’s GovLab.



GOVLAB: DOCKER FOR DATA 

Much of what GovLab does is about opening up access to data, and that is the whole point of Docker for Data. The project aims to simplify and speed up the process of extracting and loading large data sets so they respond to Structured Query Language (SQL) commands by moving the computing power of that process to the cloud. The tool can be installed with a single line of code, and its website hosts already-extracted data sets. Since its inception, the website has grown to include more than 100 gigabytes of data from more than 8,000 data sets. From Baltimore, for example, one can easily find information on public health, water sampling, arrests, senior centers and more.
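The extract-and-load idea can be illustrated with a minimal sketch: pull a small CSV (standing in here for an extracted open data set, such as Baltimore water-sampling records) into a SQL database so it answers SQL queries. The sample data and function name below are our own illustration, not Docker for Data's actual interface or formats.

```python
import csv
import io
import sqlite3

# Illustrative only: a tiny CSV standing in for an extracted open data set.
raw_csv = """site,parameter,value
Inner Harbor,ecoli,120
Herring Run,ecoli,45
Inner Harbor,turbidity,8
"""

def load_csv_into_sqlite(text, table="samples"):
    """Load CSV text into an in-memory SQLite table so it responds to SQL."""
    conn = sqlite3.connect(":memory:")
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    cols = ", ".join(header)
    placeholders = ", ".join("?" for _ in header)
    conn.execute(f"CREATE TABLE {table} ({cols})")
    conn.executemany(f"INSERT INTO {table} VALUES ({placeholders})", reader)
    conn.commit()
    return conn

conn = load_csv_into_sqlite(raw_csv)
rows = conn.execute(
    "SELECT site, COUNT(*) FROM samples GROUP BY site ORDER BY site"
).fetchall()
print(rows)  # [('Herring Run', 1), ('Inner Harbor', 2)]
```

The value of the hosted approach described above is doing exactly this step once, in the cloud, so every user queries the already-loaded tables instead of repeating the extraction locally.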


That’s partially because researchers are a controlled group who can be forced to sign memorandums of understanding and trained to protect privacy and prevent security breaches when government hands over sensitive data. That’s a top concern of agencies that manage data, and it shows in the GovLab’s work.

It was something Noveck found to be very clear when she started working on a project she simply calls “Arnold” because of project support from the Laura and John Arnold Foundation. The project involves building a better understanding of how different criminal justice jurisdictions collect, store and share data. The motivation is to help bridge the gaps between people who manage the data and people who should have easy access to it. When Noveck’s center conducted a survey among criminal justice record-keepers, the researchers found big differences between participants.

“There’s an incredible disparity of practices that range from some jurisdictions that have a very well established, formalized [memorandum of understanding] process for getting access to data, to just — you send an email to a guy and you hope that he responds, and there’s no organized way to gain access to data, not just between [researchers] and government entities, but between government entities,” she said….(More)”

The impact of a move towards Open Data in West Africa


From the Georgetown Journal of International Affairs: “The concept of “open data” is not new, but its definition is quite recent. Since computers began communicating through networks, engineers have been developing standards to share data. The open data philosophy holds that some data should be freely available to use, reuse, redistribute, and republish without copyright and patent controls. Several mechanisms can also limit access to data, such as restricted database access, use of proprietary technologies, or encryption. Ultimately, open data buttresses government initiatives to boost innovation, support transparency, empower citizens, encourage accountability, and fight corruption.

West Africa is primed for open data. The region experienced 6% economic growth in 2014, according to the African Development Bank. Its Internet user network is also growing: 17% of the sub-Saharan population owned a smartphone in 2013, a number projected to grow to 37% by 2020 according to the GSMA. To improve the quality of governance and services in the digital age, the region must develop new infrastructures, revise digital strategies, simplify procurement procedures, adapt legal frameworks, and allow access to public data. Open data can enhance local economies and the standard of living.

This paper speaks to the impact of open data in West Africa. First, it assesses open data as a positive tool for governance and civil society. Then, it analyzes the current situation of open data across the region. Finally, it highlights specific best practices for enhancing impact in the future….(More)”

New #ODimpact Release: How is Open Data Creating Economic Opportunities and Solving Public Problems?


Andrew Young at The GovLab: “Last month, the GovLab and Omidyar Network launched Open Data’s Impact (odimpact.org), a custom-built repository offering a range of in-depth case studies on global open data projects. The initial launch of the project featured the release of 13 open data impact case studies – ten undertaken by the GovLab, as well as three case studies from Becky Hogge (@barefoot_techie), an independent researcher collaborating with Omidyar Network. Today, we are releasing a second batch of 12 case studies – nine case studies from the GovLab and three from Hogge…

The batch of case studies being revealed today examines two additional dimensions of impact. They find that:

  • Open data is creating new opportunities for citizens and organizations, by fostering innovation and promoting economic growth and job creation.
  • Open data is playing a role in solving public problems, primarily by allowing citizens and policymakers access to new forms of data-driven assessment of the problems at hand. It also enables data-driven engagement, producing more targeted interventions and enhanced collaboration.

The specific impacts revealed by today’s release of case studies are wide-ranging, and include both positive and negative transformations. We have found that open data has enabled:

  • The creation of new industries built on open weather data released by the United States National Oceanic and Atmospheric Administration (NOAA).
  • The generation of billions of dollars of economic activity as a result of the Global Positioning System (GPS) being opened to the global public in the 1980s, and the United Kingdom’s Ordnance Survey geospatial offerings.
  • A more level playing field for small businesses in New York City seeking market research data.
  • The coordinated sharing of data among government and international actors during the response to the Ebola outbreak in Sierra Leone.
  • The identification of discriminatory water access decisions in the case Kennedy v. City of Zanesville, resulting in a $10.9 million settlement for the African-American plaintiffs.
  • Increased awareness among Singaporeans about the location of hotspots for dengue fever transmission.
  • Improved, data-driven emergency response following earthquakes in Christchurch, New Zealand.
  • Troubling privacy violations on Eightmaps related to Californians’ political donation activity….(More)”

All case studies available at odimpact.org.

 

Data Collaboratives: Matching Demand with Supply of (Corporate) Data to solve Public Problems


Blog by Stefaan G. Verhulst, Iryna Susha and Alexander Kostura: “Data Collaboratives refer to a new form of collaboration, beyond the public-private partnership model, in which participants from different sectors (private companies, research institutions, and government agencies) share data to help solve public problems. Several of society’s greatest challenges — from climate change to poverty — require greater access to big (but not always open) data sets, more cross-sector collaboration, and increased capacity for data analysis. Participants at the workshop and breakout session explored the various ways in which data collaboratives can help meet these needs.

Matching supply and demand of data emerged as one of the most important and overarching issues facing the big and open data communities. Participants agreed that more experimentation is needed so that new, innovative and more successful models of data sharing can be identified.

How to discover and enable such models? When asked how the international community might foster greater experimentation, participants indicated the need to develop the following:

· A responsible data framework that serves to build trust in sharing data. Such a framework would build upon existing frameworks but would also accommodate emerging technologies and practices. It would also need to be sensitive to public opinion and perception.

· Increased insight into different business models that may facilitate the sharing of data. As experimentation continues, the data community should map emerging practices and models of sharing so that successful cases can be replicated.

· Capacity to tap into the potential value of data. On the demand side, capacity refers to the ability to pose good questions, understand current data limitations, and seek new data sets responsibly. On the supply side, this means seeking shared value in collaboration, thinking creatively about public use of private data, and establishing norms of responsibility around security, privacy, and anonymity.

· Transparent stock of available data supply, including an inventory of what corporate data exist that can match multiple demands and that is shared through established networks and new collaborative institutional structures.

· Mapping emerging practices and models of sharing. Corporate data offers value not only for humanitarian action (which was a particular focus at the conference) but also for a variety of other domains, including science, agriculture, health care, urban development, environment, media and arts, and others. Gaining insight into the practices that emerge across sectors could broaden the spectrum of what is feasible and how.

In general, it was felt that understanding the business models underlying data collaboratives is of utmost importance in order to achieve win-win outcomes for both private and public sector players. Moreover, issues of public perception and trust were raised as important concerns of government organizations participating in data collaboratives….(More)”

Open Data Button


Open Access Button: “Hidden data is hindering research, and we’re tired of it. Next week we’ll release the Open Data Button beta as part of Open Data Day. The Open Data Button will help people find, release, and share the data behind papers. We need your support to share, test, and improve the Open Data Button. Today, we’re going to provide some in-depth info about the tool.

You’ll be able to download the free Open Data Button on the 29th of February. Follow the launch conversation on Twitter at #opendatabutton.

How the Open Data Button works

You will be able to download the Open Data Button on Chrome, and later on Firefox. When you need the data supporting a paper (even if it’s behind a paywall), push the Button. If the data has already been made available through the Open Data Button, we’ll give you a link. If it hasn’t, you’ll be able to start a request for the data. Eventually, we want to search a variety of other sources for it – but can’t yet (read on, we need your help with that).

The request will be sent to the author. We know sharing data can be hard, and there are sometimes good reasons not to. The author will be able to respond by saying how long it’ll take to share the data – or if they can’t. If the data is already available, the author can simply share a URL to the dataset. If it isn’t, they can attach files to a response for us to make available. Files shared with us will be deposited in the Open Science Framework for identification and archiving. The Open Science Framework supports data sharing for all disciplines. As much metadata as possible will be obtained from the paper; the rest we’ll ask the author for.

The progress of this request is tracked through our new “request” pages. On request pages others can support a request and be sent a copy of the data when it’s available. We’ll map requests, and stories will be searchable – both will now be embeddable objects.

Once available, we’ll send data to people who’ve requested it. You can award an Open Data Badge to the author if there’s enough supporting information to reproduce the data’s results.
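The request lifecycle described above – open a request, let others support it, then notify everyone once the author shares a dataset URL – can be sketched as a small state machine. This is our own illustrative model, not the Open Data Button's actual code; the class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DataRequest:
    """Hypothetical model of an Open Data Button request for a paper's data."""
    paper_doi: str
    status: str = "open"               # "open" -> "fulfilled"
    data_url: Optional[str] = None
    supporters: List[str] = field(default_factory=list)
    notified: List[str] = field(default_factory=list)

    def support(self, email: str) -> None:
        """Another reader backs the request and asks for a copy of the data."""
        self.supporters.append(email)

    def author_shares(self, url: str) -> None:
        """Author responds with a dataset URL; everyone waiting is notified."""
        self.data_url = url
        self.status = "fulfilled"
        self.notified = list(self.supporters)

req = DataRequest(paper_doi="10.1234/example")
req.support("reader@example.org")
req.author_shares("https://osf.io/abcde/")
print(req.status, req.notified)  # fulfilled ['reader@example.org']
```

In the real workflow the shared files would be deposited in the Open Science Framework rather than linked directly; the sketch only captures the request-and-notify flow.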

At first we’ll only have a Chrome add-on, but support for Firefox will be available from Firefox 46. Support for a bookmarklet will also be provided, but we don’t have a release date yet….(More)”

 

Crowd2Map Tanzania


Crowd2Map Tanzania is a new crowdsourcing initiative aimed at creating a comprehensive map of Tanzania, including detailed depictions of all of its villages, roads and public resources (such as schools, shops, offices etc.) in OpenStreetMap and/or Google Maps, both of which are sadly rather poor at the moment. (For a convincing example, see our post about a not-so-blank-as-map-suggests Zeze village here.)



…In February 2016, Crowd2Map Tanzania was one of the 7 projects selected in the Open Seventeen challenge, which rallies the public to use open data as a means of achieving the 17 Sustainable Development Goals proposed by the UN in September 2015! We are now excited to carry on with the help of O17 partners – Citizen Cyberlab, The GovLab, ONE and SciFabric! We’re tackling Goal 11: creating sustainable cities & communities and Goal 4: education through technology….(More)

The city as platform


The report of the 2015 Aspen Institute Roundtable on Information Technology: “In the age of ubiquitous Internet connections, smartphones and data, the future vitality of cities is increasingly based on their ability to use digital networks in intelligent, strategic ways. While we are accustomed to thinking of cities as geophysical places governed by mayors, conventional political structures and bureaucracies, this template of city governance is under great pressure to evolve. Urban dwellers now live their lives in all sorts of hyper-connected virtual spaces, pulsating with real-time information, intelligent devices, remote-access databases and participatory crowdsourcing. Expertise is distributed, not centralized. Governance is not just a matter of winning elections and assigning tasks to bureaucracies; it is about the skillful collection and curation of information as a way to create new affordances for commerce and social life.

Except among a small class of vanguard cities, however, the far-reaching implications of the “networked city” for economic development, urban planning, social life and democracy have not been explored in depth. The Aspen Institute Communications and Society Program thus convened an eclectic group of thirty experts to explore how networking technologies are rapidly changing the urban landscape in nearly every dimension. The goal was to learn how open networks, online cooperation and open data can enhance urban planning and administration, and more broadly, how they might improve economic opportunity and civic engagement. The conference, the 24th Annual Aspen Roundtable on Information Technology, also addressed the implications of new digital technologies for urban transportation, public health and safety, and socio-economic inequality….(Download the InfoTech 2015 Report)”

Exploring the economic value of open government data


Fatemeh Ahmadi Zeleti et al in Government Information Quarterly: “Business models for open data have emerged in response to the economic opportunities presented by the increasing availability of open data. However, scholarly efforts providing elaborations, rigorous analysis and comparison of open data models are very limited. This could be partly attributed to the fact that most discussions on Open Data Business Models (ODBMs) are predominantly in the practice community. This shortcoming has resulted in a growing list of ODBMs which, on closer examination, are not clearly delineated and lack clear value orientation. This has made the understanding of value creation and exploitation mechanisms in existing open data businesses difficult and challenging to transfer. Following the Design Science Research (DSR) tradition, we developed a 6-Value (6-V) business model framework as a design artifact to facilitate the explication and detailed analysis of existing ODBMs in practice. Based on the results from the analysis, we identify business model patterns and emerging core value disciplines for open data businesses. Our results not only help streamline existing ODBMs and help in linking them to the overall business strategy, but could also guide governments in developing the required capabilities to support and sustain the business models….(More)”

 

Zika Emergency Puts Open Data Policies to the Test


Larry Peiperl and Peter Hotez at PLOS: “The spreading epidemic of Zika virus, with its putative and alarming associations with Guillain-Barre syndrome and infant microcephaly, has arrived just as several initiatives have come into place to minimize delays in sharing the results of scientific research.

In September 2015, in response to concerns that research publishing practices had delayed access to crucial information in the Ebola crisis, the World Health Organization convened a consultation “[i]n recognition of the need to streamline mechanisms of data dissemination—globally and in as close to real-time as possible” in the context of public health emergencies.

Participating medical journal editors, representing PLOS, BMJ and Nature journals and NEJM, provided a statement that journals should not act to delay access to data in a public health emergency: “In such scenarios, journals should not penalize, and, indeed, should encourage or mandate public sharing of relevant data…”

In a subsequent Comment in The Lancet, authors from major research funding organizations expressed support for data sharing in public health emergencies. The International Committee of Medical Journal Editors (ICMJE), meeting in November 2015, lent further support to the principles of the WHO consultation by amending ICMJE “Recommendations” to endorse data sharing for public health emergencies of any geographic scope.

Now that WHO has declared Zika to be a Public Health Emergency of International Concern, responses from these groups in recent days appear consistent with their recent declarations.

The ICMJE has announced that “In light of the need to rapidly understand and respond to the global emergency caused by the Zika virus, content in ICMJE journals related to Zika virus is being made free to access. We urge other journals to do the same. Further, as stated in our Recommendations, in the event of a public health emergency (as defined by public health officials), information with immediate implications for public health should be disseminated without concern that this will preclude subsequent consideration for publication in a journal.” (www.icmje.org, accessed 9 February 2016)

WHO has implemented special provisions for research manuscripts relevant to the Zika epidemic that are submitted to the WHO Bulletin; such papers “will be assigned a digital object identifier and posted online in the “Zika Open” collection within 24 hours while undergoing peer review. The data in these papers will thus be attributed to the authors while being freely available for reader scrutiny and unrestricted use” under a Creative Commons Attribution License (CC BY IGO 3.0).

At PLOS, where open access and data sharing apply as a matter of course, all PLOS journals aim to expedite peer review evaluation, pre-publication posting, and data sharing from research relevant to the Zika outbreak. PLOS Currents Outbreaks offers an online platform for rapid publication of preliminary results, PLOS Neglected Tropical Diseases has committed to provide priority handling of Zika reports in general, and other PLOS journals will prioritize submissions within their respective scopes. The PLOS Zika Collection page provides central access to relevant and continually updated content from across the PLOS journals, blogs, and collaborating organizations.

Today, the Wellcome Trust has issued a statement urging journals to commit to “make all content concerning the Zika virus free to access,” and funders to “require researchers undertaking work relevant to public health emergencies to set in place mechanisms to share quality-assured interim and final data as rapidly and widely as possible, including with public health and research communities and the World Health Organisation.” Among 31 initial signatories are such journals and publishers as PLOS, Springer Nature, Science journals, The JAMA Network, eLife, The Lancet, and the New England Journal of Medicine; and funding organizations including the Bill and Melinda Gates Foundation, UK Medical Research Council, US National Institutes of Health, Wellcome Trust, and other major national and international research funders.

This policy shift prompts reconsideration of how we publish urgently needed data during a public health emergency….(More)”