More African governments are enacting open data policies but still aren’t willing to share information


Joshua Masinde at Quartz Africa: “Working as a data journalist and researcher in Uganda, Lydia Namubiru cannot recall ever having an easy time accessing official government data in the course of her work. She has had to literally beg officials for such information, with little success.

In June this year, she approached the Uganda Bureau of Statistics seeking a nationally representative sample of microdata from the country’s 2014 census. Despite frequent calls and emails, she is still waiting for the information from the bureau several months down the line….

It doesn’t have to be that way, of course. In neighboring Kenya, there is much optimism that attitudes to open data will be different. Last month, civil society activists and supporters of open data celebrated the government signing the Access to Information bill into law, after many years of lobbying….

Despite well-earned reputations for authoritarianism and conservative attitudes to governance, it turns out more African governments are opening up to their citizens, at least in the name of transparency and accountability in the conduct of their affairs.

In truth, however, a government saying it allows citizens to access data or information is very different from the actual practice of enabling that access. Many governments’ open data initiatives serve far more mundane purposes and may not include the data citizens really want—the kind that potentially exposes corruption or laxity in the public service…

“Countries that have embraced open data have seen real savings in public spending and improved efficiency in services. Nowhere is this more vital than in our nations – many of which face severe health and education crises,” Nnenna Nwakanma, Africa regional coordinator at the World Wide Web Foundation, points out.

What is more prevalent now is what some open data advocates call ‘open washing’, which the World Wide Web Foundation describes as a real threat to the open data movement. In ‘open washing’, governments enact open data policies but do not follow through to full implementation. Others put strong freedom of information and right to information laws in place but do not really let citizens take full advantage of the data. This can result from institutional shortcomings, internal bureaucracy or a lack of political will.

As initiatives towards open data gather steam, challenges persist: government agencies remain unwilling to release official information, and state bureaucracies stand in the way. Many governments are also keen to release only information that will not portray them as ‘naked’ but that they feel will project them in a positive light. Whether laws will make governments more open, even with the information citizens really need, remains a matter of conjecture. For Namubiru, open data should be a culture that grows more organically than by simply passing laws for the sake of it.

“If they release enough packets of data on what they consider neutral or positive information, the storytellers will still be able to connect the dots.”…(More)”

Recent Developments in Open Data Policy


Presentation by Paul Uhlir: “Several international organizations have issued policy statements on open data policies in the past two years. This presentation provides an overview of those statements and their relevance to developing countries.

International Statements on Open Data Policy

Open data policies have gained much more international support in recent years. Policy statements from just the 2014–2016 period that endorse and promote openness of research data derived from public funding include: the African Data Consensus (UNECA 2014); the CODATA Nairobi Principles for Data Sharing for Science and Development in Developing Countries (PASTD 2014); the Hague Declaration on Knowledge Discovery in the Digital Age (LIBER 2014); the Policy Guidelines for Open Access and Data Dissemination and Preservation (RECODE 2015); and the Accord on Open Data in a Big Data World (Science International 2015). This presentation outlines the principal guidelines of these policy statements.

The Relevance of Open Data from Publicly Funded Research for Development

There are many reasons that publicly funded research data should be made as freely and openly available as possible. Some of these are noted here, although many other benefits are possible. For research, open data helps close the gap with more economically developed countries, makes researchers more visible on the web, enhances their collaborative potential, and links them globally. For education, open data helps students learn how to do data science and to manage data better. From a socioeconomic standpoint, open data policies have been shown to enhance economic opportunities and to enable citizens to improve their lives in myriad ways. Such policies are also more ethical, allowing access to those who have no means to pay and sparing citizens from paying for the data twice—once through the taxes that create the data in the first place and again at the user level. Finally, access to factual data can improve governance, leading to better decision making by policymakers, improved oversight by constituents, and digital repatriation of objects held by former colonial powers.

Some of these benefits are cited directly in the policy statements themselves, while others are developed more fully in other documents (Bailey Mathae and Uhlir 2012, Uhlir 2015). Of course, not all publicly funded data and information can be made available, and there are appropriate reasons—such as the protection of national security, personal privacy, commercial concerns, and confidentiality of all kinds—that make withholding them legal and ethical. However, the default rule should be one of openness, balanced against legitimate reasons not to make the data public….(More)”

Doctors’ Individual Opioid Prescription ‘Report Cards’ Show Impact


Scott Calvert at the Wall Street Journal: “Several states, including Arizona, Kentucky and Ohio, are using their state prescription monitoring databases to send doctors individualized “report cards” that show how their prescribing of addictive opioids and other drugs compares with their peers.

“Arizona probably has the most complete one out there right now—it’s pretty impressive,” said Patrick Knue, director of the Prescription Drug Monitoring Program Training and Technical Assistance Center at Brandeis University, which helps states improve their databases.

Arizona’s quarterly reports rate a doctor’s prescribing of oxycodone and certain other drugs as normal, high, severe or extreme compared with the state’s other doctors in the same medical specialty.
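The article does not describe Arizona’s exact methodology, but a peer-comparison rating of this kind can be sketched in a few lines. The sketch below is purely illustrative: the records are invented, and the z-score bands for normal/high/severe/extreme are assumed thresholds, not the state’s actual ones.

```python
from statistics import mean, stdev

# Hypothetical prescriber records: (doctor_id, specialty, opioid_rx_count)
prescribers = [
    ("d1", "family medicine", 40), ("d2", "family medicine", 55),
    ("d3", "family medicine", 48), ("d4", "family medicine", 260),
    ("d5", "pain management", 300), ("d6", "pain management", 320),
]

def rate_prescriber(doctor_id, records):
    """Rate a doctor's prescribing volume against peers in the same specialty."""
    _, specialty, count = next(r for r in records if r[0] == doctor_id)
    peer_counts = [c for _, s, c in records if s == specialty]
    mu, sigma = mean(peer_counts), stdev(peer_counts)
    z = (count - mu) / sigma if sigma else 0.0
    if z < 1:
        return "normal"   # within one standard deviation of specialty mean
    if z < 2:
        return "high"
    if z < 3:
        return "severe"
    return "extreme"
```

The key design choice is comparing within specialty: a pain-management doctor is expected to prescribe far more opioids than a family practitioner, so a single statewide baseline would mislabel whole specialties.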

During a two-year pilot program, the number of opiate prescriptions fell 10% in five counties while rising in other counties, said Dean Wright, former head of the state’s prescription-monitoring program. The report cards also contributed to a 4% drop in overdose deaths in the pilot counties, he said.

The state now issues the report cards statewide and in June sent notices to more than 13,000 doctors. Mr. Wright said the message is clear: “Stop and think about what you’re prescribing and the impact it can have.”

The report cards list statistics such as how many of a doctor’s patients received controlled substances from five or more doctors. Elizabeth Dodge, Mr. Wright’s successor, said some doctors ask for the patients’ names—information they might have gleaned from the database….(More)”
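The “five or more doctors” figure is a standard multiple-provider (doctor-shopping) indicator in prescription monitoring databases. A minimal sketch of computing it, using hypothetical dispensing records and field names (the article does not describe the actual PDMP schema):

```python
from collections import defaultdict

# Hypothetical PDMP dispensing records: (patient_id, prescriber_id)
dispensings = [
    ("p1", "d1"), ("p1", "d2"), ("p1", "d3"), ("p1", "d4"), ("p1", "d5"),
    ("p2", "d1"), ("p2", "d2"),
    ("p3", "d1"),
]

def multi_provider_patients(records, threshold=5):
    """For each doctor, count distinct patients who obtained controlled
    substances from `threshold` or more prescribers."""
    prescribers_per_patient = defaultdict(set)
    for patient, prescriber in records:
        prescribers_per_patient[patient].add(prescriber)
    flagged = {p for p, docs in prescribers_per_patient.items()
               if len(docs) >= threshold}
    per_doctor = defaultdict(set)
    for patient, prescriber in records:
        if patient in flagged:
            per_doctor[prescriber].add(patient)
    return {doc: len(pats) for doc, pats in per_doctor.items()}
```

Here patient p1 saw five prescribers, so each of those five doctors would see “1” in this column of their report card.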

Open data, transparency and accountability


Topic guide by Liz Carolan: “…introduces evidence and lessons learned about open data, transparency and accountability in the international development context. It discusses the definitions, theories, challenges and debates presented by the relationship between these concepts, summarises the current state of open data implementation in international development, and highlights lessons and resources for designing and implementing open data programmes.

Open data involves the release of data so that anyone can access, use and share it. The Open Data Charter (2015) describes six principles that aim to make data easier to find, use and combine:

  • open by default
  • timely and comprehensive
  • accessible and usable
  • comparable and interoperable
  • for improved governance and citizen engagement
  • for inclusive development and innovation

One of the main objectives of making data open is to promote transparency.

Transparency is a characteristic of governments, companies, organisations and individuals that are open in the clear disclosure of information, rules, plans, processes and actions. Transparency of information is a crucial part of this. Within a development context, transparency and accountability initiatives have emerged over the last decade as a way to address developmental failures and democratic deficits.

There is a strong intersection between open data and transparency as concepts, yet as fields of study and practice, they have remained somewhat separate. This guide draws extensively on analysis and evidence from both sets of literature, beginning by outlining the main concepts and the theories behind the relationships between them.

Data release and transparency are parts of the chain of events leading to accountability.  For open data and transparency initiatives to lead to accountability, the required conditions include:

  • getting the right data published, which requires an understanding of the politics of data publication
  • enabling actors to find, process and use information, and to act on any outputs, which requires an accountability ecosystem that includes equipped and empowered intermediaries
  • enabling institutional or social forms of enforceability or citizens’ ability to choose better services, which requires infrastructure that can impose sanctions, or sufficient choice or official support for citizens

Programmes intended to increase access to information can be impacted by and can affect inequality. They can also pose risks to privacy and may enable the misuse of data for the exploitation of individuals and markets.

Despite a range of international open data initiatives and pressures, developing countries are lagging behind in the implementation of reforms at government level, in the overall availability of data, and in the use of open data for transparency and accountability. What is more, there are signs that ‘open washing’ – superficial efforts to publish data without full integration with transparency commitments – may be obscuring backsliding in other aspects of accountability.

The topic guide pulls together lessons and guidance from open data, transparency and accountability work, including an outline of technical and non-technical aspects of implementing a government open data initiative. It also lists further resources, tools and guidance….(More)”

Data Driven Governments: Creating Value Through Open Government Data


Chapter by Judie Attard, Fabrizio Orlandi and Sören Auer in Transactions on Large-Scale Data- and Knowledge-Centered Systems XXVII: “Governments are among the largest producers and collectors of data in many different domains, and one major aim of open government data initiatives is to release social and commercial value. Hence, here we explore existing processes of value creation on government data. We identify the dimensions that impact, or are impacted by, value creation, and distinguish between the different value-creating roles and participating stakeholders. We propose the use of Linked Data as an approach to enhance the value creation process, and provide a Value Creation Assessment Framework to analyse the resulting impact. We also implement the assessment framework to evaluate two government data portals….(More)”

What is being done with open government data?


An exploratory analysis of public uses of New York City open data by Karen Okamoto in Webology: “In 2012, New York City Council passed legislation to make government data open and freely available to the public. By approving this legislation, City Council was attempting to make local government more transparent, accountable, and streamlined in its operations. It was also attempting to create economic opportunities and to encourage the public to identify ways in which to improve government and local communities. The purpose of this study is to explore public uses of New York City open data. Currently, more than 1300 datasets covering broad areas such as health, education, transportation, public safety, housing and business are available on the City’s Open Data Portal. This study found a plethora of maps, visualizations, tools, apps and analyses made by the public using New York City open data. Indeed, open data is inspiring a productive range of creative reuses, yet questions remain concerning how usable the data is for users without technical skills and resources….(More)”
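The portal behind these reuses runs on Socrata, whose SODA API exposes each dataset as JSON at a `/resource/<id>.json` endpoint that accepts `$`-prefixed query parameters. A minimal sketch of building such a query URL is below; the dataset id `abcd-1234` and the `borough` filter are placeholders for illustration, not a real dataset.

```python
from urllib.parse import urlencode

# Base endpoint of NYC's Socrata-hosted open data portal
PORTAL = "https://data.cityofnewyork.us/resource"

def soda_query_url(dataset_id, where=None, select=None, limit=1000):
    """Build a SODA (Socrata Open Data API) query URL for an NYC dataset.

    `where` and `select` take SoQL expressions; dataset ids are the
    short codes shown on each dataset's portal page.
    """
    params = {"$limit": limit}
    if where:
        params["$where"] = where
    if select:
        params["$select"] = select
    return f"{PORTAL}/{dataset_id}.json?{urlencode(params)}"

# 'abcd-1234' is a placeholder id, used here only to show the URL shape.
url = soda_query_url("abcd-1234", where="borough='BROOKLYN'", limit=50)
```

Fetching that URL (with any HTTP client) returns plain JSON rows, which is part of why the portal has attracted reusers well beyond professional developers.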

White House, Transportation Dept. want help using open data to prevent traffic crashes


Samantha Ehlinger in FedScoop: “The Transportation Department is looking for public input on how to better interpret and use data on fatal crashes after 2015 data revealed a startling 7.2 percent spike in traffic deaths that year.

Looking for new solutions that could prevent more deaths on the roads, the department released the 2015 open dataset on fatal crashes three months earlier than usual. With it, the department and the White House announced a call to action for people to use the dataset as a jumping-off point for a dialogue on how to prevent crashes, as well as to understand what might be causing the spike.

“What we’re ultimately looking for is getting more people engaged in the data … matching this with other publicly available data, or data that the private sector might be willing to make available, to dive in and to tell these stories,” said Bryan Thomas, communications director for the National Highway Traffic Safety Administration, to FedScoop.

One striking statistic was that “pedestrian and pedalcyclist fatalities increased to a level not seen in 20 years,” according to a DOT press release. …

“We want folks to be engaged directly with our own data scientists, so we can help people through the dataset and help answer their questions as they work their way through, bounce ideas off of us, etc.,” Thomas said. “We really want to be accessible in that way.”

He added that as ideas “come to fruition,” there will be opportunities to present what people have learned.

“It’s a very, very rich data set, there’s a lot of information there,” Thomas said. “Our own ability is, frankly, limited to investigate all of the questions that you might have of it. And so we want to get the public really diving in as well.”…

Here are the questions “worth exploring,” according to the call to action:

  • How might improving economic conditions around the country change how Americans are getting around? What models can we develop to identify communities that might be at a higher risk for fatal crashes?
  • How might climate change increase the risk of fatal crashes in a community?
  • How might we use studies of attitudes toward speeding, distracted driving, and seat belt use to better target marketing and behavioral change campaigns?
  • How might we monitor public health indicators and behavior risk indicators to target communities that might have a high prevalence of behaviors linked with fatal crashes (drinking, drug use/addiction, etc.)? What countermeasures should we create to address these issues?”…(More)”
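One simple way to start on the first question, identifying communities at higher risk, is to normalize the crash counts in the DOT dataset by population and flag outliers. The sketch below is a hypothetical illustration: the county figures and the 40-per-100k threshold are invented, and real work would join the DOT data with Census population and economic indicators.

```python
# Hypothetical county-level figures; in practice the crash counts would
# come from DOT's fatal-crash dataset and populations from Census data.
counties = {
    "County A": {"fatal_crashes": 42, "population": 95_000},
    "County B": {"fatal_crashes": 18, "population": 410_000},
    "County C": {"fatal_crashes": 55, "population": 120_000},
}

def rate_per_100k(stats):
    """Fatal crashes per 100,000 residents."""
    return stats["fatal_crashes"] / stats["population"] * 100_000

def higher_risk(counties, threshold=40.0):
    """Flag counties whose per-capita fatal-crash rate exceeds a threshold."""
    return sorted(name for name, stats in counties.items()
                  if rate_per_100k(stats) > threshold)
```

Normalizing matters: County B has fewer raw crashes than County A but is far safer per resident, which raw counts alone would hide.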

Questioning Big Data: Crowdsourcing crisis data towards an inclusive humanitarian response


Femke Mulder, Julie Ferguson, Peter Groenewegen, Kees Boersma, and Jeroen Wolbers in Big Data and Society: “The aim of this paper is to critically explore whether crowdsourced Big Data enables an inclusive humanitarian response at times of crisis. We argue that all data, including Big Data, are socially constructed artefacts that reflect the contexts and processes of their creation. To support our argument, we qualitatively analysed the process of ‘Big Data making’ that occurred by way of crowdsourcing through open data platforms, in the context of two specific humanitarian crises, namely the 2010 earthquake in Haiti and the 2015 earthquake in Nepal. We show that the process of creating Big Data from local and global sources of knowledge entails the transformation of information as it moves from one distinct group of contributors to the next. The implication of this transformation is that locally based, affected people, often the original ‘crowd’, are excluded from the information flow and from the interpretation of crowdsourced crisis knowledge used by formal responding organizations, and are marginalized in their ability to benefit from Big Data in support of their own means. Our paper contributes a critical perspective to the debate on participatory Big Data by explaining the process of inclusion and exclusion during data making, towards more responsive humanitarian relief….(More)”.

Make Data Sharing Routine to Prepare for Public Health Emergencies


Jean-Paul Chretien, Caitlin M. Rivers, and Michael A. Johansson in PLOS Medicine: “In February 2016, Wellcome Trust organized a pledge among leading scientific organizations and health agencies encouraging researchers to release data relevant to the Zika outbreak as rapidly and widely as possible [1]. This initiative echoed a September 2015 World Health Organization (WHO) consultation that assessed data sharing during the recent West Africa Ebola outbreak and called on researchers to make data publicly available during public health emergencies [2]. These statements were necessary because the traditional way of communicating research results—publication in peer-reviewed journals, often months or years after data collection—is too slow during an emergency.

The acute health threat of outbreaks provides a strong argument for more complete, quick, and broad sharing of research data during emergencies. But the Ebola and Zika outbreaks suggest that data sharing cannot be limited to emergencies without compromising emergency preparedness. To prepare for future outbreaks, the scientific community should expand data sharing for all health research….

Open data deserves recognition and support as a key component of emergency preparedness. Initiatives to facilitate discovery of datasets and track their use [40–42]; provide measures of academic contribution, including data sharing that enables secondary analysis [43]; establish common platforms for sharing and integrating research data [44]; and improve data-sharing capacity in resource-limited areas [45] are critical to improving preparedness and response.

Research sponsors, scholarly journals, and collaborative research networks can leverage these new opportunities with enhanced data-sharing requirements for both nonemergency and emergency settings. A proposal to amend the International Health Regulations with clear codes of practice for data sharing warrants serious consideration [46]. Any new requirements should allow scientists to conduct and communicate the results of secondary analyses, broadening the scope of inquiry and catalyzing discovery. Publication embargo periods, such as one under consideration for genetic sequences of pandemic-potential influenza viruses [47], may lower barriers to data sharing but may also slow the timely use of data for public health.

Integrating open science approaches into routine research should make data sharing more effective during emergencies, but this evolution is more than just practice for emergencies. The cause and context of the next outbreak are unknowable; research that seems routine now may be critical tomorrow. Establishing openness as the standard will help build the scientific foundation needed to contain the next outbreak.

Recent epidemics were surprises—Zika and chikungunya sweeping through the Americas; an Ebola epidemic with more than 10,000 deaths; the emergence of severe acute respiratory syndrome and Middle East respiratory syndrome; and an influenza pandemic (influenza A(H1N1)pdm09) originating in Mexico—and we can be sure there are more surprises to come. Opening all research provides the best chance to accelerate discovery and development that will help during the next surprise….(More)”

Managing Federal Information as a Strategic Resource


White House: “Today the Office of Management and Budget (OMB) is releasing an update to the Federal Government’s governing document for the management of Federal information resources: Circular A-130, Managing Information as a Strategic Resource.

The way we manage information technology (IT), security, data governance, and privacy has rapidly evolved since A-130 was last updated in 2000.  In today’s digital world, we are creating and collecting large volumes of data to carry out the Federal Government’s various missions to serve the American people.  This data is duplicated, stored, processed, analyzed, and transferred with ease.  As government continues to digitize, we must ensure we manage data not only to keep it secure, but also to harness this information to provide the best possible service to our citizens.

Today’s update to Circular A-130 gathers in one resource a wide range of policy updates for Federal agencies regarding cybersecurity, information governance, privacy, records management, open data, and acquisitions.  It also establishes general policy for IT planning and budgeting through governance, acquisition, and management of Federal information, personnel, equipment, funds, IT resources, and supporting infrastructure and services.  In particular, A-130 focuses on three key elements to help spur innovation throughout the government:

  • Real Time Knowledge of the Environment.  In today’s rapidly changing environment, threats and technology are evolving at previously unimagined speeds.  In such a setting, the Government cannot afford to authorize a system and not look at it again for years at a time.  In order to keep pace, we must move away from periodic, compliance-driven assessment exercises and, instead, continuously assess our systems and build in security and privacy with every update and re-design.  Throughout the Circular, we make clear the shift away from checklist exercises and toward the ongoing monitoring, assessment, and evaluation of Federal information resources.
  • Proactive Risk Management.  To keep pace with the needs of citizens, we must constantly innovate.  As part of such efforts, however, the Federal Government must modernize the way it identifies, categorizes, and handles risk to ensure both privacy and security.  Significant increases in the volume of data processed and utilized by Federal resources require new ways of storing, transferring, and managing it.  Circular A-130 emphasizes the need for strong data governance that encourages agencies to proactively identify risks, determine practical and implementable solutions to address those risks, and implement and continually test the solutions.  This repeated testing of agency solutions will help to proactively identify additional risks, starting the process anew.
  • Shared Responsibility.  Citizens are connecting with each other in ways never before imagined.  From social media to email, the connectivity we have with one another can lead to tremendous advances.  The updated A-130 helps to ensure everyone remains responsible and accountable for assuring the privacy and security of information – from managers to employees to citizens interacting with government services. …(More)”