Can Open Data Drive Innovative Healthcare?


Will Greene at Huffington Post: “As healthcare systems worldwide become increasingly digitized, medical scientists and health researchers have more data than ever. Yet much valuable health information remains locked in proprietary or hidden databases. A growing number of open data initiatives aim to change this, but it won’t be easy….

To overcome these challenges, a growing array of stakeholders — including healthcare and tech companies, research institutions, NGOs, universities, governments, patient groups, and individuals — are banding together to develop new regulations and guidelines, and generally promote open data in healthcare.

Some of these initiatives focus on improving transparency in clinical trials. Among those pushing for researchers to share more clinical trials data are groups like AllTrials and the Yale Open Data Access (YODA) Project, donor organizations like the Gates Foundation, and biomedical journals like The BMJ. Private healthcare companies, including some that resisted data sharing in the past, are increasingly seeing value in open collaboration as well.

Other initiatives focus on empowering patients to share their own health data. Consumer genomics companies, personal health records providers, disease management apps, online patient communities and other healthcare services give patients greater access to personal health data than ever before. Some also allow consumers to share it with researchers, enroll in clinical trials, or find other ways to leverage it for the benefit of others.

Another group of initiatives seeks to improve the quality and availability of public health data, such as that pertaining to epidemiological trends, health financing, and human behavior.

Governments often play a key role in collecting this kind of data, but some are more open and effective than others. “Open government is about more than a mere commitment to share data,” says Peter Speyer, Chief Data and Technology Officer at the Institute for Health Metrics and Evaluation (IHME), a health research center at the University of Washington. “It’s also about supporting a whole ecosystem for using these data and tapping into creativity and resources that are not available within any single organization.”

Open data may be particularly important in managing infectious disease outbreaks and other public health emergencies. Following the recent Ebola crisis, the World Health Organization issued a statement on the need for rapid data sharing in emergency situations. It laid out guidelines that could help save lives when the next pandemic strikes.

But on its own, open data does not lead to healthcare innovation. “Simply making large amounts of data accessible is good for transparency and trust,” says Craig Lipset, Head of Clinical Innovation at Pfizer, “but it does not inherently improve R&D or health research. We still need important collaborations and partnerships that make full use of these vast data stores.”

Many such collaborations and partnerships are already underway. They may help drive a new era of healthcare innovation…(More)”

Smoke Signals: Open data & analytics for preventing fire deaths


Enigma: “Today we are launching Smoke Signals, an open source civic analytics tool that helps local communities determine which city blocks are at the highest risk of not having a smoke alarm.

25,000 people are killed or injured in 1 million fires across the United States each year. Of the more than 130 million housing units across the country, 4.5 million do not have smoke detectors, placing their inhabitants at substantial risk. Driving this number down is the single most important factor in saving lives put at risk by fire.

Organizations like the Red Cross are investing a lot of resources to buy and install smoke alarms in people’s homes. But a big challenge remains: in a city of millions, what doors should you knock on first when conducting an outreach effort?

We began working on the problem of targeting the blocks at highest risk of not having a smoke alarm with the City of New Orleans last spring. (You can read about this work here.) Over the past few months, with collaboration from the Red Cross and DataKind, we’ve built out a generalized model and a set of tools to offer the same analytics potential to 178 American cities, all in a way that is simple to use and sensitive to how on-the-ground operations are organized.

We believe that Smoke Signals is more a collection of tools and collaborations than it is a slick piece of software that can somehow act as a panacea to the problem of fire fatalities. Core to its purpose and mission are a set of commitments:

  • an ongoing collaboration with the Red Cross wherein our smoke alarm work informs their on-the-ground outreach
  • a collaboration with DataKind to continue applying volunteer work to the improvement of the underlying models and data that drive the risk analysis
  • a working relationship with major American cities to help integrate our prediction models into their outreach programs

and tools:

  • a downloadable CSV for each of 178 American municipalities that associates city streets with risk scores
  • an interactive map for an immediate bird’s eye assessment of at-risk city blocks
  • an API endpoint to which users can upload a CSV of local fire incidents in order to improve scores for their area
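Once downloaded, the per-street risk scores lend themselves to simple scripting. As a minimal sketch, assuming a CSV with hypothetical `street_block` and `risk_score` columns (not necessarily Smoke Signals’ actual schema), an outreach team could rank the blocks to visit first:

```python
import csv

def top_risk_blocks(path, n=10):
    """Return the n street blocks with the highest risk scores,
    i.e. the doors an outreach effort should knock on first."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    # Sort descending by risk score so the riskiest blocks come first.
    rows.sort(key=lambda r: float(r["risk_score"]), reverse=True)
    return [(r["street_block"], float(r["risk_score"])) for r in rows[:n]]
```

The same ranking could then inform which incident data to upload back through the API endpoint, improving scores for the next round of canvassing.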

We believe this is an important contribution to public safety and the better delivery of government services. However, we also consider it a work in progress, a demonstration of how civic analytic solutions can be shared and generalized across the country. We are open sourcing all of the components that went into it and invite anyone with an interest in making it better to get involved….(More)”

Data Collaboratives: Sharing Public Data in Private Hands for Social Good


Beth Simone Noveck (The GovLab) in Forbes: “Sensor-rich consumer electronics such as mobile phones, wearable devices, commercial cameras and even cars are collecting zettabytes of data about the environment and about us. According to one McKinsey study, the volume of data is growing at fifty percent a year. No one needs convincing that these private storehouses of information represent a goldmine for business, but these data can do double duty as rich social assets—if they are shared wisely.

Think about a couple of recent examples: Sharing data held by businesses and corporations (i.e. public data in private hands) can help to improve policy interventions. California planners make water allocation decisions based upon expertise, data and analytical tools from public and private sources, including Intel, the Earth Research Institute at the University of California at Santa Barbara, and the World Food Center at the University of California at Davis.

In Europe, several phone companies have made anonymized datasets available, making it possible for researchers to track calling and commuting patterns and gain better insight into social problems from unemployment to mental health. In the United States, LinkedIn is providing free data about demand for IT jobs in different markets which, when combined with open data from the Department of Labor, helps communities target efforts around training….

Despite the promise of data sharing, these kinds of data collaboratives remain relatively new. There is a need to accelerate their use by giving companies strong tax incentives for sharing data for public good. There’s a need for more study to identify models for data sharing in ways that respect personal privacy and security and enable companies to do well by doing good. My colleagues at The GovLab together with UN Global Pulse and the University of Leiden, for example, published this initial analysis of terms and conditions used when exchanging data as part of a prize-backed challenge. We also need philanthropy to start putting money into “meta research”; it’s not going to be enough to just open up databases: we need to know if the data is good.

After years of growing disenchantment with closed-door institutions, the push for greater use of data in governing can be seen as both a response and a mirror to the Big Data revolution in business. Although more than 1,000,000 government datasets about everything from air quality to farmers markets are openly available online in downloadable formats, much of the data about environmental, biometric, epidemiological, and physical conditions rests in private hands. Governing better requires a new empiricism for developing solutions together. That will depend on access to these private, not just public, data….(More)”

Openness an Essential Building Block for Inclusive Societies


(Mexico) in the Huffington Post: “The international community faces a complex environment that requires transforming the way we govern. In that sense, 2015 marks a historic milestone, as 193 Member States of the United Nations will come together to agree on the adoption of the 2030 Agenda. With the definition of the 17 Sustainable Development Goals (SDGs), we will set an ambitious course toward a better and more inclusive world for the next 15 years.

The SDGs will be established just as governments deal with new and more demanding challenges, which require increased collaboration with multiple stakeholders to deliver innovative solutions. For that reason, cutting-edge technologies, fueled by vast amounts of data, provide an efficient platform to foster a global transformation and consolidate more responsive, collaborative and open governments.

Goal 16 seeks to promote just, peaceful and inclusive societies by ensuring access to public information, strengthening the rule of law, and building stronger and more accountable institutions. By doing so, we will contribute to successfully achieving the rest of the 2030 Agenda’s objectives.

During the 70th United Nations General Assembly, the 11 countries of the Steering Committee of the Open Government Partnership (OGP), along with civil-society leaders, will gather to acknowledge Goal 16 as a common target through a Joint Declaration: Open Government for the Implementation of the 2030 Agenda for Sustainable Development. As the Global Summit of OGP convenes this year in Mexico City, on October 28th and 29th, my government will call on all 65 members to subscribe to this fundamental declaration.

The SDGs will be reached only through trustworthy, effective and inclusive institutions. This is why Mexico, as current chair of the OGP, has committed to promote citizen participation, innovative policies, transparency and accountability.

Furthermore, we have worked with a global community of key players to develop the international Open Data Charter (ODC), which sets the founding principles for a greater coherence and increased use of open data across the world. We seek to recognize the value of having timely, comprehensive, accessible, and comparable data to improve governance and citizen engagement, as well as to foster inclusive development and innovation….(More)”

Algorithm predicts and prevents train delays two hours in advance


Springwise: “Transport apps such as Ototo make it easier than ever for passengers to stay informed about problems with public transport, but real-time information can only help so much — by the time users find out about a delayed service, it is often too late to take an alternative route. Now, Stockholmstag — the company that runs Sweden’s trains — has found a solution in the form of an algorithm called ‘The Commuter Prognosis’, which can predict network delays up to two hours in advance, giving train operators time to issue extra services or provide travelers with adequate warning.
The system was created by mathematician Wilhelm Landerholm. It uses historical data to predict how a small delay, even as little as two minutes, will affect the running of the rest of the network. Often the initial late train causes a ripple effect: subsequent services are delayed to accommodate the new platform arrival times, which in turn delays the trains behind them, and so on. But soon, using ‘The Commuter Prognosis’, Stockholmstag train operators will be able to make the necessary adjustments to prevent this. In addition, the information will be relayed to commuters, enabling them to take a different train and thereby reducing overcrowding. The prediction tool is expected to be put into use in Sweden by the end of the year….(More)”
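The ripple effect the excerpt describes can be made concrete with a toy model: each train needs a minimum headway behind the one ahead, so a single primary delay pushes back a chain of later arrivals until schedule slack absorbs it. This is an illustrative sketch only, not Landerholm’s actual model:

```python
def propagate_delay(scheduled, primary_delays, headway=2.0):
    """Compute actual arrival times (in minutes) at a shared platform.

    scheduled:      planned arrival times, in order of arrival.
    primary_delays: {train index: delay in minutes} for directly delayed trains.
    headway:        minimum separation enforced between consecutive trains.
    """
    actual = []
    for i, planned in enumerate(scheduled):
        arrival = float(planned) + primary_delays.get(i, 0.0)
        if actual:
            # A train cannot arrive until the previous one has cleared the platform.
            arrival = max(arrival, actual[-1] + headway)
        actual.append(arrival)
    return actual

# A 4-minute primary delay on the first train ripples down the line,
# shrinking by one minute per train as schedule slack absorbs it:
print(propagate_delay([0, 3, 6, 9], {0: 4}))  # [4.0, 6.0, 8.0, 10.0]
```

Running such a model forward from live data is what would let operators see, two hours out, which services to reinforce or warn passengers about.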

Open data is not just for startups


Mike Altendorf at CIO: “…Surely open data is just for start-ups, market research companies and people that want to save the world? Well there are two reasons why I wanted to dedicate a bit of time to the subject of open data. First, one of the major barriers to internal innovation that I hear about all the time is the inability to use internal data to inform that innovation. This is usually because data is deemed too sensitive, too complex, too siloed or too difficult to make usable. Leaving aside the issues that any of those problems are going to cause for the organisation more generally, it is easy to see how this can create a problem. So why not use someone else’s data?

The point of creating internal labs and innovation centres is to explore the art of the possible. I quite agree that insight from your own data is a good place to start but it isn’t the only place. You could also argue that by using your own data you are restricting your thinking because you are only looking at information that already relates to your business. If the point of a lab is to explore ideas for supporting the business then you may be better off looking outwards at what is happening in the world around you rather than inwards into the constrained world of the industry you already inhabit….

The fact is there are vast amounts of freely available data sets that can be made to work for you if you can just apply the creativity and technical smarts to them.

My second point is less about open data than about opening up data. Organisations collect information on their business operations, customers and suppliers all the time. The smart ones know how to use it to build competitive advantage but the really smart ones also know that there is significant extra value to be gained from sharing that data with the customer or supplier that it relates to. The customer or supplier can then use it to make informed decisions themselves. Some organisations have been doing this for a while. Customers of First Direct have been able to analyse their own spending patterns for years (although the data has been somewhat limited). The benefit to the customer is that they can make informed decisions based on actual data about their past behaviours and so adapt their spending habits accordingly (or put their head firmly in the sand and carry on as before in my case!). The benefit to the bank is that they are able to suggest ideas for how to improve a customer’s financial health alongside the data. Others have looked at how they can help customers by sharing (anonymised) information about what people with similar lifestyles/needs are doing/buying so customers can learn from each other. Trials have shown that customers welcomed the insight….(More)”

 

Sustainable Value of Open Government Data


PhD thesis by Thorhildur Jetzek: “The impact of the digital revolution on our societies can be compared to the ripples caused by a stone thrown in water: spreading outwards and affecting a larger and larger part of our lives with every year that passes. One of the many effects of this revolution is the emergence of an unprecedented amount of digital data that is accumulating exponentially. Moreover, a central affordance of digitization is the ability to distribute, share and collaborate, and we have thus seen an “open theme” gaining currency in recent years. These trends are reflected in the explosion of Open Data Initiatives (ODIs) around the world. However, while hundreds of national and local governments have established open data portals, there is a general feeling that these ODIs have not yet lived up to their true potential. This feeling is not without good reason; the recent Open Data Barometer report highlights that strong evidence on the impacts of open government data is almost universally lacking (Davies, 2013). This lack of evidence is disconcerting for government organizations that have already expended money on opening data, and might even result in the termination of some ODIs. It also raises some relevant questions regarding the nature of value generation in the context of free data and sharing of information over networks. Do we have the right methods, the right intellectual tools, to understand and reflect the value that is generated in such ecosystems?

This PhD study addresses the question of How is value generated from open data? through a mixed methods, macro-level approach. For the qualitative analysis, I have conducted two longitudinal case studies in two different contexts. The first is the case of the Basic Data Program (BDP), which is a Danish ODI. For this case, I studied the supply side of open data publication, from the creation of an open data strategy through to the dissemination and use of data. The second case is a demand-side study of the energy tech company Opower. Opower has been an open data user for many years and has used open data to create and disseminate personalized information on energy use. This information has already contributed to a measurable worldwide reduction in CO2 emissions as well as monetary savings. Furthermore, to complement the insights from these two cases I analyzed quantitative data from 76 countries over the years 2012 and 2013. I have used these diverse sources of data to uncover the most important relationships and mechanisms that can explain how open data are used to generate sustainable value….(More)”

Revolution Delayed: The Impact of Open Data on the Fight against Corruption


Report by RiSSC – Research Centre on Security and Crime (Italy): “In recent years, the demand for Open Data has picked up steam among stakeholders seeking to increase the transparency and accountability of the Public Sector. Governments are supporting Open Data supply to achieve social and economic benefits, return on investments, and political consensus.

While it is self-evident that Open Data contributes to greater transparency – as it makes data more available and easier to use by the public and governments – its impact on fighting corruption largely depends on the ability to analyse it and to develop initiatives that trigger both social accountability mechanisms and government responsiveness against illicit or inappropriate behaviours.

To date, the Open Data revolution against corruption has been delayed. The impact of Open Data on the prevention and repression of corruption, and on the development of anti-corruption tools, appears to be limited, and the return on investments is not yet forthcoming. Evidence remains anecdotal, and a better understanding of the mechanisms and dynamics of using Open Data against corruption is needed.

The overall objective of this exploratory study is to provide evidence on the results achieved by Open Data, and recommendations for the European Commission and Member States’ authorities, for the implementation of effective anti-corruption strategies based on transparency and openness, to unlock the potential impact of “Open Data revolution” against Corruption.

The project has explored the legal framework and the status of implementation of Open Data policies in four EU countries – Italy, the United Kingdom, Spain, and Austria. The TACOD project has searched for evidence of Open Data’s role in law enforcement cooperation, anti-corruption initiatives, public campaigns, and investigative journalism against corruption.

RiSSC – Research Centre on Security and Crime (Italy), the University of Oxford and the University of Nottingham (United Kingdom), Transparency International (Italy and United Kingdom), the Institute for Conflict Resolution (Austria), and Blomeyer&Sanz (Spain) carried out the research between January 2014 and February 2015, under an agreement with the European Commission – DG Migration and Home Affairs. The project was coordinated by RiSSC, with the support of a European Working Group of Experts, chaired by Prof. Richard Rose, and an external evaluator, Mr. Andrea Menapace, and it has benefited from the contribution of many experts, activists, and representatives of institutions in the four countries….(More)”

What should governments require for their open data portals?


Luke Fretwell at GovFresh: “Johns Hopkins University’s new Center for Government Excellence is developing a much-needed open data portal requirements resource to serve as a “set of sample requirements to help governments evaluate, develop (or procure), deploy, and launch an open data web site (portal).”

As many governments ramp up their open data initiatives, this is an important project in that we often see open data platform decisions being made without a holistic approach and awareness of what government should purchase (or have the flexibility to develop on its own).

“The idea here is that any interested city can use this as a baseline and make their own adjustments before proceeding,” said GovEx Director of Open Data Andrew Nicklin via email. “Perhaps with this we can create some common denominators amongst open data portals and eventually push the whole movement forwards.”

My fundamental suggestion is that government-run open data platforms be fully open source. There are a number of technical and financial reasons for this, which I will address in the future, but I believe strongly that if the platform you’re hosting data on doesn’t adhere to the same licensing standards you hold for your data, you’re only doing open data half right.

With both CKAN and DKAN continuing to grow in adoption, we’re seeing an emergence of reliable solutions that adequately meet the same technical and procurement requirements as proprietary options (full disclosure: I work with NuCivic on DKAN and NuCivic Data).
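One concrete payoff of an open-source platform like CKAN is its documented Action API, which exposes the dataset catalogue over plain HTTP. A minimal sketch of querying it (the demo portal URL is illustrative; `package_search` is a real CKAN v3 action):

```python
import json
from urllib.parse import urlencode

def package_search_url(site, query, rows=5):
    """Build a CKAN v3 Action API URL searching the dataset catalogue."""
    return f"{site}/api/3/action/package_search?" + urlencode({"q": query, "rows": rows})

def dataset_titles(response_body):
    """Pull dataset titles out of a package_search JSON response."""
    payload = json.loads(response_body)
    if not payload.get("success"):
        raise RuntimeError("CKAN API call reported failure")
    return [pkg["title"] for pkg in payload["result"]["results"]]

# Fetching is a single urllib.request.urlopen() call on the URL below:
print(package_search_url("https://demo.ckan.org", "fire incidents"))
# → https://demo.ckan.org/api/3/action/package_search?q=fire+incidents&rows=5
```

Because the API is open and stable across CKAN installations, the same client code works against any portal running the platform, which is exactly the kind of common denominator the GovEx baseline aims for.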

Learn more about the GovEx open data portal standards project”

Towards decision support for disclosing data: Closed or open data?


Article by Anneke Zuiderwijk and Marijn Janssen in Information Polity: “The disclosure of open government data is a complex activity that may create public value yet might also encounter risks, such as the misinterpretation and misuse of data. While politicians support data release and assume that the positive value of open data will dominate, many governmental organizations are reluctant to open their data, as they are afraid of the dark side. The objective of this paper is to provide a decision-making model that assists in trade-offs between the pros and cons of open data. Data disclosure is dependent on the type of data (e.g. its sensitivity, structure and quality) and the context (e.g. organizational policies, legislation and political influences). Based on the literature and fifteen in-depth interviews with public sector officials and data archivists, this paper identifies contextual and dataset-related variables which influence a trade-off. A decision-making model is presented capturing trade-offs, and in this way providing guidance for weighing the creation of public value and the risks. The model can be used for decision-making to open or not to open data. It is likely that the decision regarding which data should be opened or closed will shift over time….(More)”