Secondary use of health data in Europe


Report by Mark Boyd, Dr Milly Zimeta, Dr Jeni Tennison and Mahad Alassow: “Open and trusted health data systems can help Europe respond to the many urgent challenges facing its society and economy today. The global pandemic has already altered many of our societal and economic systems, and data has played a key role in enabling cross-border and cross-sector collaboration in public health responses.

Even before the pandemic, there was an urgent need to optimise healthcare systems and manage limited resources more effectively, to meet the needs of growing, and often ageing, populations. Now, there is a heightened need to develop early-diagnostic and health-surveillance systems, and more willingness to adopt digital healthcare solutions…

By reusing health data in different ways, we can increase the value of this data and help to enable these improvements. Clinical data, such as records of healthcare encounters and clinical-trials data, can be combined with data collected from other sources, such as sickness and insurance claims records, and from devices and wearable technologies. This data can then be anonymised and aggregated to generate new insights and optimise population health, improve patients’ health and experiences, create more efficient healthcare systems, and foster innovation.
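
As a rough illustration of the anonymise-and-aggregate step described above, the sketch below groups record-level data into coarse cohorts and suppresses small cells, a common k-anonymity-style safeguard. The column names, values, and threshold are all hypothetical.

```python
import pandas as pd

# Hypothetical record-level data combining clinical and wearable-device signals
records = pd.DataFrame({
    "age_band":        ["40-49", "40-49", "50-59", "50-59", "50-59", "60-69"],
    "region":          ["North", "North", "South", "South", "South", "North"],
    "condition":       ["diabetes"] * 6,
    "avg_daily_steps": [6200, 5400, 4800, 5100, 4500, 3900],
})

K_THRESHOLD = 3  # suppress any cohort smaller than this (hypothetical policy choice)

# Drop direct identifiers entirely and aggregate to cohort-level statistics
summary = (
    records
    .groupby(["age_band", "region", "condition"])
    .agg(patients=("avg_daily_steps", "size"),
         mean_steps=("avg_daily_steps", "mean"))
    .reset_index()
)

# Small-cell suppression: cohorts below the threshold are withheld, not published
summary = summary[summary["patients"] >= K_THRESHOLD]
print(summary)
```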

This secondary use of health data can enable a wide range of benefits across the entire healthcare system. These include opportunities to optimise services, reduce health inequalities by better allocating resources, and enhance personalised healthcare – for example, by comparing treatments for people with similar characteristics. It can also help encourage innovation by extending research data to assess whether new therapies would work for a broader population…(More)”.

Government data management for the digital age


Essay by Axel Domeyer, Solveigh Hieronimus, Julia Klier, and Thomas Weber: “Digital society’s lifeblood is data—and governments have lots of data, representing a significant latent source of value for both the public and private sectors. If used effectively, and keeping in mind ever-increasing requirements with regard to data protection and data privacy, data can simplify delivery of public services, reduce fraud and human error, and catalyze massive operational efficiencies.

Despite these potential benefits, governments around the world remain largely unable to capture the opportunity. The key reason is that data are typically dispersed across a fragmented landscape of registers (datasets used by government entities for a specific purpose), which are often managed in organizational silos. Data are routinely stored in formats that are hard to process or in places where digital access is impossible. The consequence is that data are not available where needed, progress on digital government is inhibited, and citizens have little transparency on what data the government stores about them or how it is used.

Only a handful of countries have taken significant steps toward addressing these challenges. As other governments consider their options, the experiences of these countries may provide them with valuable guidance and also reveal five actions that can help governments unlock the value that is on their doorsteps.

As societies take steps to enhance data management, questions on topics such as data ownership, privacy concerns, and appropriate measures against security breaches will need to be answered by each government. The purpose of this article is to outline the positive benefits of modern data management and provide a perspective on how to get there…(More)”.

Little Rock Shows How Open Data Drives Resident Engagement


Blog by Ross Schwartz: “The 12th Street corridor is in the heart of Little Rock, stretching west from downtown across multiple neighborhoods. But for years the area has suffered from high crime rates and disinvestment, and is considered a food desert.

With the intention of improving public safety and supporting efforts to revitalize the area, the City built a new police station on the street in 2014. And in the years following, as city staff ramped up efforts to place data at the center of problem-solving, it began to hold two-day-long “Data Academy” trainings for city employees and residents on foundational data practices, including data analysis.

Responding to public safety concerns, a 2018 Data Academy training focused on 12th Street. A cross-department team dug into data sets to understand the challenges facing the area, looking at variables including crime, building code violations, and poverty. It turned out the neighborhood with the highest levels of crime and blight was actually blocks away from 12th Street itself, in Midtown. A predominantly African-American neighborhood just east of the University of Arkansas at Little Rock campus, Midtown has a mix of older longtime homeowners and younger renters.

“It was a real data-driven ‘a-ha’ moment — an example of what you can understand about a city if you have the right data sets and look in the right places,” says Melissa Bridges, Little Rock’s performance and innovation coordinator. With support from What Works Cities (WWC), for the last five years she’s led Little Rock’s efforts to build open data and performance measurement resources and infrastructure…

Newly aware of Midtown’s challenges, city officials decided to engage residents in the neighborhood and adjacent areas. Data Academy members hosted a human-centered design workshop, during which residents were given the opportunity to self-prioritize their pressing concerns. Rather than lead the workshop, officials from various city departments quietly observed the discussion.

The main issue that emerged? Many parts of Midtown were poorly lit due to broken or blocked streetlights. Many residents didn’t feel safe and didn’t know how to alert the City to get lights fixed or vegetation cut back. A review of 311 request data showed that few streetlight problems in the area were ever reported to the City.
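
A check like the one described above, asking how often a given problem actually surfaces in 311 data for one neighborhood, is straightforward once requests are exported. A minimal sketch, in which the file name, column names, and category labels are all hypothetical:

```python
import pandas as pd

# Hypothetical export of 311 service requests (real field names vary by city)
requests_311 = pd.read_csv("little_rock_311_requests.csv",
                           parse_dates=["created_date"])

# Streetlight-related requests from the Midtown area over one year
midtown_lights = requests_311[
    (requests_311["category"] == "Streetlight Outage")
    & (requests_311["neighborhood"] == "Midtown")
    & (requests_311["created_date"].dt.year == 2018)
]

# Citywide volume for the same category, to see whether the area under-reports
citywide_lights = requests_311[
    (requests_311["category"] == "Streetlight Outage")
    & (requests_311["created_date"].dt.year == 2018)
]

print(f"Midtown streetlight requests in 2018: {len(midtown_lights)}")
print(f"Citywide streetlight requests in 2018: {len(citywide_lights)}")
```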

Aware of studies showing the correlation between dark streets and crime, the City designed a streetlight canvassing project in partnership with area neighborhood associations to engage and empower residents. Bridges and her team built canvassing route maps using Google Maps and Little Rock Citizen Connect, which collects 311 requests and other data sets. Then they gathered resident volunteers to walk or drive Midtown’s streets on a Friday night, using the City’s 311 mobile app to make a light service request and tag the location…(More)”.

New report confirms positive momentum for EU open science


Press release: “The Commission released the results and datasets of a study monitoring the open access mandate in Horizon 2020. With a steady increase over the years and an average open access rate of 83% for scientific publications, the European Commission is at the forefront of research and innovation funders, concluded the consortium formed by the analysis company PPMI (Lithuania), the research and innovation centre Athena (Greece) and Maastricht University (the Netherlands).

The Commission sought advice on a process and reliable metrics through which to monitor all aspects of the open access requirements in Horizon 2020, and to inform how best to do so for Horizon Europe – which has a more stringent and comprehensive set of rights and obligations for Open Science.

The key findings of the study indicate that the European Commission’s early leadership in Open Science policy has paid off. The Excellent Science pillar in Horizon 2020 has led the success story, with an open access rate of 86%. Among the leaders within this pillar are the European Research Council (ERC) and the Future and Emerging Technologies (FET) programme, with open access rates of over 88%.

Other interesting facts:

  • In terms of article processing charges (APCs), the study estimated the average cost in Horizon 2020 of publishing an open access article to be around EUR 2,200. APCs for articles published in ‘hybrid’ journals (a cost that will no longer be eligible under Horizon Europe) have a higher average cost of EUR 2,600.
  • Compliance in terms of depositing open access publications in a repository (even when publishing open access through a journal) is relatively high (81.9%), indicating that the current policy of depositing is well understood and implemented by researchers.
  • Regarding licences, 49% of Horizon 2020 publications were published using Creative Commons (CC) licences, which permit reuse (with various levels of restrictions), while 33% used publisher-specific licences that place restrictions on text and data mining (TDM).
  • Institutional repositories have responded in a satisfactory manner to the challenge of providing FAIR access to their publications, amending internal processes and metadata to incorporate necessary changes: 95% of deposited publications include in their metadata some type of persistent identifier (PID).
  • Datasets in repositories present a low compliance level, as only approximately 39% of Horizon 2020 deposited datasets are findable (i.e., the metadata includes a PID and URL to the data file), and only around 32% of deposited datasets are accessible (i.e., the data file can be fetched using a URL link in the metadata); a minimal sketch of these two checks follows this list. Horizon Europe will hopefully deliver better results.
  • The study also identified gaps in the existing Horizon 2020 open access monitoring data, which pose further difficulties in assessing compliance. Self-reporting by beneficiaries also highlighted a number of issues…(More)”
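
As a minimal sketch of the findability and accessibility checks defined in the bullets above (the metadata fields and the example record are hypothetical; real repository schemas differ):

```python
import requests

def is_findable(metadata: dict) -> bool:
    """Findable, per the study's working definition: the metadata carries
    a persistent identifier (PID) and a URL pointing to the data file."""
    return bool(metadata.get("pid")) and bool(metadata.get("data_url"))

def is_accessible(metadata: dict, timeout: int = 10) -> bool:
    """Accessible: the data file can actually be fetched from the URL."""
    url = metadata.get("data_url")
    if not url:
        return False
    try:
        response = requests.head(url, allow_redirects=True, timeout=timeout)
        return response.status_code == 200
    except requests.RequestException:
        return False

# Hypothetical deposited-dataset record (fields vary across real repositories)
record = {"pid": "doi:10.1234/placeholder", "data_url": "https://example.org/dataset.csv"}
print(is_findable(record), is_accessible(record))
```

A production monitor would also need to handle repositories that reject HEAD requests or require authentication before serving files.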

No revolution: COVID-19 boosted open access, but preprints are only a fraction of pandemic papers


Article by Jeffrey Brainard: “In January 2020, as COVID-19 spread insidiously, research funders and journal publishers recognized their old ways wouldn’t do. They needed to hit the gas pedal to meet the desperate need for information that could help slow the disease.

One major funder, the Wellcome Trust, issued a call for changing business as usual. Authors should put up COVID-19 manuscripts as preprints, it urged, because those are publicly posted shortly after they’re written, before being peer reviewed. Scientists should share their data widely. And publishers should make journal articles open access, or free to read immediately when published.

Dozens of the world’s leading funders, publishers, and scientific societies (including AAAS, publisher of Science) signed Wellcome’s statement. Critics of the tradition-bound world of scientific publishing saw a rare opportunity to tackle long-standing complaints—for example, that journals place many papers behind paywalls and take months to complete peer review. They hoped the pandemic could help birth a new publishing system.

But nearly 2 years later, hopes for a wholesale revolution are fading. Preprints by medical researchers surged, but they remain a small fraction of the literature on COVID-19. Much of that literature is available for free, but access to the underlying data is spotty. COVID-19 journal articles were reviewed faster than previous papers, but not dramatically so, and some ask whether that gain in speed came at the expense of quality. “The overall system demonstrated what could be possible,” says Judy Luther, president of Informed Strategies, a publishing consulting firm.

One thing is clear. The pandemic prompted an avalanche of new papers: more than 530,000, released either by journals or as preprints, according to the Dimensions bibliometric database. That fed the largest 1-year increase in all scholarly articles, and the largest annual total ever. That response is “bonkers,” says Vincent Larivière of the University of Montreal, who studies scholarly publishing. “Everyone had to have their COVID moment and write something.”…(More)”.

The Innovation Project: Can advanced data science methods be a game-changer for data sharing?


Report by JIPS (Joint Internal Displacement Profiling Service): “Much has changed in the humanitarian data landscape in the last decade, and not primarily with the arrival of big data and artificial intelligence. Mostly, the changes are due to increased capacity and resources to collect more data more quickly, leading to the professionalisation of information management as a domain of work. Larger amounts of data are becoming available in a more predictable way. We believe that, as the field has progressed in filling critical data gaps, the problem is no longer the availability of data, but the curation and sharing of that data between actors, as well as the use of that data to its full potential.

In 2018, JIPS embarked on an innovation journey to explore the potential of state-of-the-art technologies to incentivise data sharing and collaboration. This report covers the first phase of the innovation project and launches a series of articles in which we will share more about the innovation journey itself, discuss safe data sharing and collaboration, and look at the prototype we developed – made possible by the UNHCR Innovation Fund.

We argue that making data and insights safe and secure to share between stakeholders will allow for more efficient use of available data, reduce the resources needed to collect new data, strengthen collaboration, and foster a culture of trust in the evidence-informed protection of people in displacement and crises.

The paper first defines the problem and outlines the processes through which data is currently shared among the humanitarian community. It explores questions such as: what are the existing data sharing methods and technologies? Which ones constitute a feasible option for humanitarian and development organisations? How can different actors share and collaborate on datasets without impairing confidentiality and exposing them to disclosure threats?…(More)”.
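
One widely used family of techniques for sharing insights without exposing the underlying records, offered here only as a hedged sketch rather than as the method JIPS prototyped, is to release noisy aggregates in the spirit of differential privacy. The counts and epsilon values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise calibrated to sensitivity 1,
    the standard mechanism for differentially private counting queries."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical: number of displaced households recorded at one site
true_count = 1340
for epsilon in (0.1, 0.5, 1.0):  # smaller epsilon = stronger privacy, more noise
    print(f"epsilon={epsilon}: shared count ~ {dp_count(true_count, epsilon):.0f}")
```

Partners can then pool or compare such counts without any actor handing over its raw records.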

The “Onion Model”: A Layered Approach to Documenting How the Third Wave of Open Data Can Provide Societal Value


Blog post by Andrew Zahuranec, Andrew Young and Stefaan Verhulst: “There’s a lot that goes into data-driven decision-making. Behind the datasets, platforms, and analysts is a complex series of processes that inform what kinds of insight data can produce and what kinds of ends it can achieve. These individual processes can be hard to understand when viewed together but, by separating the stages out, we can not only track how data leads to decisions but also promote better and more impactful data management.

Earlier this year, The Open Data Policy Lab published the Third Wave of Open Data Toolkit to explore the elements of data re-use. At the center of this toolkit was an abstraction that we call the Open Data Framework. Divided into individual, onion-like layers, the framework shows all the processes that go into capitalizing on data in the third wave, moving from the creation of a dataset, through data collaboration and the creation of insights, to using those insights to produce value.

This blog revisits what’s included in each layer of this data “onion model” and demonstrates how organizations can create societal value by making their data available for re-use by other parties…(More)”.

Governing smart cities: policy benchmarks for ethical and responsible smart city development


Report by the World Economic Forum: “… provides a benchmark for cities looking to establish policies for ethical and responsible governance of their smart city programmes. It explores current practices relating to five foundational policies: ICT accessibility, privacy impact assessment, cyber accountability, digital infrastructure and open data. The findings are based on surveys and interviews with policy experts and city government officials from the Alliance’s 36 “Pioneer Cities”. The data and insights presented in the report come from an assessment of detailed policy elements rather than the high-level indicators often used in maturity frameworks…(More)”.

The Open Data Policy Lab’s City Incubator


The GovLab: “Hackathons. Data Jams. Dashboards. Mapping, analyzing, and releasing open data. These are some of the essential first steps in building a data-driven culture in government. Yet, it’s not always easy to get data projects such as these off the ground. Governments often work in difficult situations under constrained resources. They have to manage various stakeholders and constituencies who have to be sold on the value that data can generate in their daily work.

Through the Open Data Policy Lab, The GovLab and Microsoft are providing various resources — such as the Data Stewards Academy and the Third Wave of Open Data Toolkit — to support this goal. Still, we recognize that more tailored guidance is needed so cities can build new sustainable data infrastructure and launch projects that meet their policy goals.

Today, we’re providing that resource in the form of the Open Data Policy Lab’s City Incubator. A first-of-its-kind program to help data innovations in cities succeed and scale, the City Incubator will give 10 city officials hands-on training and access to mentors to take their ideas to the next level. It will enable cutting-edge work on various urban challenges and empower officials to create data collaboratives, data-sharing agreements, and other systems. This work is supported by Microsoft, Mastercard City Possible, Luminate, NYU CUSP and the Public Sector Network.

Our team is launching a call for ten city government intrapreneurs from around the world working on data-driven projects to apply to the City Incubator. Over the course of six months, participants will use start-up innovation and public-sector problem-solving frameworks to develop and launch new data innovations. They will also receive support from a council of mentors from around the world.

Applications are due August 31, with an early application deadline of August 6 for applicants looking for feedback. Applicants are expected to present their idea and include information on the value their proposal will generate, the resources it will use, the partners it will involve, and the risks it might entail alongside other information in the form of a Data Innovation Canvas. Additional information can be found on the website here.”


Financial data unbound: The value of open data for individuals and institutions


Paper by McKinsey Global Institute: “As countries around the world look to ensure rapid recovery once the COVID-19 crisis abates, improved financial services are emerging as a key element to boost growth, raise economic efficiency, and lift productivity. Robust digital financial infrastructure proved its worth during the crisis, helping governments cushion people and businesses from the economic shock of the pandemic. The next frontier is to create an open-data ecosystem for finance.

Already, technological, regulatory, and competitive forces are moving markets toward easier and safer financial data sharing. Open-data initiatives are springing up globally, including the United Kingdom’s Open Banking Implementation Entity, the European Union’s second payment services directive, Australia’s new consumer protection laws, Brazil’s drafting of open data guidelines, and Nigeria’s new Open Technology Foundation (Open Banking Nigeria). In the United States, the Consumer Financial Protection Bureau aims to facilitate a consumer-authorized data-sharing market, while the Financial Data Exchange consortium attempts to promote common, interoperable standards for secure access to financial data. Yet, even as many countries put in place stronger digital financial infrastructure and data-sharing mechanisms, COVID-19 has exposed limitations and gaps in their reach, a theme we explored in earlier research.

This discussion paper from the McKinsey Global Institute looks at the potential value that could be created—and the key issues that will need to be addressed—by the adoption of open data for finance. We focus on four regions: the European Union, India, the United Kingdom, and the United States.

By open data, we mean the ability to share financial data through a digital ecosystem in a manner that requires limited effort or manipulation. Advantages include more accurate credit risk evaluation and risk-based pricing, improved workforce allocation, better product delivery and customer service, and stronger fraud protection.
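
To make the first of those advantages concrete, here is a hedged sketch of how a lender might fold consumer-authorized open-banking signals into a simple credit risk model. The features, synthetic data, and model choice are illustrative assumptions, not the paper’s methodology.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical signals: a traditional bureau score plus two open-banking
# features (cash-flow volatility, recurring income) a consumer might authorize
bureau_score = rng.normal(650, 80, n)
cash_flow_volatility = rng.gamma(2.0, 0.5, n)
recurring_income = rng.normal(3000, 900, n)

# Synthetic default outcome influenced by all three signals
logit = (-0.01 * (bureau_score - 650)
         + 0.8 * cash_flow_volatility
         - 0.0005 * (recurring_income - 3000)
         - 1.5)
default = rng.random(n) < 1 / (1 + np.exp(-logit))

X_bureau = bureau_score.reshape(-1, 1)
X_open = np.column_stack([bureau_score, cash_flow_volatility, recurring_income])

# Compare discrimination (AUC) with and without the open-banking features
for name, X in [("bureau only", X_bureau), ("bureau + open data", X_open)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, default, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

On this synthetic data the model with open-banking features typically scores a higher AUC, which is the mechanism behind the “more accurate credit risk evaluation” claim.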

Our analysis suggests that the boost to the economy from broad adoption of open-data ecosystems could range from about 1 to 1.5 percent of GDP in 2030 in the European Union, the United Kingdom, and the United States, to as much as 4 to 5 percent in India. All market participants benefit, be they institutions or consumers—either individuals or micro-, small-, and medium-sized enterprises (MSMEs)—albeit to varying degrees…(More)”.