Little Rock Shows How Open Data Drives Resident Engagement


Blog by Ross Schwartz: “The 12th Street corridor is in the heart of Little Rock, stretching west from downtown across multiple neighborhoods. But for years the area has suffered from high crime rates and disinvestment, and it is considered a food desert.

With the intention of improving public safety and supporting efforts to revitalize the area, the City built a new police station on the street in 2014. In the years that followed, as city staff ramped up efforts to place data at the center of problem-solving, the City began holding two-day “Data Academy” trainings for city employees and residents on foundational data practices, including data analysis.

Responding to public safety concerns, a 2018 Data Academy training focused on 12th Street. A cross-department team dug into data sets to understand the challenges facing the area, looking at variables including crime, building code violations, and poverty. It turned out the neighborhood with the highest levels of crime and blight was actually blocks away from 12th Street itself, in Midtown. A predominantly African-American neighborhood just east of the University of Arkansas at Little Rock campus, Midtown has a mix of older longtime homeowners and younger renters.

“It was a real data-driven ‘a-ha’ moment — an example of what you can understand about a city if you have the right data sets and look in the right places,” says Melissa Bridges, Little Rock’s performance and innovation coordinator. With support from What Works Cities (WWC), for the last five years she’s led Little Rock’s efforts to build open data and performance measurement resources and infrastructure…

Newly aware of Midtown’s challenges, city officials decided to engage residents in the neighborhood and adjacent areas. Data Academy members hosted a human-centered design workshop, during which residents were given the opportunity to self-prioritize their pressing concerns. Rather than lead the workshop, officials from various city departments quietly observed the discussion.

The main issue that emerged? Many parts of Midtown were poorly lit due to broken or blocked streetlights. Many residents didn’t feel safe and didn’t know how to alert the City to get lights fixed or vegetation cut back. A review of 311 request data showed that few streetlight problems in the area were ever reported to the City.

Aware of studies showing the correlation between dark streets and crime, the City designed a streetlight canvassing project in partnership with area neighborhood associations to engage and empower residents. Bridges and her team built canvassing route maps using Google Maps and Little Rock Citizen Connect, which collects 311 requests and other data sets. Then they gathered resident volunteers to walk or drive Midtown’s streets on a Friday night, using the City’s 311 mobile app to log streetlight service requests and tag their locations…(More)”.
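The workflow described above, reviewing 311 request data and mapping streetlight requests to target canvassing routes, is easy to reproduce with open tools. A minimal sketch in Python, assuming a hypothetical CSV export with columns `request_type`, `neighborhood`, `lat`, and `lon` (the actual Little Rock Citizen Connect schema may differ):

```python
import pandas as pd
import folium

# Hypothetical 311 export; column names are assumptions, not the
# actual Little Rock Citizen Connect schema.
requests_df = pd.read_csv("311_requests.csv")

# How under-reported are streetlight issues in Midtown relative to
# the rest of the city?
lights = requests_df[requests_df["request_type"] == "Streetlight Outage"]
print(lights.groupby("neighborhood").size().sort_values())

# Map the reported outages so canvassing routes can target the gaps.
midtown = lights[lights["neighborhood"] == "Midtown"]
route_map = folium.Map(location=[34.7465, -92.2896], zoom_start=14)  # Little Rock
for _, row in midtown.iterrows():
    folium.Marker([row["lat"], row["lon"]],
                  popup=row["request_type"]).add_to(route_map)
route_map.save("midtown_streetlights.html")
```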

New report confirms positive momentum for EU open science


Press release: “The Commission released the results and datasets of a study monitoring the open access mandate in Horizon 2020. With a steady increase over the years and an average open access rate of 83% for scientific publications, the European Commission is at the forefront of research and innovation funders, concluded the consortium formed by the analysis company PPMI (Lithuania), the research and innovation centre Athena (Greece), and Maastricht University (the Netherlands).

The Commission sought advice on a process and reliable metrics through which to monitor all aspects of the open access requirements in Horizon 2020, and on how best to do so for Horizon Europe, which has a more stringent and comprehensive set of rights and obligations for Open Science.

The key findings of the study indicate that the European Commission’s early leadership in Open Science policy has paid off. The Excellent Science pillar in Horizon 2020 has led the success story, with an open access rate of 86%. Among the leaders within this pillar are the European Research Council (ERC) and the Future and Emerging Technologies (FET) programme, with open access rates of over 88%.

Other interesting facts:

  • In terms of article processing charges (APCs), the study estimated the average cost in Horizon 2020 of publishing an open access article to be around EUR 2,200. APCs for articles published in ‘hybrid’ journals (a cost that will no longer be eligible under Horizon Europe) have a higher average of EUR 2,600.
  • Compliance in terms of depositing open access publications in a repository (even when publishing open access through a journal) is relatively high (81.9%), indicating that the current policy of depositing is well understood and implemented by researchers.
  • Regarding licences, 49% of Horizon 2020 publications were published using Creative Commons (CC) licences, which permit reuse (with various levels of restrictions), while 33% use publisher-specific licences that place restrictions on text and data mining (TDM).
  • Institutional repositories have responded in a satisfactory manner to the challenge of providing FAIR access to their publications, amending internal processes and metadata to incorporate necessary changes: 95% of deposited publications include in their metadata some type of persistent identifier (PID).
  • Datasets in repositories present a low compliance level: only approximately 39% of Horizon 2020 deposited datasets are findable (i.e., the metadata includes a PID and a URL to the data file), and only around 32% of deposited datasets are accessible (i.e., the data file can be fetched using a URL link in the metadata); both checks reduce to simple metadata tests, as in the sketch after this list. Horizon Europe will hopefully deliver better results.
  • The study also identified gaps in the existing Horizon 2020 open access monitoring data, which pose further difficulties in assessing compliance. Self-reporting by beneficiaries also highlighted a number of issues…(More)”
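The findability and accessibility checks referenced in the list above reduce to two simple tests over harvested repository metadata. A minimal sketch, assuming hypothetical record fields `pid` and `data_url` (the study’s actual harvesting pipeline and schema are not described here):

```python
import requests

# Hypothetical harvested metadata records; field names are
# illustrative, not the study's actual schema.
records = [
    {"pid": "10.5281/zenodo.1234567",
     "data_url": "https://zenodo.org/record/1234567/files/data.csv"},
    {"pid": None, "data_url": None},
]

def is_findable(record):
    # Findable per the study's definition: the metadata carries both
    # a persistent identifier and a URL to the data file.
    return bool(record.get("pid")) and bool(record.get("data_url"))

def is_accessible(record, timeout=10):
    # Accessible: the data file can actually be fetched via the URL.
    if not record.get("data_url"):
        return False
    try:
        resp = requests.head(record["data_url"], timeout=timeout,
                             allow_redirects=True)
        return resp.status_code == 200
    except requests.RequestException:
        return False

n = len(records)
print(f"findable: {sum(map(is_findable, records))}/{n}, "
      f"accessible: {sum(map(is_accessible, records))}/{n}")
```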

No revolution: COVID-19 boosted open access, but preprints are only a fraction of pandemic papers


Article by Jeffrey Brainard: “In January 2020, as COVID-19 spread insidiously, research funders and journal publishers recognized their old ways wouldn’t do. They needed to hit the gas pedal to meet the desperate need for information that could help slow the disease.

One major funder, the Wellcome Trust, issued a call for changing business as usual. Authors should put up COVID-19 manuscripts as preprints, it urged, because those are publicly posted shortly after they’re written, before being peer reviewed. Scientists should share their data widely. And publishers should make journal articles open access, or free to read immediately when published.

Dozens of the world’s leading funders, publishers, and scientific societies (including AAAS, publisher of Science) signed Wellcome’s statement. Critics of the tradition-bound world of scientific publishing saw a rare opportunity to tackle long-standing complaints—for example, that journals place many papers behind paywalls and take months to complete peer review. They hoped the pandemic could help birth a new publishing system.

But nearly 2 years later, hopes for a wholesale revolution are fading. Preprints by medical researchers surged, but they remain a small fraction of the literature on COVID-19. Much of that literature is available for free, but access to the underlying data is spotty. COVID-19 journal articles were reviewed faster than previous papers, but not dramatically so, and some ask whether that gain in speed came at the expense of quality. “The overall system demonstrated what could be possible,” says Judy Luther, president of Informed Strategies, a publishing consulting firm.

One thing is clear. The pandemic prompted an avalanche of new papers: more than 530,000, released either by journals or as preprints, according to the Dimensions bibliometric database. That fed the largest 1-year increase in all scholarly articles, and the largest annual total ever. That response is “bonkers,” says Vincent Larivière of the University of Montreal, who studies scholarly publishing. “Everyone had to have their COVID moment and write something.”…(More)”.
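Claims like these are straightforward to check against any bibliometric export. A minimal sketch, assuming a hypothetical CSV dump with `year` and `source_type` columns rather than the Dimensions API itself:

```python
import pandas as pd

# Hypothetical bibliometric export; column names are assumptions,
# not the Dimensions database schema.
papers = pd.read_csv("covid19_papers.csv")

# Total output and the year-over-year jump.
by_year = papers.groupby("year").size()
print(by_year)
print(by_year.diff())  # the 2020 spike is the record 1-year increase

# Preprints as a share of all COVID-19 output.
share = (papers["source_type"] == "preprint").mean()
print(f"preprint share: {share:.1%}")
```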

The Innovation Project: Can advanced data science methods be a game-changer for data sharing?


Report by JIPS (Joint Internal Displacement Profiling Service): “Much has changed in the humanitarian data landscape in the last decade, and not primarily because of the arrival of big data and artificial intelligence. Mostly, the changes are due to increased capacity and resources to collect more data more quickly, leading to the professionalisation of information management as a domain of work. Larger amounts of data are becoming available in a more predictable way. We believe that, as the field has progressed in filling critical data gaps, the problem is no longer the availability of data but the curation and sharing of that data between actors, as well as the use of that data to its full potential.

In 2018, JIPS embarked on an innovation journey to explore the potential of state-of-the-art technologies to incentivise data sharing and collaboration. This report covers the first phase of the innovation project and launches a series of articles in which we will share more about the innovation journey itself, discuss safe data sharing and collaboration, and look at the prototype we developed – made possible by the UNHCR Innovation Fund.

We argue that making data and insights safe and secure to share between stakeholders will allow for more efficient use of available data, reduce the resources needed to collect new data, strengthen collaboration, and foster a culture of trust in the evidence-informed protection of people in displacement and crises.

The paper first defines the problem and outlines the processes through which data is currently shared among the humanitarian community. It explores questions such as: what are the existing data sharing methods and technologies? Which ones constitute a feasible option for humanitarian and development organisations? How can different actors share and collaborate on datasets without impairing confidentiality and exposing them to disclosure threats?…(More)”.
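One well-studied answer to the disclosure-threat question the paper raises is k-anonymity: every combination of quasi-identifiers must appear at least k times in a released dataset, so that no record can be singled out. A minimal sketch of such a check (a generic illustration of the problem space, not the prototype JIPS developed):

```python
import pandas as pd

def violates_k_anonymity(df, quasi_identifiers, k=5):
    """Return the equivalence classes smaller than k, i.e. groups of
    records that the given quasi-identifiers could single out."""
    sizes = df.groupby(quasi_identifiers).size()
    return sizes[sizes < k]

# Illustrative displacement records; fields are hypothetical.
df = pd.DataFrame({
    "age_band": ["18-25", "18-25", "26-40", "26-40", "60+"],
    "district": ["A", "A", "B", "B", "C"],
    "arrival_year": [2018, 2018, 2019, 2019, 2020],
})

risky = violates_k_anonymity(df, ["age_band", "district", "arrival_year"], k=2)
print(risky)  # the single "60+" record is re-identifiable even at k=2
```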

The “Onion Model”: A Layered Approach to Documenting How the Third Wave of Open Data Can Provide Societal Value


Blog post by Andrew Zahuranec, Andrew Young and Stefaan Verhulst: “There’s a lot that goes into data-driven decision-making. Behind the datasets, platforms, and analysts is a complex series of processes that inform what kinds of insight data can produce and what kinds of ends it can achieve. These individual processes can be hard to understand when viewed together but, by separating the stages out, we can not only track how data leads to decisions but promote better and more impactful data management.

Earlier this year, The Open Data Policy Lab published the Third Wave of Open Data Toolkit to explore the elements of data re-use. At the center of this toolkit was an abstraction that we call the Open Data Framework. Divided into individual, onion-like layers, the framework shows all the processes that go into capitalizing on data in the third wave, starting with the creation of a dataset through data collaboration, creating insights, and using those insights to produce value.

This blog revisits what’s included in each layer of this data “onion model” and demonstrates how organizations can create societal value by making their data available for re-use by other parties…(More)”.

Governing smart cities: policy benchmarks for ethical and responsible smart city development


Report by the World Economic Forum: “… provides a benchmark for cities looking to establish policies for ethical and responsible governance of their smart city programmes. It explores current practices relating to five foundational policies: ICT accessibility, privacy impact assessment, cyber accountability, digital infrastructure and open data. The findings are based on surveys and interviews with policy experts and city government officials from the Alliance’s 36 “Pioneer Cities”. The data and insights presented in the report come from an assessment of detailed policy elements rather than the high-level indicators often used in maturity frameworks….(More)”.

The Open Data Policy Lab’s City Incubator


The GovLab: “Hackathons. Data Jams. Dashboards. Mapping, analyzing, and releasing open data. These are some of the essential first steps in building a data-driven culture in government. Yet, it’s not always easy to get data projects such as these off the ground. Governments often work in difficult situations under constrained resources. They have to manage various stakeholders and constituencies who have to be sold on the value that data can generate in their daily work.

Through the Open Data Policy Lab, The GovLab and Microsoft are providing various resources — such as the Data Stewards Academy and the Third Wave of Open Data Toolkit — to support this goal. Still, we recognize that more tailored guidance is needed so cities can build new sustainable data infrastructure and launch projects that meet their policy goals.

Today, we’re providing that resource in the form of the Open Data Policy Lab’s City Incubator. A first-of-its-kind program to support the success and scale of data innovations in cities, the City Incubator will give 10 city officials hands-on training and access to mentors to take their ideas to the next level. It will enable cutting-edge work on various urban challenges and empower officials to create data collaboratives, data-sharing agreements, and other systems. This work is supported by Microsoft, Mastercard City Possible, Luminate, NYU CUSP and the Public Sector Network.

Our team is launching a call for ten city government intrapreneurs from around the world working on data-driven projects to apply to the City Incubator. Over the course of six months, participants will use start-up innovation and public-sector problem-solving frameworks to develop and launch new data innovations. They will also receive support from a council of mentors from around the world.

Applications are due August 31, with an early application deadline of August 6 for applicants looking for feedback. Applicants are expected to present their idea and include information on the value their proposal will generate, the resources it will use, the partners it will involve, and the risks it might entail alongside other information in the form of a Data Innovation Canvas. Additional information can be found on the website here.”

[Image: The Data Innovation Canvas]

Financial data unbound: The value of open data for individuals and institutions


Paper by McKinsey Global Institute: “As countries around the world look to ensure rapid recovery once the COVID-19 crisis abates, improved financial services are emerging as a key element to boost growth, raise economic efficiency, and lift productivity. Robust digital financial infrastructure proved its worth during the crisis, helping governments cushion people and businesses from the economic shock of the pandemic. The next frontier is to create an open-data ecosystem for finance.

Already, technological, regulatory, and competitive forces are moving markets toward easier and safer financial data sharing. Open-data initiatives are springing up globally, including the United Kingdom’s Open Banking Implementation Entity, the European Union’s second payment services directive, Australia’s new consumer protection laws, Brazil’s drafting of open data guidelines, and Nigeria’s new Open Technology Foundation (Open Banking Nigeria). In the United States, the Consumer Financial Protection Bureau aims to facilitate a consumer-authorized data-sharing market, while the Financial Data Exchange consortium attempts to promote common, interoperable standards for secure access to financial data. Yet, even as many countries put in place stronger digital financial infrastructure and data-sharing mechanisms, COVID-19 has exposed limitations and gaps in their reach, a theme we explored in earlier research.

This discussion paper from the McKinsey Global Institute (download full text in 36-page PDF) looks at the potential value that could be created—and the key issues that will need to be addressed—by the adoption of open data for finance. We focus on four regions: the European Union, India, the United Kingdom, and the United States.

By open data, we mean the ability to share financial data through a digital ecosystem in a manner that requires limited effort or manipulation. Advantages include more accurate credit risk evaluation and risk-based pricing, improved workforce allocation, better product delivery and customer service, and stronger fraud protection.
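In practice, sharing financial data “through a digital ecosystem” usually means consumer-consented API access of the kind the UK’s Open Banking standard pioneered. A minimal sketch of the flow, with a hypothetical provider URL and endpoints; real implementations add consent scopes, mutual TLS, and signed requests:

```python
import requests

API = "https://api.example-bank.com"  # hypothetical data provider

# Step 1: exchange client credentials for an access token tied to a
# consumer's prior consent (OAuth2 client-credentials grant, simplified).
token = requests.post(
    f"{API}/oauth2/token",
    data={"grant_type": "client_credentials", "scope": "accounts"},
    auth=("client_id", "client_secret"),
    timeout=10,
).json()["access_token"]

# Step 2: fetch only the account data the consumer has authorized.
accounts = requests.get(
    f"{API}/accounts",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
).json()

# A lender can now price risk from real cash-flow data instead of a
# thin credit file: the "more accurate credit risk evaluation" above.
for acct in accounts.get("data", []):  # response shape is hypothetical
    print(acct.get("account_id"), acct.get("balance"))
```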

Our analysis suggests that the boost to the economy from broad adoption of open-data ecosystems could range from about 1 to 1.5 percent of GDP in 2030 in the European Union, the United Kingdom, and the United States, to as much as 4 to 5 percent in India. All market participants benefit, be they institutions or consumers—either individuals or micro-, small-, and medium-sized enterprises (MSMEs)—albeit to varying degrees….(More)”.

For Whose Benefit? Transparency in the development and procurement of COVID-19 vaccines


Report by Transparency International Global Health: “The COVID-19 pandemic has required an unprecedented public health response, with governments dedicating massive amounts of resources to their health systems at extraordinary speed. Governments have had to respond quickly to fast-changing contexts, with many competing interests, and little in the way of historical precedent to guide them.

Transparency here is paramount; publicly available information is critical to reducing the inherent risks of such a situation by ensuring governmental decisions are accountable and by enabling non-governmental expert input into the global vaccination process.

This report analyses the transparency of two key stages of the vaccine process, in chronological order: the development and the subsequent buying of vaccines.

Given the scope, rapid progression and complexity of the global vaccination process, this is not an exhaustive analysis. First, all the following analysis is limited to 20 leading COVID-19 vaccines that were in, or had completed, phase 3 clinical trials as of 11th January 2021. Second, we have concentrated on transparency of two of the initial stages of the process: clinical trial transparency and the public contracting for the supply of vaccines. The report provides concrete recommendations on how to overcome current opacity in order to contribute to achieving the commitment of world leaders to ensure equal, fair and affordable access to COVID-19 vaccines for all countries….(More)”.

Three ways to supercharge your city’s open-data portal


Bloomberg Cities: “…Three open data approaches cities are finding success with:

Map it

Much of the data that people seem to be most interested in is location-based, local data leaders say. That includes everything from neighborhood crime stats and police data used by journalists and activists to property data regularly mined by real estate companies. Rather than simply making spatial data available, many cities have begun mapping it themselves, allowing users to browse information that’s useful to them.

At atlas.phila.gov, for example, Philadelphians can type in their own addresses to find property deeds, historic photos, nearby 311 complaints and service requests, and their polling place and the date of the next local election, among other information. Los Angeles’s GeoHub collects maps showing the locations of marijuana dispensaries, reports of hate crimes, five years of severe and fatal crashes between drivers and bikers or pedestrians, and dozens more.
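A site like atlas.phila.gov is essentially a join across many location-keyed datasets. A minimal sketch of the pattern, with hypothetical file names and columns (the real site geocodes free-text addresses first):

```python
import pandas as pd

# Hypothetical location-keyed open datasets; file and column names
# are illustrative, not Philadelphia's actual schemas.
deeds = pd.read_csv("property_deeds.csv")        # address, owner, sale_date
complaints = pd.read_csv("311_complaints.csv")   # address, type, status
polling = pd.read_csv("polling_places.csv")      # address, polling_place

def lookup(address):
    """Gather every record tied to one address, atlas-style."""
    return {
        "deeds": deeds[deeds["address"] == address].to_dict("records"),
        "311": complaints[complaints["address"] == address].to_dict("records"),
        "polling": polling[polling["address"] == address].to_dict("records"),
    }

print(lookup("1234 MARKET ST"))
```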

A CincyInsights map highlighting cleaned up green spaces across the city.

….

Train residents on how to use it

Cities with open-data policies learn from best practices in other city halls. In the last few years, many have begun offering trainings to equip residents with rudimentary data analysis skills. Baton Rouge, for example, offered a free, three-part Citizen Data Academy instructing residents on “how to find [open data], what it includes, and how to use it to understand trends and improve quality of life in our community.” …

In some communities, open-data officials work with city workers and neighborhood leaders to help their communities access the benefits of public data, even if only a small fraction of residents access the data itself.

In Philadelphia, city teams work with the Citizens Planning Institute, an educational initiative of the city planning commission, to train neighborhood organizers in how to use city data around things like zoning and construction permits to keep up with development in their neighborhoods, says Kistine Carolan, open data program manager in the Office of Innovation and Technology. The Los Angeles Department of Neighborhood Empowerment runs a Data Literacy Program to help neighborhood groups make better use of the city’s data. So far, officials say, representatives of 50 of the city’s 99 neighborhood councils have signed up as part of the Data Liaisons program to learn new GIS and data-analysis skills to benefit their neighborhoods. 

Leverage the COVID moment

The COVID-19 pandemic has disrupted cities’ open-data plans, just like it has complicated every other aspect of society. Cities had to cancel scheduled in-person trainings and programs that help them reach some of their less-connected residents. But the pandemic has also demonstrated the fundamental role that data can play in helping to manage public emergencies. Cities large and small have hosted online tools that allow residents to track where cases are spiking—tools that have gotten many new people to interact with public data, officials say….(More)”.