Open data and data sharing: An economic analysis


Paper by Alevtina Krotova, Armin Mertens, Marc Scheufen: “Data is an important business resource. It forms the basis for various digital technologies such as artificial intelligence or smart services. However, access to data is unequally distributed in the market. Hence, some business ideas fail due to a lack of data sources. Although many governments have recognised the importance of open data and already make administrative data available to the public on a large scale, many companies are still reluctant to share their data with other firms and competitors. As a result, the economic potential of data is far from being fully exploited. Against this background, we analyse current developments in the area of open data. We compare the characteristics of open governmental and open company data in order to define the necessary framework conditions for data sharing. Subsequently, we examine the status quo of data sharing among firms. We use a qualitative analysis of survey data from European companies to derive the sufficient conditions for strengthening data sharing. Our analysis shows that governmental data is a public good, while company data can be seen as a club or private good. The latter frequently forms the core of companies’ business models and hence is less suitable for data sharing. Finally, we find that promoting legal certainty and demonstrating the economic benefits of data sharing are important policy steps for fostering it….(More)”

Policy making in a digital world


Report by Lewis Lloyd: “…Policy makers across government lack the necessary skills and understanding to take advantage of digital technologies when tackling problems such as coronavirus and climate change. The report says that already-poor data management has been exacerbated by a lack of leadership, with the role of government chief data officer unfilled since 2017. These failings have been laid bare by the stuttering coronavirus Test and Trace programme. Drawing on interviews with policy experts and digital specialists inside and outside government, the report argues that better use of data and new technologies, such as artificial intelligence, would improve policy makers’ understanding of problems like coronavirus and climate change, and aid collaboration with colleagues, external organisations and the public in seeking solutions to them. It urges government to trial innovative applications of data and technology in a wider range of policies, but warns that recent failures, such as the A-level algorithm fiasco, mean it must also do more to secure public trust in its use of such technologies. This means strengthening oversight, initiating a wider public debate about the appropriate use of digital technologies, and improving officials’ understanding of the limitations of data-driven analysis. The report recommends that the government:

  1. Appoints a chief data officer as soon as possible to drive work on improving data quality, tackle problems with legacy IT and make sure new data standards are applied and enforced across government.
  2. Places more emphasis on statistical and technological literacy when recruiting and training policy officials.
  3. Sets up a new independent body to lead on public engagement in policy making, with an initial focus on how and when government should use data and technology…(More)”.

Laboratories of Design: A Catalog of Policy Innovation Labs in Europe


Report by Anat Gofen and Esti Golan: “To address both persistent and emerging social and environmental problems, governments around the world have been seeking innovative ways to generate policy solutions in collaboration with citizens. One prominent trend during recent decades is the proliferation of Policy Innovation Labs (PILs), in which the search for policy solutions is embedded within scientific laboratory-like structures. Spread across the public, private, and non-profit sectors, and often funded by local, regional, or national governments, PILs utilize experimental methods, testing, and measurement to generate innovative, evidence-based policy solutions to complex public issues.

This catalog lists PILs in Europe. For each lab, a one-page profile specifies its vision, policy innovation approaches, methodologies, major projects, parent entity, funding sources, and its alignment with the United Nations Sustainable Development Goals (SDGs) call to action. For each lab we also identify its governmental, municipal, multi-sectoral, academic, non-profit, or private-sector affiliation.

The goals of compiling this catalog and making it available to citizens, scholars, NGOs, and public officials are to call attention to the growing spirit of citizen engagement in developing innovative policy solutions for their own communities and to facilitate collaboration and cross-pollination of ideas between organizations. Despite their increasing importance in public policy making, PILs are as yet understudied. This catalog will provide an opportunity for scholars to explore the function and value of community-oriented policy innovation as well as the effects of approaching policy making around disruptive social problems in a “scientific” way.

Methodology: This catalog of policy innovation labs was compiled from published reports as well as a Google search for each individual country using the terms “policy lab” and “innovation lab,” first in English, then in the native language. Sometimes the labs themselves came up in the search results; in other cases, an article or blog that mentioned them appeared. Next, each lab was searched for specifically by name or via an identified link, and each identified lab website was searched for mentions of other labs. Some labs were identified more than once, and a few that were found to be defunct or lacking a website were excluded. Innovation labs that referred only to technical or technological innovations were omitted; only labs that relate to policy and to so-called “public innovation” were included in this catalog. Eligible PILs could be run and/or sponsored by local, regional, or national governments, universities, non-profit organizations, or the private sector. This process resulted in a total of 212 European PILs.

Notably, while the global proliferation of policy innovation labs is acknowledged by formal, global organizations, there are no clear-cut criteria to determine which organizations are considered PILs. Therefore, this catalog follows the precedent set by previous catalogs and identifies PILs as organizations that generate policy recommendations for social problems and public issues by employing a user-oriented design approach and utilizing experimental methods.

Information about every lab was collected from its website, with minimal editing for coherence. For some labs, information was available in English on the website; for others, information in the native language was translated into English using machine translation followed by human editing. Data for the catalog was collected between December 2019 and July 2020. PILs are opening and closing with increasing frequency, so this catalog serves as a snapshot in time, featuring PILs that were active as of the time of compilation….(More)”.

Common Pitfalls in the Interpretation of COVID-19 Data and Statistics


Paper by Andreas Backhaus: “…In the public debate, one can encounter at least three concepts that measure the deadliness of SARS-CoV-2: the case fatality rate (CFR), the infection fatality rate (IFR) and the mortality rate (MR). Unfortunately, these three concepts are sometimes used interchangeably, which creates confusion as they differ from each other by definition.

In its simplest form, the case fatality rate divides the total number of confirmed COVID-19 deaths by the total number of confirmed SARS-CoV-2 infections (setting aside adjustments for future deaths among currently active cases). However, the number of confirmed cases is believed to severely underestimate the true number of infections, due to the asymptomatic course of the infection in many individuals and the lack of testing capacity. Hence, the CFR presumably represents an upper bound on the true lethality of SARS-CoV-2, as its denominator does not take undetected infections into account.

The infection fatality rate seeks to represent the lethality more accurately by incorporating the number of undetected infections or at least an estimate thereof into its calculation. Consequently, the IFR divides the total number of confirmed deaths by COVID-19 by the total number of infections with SARS-CoV-2. Due to its larger denominator but identical numerator, the IFR is lower than the CFR. The IFR represents a crucial parameter in epidemiological simulation models, such as that presented by Ferguson et al. (2020), as it determines the number of expected fatalities given the simulated spread of the disease among the population.
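
To make the distinction concrete, here is a minimal sketch in Python, with illustrative numbers that are not taken from the paper: both rates share the same numerator, but the IFR adds estimated undetected infections to the denominator.

```python
# Minimal sketch of the CFR/IFR distinction; all numbers are hypothetical.
confirmed_deaths = 1_000       # confirmed COVID-19 deaths
confirmed_cases = 50_000       # confirmed SARS-CoV-2 infections
undetected_estimate = 150_000  # estimated undetected infections

cfr = confirmed_deaths / confirmed_cases
ifr = confirmed_deaths / (confirmed_cases + undetected_estimate)

print(f"CFR = {cfr:.1%}")  # 2.0%: an upper bound, ignores undetected infections
print(f"IFR = {ifr:.1%}")  # 0.5%: same numerator, larger denominator
```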

The methodological challenge regarding the IFR is, of course, to find a credible estimate of the undetected infections. An early estimate of the IFR was provided on the basis of data collected during the SARS-CoV-2 outbreak on the Diamond Princess cruise ship in February 2020. Mizumoto et al. (2020) estimate that 17.9% (95% confidence interval: 15.5–20.2) of the cases were asymptomatic. Russell et al. (2020), after adjusting for age, estimate that the IFR among the Diamond Princess cases is 1.3% (95% confidence interval: 0.38–3.6) when considering all cases, but 6.4% (95% confidence interval: 2.6–13) when considering only patients who are 70 years or older. The serological studies currently being conducted in several countries and localities will provide further estimates of the true number of SARS-CoV-2 infections that have occurred over the past few months….(More)”.

Open data governance: civic hacking movement, topics and opinions in digital space


Paper by Mara Maretti, Vanessa Russo & Emiliano del Gobbo: “The expression ‘open data’ refers to a system of informative and freely accessible databases that public administrations make available online in order to develop an informative network between institutions, enterprises and citizens. On this topic, using semantic network analysis, the research investigates the communication structure and the governance of open data in the Twitter conversational environment. In particular, the research questions are: (1) Who are the main actors in the Italian open data infrastructure? (2) What are the main conversation topics online? (3) What are the pros and cons of the development and use (and reuse) of open data in Italy? To answer these questions, we went through three research phases: (1) we analysed the communication network to identify the main influencers; (2) we analysed the online content in the Twittersphere to detect the main semantic areas; and (3) through an online focus group with the main open data influencers, we explored the characteristics of Italian open data governance. The research shows that: (1) there is an Italian open data governance strategy; (2) the Italian civic hacker community plays an important role as an influencer; but (3) there are weaknesses in governance and in practical reuse….(More)”.
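
As a rough illustration of the first research phase only (not the authors’ actual pipeline), influencers in a Twitter conversation network can be ranked with a standard centrality measure; the handles and edges below are hypothetical.

```python
# Hypothetical sketch: ranking accounts in a mention/retweet network
# by in-degree centrality, a common first pass for influencer detection.
import networkx as nx

# Directed edges: (author, mentioned_account); all handles are made up.
mentions = [
    ("user_a", "opendata_it"), ("user_b", "opendata_it"),
    ("user_c", "civic_hacker"), ("opendata_it", "civic_hacker"),
    ("user_d", "civic_hacker"), ("user_a", "user_b"),
]

g = nx.DiGraph(mentions)
# Accounts that many others mention rank highest.
influencers = sorted(nx.in_degree_centrality(g).items(),
                     key=lambda kv: kv[1], reverse=True)
for handle, score in influencers[:3]:
    print(f"{handle}: {score:.2f}")
```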

Introducing the Institute of Impossible Ideas


Blog by Dominic Campbell: “…We have an opportunity ahead of us to set up a new model which seeds and keeps innovation firmly in the public realm. Using entrepreneurial approaches, we can work together not only to deliver better outcomes for citizens at lower cost, but also to ideate, create and build technology-driven, sustainable services that remain in public hands.

Rebooting public services for the 21st century

Conventional wisdom is that the private sector is best placed to drive radical change with its ecosystem of funders, appetite for risk and perceived ability to attract the best and brightest minds. In the private sector, digital companies have disrupted whole industries. Tech startups are usurping the incumbents, improving experiences and reducing costs before expanding and completely transforming the landscape around them.

We’re talking about the likes of Netflix, which started with a new model for movie rentals, turned streaming platform for TV and is now one of the world’s largest producers of media. Or Airbnb, which got its start renting a spare room and an air mattress, became one of the largest travel booking platforms and is now moving into building physical hotels and housing. These are two organisations that saw an opportunity in a market and have gone on to reinvent a full-stack service.

The entrepreneurial approach has driven rapid innovation in some fields, but private sector outsourcing for the public realm has rarely led to truly radical innovation. That doesn’t stop the practice, and profits remain in private hands. Old models of innovation, either internal and incremental or left to the private sector, aren’t working.

The public sector can, and does, drive innovation. And yet we continue to see private profits take off from the runway of publicly funded innovation, with the state receiving little of the financial reward for the private sector’s increased role in public service delivery….(More)…Find out more about the Institute of Impossible Ideas.

Demystifying the Role of Data Interoperability in the Access and Sharing Debate


Paper by Jörg Hoffmann and Begoña Gonzalez Otero: “In the current data access and sharing debate, data interoperability is widely proclaimed as key to efficiently reaping the welfare-enhancing effects of further data re-use. Although we agree, we found that the current law and policy framework pertaining to data interoperability lacks a groundwork analysis. Without a clear understanding of the notions of interoperability, of the role of data standards and application programming interfaces (APIs) in achieving it, and of the IP and trade secrets protection potentially hindering it, any regulatory analysis within the data access discussion will be incomplete. Any attempt at untangling the role of data interoperability in the access and sharing regimes requires a thorough understanding of the underlying technology and a common understanding of the different notions of data interoperability.

The paper first explains the technical complexity of interoperability and its enablers, namely data standards and application programming interfaces. It elaborates on why data interoperability comes in different levels, and emphasises that data interoperability is indirectly tied to the data access right. Since data interoperability may be part of the legal obligations correlating to the access right, the scope of interoperability is and has already been subject to courts’ interpretation. While this may leave some room for manoeuvre for balanced decision-making, it may not guarantee the ambition of efficient re-usability of data. This is why data governance market regulation under a public law approach is becoming more favourable. Yet, and this is elaborated in a second step, the paper builds on the assumption that interoperability should not become another policy goal on its own. This is followed by a competition economics assessment, taking into account that data interoperability is always a matter of degree and that a lack of data interoperability does not necessarily foreclose competitors from a market or harm consumer welfare. Additionally, parts of application programming interfaces (APIs) may be protected under IP rights and trade secrets, which might conflict with data access rights. Instead of trying to resolve these conflicts within the respective legal regimes of the exclusive rights, the paper concludes by suggesting that (sector-specific) data governance solutions should deal with this issue and align the different interests implied. This may provide for better, practical and well-balanced solutions instead of impractical and dysfunctional exceptions and limitations within the IP and trade secrets regimes….(More)”.
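
To ground the technical discussion: syntactic interoperability typically means two systems agree on structure and encoding, while semantic interoperability means they also agree on meaning, such as units. A minimal sketch of that distinction follows; the record formats and fields are invented, not drawn from the paper.

```python
# Hypothetical sketch: two firms expose the same measurement differently;
# an agreed exchange schema (syntactic layer) plus a unit conversion
# (semantic layer) makes the data re-usable across systems.
import json

RECORD_FIRM_A = {"temp_f": 98.6, "ts": "2020-07-01T12:00:00Z"}              # Fahrenheit
RECORD_FIRM_B = {"temperature": 37.0, "timestamp": "2020-07-01T12:00:00Z"}  # Celsius

def to_exchange_schema(record: dict) -> dict:
    """Normalise either proprietary format to a shared schema (Celsius)."""
    if "temp_f" in record:  # firm A's format
        return {"temperature_c": round((record["temp_f"] - 32) * 5 / 9, 1),
                "timestamp": record["ts"]}
    return {"temperature_c": record["temperature"],
            "timestamp": record["timestamp"]}

print(json.dumps(to_exchange_schema(RECORD_FIRM_A)))
print(json.dumps(to_exchange_schema(RECORD_FIRM_B)))
```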

The Expertise Curse: How Policy Expertise Can Hinder Responsiveness


Report by Miguel Pereira and Patrik Öhberg: “We argue that policy expertise may constrain the ability of politicians to be responsive. Legislators with more knowledge and experience in a given policy area have more confidence in their own issue-specific positions. Enhanced confidence, in turn, may lead legislators to discount opinions they disagree with. Two experiments with Swedish politicians support our argument. First, we find that officials with more expertise in a given domain are more likely to dismiss appeals from voters who hold contrasting opinions, regardless of their specific position on the policy, and less likely to accept that opposing views may represent the majority opinion. Consistent with the proposed mechanism, in a second experiment we show that inducing perceptions of expertise increases self-confidence. The results suggest that representatives with more expertise in a given area are paradoxically less capable of voicing public preferences in that domain. The study provides a novel explanation for distortions in policy responsiveness….(More)”

If data is 21st century oil, could foundations be the right owners?


Felix Oldenburg at Alliance: “What are the best investments for a foundation? This important question is one many foundation professionals are revisiting in light of low interest rates, high market volatility, and fears of deep economic trouble ahead. While stories of success certainly exist and are worth learning from, even the notorious lack of data cannot obscure the inconvenient truth that the idea of traditional endowments is in trouble.

I would argue that in order to unleash the potential of foundations, we should turn the question around, perhaps back on its feet: For which assets are foundations the best owners?

In the still-dawning digital age, one fascinating answer may stare you right in the face as you read this. How much is your personal data worth? Your social media information and your search and purchase history are the source of much of the market value of the fastest-growing sector of our time. Dividing the market valuation of the major social platforms by their active users yields a rough estimate of more than US$1,000 per user, without differentiating by location or other factors. This sum is more than the median per capita wealth in about half the world’s countries. And if the trend continues, this value may keep growing – and with it the big question of how to put one of the most valuable resources of our time to use for the good of all.
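
The back-of-the-envelope arithmetic is simply market capitalisation divided by active users; a sketch with placeholder figures (not the author’s data):

```python
# Hypothetical back-of-the-envelope: implied market value per active user.
# Both figures below are illustrative placeholders, not real company data.
market_cap_usd = 1.2e12   # valuation of a major social platform
active_users = 1.0e9      # monthly active users

value_per_user = market_cap_usd / active_users
print(f"Implied value per active user: ${value_per_user:,.0f}")  # $1,200
```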

Foundation ownership in the data sector may sound like a wild idea at first. Yet foundations and their predecessors have played the role of purpose-driven owners of critical assets and infrastructures throughout history. Monasteries (called ‘Stifte’ in German, the root of the German word for foundations) protected knowledge and education in their libraries, and secured health care in hospitals. Trusts created much of the affordable social housing in the exploding cities of the 19th century. In Germany, the Marshall Plan created an endowment for economic recovery that is still in existence today.

The proposition is simple: independent ownership for the good of all, beyond the commercial or national interests of individual corporations or governments, in perpetuity. Acting as guardians of digital commons, data-endowed foundations could negotiate conditions for the commercial use of their assets, and invest the income to create equal digital opportunities, power 21st-century education, and fight climate change. An ideal model of ownership would also include a form of governance exercised by the users themselves through digital participation and elections. A foundation really relies on only one thing: a stable framework of rights in its legal home country. This is far from a trivial condition, but again, history shows how many foundations have survived depressions, wars, and revolutions….(More)”

UK passport photo checker shows bias against dark-skinned women


Maryam Ahmed at BBC News: “Women with darker skin are more than twice as likely as lighter-skinned men to be told their photos fail UK passport rules when they submit them online, according to a BBC investigation.

One black student said she was wrongly told her mouth looked open in each of five different photos she uploaded to the government website.

This shows how “systemic racism” can spread, Elaine Owusu said.

The Home Office said the tool helped users get their passports more quickly.

“The indicative check [helps] our customers to submit a photo that is right the first time,” said a spokeswoman.

“Over nine million people have used this service and our systems are improving.

“We will continue to develop and evaluate our systems with the objective of making applying for a passport as simple as possible for all.”

Skin colour

The passport application website uses an automated check to detect poor quality photos which do not meet Home Office rules. These include having a neutral expression, a closed mouth and looking straight at the camera.

BBC research found this check to be less accurate on darker-skinned people.

More than 1,000 photographs of politicians from across the world were fed into the online checker.

The results indicated:

  • Dark-skinned women are told their photos are poor quality 22% of the time, while the figure for light-skinned women is 14%
  • Dark-skinned men are told their photos are poor quality 15% of the time, while the figure for light-skinned men is 9%

Photos of women with the darkest skin were four times more likely to be graded poor quality than those of women with the lightest skin….(More)”.
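
A sketch of how such an audit can be tabulated: group the checker’s verdicts by demographic and compare failure rates. The counts below are synthetic, chosen only to mirror the reported percentages, and are not the BBC’s data.

```python
# Synthetic illustration (not the BBC's data) of tabulating an audit:
# failure rate of an automated photo check by demographic group.
results = {
    # group: (photos_flagged_poor_quality, photos_submitted)
    "dark-skinned women":  (55, 250),
    "light-skinned women": (35, 250),
    "dark-skinned men":    (38, 250),
    "light-skinned men":   (22, 250),
}
rates = {group: flagged / total for group, (flagged, total) in results.items()}
for group, rate in rates.items():
    print(f"{group}: {rate:.0%} flagged as poor quality")

# Disparity between the most- and least-affected groups.
ratio = rates["dark-skinned women"] / rates["light-skinned men"]
print(f"Disparity ratio (darkest-skinned women vs lightest-skinned men): {ratio:.1f}x")
```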