Selected Readings on Data Portability


By Juliet McMurren, Andrew Young, and Stefaan G. Verhulst

As part of an ongoing effort to build a knowledge base for the field of improving governance through technology, The GovLab publishes a series of Selected Readings, which provide an annotated and curated collection of recommended works on themes such as open data, data collaboration, and civic technology.

In this edition, we explore selected literature on data portability.

To suggest additional readings on this or any other topic, please email info@thelivinglib.org. All our Selected Readings can be found here.

Context

Data today exists largely in silos, generating problems and inefficiencies for individuals, businesses, and society at large. These include:

  • difficulty switching data between competing service providers;
  • delays in sharing data for important societal research initiatives;
  • barriers for data innovators to reuse data that could generate insights to inform individuals’ decision making; and
  • barriers to scaling data donation.

Data portability — the principle that individuals have a right to obtain, copy, and reuse their own personal data and to transfer it from one IT platform or service to another for their own purposes — is positioned as a solution to these problems. When fully implemented, it would make data liquid, giving individuals the ability to access their own data in a usable and transferable format, transfer it from one service provider to another, or donate data for research and enhanced data analysis by those working in the public interest.

Some companies, including Google, Apple, Twitter and Facebook, have sought to advance data portability through initiatives like the Data Transfer Project, an open source software project designed to facilitate data transfers between services. Newly enacted data protection legislation such as Europe’s General Data Protection Regulation (2018) and the California Consumer Privacy Act (2018) give data subjects a right to data portability. However, despite these legal and technical advances, complex and as yet unanswered questions remain about how to scale up data liquidity and portability responsibly and systematically: the limits of data ownership, the implications for privacy, security and intellectual property rights, and the practicalities of how, when, and to whom data can be transferred.

In this edition of the GovLab’s Selected Readings series, we examine the emerging literature on data portability to provide a foundation for future work on the value proposition of data portability. Readings are listed in alphabetical order.

Selected readings

Cho, Daegon, Pedro Ferreira, and Rahul Telang, The Impact of Mobile Number Portability on Price and Consumer Welfare (2016)

  • In this paper, the authors analyze how Mobile Number Portability (MNP) — the ability for consumers to maintain their phone number when changing providers, thus reducing switching costs — affected the relationship between switching costs, market price and consumer surplus after it was introduced in most European countries in the early 2000s.
  • Theory holds that when switching costs are high, market leaders will enjoy a substantial advantage and are able to keep prices high. Policy makers will therefore attempt to decrease switching costs to intensify competition and reduce prices to consumers.
  • The study reviewed quarterly data from 47 wireless service providers in 15 EU countries between 1999 and 2006. The data showed that MNP simultaneously decreased market price by over four percent and increased consumer welfare by an average of at least €2.15 per person per quarter. This increase amounted to a total of €880 million per quarter across the 15 EU countries analyzed in this paper and accounted for 15 percent of the increase in consumer surplus observed over this time.

CtrlShift, Data Mobility: The data portability growth opportunity for the UK economy (2018)

  • Commissioned by the UK Department for Digital, Culture, Media and Sport (DCMS), this study was intended to identify the potential of personal data portability for the UK economy.
  • Its scope went beyond the legal right to data portability envisaged by the GDPR, to encompass the current state of personal data portability and mobility, requirements for safe and secure data sharing, and the potential economic benefits through stimulation of innovation, productivity and competition.
  • The report concludes that increased personal data mobility has the potential to be a vital stimulus for the development of the digital economy, driving growth by empowering individuals to make use of their own data and consent to others using it to create new data-driven services and technologies.
  • However, the report concludes that there are significant challenges to be overcome, and new risks to be addressed, before the value of personal data can be realized. Much personal data remains locked in organizational silos, and systemic issues related to data security and governance and the uneven sharing of benefits need to be resolved.

DataGuidance and Future of Privacy Forum, Comparing Privacy Laws: GDPR v. CCPA (2018)

  • This paper compares the provisions of the GDPR with those of the California Consumer Privacy Act (2018).
  • Both article 20 of the GDPR and section 1798 of the CCPA recognize a right to data portability. Both also confer on data subjects the right to receive data from controllers free of charge upon request, and oblige controllers to create mechanisms to provide subjects with their data in portable and reusable form so that it can be transmitted to third parties for reuse.
  • In the CCPA, the right to data portability is an extension of the right to access, and only confers on data subjects the right to apply for data collected within the past 12 months and have it delivered to them. The GDPR does not impose a time limit and allows data to be transferred from one data controller to another, but limits the right to personal data provided by the data subject themselves and processed by automated means on the basis of consent or contract.

Data Transfer Project, Data Transfer Project Overview and Fundamentals (2018)

  • The paper presents an overview of the goals, principles, architecture, and system components of the Data Transfer Project. The intent of the DTP is to increase the number of services offering data portability and provide users with the ability to transfer data directly in and out of participating providers through systems that are easy and intuitive to use, private and secure, reciprocal between services, and focused on user data. The project, which is supported by Microsoft, Google, Twitter and Facebook, is an open-source initiative that encourages the participation of other providers to reduce the infrastructure burden on providers and users.
  • In addition to benefits to innovation, competition, and user choice, the authors point to benefits to security, through allowing users to back up, organize, or archive their data, recover from account hijacking, and retrieve their data from deprecated services.
  • The DTP’s remit was to test concepts and feasibility for transferring specific types of user data between online services, using a system of adapters that convert proprietary formats into canonical formats that can be moved between providers while each provider retains control over the security of its own service. While not resolving all formatting or support issues, this approach would allow substantial data portability and encourage ecosystem sustainability.
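
To make the adapter architecture concrete, the sketch below illustrates the general pattern the paper describes: an export adapter maps a provider-specific record into a shared canonical model, and an import adapter maps that model into another provider's format. This is not the DTP codebase; all class names, fields, and providers are hypothetical, and the example is in Python purely for brevity.

```python
# Illustrative sketch of the adapter pattern described in the DTP paper:
# provider-specific "export" adapters map proprietary records into a shared
# canonical model, and "import" adapters map that model into another
# provider's format. All names and fields here are hypothetical.
from dataclasses import dataclass


@dataclass
class CanonicalPhoto:
    """Minimal canonical data model shared by every adapter (hypothetical)."""
    title: str
    url: str
    album: str


class AcmePhotosExporter:
    """Export adapter: proprietary 'Acme' records -> canonical model."""

    def export(self, acme_record: dict) -> CanonicalPhoto:
        return CanonicalPhoto(
            title=acme_record["photo_name"],
            url=acme_record["download_uri"],
            album=acme_record.get("collection", "default"),
        )


class ZenithPhotosImporter:
    """Import adapter: canonical model -> proprietary 'Zenith' API payload."""

    def build_payload(self, photo: CanonicalPhoto) -> dict:
        return {"caption": photo.title, "source_url": photo.url,
                "album_name": photo.album}


# A transfer job only ever touches the canonical model, so each provider keeps
# control of authentication and security on its own side of the exchange.
exporter, importer = AcmePhotosExporter(), ZenithPhotosImporter()
canonical = exporter.export({"photo_name": "Beach", "download_uri": "https://example.com/1.jpg"})
print(importer.build_payload(canonical))
```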

Deloitte, How to Flourish in an Uncertain Future: Open Banking (2017)

  • This report addresses the innovative and disruptive potential of open banking, in which data is shared between members of the banking ecosystem at the authorization of the customer, with the potential to increase competition and facilitate new products and services. In the resulting marketplace model, customers could use a single banking interface to access products from multiple players, from established banks to newcomers and fintechs.
  • The report’s authors identify significant threats to current banking models. Banks that failed to embrace open banking could be relegated to a secondary role as an infrastructure provider, while third parties — tech companies, fintech, and price comparison websites — take over the customer relationship.
  • The report identifies four overlapping operating models banks could adopt within an open banking model: full service providers, delivering proprietary products through their own interface with little or no third-party integration; utilities, which provide other players with infrastructure without customer-facing services; suppliers, which offer proprietary products through third-party interfaces; and interfaces, which provide distribution services through a marketplace interface. To retain market share, incumbents are likely to need to adopt a combination of these roles, offering their own products and services and those of third parties through their own and others’ interfaces.

Digital Competition Expert Panel, Unlocking Digital Competition (2019)

  • This report captures the findings of the UK Digital Competition Expert Panel, which was tasked in 2018 with considering opportunities and challenges the digital economy might pose for competition and competition policy and to recommend any necessary changes. The panel focused on the impact of big players within the sector, appropriate responses to mergers or anticompetitive practices, and the impact on consumers.
  • The panel found that the digital economy is creating many benefits, but that digital markets are subject to tipping, in which emerging winners can scoop much of the market. This concentration can give rise to substantial costs, especially to consumers, and cannot be solved by competition alone. However, government policy and regulatory solutions have limitations, including the slowness of policy change, uneven enforcement and profound informational asymmetries between companies and government.
  • The panel proposed the creation of a digital markets unit that would be tasked with developing a code of competitive conduct, enabling greater personal data mobility and systems designed with open standards, and advancing access to non-personal data to reduce barriers to market entry.
  • The panel’s model of data mobility goes beyond data portability, which involves consumers being able to request and transfer their own data from one provider to another. Instead, the panel recommended empowering consumers to instigate transfers of data between a business and a third party in order to access price information, compare goods and services, or access tailored advice and recommendations. They point to open banking as an example of how this could function in practice.
  • It also proposed updating merger policy to make it more forward-looking to better protect consumers and innovation and preserve the competitiveness of the market. It recommended the creation of antitrust policy that would enable the implementation of interim measures to limit damage to competition while antitrust cases are in process.

Egan, Erin, Charting a Way Forward: Data Portability and Privacy (2019)

  • This white paper by Facebook’s VP and Chief Privacy Officer, Policy, represents an attempt to advance the conversation about the relationship between data portability, privacy, and data protection. The author sets out five key questions about data portability: what is it, whose and what data should be portable, how privacy should be protected in the context of portability, and where responsibility for data misuse or improper protection should lie.
  • The paper finds that definitions of data portability still remain imprecise, particularly with regard to the distinction between data portability and data transfer. In the interest of feasibility and a reasonable operational burden on providers, it proposes time limits on providers’ obligations to make observed data portable.
  • The paper concludes that there are strong arguments both for and against allowing users to port their social graph — the map of connections between that user and other users of the service — but that the key determinant should be a capacity to ensure the privacy of all users involved. Best-practice data portability protocols that would resolve current differences of approach as to what, how and by whom information should be made available would help promote broader portability, as would resolution of liability for misuse or data exposure.

Engels, Barbara, Data portability among online platforms (2016)

  • The article examines the effects on competition and innovation of data portability among online platforms such as search engines, online marketplaces, and social media, and how relations between users, data, and platform services change in an environment of data portability.
  • The paper finds that the benefits to competition and innovation of portability are greatest in two kinds of environments: first, where platforms offer complementary products and can realize synergistic benefits by sharing data; and secondly, where platforms offer substitute or rival products but the risk of anti-competitive behaviour is high, as for search engines.
  • It identifies privacy and security issues raised by data portability. Portability could, for example, allow an identity fraudster to misuse personal data across multiple platforms, compounding the harm they cause.
  • It also suggests that standards for data interoperability could act to reduce innovation in data technology, encouraging data controllers to continue to use outdated technologies in order to comply with inflexible, government-mandated standards.

Graef, Inge, Martin Husovec and Nadezhda Purtova, Data Portability and Data Control: Lessons for an Emerging Concept in EU Law (2018)

  • This paper situates the data portability right conferred by the GDPR within rights-based data protection law. The authors argue that the right to data portability should be seen as a new regulatory tool aimed at stimulating competition and innovation in data-driven markets.
  • The authors note the potential for conflicts between the right to data portability and the intellectual property rights of data holders, suggesting that the framers underestimated the potential impact of such conflicts on copyright, trade secrets and sui generis database law.
  • Given that the right to data portability is being replicated within consumer protection law and the regulation of non-personal data, the authors argue framers of these laws should consider the potential for conflict and the impact of such conflict on incentives to innovate.

Mohsen, Mona Omar, and Hassan A. Aziz, The Blue Button Project: Engaging Patients in Healthcare by a Click of a Button (2015)

  • This paper provides a literature review on the Blue Button initiative, an early data portability project which allows Americans to access, view or download their health records in a variety of formats.
  • Originally launched through the Department of Veterans Affairs in 2010, the Blue Button initiative had expanded to more than 500 organizations by 2014, when the Department of Health and Human Services launched the Blue Button Connector to facilitate both patient access and development of new tools.
  • The Blue Button has enabled the development of tools such as the Harvard-developed Growth-Tastic app, which allows parents to check their child’s growth by submitting their downloaded pediatric health data. Pharmacies across the US have also adopted the Blue Button to provide patients with access to their prescription history.

More than Data and Mission: Smart, Got Data? The Value of Energy Data to Customers (2016)

  • This report outlines the public value of the Green Button, a data protocol that provides customers with private and secure access to their energy use data collected by smart meters.
  • The authors outline how the use of the Green Button can help states meet their energy and climate goals by enabling them to structure renewables and other distributed energy resources (DER) such as energy efficiency, demand response, and solar photovoltaics. Access to granular, near-real-time data can encourage innovation among DER providers, facilitating the development of applications like “virtual energy audits” that identify efficiency opportunities, allowing customers to reduce costs through time-of-use pricing (a simplified calculation is sketched after this list), and enabling the optimization of photovoltaic systems to meet peak demand.
  • Energy efficiency receives the greatest boost from initiatives like the Green Button, with studies showing energy savings of up to 18 percent when customers have access to their meter data. In addition to improving energy conservation, access to meter data could improve the efficiency of appliances by allowing devices to trigger sleep modes in response to data on usage or price. However, at the time of writing, problems with data portability and interoperability were preventing these benefits from being realized, at a cost of tens of millions of dollars.
  • The authors recommend that commissions require utilities to make usage data available to customers or authorized third parties in standardized formats as part of basic utility service, and tariff data to developers for use in smart appliances.
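
As flagged in the list above, here is a simplified sketch of the kind of calculation granular meter data enables: comparing a flat tariff with a two-tier time-of-use tariff using hourly interval readings of the sort a Green Button download provides. All rates, hours, and usage figures are illustrative assumptions, not values from the report.

```python
# Hypothetical comparison of a flat tariff with a two-tier time-of-use (TOU)
# tariff, using hourly interval readings like those in a Green Button file.
# Rates, peak hours, and usage figures are illustrative only.
FLAT_RATE = 0.20            # $/kWh, illustrative
TOU_PEAK_RATE = 0.30        # $/kWh, 4pm-9pm
TOU_OFF_PEAK_RATE = 0.12    # $/kWh, all other hours
PEAK_HOURS = range(16, 21)

# (hour_of_day, kWh) pairs for one day, e.g. parsed from an interval-data export
readings = ([(h, 0.4) for h in range(16)]
            + [(h, 1.5) for h in range(16, 21)]
            + [(h, 0.6) for h in range(21, 24)])

flat_cost = sum(kwh for _, kwh in readings) * FLAT_RATE
tou_cost = sum(
    kwh * (TOU_PEAK_RATE if hour in PEAK_HOURS else TOU_OFF_PEAK_RATE)
    for hour, kwh in readings
)

print(f"Flat tariff:        ${flat_cost:.2f}")
print(f"Time-of-use tariff: ${tou_cost:.2f}")
# Shifting part of the 4pm-9pm load to off-peak hours would lower the TOU bill,
# which is exactly the kind of decision granular meter data makes visible.
```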

MyData, Understanding Data Operators (2020)

  • MyData is a global movement of data users, activists and developers with a common goal to empower individuals with their personal data to enable them and their communities to develop knowledge, make informed decisions and interact more consciously and efficiently.
  • This introductory paper presents the state of knowledge about data operators, trusted data intermediaries that provide infrastructure for human-centric personal data management and governance, including data sharing and transfer. The operator model allows data controllers to outsource issues of legal compliance with data portability requirements, while offering individual users a transparent and intuitive way to manage the data transfer process.
  • The paper examines use cases from 48 “proto-operators” from 15 countries who fulfil some of the functions of an operator, albeit at an early level of maturity. The paper finds that operators offer management of identity authentication, data transaction permissions, connections between services, value exchange, data model management, personal data transfer and storage, governance support, and logging and accountability. At the heart of these functions is the need for minimum standards of data interoperability.
  • The paper reviews governance frameworks from the general (legislative) to the specific (operators), and explores proto-operator business models. In keeping with an emerging field, business models are currently unclear and potentially unsustainable, and one of a number of areas, including interoperability requirements and governance frameworks, that must still be developed.

National Science and Technology Council, Smart Disclosure and Consumer Decision Making: Report of the Task Force on Smart Disclosure (2013)

  • This report summarizes the work and findings of the 2011–2013 Task Force on Smart Disclosure: Information and Efficiency, an interagency body tasked with advancing smart disclosure, through which data is made more available and accessible to both consumers and innovators.
  • The Task Force recognized the capacity of smart disclosure to inform consumer choices, empower them through access to useful personal data, enable the creation of new tools, products and services, and promote efficiency and growth. It reviewed federal efforts to promote smart disclosure within sectors and in data types that crosscut sectors, such as location data, consumer feedback, enforcement and compliance data and unique identifiers. It also surveyed specific public-private partnerships on access to data, such as the Blue and Green Button and MyData initiatives in health, energy and education respectively.
  • The Task Force reviewed steps taken by the Federal Government to implement smart disclosure, including adoption of machine readable formats and standards for metadata, use of APIs, and making data available in an unstructured format rather than not releasing it at all. It also reviewed “choice engines” making use of the data to provide services to consumers across a range of sectors.
  • The Task Force recommended that smart disclosure should be a core component of efforts to institutionalize and operationalize open data practices, with agencies proactively identifying, tagging, and planning the release of candidate data. It also recommended that this be supported by a government-wide community of practice.

Nicholas, Gabriel, Taking It With You: Platform Barriers to Entry and the Limits of Data Portability (2020)

  • This paper considers whether, as is often claimed, data portability offers a genuine solution to the lack of competition within the tech sector.
  • It concludes that current regulatory approaches to data portability, which focus on reducing switching costs through technical solutions such as one-off exports and API interoperability, are not sufficient to generate increased competition. This is because they fail to address other barriers to entry, including network effects, unique data access, and economies of scale.
  • The author proposes an alternative approach, which he terms collective portability, which would allow groups of users to coordinate the transfer of their data to a new platform. This model raises questions about how such collectives would make decisions regarding portability, but would enable new entrants to successfully target specific user groups and scale rapidly without having to reach users one by one.

OECD, Enhancing Access to and Sharing of Data: Reconciling Risks and Benefits for Data Re-use across Societies (2019)

  • This background paper to a 2017 expert workshop on risks and benefits of data reuse considers data portability as one strategy within a data openness continuum that also includes open data, market-based B2B contractual agreements, and restricted data-sharing agreements within research and data for social good applications.
  • It considers four rationales offered for data portability. These include empowering individuals towards the “informational self-determination” aspired to by GDPR, increased competition within digital and other markets through reductions in information asymmetries between individuals and providers, switching costs, and barriers to market entry; and facilitating increased data flows.
  • The report highlights the need for both syntactic and semantic interoperability standards to ensure data can be reused across systems, both of which may be fostered by increased rights to data portability. Data intermediaries have an important role to play in the development of these standards, through initiatives like the Data Transfer Project, a collaboration which brought together Facebook, Google, Microsoft, and Twitter to create an open-source data portability platform.

Personal Data Protection Commission Singapore, Response to Feedback on the Public Consultation on Proposed Data Portability and Data Innovation Provisions (2020)

  • The report summarizes the findings of the 2019 PDPC public consultation on proposals to introduce provisions on data portability and data innovation in Singapore’s Personal Data Protection Act.
  • The proposed provision would oblige organizations to transmit an individual’s data to another organization in a commonly used machine-readable format, upon the individual’s request. The obligation does not extend to data intermediaries or organizations that do not have a presence in Singapore, although data holders may choose to honor those requests.
  • The obligation would apply to electronic data that is either provided by the individual or generated by the individual’s activities in using the organization’s service or product, but not derived data created by the processing of other data by the data holder. Respondents were concerned that including derived data could harm organizations’ competitiveness.
  • Respondents were concerned about how to honour data portability requests where the data of third parties was involved, as in the case of a joint account holder, for example. The PDPC opted for a “balanced, reasonable, and pragmatic approach,” allowing data involving third parties to be ported where it was under the requesting individual’s control, was to be used for domestic and personal purposes, and related only to the organization’s product or service.

Quinn, Paul, Is the GDPR and Its Right to Data Portability a Major Enabler of Citizen Science? (2018)

  • This article explores the potential of data portability to advance citizen science by enabling participants to port their personal data from one research project to another. Citizen science — the collection and contribution of large amounts of data by private individuals for scientific research — has grown rapidly in response to the development of new digital means to capture, store, organize, analyze and share data.
  • The GDPR right to data portability aids citizen science by requiring transfer of data in machine-readable format and allowing data subjects to request its transfer to another data controller. This requirement of interoperability does not amount to compatibility, however, and data thus transferred would probably still require cleaning to be usable, acting as a disincentive to reuse.
  • The GDPR’s limitation of transferability to personal data provided by the data subject excludes some forms of data that might possess significant scientific potential, such as secondary personal data derived from further processing or analysis.
  • The GDPR right to data portability also potentially limits citizen science by restricting the grounds for processing data to which the right applies to data obtained through a subject’s express consent or through the performance of a contract. This limitation excludes other forms of data processing described in the GDPR, such as data processing for preventive or occupational medicine, scientific research, or archiving for reasons of public or scientific interest. It is also not clear whether the GDPR compels data controllers to transfer data outside the European Union.

Wong, Janis and Tristan Henderson, How Portable is Portable? Exercising the GDPR’s Right to Data Portability (2018)

  • This paper presents the results of 230 real-world requests for data portability in order to assess how — and how well — the GDPR right to data portability is being implemented. The authors were interested in establishing the kinds of file formats that were returned in response to requests, and to identify practical difficulties encountered in making and interpreting requests, over a three month period beginning on the day the GDPR came into effect.
  • The findings revealed continuing problems around ensuring portability for both data controllers and data subjects. Of the 230 requests, only 163 were successfully completed.
  • Data controllers frequently had difficulty understanding the requirements of the GDPR, providing data in incomplete or inappropriate formats: only 40 percent of the files supplied were in a fully compliant format. Additionally, some data controllers confused the right to data portability with other rights conferred by the GDPR, such as the rights to access or erasure.

Business-to-Business Data Sharing: An Economic and Legal Analysis


Paper by Bertin Martens et al: “The European Commission announced in its Data Strategy (2020) its intentions to propose an enabling legislative framework for the governance of common European data spaces, to review and operationalize data portability, to prioritize standardization activities and foster data interoperability and to clarify usage rights for co-generated IoT data. This Strategy starts from the premise that there is not enough data sharing and that much data remain locked up and are not available for innovative re-use. The Commission will also consider the adoption of a New Competition Tool as well as the adoption of ex ante regulation for large online gate-keeping platforms as part of the announced Digital Services Act Package. In this context, the goal of this report is to examine the obstacles to Business-to-Business (B2B) data sharing: what keeps businesses from sharing or trading more of their data with other businesses and what can be done about it? For this purpose, this report uses the well-known tools of legal and economic thinking about market failures. It starts from the economic characteristics of data and explores to what extent private B2B data markets result in a socially optimal degree of data sharing, or whether there are market failures in data markets that might justify public policy intervention.

It examines the conditions under which monopolistic data market failures may occur. It contrasts these welfare losses with the welfare gains from economies of scope in data aggregation in large pools. It also discusses other potential sources of B2B data market failures due to negative externalities, risks and transaction costs and asymmetric information situations. In a next step, the paper explores solutions to overcome these market failures. Private third-party data intermediaries may be in a position to overcome market failures due to high transactions costs and risks. They can aggregate data in large pools to harvest the benefits of economies of scale and scope in data. Where third-party intervention fails, regulators can step in, with ex-post competition instruments and with ex-ante regulation. The latter includes data portability rights for personal data and mandatory data access rights….(More)”.

How Integrated Data Can Support COVID-19 Crisis and Recovery


Blog by Actionable Intelligence for Social Policy (AISP): “…State and local leaders are called upon to respond to the immediate harms of COVID-19. Yet with a looming recession threatening to undo gains among marginalized groups — particularly the Black middle class — tools to understand and disrupt long-term impacts on economic mobility and well-being are also urgently needed.

Administrative data[3] — the information collected during the course of routine service delivery, program administration, and business operations — provide an essential tool to help policymakers, community leaders, and researchers understand short- and long-term impacts of the pandemic. Several jurisdictions now have the capacity to link administrative data across programs in order to better understand how individuals interact with multiple systems, study longitudinal outcomes, and identify vulnerable subpopulations. As the COVID-19 crisis reveals weaknesses in the U.S. social safety net, states and localities with integrated administrative data infrastructure can use their capacity to identify populations and needs otherwise overlooked. Youth who “age out” of the child welfare system or individuals experiencing chronic homelessness often remain invisible when using traditional methods, aggregate data, or administrative records from a single source.

This blogpost demonstrates how nimble state and local data integration efforts have leveraged their capacity to quickly respond to and understand the impacts of COVID-19, while also reflecting on what can be done to mitigate harm and shift thinking about social welfare and the safety net….(More)”.
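
To illustrate the kind of cross-program linkage the excerpt describes, the sketch below joins de-identified records from two hypothetical program datasets on a shared person identifier to surface individuals who appear in both systems. Every field name and value is invented; real integrated data systems add hashing, consent, and governance layers not shown here.

```python
# Hypothetical illustration of cross-program record linkage: joining
# de-identified records from two service systems on a shared person ID to
# find people who appear in both. All field names and values are invented.
import pandas as pd

snap = pd.DataFrame({
    "person_id": ["a1", "b2", "c3", "d4"],
    "snap_enrolled_month": ["2020-03", "2020-04", "2020-03", "2020-05"],
})
homeless_services = pd.DataFrame({
    "person_id": ["b2", "d4", "e5"],
    "shelter_entry_date": ["2020-04-12", "2020-05-02", "2020-03-20"],
})

# An inner join keeps only individuals known to both systems -- the kind of
# cross-system view a single agency's records cannot provide on their own.
linked = snap.merge(homeless_services, on="person_id", how="inner")
print(linked)
```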

Applying new models of data stewardship to health and care data


Report by the Open Data Institute: “The outbreak of the coronavirus (Covid-19) has amplified and accelerated the need for an effective technology ecosystem that benefits everyone’s health.

The pandemic has been accompanied by a marked increase in the use of digital technology, including introduction of remote consultation in general practice, new data flows to support the distribution of food and other essentials, and applications to support digital contact tracing.

This report explores models of ‘data stewardship’ (the collection, maintenance and sharing of data) required to enable better evaluation. It argues everybody involved in technology has a shared responsibility to enable evaluation, whether that means innovators sharing data for evaluation purposes, or healthcare providers being clearer, from the outset, about what data is needed to support effective evaluation.

This report re-envisages the role of evaluators as data stewards, who could use their positions as intermediaries to encourage stakeholders to share data, and help increase access to data for public benefit…(More)”.

Internet Searches for Acute Anxiety During the Early Stages of the COVID-19 Pandemic


Paper by John W. Ayers et al: “There is widespread concern that the coronavirus disease 2019 (COVID-19) pandemic may harm population mental health, chiefly owing to anxiety about the disease and its societal fallout. But traditional population mental health surveillance (eg, telephone surveys, medical records) is time consuming, expensive, and may miss persons who do not participate or seek care. To evaluate the association of COVID-19 with anxiety on a population basis, we examined internet searches indicative of acute anxiety during the early stages of the COVID-19 pandemic.

Methods

The analysis relied on nonidentifiable, aggregate, public data and was exempted by the University of California San Diego Human Research Protections Program. Acute anxiety, including what are colloquially called anxiety attacks or panic attacks, was monitored because of its higher prevalence relative to other mental health problems. It can lead to other mental health problems (including depression), it is triggered by outside stressors, and it is socially contagious. Using Google Trends (https://trends.google.com/trends), we monitored the daily fraction of all internet searches (thereby adjusting the results for any change in total queries) that included the terms anxiety or panic in combination with attack (including panic attack, signs of anxiety attack, and anxiety attack symptoms) that originated from the US from January 1, 2004, through May 4, 2020. Raw search counts were inferred using Comscore estimates (comscore.com).

We compared search volumes after President Trump declared a national COVID-19 emergency on March 13, 2020, with expected search volumes if COVID-19 had not occurred, thereby taking into account the historical trend and periodicity in the data. Expected volumes were computed using an autoregressive integrated moving average model, based on historical trends from January 1, 2004 to March 12, 2020, to predict counterfactual trends for March 13, 2020 to May 9, 2020. The expected volumes with prediction intervals (PIs) and ratio of observed and expected volumes with bootstrap CIs were computed using R statistical software (version 3.5.3, R Foundation). The results were similar if we varied our interruption date plus or minus 1 week….(More)”.
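
The interrupted time-series design described above can be sketched in a few lines. The authors worked in R; the snippet below is an illustrative Python reconstruction with synthetic data, and the ARIMA order is an arbitrary assumption rather than the specification reported in the paper.

```python
# Illustrative reconstruction of the design: fit a model on pre-emergency
# search volumes, forecast the counterfactual, and compare observed volumes
# with expected ones. Data are synthetic; the ARIMA order is an assumption.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
dates = pd.date_range("2019-01-01", "2020-05-09", freq="D")
searches = pd.Series(100 + rng.normal(0, 5, len(dates)), index=dates)
searches.loc["2020-03-13":] += 15           # synthetic post-emergency surge

pre = searches.loc[:"2020-03-12"]           # history before the emergency
post = searches.loc["2020-03-13":]          # observed volumes afterwards

model = ARIMA(pre, order=(1, 1, 1)).fit()
expected = model.forecast(steps=len(post))  # counterfactual "no COVID-19" volumes

ratio = post.sum() / expected.sum()
print(f"Observed searches were {ratio:.2f}x the expected counterfactual volume")
```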

Data Collaboratives


Andrew Young and Stefaan Verhulst at The Palgrave Encyclopedia of Interest Groups, Lobbying and Public Affairs: “The rise of the open data movement means that a growing amount of data is today being broken out of information silos and released or shared with third parties. Yet despite the growing accessibility of data, there continues to exist a mismatch between the supply of, and demand for, data (Verhulst & Young, 2018). This is because supply and demand are often widely dispersed – spread across government, the private sector, and civil society – meaning that those who need data do not know where to find it, and those who release data do not know how to effectively target it at those who can most effectively use it (Susha, Janssen, & Verhulst, 2017). While much commentary on the data era’s shortcomings focuses on issues such as data glut (Buchanan & Kock, 2001), misuse of data (Solove & Citron, 2017), or algorithmic bias (Hajian, Bonchi, & Castillo, 2016), this mismatch between supply and demand is at least equally problematic, resulting in tremendous inefficiencies and lost potential.

Data collaboratives, when designed responsibly (Alemanno, 2018), can help to address such shortcomings. They draw together otherwise siloed data – such as, for example, telecom data, satellite imagery, social media data, financial data – and a dispersed range of expertise. In the process, they help match supply and demand, and ensure that the appropriate institutions and individuals are using and analyzing data in ways that maximize the possibility of new, innovative social solutions (de Montjoye, Gambs, Blondel, et al., 2018)….(More)”.

Mapping socioeconomic indicators using social media advertising data


Paper by Ingmar Weber et al: “The United Nations Sustainable Development Goals (SDGs) are a global consensus on the world’s most pressing challenges. They come with a set of 232 indicators against which countries should regularly monitor their progress, ensuring that everyone is represented in up-to-date data that can be used to make decisions to improve people’s lives. However, existing data sources to measure progress on the SDGs are often outdated or lacking appropriate disaggregation. We evaluate the value that anonymous, publicly accessible advertising data from Facebook can provide in mapping socio-economic development in two low and middle income countries, the Philippines and India. Concretely, we show that audience estimates of how many Facebook users in a given location use particular device types, such as Android vs. iOS devices, or particular connection types, such as 2G vs. 4G, provide strong signals for modeling regional variation in the Wealth Index (WI), derived from the Demographic and Health Survey (DHS). We further show that, surprisingly, the predictive power of these digital connectivity features is roughly equal at both the high and low ends of the WI spectrum. Finally we show how such data can be used to create gender-disaggregated predictions, but that these predictions only appear plausible in contexts with gender equal Facebook usage, such as the Philippines, but not in contexts with large gender Facebook gaps, such as India….(More)”.
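
As a hedged sketch of the modeling idea in the paper, the snippet below regresses a regional wealth index on Facebook-style audience features (share of users on iOS, share reachable over 4G). The data are synthetic and the feature set is a simplification; the study itself combines DHS survey clusters with audience estimates from Facebook's advertising platform.

```python
# Synthetic illustration: predict a regional wealth index from advertising
# audience features. Coefficients, noise, and feature choices are invented.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_regions = 200
ios_share = rng.uniform(0.0, 0.5, n_regions)   # share of Facebook users on iOS
share_4g = rng.uniform(0.1, 0.9, n_regions)    # share reachable over 4G
wealth_index = 2.0 * ios_share + 1.2 * share_4g + rng.normal(0, 0.2, n_regions)

X = np.column_stack([ios_share, share_4g])
scores = cross_val_score(Ridge(alpha=1.0), X, wealth_index, cv=5, scoring="r2")
print(f"Cross-validated R^2: {scores.mean():.2f}")
```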

The EU is launching a market for personal data. Here’s what that means for privacy.


Anna Artyushina at MIT Tech Review: “The European Union has long been a trendsetter in privacy regulation. Its General Data Protection Regulation (GDPR) and stringent antitrust laws have inspired new legislation around the world. For decades, the EU has codified protections on personal data and fought against what it viewed as commercial exploitation of private information, proudly positioning its regulations in contrast to the light-touch privacy policies in the United States.

The new European data governance strategy (pdf) takes a fundamentally different approach. With it, the EU will become an active player in facilitating the use and monetization of its citizens’ personal data. Unveiled by the European Commission in February 2020, the strategy outlines policy measures and investments to be rolled out in the next five years.

This new strategy represents a radical shift in the EU’s focus, from protecting individual privacy to promoting data sharing as a civic duty. Specifically, it will create a pan-European market for personal data through a mechanism called a data trust. A data trust is a steward that manages people’s data on their behalf and has fiduciary duties toward its clients.

The EU’s new plan considers personal data to be a key asset for Europe. However, this approach raises some questions. First, the EU’s intent to profit from the personal data it collects puts European governments in a weak position to regulate the industry. Second, the improper use of data trusts can actually deprive citizens of their rights to their own data.

The Trusts Project, the first initiative put forth by the new EU policies, will be implemented by 2022. With a €7 million budget, it will set up a pan-European pool of personal and nonpersonal information that should become a one-stop shop for businesses and governments looking to access citizens’ information.

Global technology companies will not be allowed to store or move Europeans’ data. Instead, they will be required to access it via the trusts. Citizens will collect “data dividends,” which haven’t been clearly defined but could include monetary or nonmonetary payments from companies that use their personal data. With the EU’s roughly 500 million citizens poised to become data sources, the trusts will create the world’s largest data market.

For citizens, this means the data created by them and about them will be held in public servers and managed by data trusts. The European Commission envisions the trusts as a way to help European businesses and governments reuse and extract value from the massive amounts of data produced across the region, and to help European citizens benefit from their information. The project documentation, however, does not specify how individuals will be compensated.

Data trusts were first proposed by internet pioneer Sir Tim Berners-Lee in 2018, and the concept has drawn considerable interest since then. Just like the trusts used to manage one’s property, data trusts may serve different purposes: they can be for-profit enterprises, or they can be set up for data storage and protection, or to work for a charitable cause.

IBM and Mastercard have built a data trust to manage the financial information of their European clients in Ireland; the UK and Canada have employed data trusts to stimulate the growth of the AI industries there; and recently, India announced plans to establish its own public data trust to spur the growth of technology companies.

The new EU project is modeled on Austria’s digital system, which keeps track of information produced by and about its citizens by assigning them unique identifiers and storing the data in public repositories.

Unfortunately, data trusts do not guarantee more transparency. The trust is governed by a charter created by the trust’s settlor, and its rules can be made to prioritize someone’s interests. The trust is run by a board of directors, which means a party that has more seats gains significant control.

The Trusts Project is bound to face some governance issues of its own. Public and private actors often do not see eye to eye when it comes to running critical infrastructure or managing valuable assets. Technology companies tend to favor policies that create opportunity for their own products and services. Caught in a conflict of interest, Europe may overlook the question of privacy….(More)”.

From Desert Battlefields To Coral Reefs, Private Satellites Revolutionize The View


NPR Story: “As the U.S. military and its allies attacked the last Islamic State holdouts last year, it wasn’t clear how many civilians were still in the besieged desert town of Baghouz, Syria.

So Human Rights Watch asked a private satellite company, Planet, for its regular daily photos and also made a special request for video.

“That live video actually was instrumental in convincing us that there were thousands of civilians trapped in this pocket,” said Josh Lyons of Human Rights Watch. “Therefore, the coalition forces absolutely had an obligation to stop and to avoid bombardment of that pocket at that time.”

Which they did until the civilians fled.

Lyons, who’s based in Geneva, has a job title you wouldn’t expect at a human rights group: director of geospatial analysis. He says satellite imagery is increasingly a crucial component of human rights investigations, bolstering traditional eyewitness accounts, especially in areas where it’s too dangerous to send researchers.

“Then we have this magical sort of fusion of data between open-source, eyewitness testimony and data from space. And that becomes essentially a new gold standard for investigations,” he said.

‘A string of pearls’

Satellite photos used to be restricted to the U.S. government and a handful of other nations. Now such imagery is available to everyone, creating a new world of possibilities for human rights groups, environmentalists and researchers who monitor nuclear programs.

They get those images from a handful of private, commercial satellite companies, like Planet and Maxar….(More)”.

The Risks and Rewards of Data Sharing for Smart Cities


Study by Massimo Russo and Tian Feng: “…To develop innovative solutions to problems old and new, many cities are aggregating and sharing more and more data, establishing platforms to facilitate private-sector participation, and holding “hackathons” and other digital events to invite public help. But digital solutions carry their own complications. Technology-led innovation often depends on access to data from a wide variety of sources to derive correlations and insights. Questions regarding data ownership, amalgamation, compensation, and privacy can be flashing red lights.

Smart cities are on the leading edge of the trend toward greater data sharing. They are also complex generators and users of data. Companies, industries, governments, and others are following in their wake, sharing more data in order to foster innovation and address such macro-level challenges as public health and welfare and climate change. Smart cities thus provide a constructive laboratory for studying the challenges and benefits of data sharing.

WHY CITIES SHARE DATA

BCG examined some 75 smart-city applications that use data from a variety of sources, including connected equipment (that is, the Internet of Things, or IoT). Nearly half the applications require data sourced from multiple industries or platforms. (See Exhibit 1.) For example, a parking reservation app assembles garage occupancy data, historical traffic data, current weather data, and information on upcoming public events to determine real-time parking costs. We also looked at a broader set of potential future applications and found that an additional 40% will likewise require cross-industry data aggregation.

Because today’s smart solutions are often sponsored by individual municipal departments, many IoT-enabled applications rely on limited, siloed data. But given the potential value of applications that require aggregation across sources, it’s no surprise that many cities are pursuing partnerships with tech providers to develop platforms and other initiatives that integrate data from multiple sources….(More)”.
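
The parking example quoted above is a useful place to see what cross-industry aggregation means in practice. The function below combines three hypothetical signals (garage occupancy, weather, and nearby public events) into one dynamic price; every weight and threshold is invented for illustration, not taken from the study.

```python
# Invented illustration of cross-source aggregation for a parking app:
# a dynamic hourly price derived from garage occupancy, weather, and nearby
# public events. All inputs, weights, and thresholds are hypothetical.
def parking_price(base_rate: float, occupancy: float,
                  raining: bool, nearby_events: int) -> float:
    """Return an hourly price built from signals of three different origins."""
    price = base_rate
    price *= 1.0 + max(0.0, occupancy - 0.7)   # surcharge once garages fill past 70%
    if raining:
        price *= 1.10                          # demand tends to rise in bad weather
    price *= 1.0 + 0.15 * nearby_events        # each nearby event adds demand
    return round(price, 2)


# Occupancy from the garage operator, weather from a forecast service,
# events from the city's open-data calendar: three sources, one price.
print(parking_price(base_rate=2.50, occupancy=0.85, raining=True, nearby_events=2))
```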