The Potentially Adverse Impact of Twitter 2.0 on Scientific and Research Communication


Article by Julia Cohen: “In just over a month after the change in Twitter leadership, there have been significant changes to the social media platform, in its new “Twitter 2.0” version. For researchers who use Twitter as a primary source of data, including many of the computer scientists at USC’s Information Sciences Institute (ISI), the effects could be debilitating…

Over the years, Twitter has been extremely friendly to researchers, providing and maintaining a robust API (application programming interface) specifically for academic research. The Twitter API for Academic Research allows researchers with specific objectives who are affiliated with an academic institution to gather historical and real-time data sets of tweets, and related metadata, at no cost. Currently, the Twitter API for Academic Research continues to be functional and maintained in Twitter 2.0.
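As a rough illustration of the workflow the article describes, the sketch below assembles a request to the v2 full-archive search endpoint exposed to the academic track. This is a minimal sketch, not code from the article: the bearer token, query, and field list are placeholder assumptions.

```python
# Sketch: building a query against the Twitter API for Academic Research
# (v2 full-archive search). Endpoint and parameter names follow Twitter's
# v2 documentation; credentials and query values are placeholders.
import os

SEARCH_URL = "https://api.twitter.com/2/tweets/search/all"  # academic-track endpoint

def build_search_request(query, start_time, end_time, max_results=100):
    """Assemble URL, headers, and parameters for a full-archive tweet search."""
    headers = {"Authorization": f"Bearer {os.environ.get('TWITTER_BEARER_TOKEN', '')}"}
    params = {
        "query": query,
        "start_time": start_time,        # ISO 8601 timestamps
        "end_time": end_time,
        "max_results": max_results,      # academic track allows up to 500 per page
        "tweet.fields": "created_at,public_metrics,lang",
    }
    return SEARCH_URL, headers, params

url, headers, params = build_search_request(
    "(#election OR #vote) lang:en -is:retweet",
    "2022-11-01T00:00:00Z",
    "2022-11-08T00:00:00Z",
)
# resp = requests.get(url, headers=headers, params=params)
# (commented out: performing the call requires the `requests` library and
#  valid academic credentials)
```

Researchers typically page through results with the `next_token` the endpoint returns, accumulating historical tweets and metadata into a dataset.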

The data obtained from the API provides a means to observe public conversations and understand people’s opinions about societal issues. Luca Luceri, a Postdoctoral Research Associate at ISI, called Twitter “a primary platform to observe online discussion tied to political and social issues.” And Twitter touts its API for Academic Research as a way for “academic researchers to use data from the public conversation to study topics as diverse as the conversation on Twitter itself.”

However, if people continue deactivating their Twitter accounts, which appears to be the case, the makeup of the user base will change, with data sets and related studies proportionally affected. This is especially true if the user base evolves in a way that makes it more ideologically homogeneous and less diverse.

According to MIT Technology Review, in the first week after its transition, Twitter may have lost one million users, which translates to a 208% increase in lost accounts. And there’s also the concern that the site might not work as effectively, because of the substantial decrease in the size of the engineering teams. This includes concerns about the durability of the service researchers rely on for data, namely the Twitter API. Jason Baumgartner, founder of Pushshift, a social media data collection, analysis, and archiving platform, said that in several recent API requests his team saw a significant increase in error rates – in the 25-30% range – when they typically see rates near 1%. Though for now this is anecdotal, it leaves researchers wondering if they will be able to rely on Twitter data for future research.

One example of how the makeup of the less-regulated Twitter 2.0 user base could be significantly altered is if marginalized groups leave Twitter at a higher rate than the general user base, e.g. due to increased hate speech. Keith Burghardt, a Computer Scientist at ISI who studies hate speech online, said, “It’s not that an underregulated social media changes people’s opinions, but it just makes people much more vocal. So you will probably see a lot more content that is hateful.” In fact, a study by Montclair State University found that hate speech on Twitter skyrocketed in the week after the acquisition of Twitter….(More)”.

Learning to Share: Lessons on Data-Sharing from Beyond Social Media


Paper by CDT: “What role has social media played in society? Did it influence the rise of Trumpism in the U.S. and the passage of Brexit in the UK? What about the way authoritarians exercise power in India or China? Has social media undermined teenage mental health? What about its role in building social and community capital, promoting economic development, and so on?

To answer these and other important policy-related questions, researchers such as academics, journalists, and others need access to data from social media companies. However, this data is generally not available to researchers outside of social media companies and, where it is available, it is often insufficient, meaning that we are left with incomplete answers.

Governments on both sides of the Atlantic have passed or proposed legislation to address the problem by requiring social media companies to provide certain data to vetted researchers (Vogus, 2022a). Researchers themselves have thought a lot about the problem, including the specific types of data that can further public interest research, how researchers should be vetted, and the mechanisms companies can use to provide data (Vogus, 2022b).

For their part, social media companies have sanctioned some methods to share data with certain types of researchers through APIs (e.g., for researchers with university affiliations) and with certain limitations (such as limits on how much and what types of data are available). In general, these efforts have been insufficient. In part, this is due to legitimate concerns such as the need to protect user privacy or to avoid revealing company trade secrets. But, in some cases, the lack of sharing is due to other factors such as lack of resources or knowledge about how to share data effectively or resistance to independent scrutiny.

The problem is complex but not intractable. In this report, we look to other industries where companies share data with researchers through different mechanisms while also addressing concerns around privacy. In doing so, our analysis contributes to current public and corporate discussions about how to safely and effectively share social media data with researchers. We review experiences based on the governance of clinical trials, electricity smart meters, and environmental impact data…(More)”

A Massive LinkedIn Study Reveals Who Actually Helps You Get That Job


Article by Viviane Callier: “If you want a new job, don’t just rely on friends or family. According to one of the most influential theories in social science, you’re more likely to nab a new position through your “weak ties,” loose acquaintances with whom you have few mutual connections. Sociologist Mark Granovetter first laid out this idea in a 1973 paper that has garnered more than 65,000 citations. But the theory, dubbed “the strength of weak ties,” after the title of Granovetter’s study, lacked causal evidence for decades. Now a sweeping study that looked at more than 20 million people on the professional social networking site LinkedIn over a five-year period finally shows that forging weak ties does indeed help people get new jobs. And it reveals which types of connections are most important for job hunters…Along with job seekers, policy makers could also learn from the new paper. “One thing the study highlights is the degree to which algorithms are guiding fundamental, baseline, important outcomes, like employment and unemployment,” Aral says. The role that LinkedIn’s People You May Know function plays in gaining a new job demonstrates “the tremendous leverage that algorithms have on employment and probably other factors of the economy as well.” It also suggests that such algorithms could create bellwethers for economic changes: in the same way that the Federal Reserve looks at the Consumer Price Index to decide whether to hike interest rates, Aral suggests, networks such as LinkedIn might provide new data sources to help policy makers parse what is happening in the economy. “I think these digital platforms are going to be an important source of that,” he says…(More)”

A Prehistory of Social Media


Essay by Kevin Driscoll: “Over the past few years, I’ve asked dozens of college students to write down, in a sentence or two, where the internet came from. Year after year, they recount the same stories about the US government, Silicon Valley, the military, and the threat of nuclear war. A few students mention the Department of Defense’s ARPANET by name. Several get the chronology wrong, placing the World Wide Web before the internet or expressing confusion about the invention of email. Others mention “tech wizards” or “geniuses” from Silicon Valley firms and university labs. No fewer than four students have simply written, “Bill Gates.”

Despite the internet’s staggering scale and global reach, its folk histories are surprisingly narrow. This mismatch reflects the uncertain definition of “the internet.” When nonexperts look for internet origin stories, they want to know about the internet as they know it, the internet they carry around in their pockets, the internet they turn to, day after day. Yet the internet of today is not a stable object with a single, coherent history. It is a dynamic socio-technical phenomenon that came into being during the 1990s, at the intersection of hundreds of regional, national, commercial, and cooperative networks—only one of which was previously known as “the internet.” In short, the best-known histories describe an internet that hasn’t existed since 1994. So why do my students continue to repeat stories from 25 years ago? Why haven’t our histories kept up?

The standard account of internet history took shape in the early 1990s, as a mixture of commercial online services, university networks, and local community networks mutated into something bigger, more commercial, and more accessible to the general public. As hype began to build around the “information superhighway,” people wanted a backstory. In countless magazines, TV news reports, and how-to books, the origin of the internet was traced back to ARPANET, the computer network created by the Advanced Research Projects Agency during the Cold War. This founding mythology has become a resource for advancing arguments on issues related to censorship, national sovereignty, cybersecurity, privacy, net neutrality, copyright, and more. But with only this narrow history of the early internet to rely on, the arguments put forth are similarly impoverished…(More)”.

The Effectiveness of Digital Interventions on COVID-19 Attitudes and Beliefs


Paper by Susan Athey, Kristen Grabarz, Michael Luca & Nils C. Wernerfelt: “During the course of the COVID-19 pandemic, a common strategy for public health organizations around the world has been to launch interventions via advertising campaigns on social media. Despite this ubiquity, little has been known about their average effectiveness. We conduct a large-scale program evaluation of campaigns from 174 public health organizations on Facebook and Instagram that collectively reached 2.1 billion individuals and cost around $40 million. We report the results of 819 randomized experiments that measured the impact of these campaigns across standardized, survey-based outcomes. We find on average these campaigns are effective at influencing self-reported beliefs, shifting opinions close to 1% at baseline with a cost per influenced person of about $3.41. There is further evidence that campaigns are especially effective at influencing users’ knowledge of how to get vaccines. Our results represent, to the best of our knowledge, the largest set of online public health interventions analyzed to date…(More)”
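A back-of-envelope reading of the quoted figures can help make the scale concrete. This is illustrative only: the paper’s own estimates are model-based, and these simple divisions are not how the authors derive their numbers.

```python
# Back-of-envelope using the figures quoted in the summary above
# (illustrative only; not the paper's estimation method).
total_spend = 40_000_000        # ~$40 million across the campaigns
cost_per_influenced = 3.41      # reported cost per influenced person
reach = 2_100_000_000           # individuals collectively reached

implied_influenced = total_spend / cost_per_influenced   # ~11.7 million people
share_of_reach = implied_influenced / reach              # ~0.56% of those reached

print(f"~{implied_influenced / 1e6:.1f} million people influenced")
print(f"~{share_of_reach:.2%} of everyone reached")
```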

EU and US legislation seek to open up digital platform data


Article by Brandie Nonnecke and Camille Carlton: “Despite the potential societal benefits of granting independent researchers access to digital platform data, such as promotion of transparency and accountability, online platform companies have few legal obligations to do so and potentially stronger business incentives not to. Without legally binding mechanisms that provide greater clarity on what and how data can be shared with independent researchers in privacy-preserving ways, platforms are unlikely to share the breadth of data necessary for robust scientific inquiry and public oversight.

Here, we discuss two notable legislative efforts aimed at opening up platform data: the Digital Services Act (DSA), recently approved by the European Parliament, and the Platform Accountability and Transparency Act (PATA), recently proposed by several US senators. Although these legislative efforts could support researchers’ access to data, they could also fall short in many ways, highlighting the complex challenges in mandating data access for independent research and oversight.

As large platforms take on increasingly influential roles in our online social, economic, and political interactions, there is a growing demand for transparency and accountability through mandated data disclosures. Research insights from platform data can help, for example, to understand unintended harms of platform use on vulnerable populations, such as children and marginalized communities; identify coordinated foreign influence campaigns targeting elections; and support public health initiatives, such as documenting the spread of anti-vaccine mis- and disinformation…(More)”.

Law Enforcement and Technology: Using Social Media


Congressional Research Service Report: “As the ways in which individuals interact continue to evolve, social media has had an increasing role in facilitating communication and the sharing of content online—including moderated and unmoderated, user-generated content. Over 70% of U.S. adults are estimated to have used social media in 2021. Law enforcement has also turned to social media to help in its operations. Broadly, law enforcement relies on social media as a tool for information sharing as well as for gathering information to assist in investigations.


Social Media as a Communications Tool. Social media is one of many tools law enforcement can use to connect with the community. They may use it, for instance, to push out bulletins on wanted persons and establish tip lines to crowdsource potential investigative leads. It provides degrees of speed and reach unmatched by many other forms of communication law enforcement can use to connect with the public. Officials and researchers have highlighted social media as a tool that, if used properly, can enhance community policing.

Social Media and Investigations. Social media is one tool in agencies’ investigative toolkits to help establish investigative leads and assemble evidence on potential suspects. There are no federal laws that specifically govern law enforcement agencies’ use of information obtained from social media sites, but their ability to obtain or use certain information may be influenced by social media companies’ policies as well as law enforcement agencies’ own social media policies and the rules of criminal procedure. When individuals post content on social media platforms without audience restrictions, anyone— including law enforcement—can access this content without court authorization. However, some information that individuals post on social media may be restricted—by user choice or platform policies—in the scope of audience that may access it. In the instances where law enforcement does not have public access to information, they may rely on a number of tools and techniques, such as informants or undercover operations, to gain access to it. Law enforcement may also require social media platforms to provide access to certain restricted information through a warrant, subpoena, or other court order.

Social Media and Intelligence Gathering. The use of social media to gather intelligence has generated particular interest from policymakers, analysts, and the public. Social media companies have weighed in on the issue of social media monitoring by law enforcement, and some platforms have modified their policies to expressly prohibit their user data from being used by law enforcement to monitor social media. Law enforcement agencies themselves have reportedly grappled with the extent to which they should gather and rely on information and intelligence gleaned from social media. For instance, some observers have suggested that agencies may be reluctant to regularly analyze public social media posts because that could be viewed as spying on the American public and could subsequently chill free speech protected under the First Amendment…(More)”.

The Crowdsourced Panopticon


Book by Jeremy Weissman: “Behind the omnipresent screens of our laptops and smartphones, a digitally networked public has quickly grown larger than the population of any nation on Earth. On the flipside, in front of the ubiquitous recording devices that saturate our lives, individuals are hyper-exposed through a worldwide online broadcast that encourages the public to watch, judge, rate, and rank people’s lives. The interplay of these two forces – the invisibility of the anonymous crowd and the exposure of the individual before that crowd – is a central focus of this book. Informed by critiques of conformity and mass media by some of the greatest philosophers of the past two centuries, as well as by a wide range of historical and empirical studies, Weissman helps shed light on what may happen when our lives are increasingly broadcast online for everyone all the time, to be judged by the global community…(More)”.

Selected Readings on Data Portability


By Juliet McMurren, Andrew Young, and Stefaan G. Verhulst

As part of an ongoing effort to build a knowledge base for the field of improving governance through technology, The GovLab publishes a series of Selected Readings, which provide an annotated and curated collection of recommended works on themes such as open data, data collaboration, and civic technology.

In this edition, we explore selected literature on data portability.

To suggest additional readings on this or any other topic, please email info@thelivinglib.org. All our Selected Readings can be found here.

Context

Data today exists largely in silos, generating problems and inefficiencies for individuals, businesses, and society at large. These include:

  • difficulty switching (data) between competing service providers;
  • delays in sharing data for important societal research initiatives;
  • barriers for data innovators to reuse data that could generate insights to inform individuals’ decision making; and
  • obstacles to scaling data donation.

Data portability — the principle that individuals have a right to obtain, copy, and reuse their own personal data and to transfer it from one IT platform or service to another for their own purposes — is positioned as a solution to these problems. When fully implemented, it would make data liquid, giving individuals the ability to access their own data in a usable and transferable format, transfer it from one service provider to another, or donate data for research and enhanced data analysis by those working in the public interest.

Some companies, including Google, Apple, Twitter and Facebook, have sought to advance data portability through initiatives like the Data Transfer Project, an open source software project designed to facilitate data transfers between services. Newly enacted data protection legislation such as Europe’s General Data Protection Regulation (2018) and the California Consumer Privacy Act (2018) give data subjects a right to data portability. However, despite the legal and technical advances made, many questions remain about how to scale up data liquidity and portability responsibly and systematically. These new data rights have generated complex and as yet unanswered questions about the limits of data ownership, the implications for privacy, security and intellectual property rights, and the practicalities of how, when, and to whom data can be transferred.

In this edition of the GovLab’s Selected Readings series, we examine the emerging literature on data portability to provide a foundation for future work on the value proposition of data portability. Readings are listed in alphabetical order.

Selected readings

Cho, Daegon, Pedro Ferreira, and Rahul Telang, The Impact of Mobile Number Portability on Price and Consumer Welfare (2016)

  • In this paper, the authors analyze how Mobile Number Portability (MNP) — the ability for consumers to maintain their phone number when changing providers, thus reducing switching costs — affected the relationship between switching costs, market price and consumer surplus after it was introduced in most European countries in the early 2000s.
  • Theory holds that when switching costs are high, market leaders will enjoy a substantial advantage and are able to keep prices high. Policy makers will therefore attempt to decrease switching costs to intensify competition and reduce prices to consumers.
  • The study reviewed quarterly data from 47 wireless service providers in 15 EU countries between 1999 and 2006. The data showed that MNP simultaneously decreased market price by over four percent and increased consumer welfare by an average of at least €2.15 per person per quarter. This increase amounted to a total of €880 million per quarter across the 15 EU countries analyzed in this paper and accounted for 15 percent of the increase in consumer surplus observed over this time.
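As a quick consistency check on the quoted figures (a sketch; the €880 million and €2.15 values come from the summary above, while the population comparison is an editorial observation):

```python
# Sanity-check: does the aggregate welfare figure square with the per-person one?
total_welfare_gain = 880_000_000   # euros per quarter, across the 15 EU countries
gain_per_person = 2.15             # euros per person per quarter

implied_population = total_welfare_gain / gain_per_person
print(f"~{implied_population / 1e6:.0f} million people")
```

The implied ~409 million people is roughly the combined population of the 15 countries studied, so the two reported figures are mutually consistent.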

CtrlShift, Data Mobility: The data portability growth opportunity for the UK economy (2018)

  • Commissioned by the UK Department of Digital, Culture, Media and Sport (DCMS), this study was intended to identify the potential of personal data portability for the UK economy.
  • Its scope went beyond the legal right to data portability envisaged by the GDPR, to encompass the current state of personal data portability and mobility, requirements for safe and secure data sharing, and the potential economic benefits through stimulation of innovation, productivity and competition.
  • The report concludes that increased personal data mobility has the potential to be a vital stimulus for the development of the digital economy, driving growth by empowering individuals to make use of their own data and consent to others using it to create new data-driven services and technologies.
  • However, the report concludes that there are significant challenges to be overcome, and new risks to be addressed, before the value of personal data can be realized. Much personal data remains locked in organizational silos, and systemic issues related to data security and governance and the uneven sharing of benefits need to be resolved.

Data Guidance and Future of Privacy Forum, Comparing Privacy Laws: GDPR v. CCPA (2018)

  • This paper compares the provisions of the GDPR with those of the California Consumer Privacy Act (2018).
  • Both article 20 of the GDPR and section 1798 of the CCPA recognize a right to data portability. Both also confer on data subjects the right to receive data from controllers free of charge upon request, and oblige controllers to create mechanisms to provide subjects with their data in portable and reusable form so that it can be transmitted to third parties for reuse.
  • In the CCPA, the right to data portability is an extension of the right to access, and only confers on data subjects the right to request data collected within the past 12 months and have it delivered to them. The GDPR does not impose a time limit, and allows data to be transferred from one data controller to another, but limits the right to personal data processed by automated means that the data subject has provided on the basis of consent or contract.

Data Transfer Project, Data Transfer Project Overview and Fundamentals (2018)

  • The paper presents an overview of the goals, principles, architecture, and system components of the Data Transfer Project. The intent of the DTP is to increase the number of services offering data portability and provide users with the ability to transfer data directly in and out of participating providers through systems that are easy and intuitive to use, private and secure, reciprocal between services, and focused on user data. The project, which is supported by Microsoft, Google, Twitter and Facebook, is an open-source initiative that encourages the participation of other providers to reduce the infrastructure burden on providers and users.
  • In addition to benefits to innovation, competition, and user choice, the authors point to benefits to security, through allowing users to backup, organize, or archive their data, recover from account hijacking, and retrieve their data from deprecated services.
  • The DTP’s remit was to test concepts and feasibility for the transfer of specific types of user data between online services using a system of adapters to transfer proprietary formats into canonical formats that can be used to transfer data while allowing providers to maintain control over the security of their service. While not resolving all formatting or support issues, this approach would allow substantial data portability and encourage ecosystem sustainability.
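The adapter architecture the DTP paper describes can be sketched in a few lines. The provider names and record formats below are invented for illustration and do not reflect the DTP’s actual code:

```python
# Sketch of the Data Transfer Project's adapter idea: each provider implements
# an exporter (proprietary -> canonical) and an importer (canonical -> proprietary),
# so transfers flow provider A -> canonical format -> provider B.
from dataclasses import dataclass

@dataclass
class CanonicalPhoto:
    """A shared, provider-neutral data model (one of the DTP's 'canonical formats')."""
    title: str
    url: str

class ProviderAExporter:
    def export(self, record: dict) -> CanonicalPhoto:
        # Hypothetical: Provider A stores photos as {"caption": ..., "href": ...}
        return CanonicalPhoto(title=record["caption"], url=record["href"])

class ProviderBImporter:
    def import_(self, photo: CanonicalPhoto) -> dict:
        # Hypothetical: Provider B expects {"name": ..., "location": ...}
        return {"name": photo.title, "location": photo.url}

def transfer(record: dict) -> dict:
    """Move one record from Provider A to Provider B via the canonical format."""
    canonical = ProviderAExporter().export(record)
    return ProviderBImporter().import_(canonical)

result = transfer({"caption": "Sunset", "href": "https://example.com/1.jpg"})
```

The design choice this illustrates is why the canonical format matters: with N providers, each writes only its own pair of adapters rather than N-1 pairwise converters, while retaining control over its own storage and security.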

Deloitte, How to Flourish in an Uncertain Future: Open Banking (2017)

  • This report addresses the innovative and disruptive potential of open banking, in which data is shared between members of the banking ecosystem at the authorization of the customer, with the potential to increase competition and facilitate new products and services. In the resulting marketplace model, customers could use a single banking interface to access products from multiple players, from established banks to newcomers and fintechs.
  • The report’s authors identify significant threats to current banking models. Banks that failed to embrace open banking could be relegated to a secondary role as an infrastructure provider, while third parties — tech companies, fintech, and price comparison websites — take over the customer relationship.
  • The report identifies four overlapping operating models banks could adopt within an open banking model: full service providers, delivering proprietary products through their own interface with little or no third-party integration; utilities, which provide other players with infrastructure without customer-facing services; suppliers, which offer proprietary products through third-party interfaces; and interfaces, which provide distribution services through a marketplace interface. To retain market share, incumbents are likely to need to adopt a combination of these roles, offering their own products and services and those of third parties through their own and others’ interfaces.

Digital Competition Expert Panel, Unlocking Digital Competition (2019)

  • This report captures the findings of the UK Digital Competition Expert Panel, which was tasked in 2018 with considering opportunities and challenges the digital economy might pose for competition and competition policy and to recommend any necessary changes. The panel focused on the impact of big players within the sector, appropriate responses to mergers or anticompetitive practices, and the impact on consumers.
  • The panel found that the digital economy is creating many benefits, but that digital markets are subject to tipping, in which emerging winners can scoop much of the market. This concentration can give rise to substantial costs, especially to consumers, and cannot be solved by competition alone. However, government policy and regulatory solutions have limitations, including the slowness of policy change, uneven enforcement and profound informational asymmetries between companies and government.
  • The panel proposed the creation of a digital markets unit that would be tasked with developing a code of competitive conduct, enabling greater personal data mobility and systems designed with open standards, and advancing access to non-personal data to reduce barriers to market entry.
  • The panel’s model of data mobility goes beyond data portability, which involves consumers being able to request and transfer their own data from one provider to another. Instead, the panel recommended empowering consumers to instigate transfers of data between a business and a third party in order to access price information, compare goods and services, or access tailored advice and recommendations. They point to open banking as an example of how this could function in practice.
  • It also proposed updating merger policy to make it more forward-looking to better protect consumers and innovation and preserve the competitiveness of the market. It recommended the creation of antitrust policy that would enable the implementation of interim measures to limit damage to competition while antitrust cases are in process.

Egan, Erin, Charting a Way Forward: Data Portability and Privacy (2019)

  • This white paper by Facebook’s VP and Chief Privacy Officer, Policy, represents an attempt to advance the conversation about the relationship between data portability, privacy, and data protection. The author sets out five key questions about data portability: what is it, whose and what data should be portable, how privacy should be protected in the context of portability, and where responsibility for data misuse or improper protection should lie.
  • The paper finds that definitions of data portability still remain imprecise, particularly with regard to the distinction between data portability and data transfer. In the interest of feasibility and a reasonable operational burden on providers, it proposes time limits on providers’ obligations to make observed data portable.
  • The paper concludes that there are strong arguments both for and against allowing users to port their social graph — the map of connections between that user and other users of the service — but that the key determinant should be a capacity to ensure the privacy of all users involved. Best-practice data portability protocols that would resolve current differences of approach as to what, how and by whom information should be made available would help promote broader portability, as would resolution of liability for misuse or data exposure.

Engels, Barbara, Data portability among online platforms (2016)

  • The article examines the effects on competition and innovation of data portability among online platforms such as search engines, online marketplaces, and social media, and how relations between users, data, and platform services change in an environment of data portability.
  • The paper finds that the benefits to competition and innovation of portability are greatest in two kinds of environments: first, where platforms offer complementary products and can realize synergistic benefits by sharing data; and secondly, where platforms offer substitute or rival products but the risk of anti-competitive behaviour is high, as for search engines.
  • It identifies privacy and security issues raised by data portability. Portability could, for example, allow an identity fraudster to misuse personal data across multiple platforms, compounding the harm they cause.
  • It also suggests that standards for data interoperability could act to reduce innovation in data technology, encouraging data controllers to continue to use outdated technologies in order to comply with inflexible, government-mandated standards.

Graef, Inge, Martin Husovec and Nadezhda Purtova, Data Portability and Data Control: Lessons for an Emerging Concept in EU Law (2018)

  • This paper situates the data portability right conferred by the GDPR within rights-based data protection law. The authors argue that the right to data portability should be seen as a new regulatory tool aimed at stimulating competition and innovation in data-driven markets.
  • The authors note the potential for conflicts between the right to data portability and the intellectual property rights of data holders, suggesting that the framers underestimated the potential impact of such conflicts on copyright, trade secrets and sui generis database law.
  • Given that the right to data portability is being replicated within consumer protection law and the regulation of non-personal data, the authors argue framers of these laws should consider the potential for conflict and the impact of such conflict on incentives to innovate.

Mohsen, Mona Omar and Hassan A. Aziz, The Blue Button Project: Engaging Patients in Healthcare by a Click of a Button (2015)

  • This paper provides a literature review on the Blue Button initiative, an early data portability project which allows Americans to access, view or download their health records in a variety of formats.
  • Originally launched through the Department of Veterans Affairs in 2010, the Blue Button initiative had expanded to more than 500 organizations by 2014, when the Department of Health and Human Services launched the Blue Button Connector to facilitate both patient access and the development of new tools.
  • The Blue Button has enabled the development of tools such as the Harvard-developed Growth-Tastic app, which allows parents to check their child’s growth by submitting their downloaded pediatric health data. Pharmacies across the US have also adopted the Blue Button to provide patients with access to their prescription history.
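At its simplest, a Blue Button download is a plain-text file of labeled sections that patient-facing tools then parse. The sketch below shows how such a sectioned export might be split into structured data; the section markers, field names, and sample content are invented for illustration, since real exports vary by provider.

```python
# Hypothetical, simplified Blue Button-style text export. Real files
# differ by provider; this sample only illustrates the sectioned layout.
SAMPLE = """--- DEMOGRAPHICS ---
Name: Jane Doe
--- MEDICATIONS ---
Lisinopril 10mg
Metformin 500mg
"""

def parse_sections(text):
    """Split a sectioned text export into {section_name: [lines]}."""
    sections, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("---") and line.endswith("---"):
            # A new section header: strip the dashes and surrounding spaces.
            current = line.strip("- ").strip()
            sections[current] = []
        elif line and current is not None:
            sections[current].append(line)
    return sections

print(parse_sections(SAMPLE))
```

A tool like the growth-tracking app described above would apply exactly this kind of parsing step before computing anything from the downloaded records.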

More Than Smart and Mission:Data, Got Data? The Value of Energy Data to Customers (2016)

  • This report outlines the public value of the Green Button, a data protocol that provides customers with private and secure access to their energy use data collected by smart meters.
  • The authors outline how the Green Button can help states meet their energy and climate goals by supporting the integration of renewables and other distributed energy resources (DER) such as energy efficiency, demand response, and solar photovoltaics. Access to granular, near-real-time data can encourage innovation among DER providers: it facilitates applications like “virtual energy audits” that identify efficiency opportunities, allows customers to reduce costs through time-of-use pricing, and enables photovoltaic systems to be optimized to meet peak demand.
  • Energy efficiency receives the greatest boost from initiatives like the Green Button, with studies showing energy savings of up to 18 percent when customers have access to their meter data. In addition to improving energy conservation, access to meter data could improve the efficiency of appliances by allowing devices to trigger sleep modes in response to data on usage or price. However, at the time of writing, problems with data portability and interoperability were preventing these benefits from being realized, at a cost of tens of millions of dollars.
  • The authors recommend that commissions require utilities to make usage data available to customers or authorized third parties in standardized formats as part of basic utility service, and tariff data to developers for use in smart appliances.
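The standardized format the authors call for already exists in the Green Button ecosystem as XML interval data. As a minimal sketch, the snippet below extracts hourly readings from a simplified interval block; the element names follow the ESPI namespace used by Green Button feeds, but the sample document is stripped down (real feeds wrap these blocks in an Atom envelope with additional metadata and unit codes).

```python
import xml.etree.ElementTree as ET

# Simplified Green Button (ESPI) interval block: two hourly readings.
# Values here are illustrative; real feeds carry unit/scale metadata.
SAMPLE = """<IntervalBlock xmlns="http://naesb.org/espi">
  <interval><duration>3600</duration><start>1672531200</start></interval>
  <IntervalReading>
    <timePeriod><duration>3600</duration><start>1672531200</start></timePeriod>
    <value>1520</value>
  </IntervalReading>
  <IntervalReading>
    <timePeriod><duration>3600</duration><start>1672534800</start></timePeriod>
    <value>980</value>
  </IntervalReading>
</IntervalBlock>"""

NS = {"espi": "http://naesb.org/espi"}

def hourly_usage(xml_text):
    """Return (start_epoch, value) pairs for each interval reading."""
    root = ET.fromstring(xml_text)
    readings = []
    for reading in root.findall("espi:IntervalReading", NS):
        start = int(reading.find("espi:timePeriod/espi:start", NS).text)
        value = int(reading.find("espi:value", NS).text)
        readings.append((start, value))
    return readings

usage = hourly_usage(SAMPLE)
print(usage, sum(v for _, v in usage))
```

Per-interval data of this kind is what enables the time-of-use pricing and “virtual energy audit” applications described above.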

MyData, Understanding Data Operators (2020)

  • MyData is a global movement of data users, activists and developers with a common goal: to empower individuals with their personal data, enabling them and their communities to develop knowledge, make informed decisions, and interact more consciously and efficiently.
  • This introductory paper presents the state of knowledge about data operators, trusted data intermediaries that provide infrastructure for human-centric personal data management and governance, including data sharing and transfer. The operator model allows data controllers to outsource issues of legal compliance with data portability requirements, while offering individual users a transparent and intuitive way to manage the data transfer process.
  • The paper examines use cases from 48 “proto-operators” from 15 countries who fulfil some of the functions of an operator, albeit at an early level of maturity. The paper finds that operators offer management of identity authentication, data transaction permissions, connections between services, value exchange, data model management, personal data transfer and storage, governance support, and logging and accountability. At the heart of these functions is the need for minimum standards of data interoperability.
  • The paper reviews governance frameworks from the general (legislative) to the specific (operators), and explores proto-operator business models. In keeping with an emerging field, business models are currently unclear and potentially unsustainable, and one of a number of areas, including interoperability requirements and governance frameworks, that must still be developed.

National Science and Technology Council Smart Disclosure and Consumer Decision Making: Report of the Task Force on Smart Disclosure (2013)

  • This report summarizes the work and findings of the 2011–2013 Task Force on Smart Disclosure: Information and Efficiency, an interagency body tasked with advancing smart disclosure, through which data is made more available and accessible to both consumers and innovators.
  • The Task Force recognized the capacity of smart disclosure to inform consumer choices, empower them through access to useful personal data, enable the creation of new tools, products and services, and promote efficiency and growth. It reviewed federal efforts to promote smart disclosure within sectors and in data types that crosscut sectors, such as location data, consumer feedback, enforcement and compliance data and unique identifiers. It also surveyed specific public-private partnerships on access to data, such as the Blue and Green Button and MyData initiatives in health, energy and education respectively.
  • The Task Force reviewed steps taken by the Federal Government to implement smart disclosure, including adoption of machine readable formats and standards for metadata, use of APIs, and making data available in an unstructured format rather than not releasing it at all. It also reviewed “choice engines” making use of the data to provide services to consumers across a range of sectors.
  • The Task Force recommended that smart disclosure should be a core component of efforts to institutionalize and operationalize open data practices, with agencies proactively identifying, tagging, and planning the release of candidate data. It also recommended that this be supported by a government-wide community of practice.

Nicholas, Gabriel, Taking It With You: Platform Barriers to Entry and the Limits of Data Portability (2020)

  • This paper considers whether, as is often claimed, data portability offers a genuine solution to the lack of competition within the tech sector.
  • It concludes that current regulatory approaches to data portability, which focus on reducing switching costs through technical solutions such as one-off exports and API interoperability, are not sufficient to generate increased competition. This is because they fail to address other barriers to entry, including network effects, unique data access, and economies of scale.
  • The author proposes an alternative approach, which he terms collective portability, which would allow groups of users to coordinate the transfer of their data to a new platform. This model raises questions about how such collectives would make decisions regarding portability, but would enable new entrants to successfully target specific user groups and scale rapidly without having to reach users one by one.

OECD, Enhancing Access to and Sharing of Data: Reconciling Risks and Benefits for Data Re-use across Societies (2019)

  • This background paper to a 2017 expert workshop on risks and benefits of data reuse considers data portability as one strategy within a data openness continuum that also includes open data, market-based B2B contractual agreements, and restricted data-sharing agreements within research and data for social good applications.
  • It considers four rationales offered for data portability: empowering individuals toward the “informational self-determination” to which the GDPR aspires; increasing competition within digital and other markets by reducing information asymmetries between individuals and providers; lowering switching costs and barriers to market entry; and facilitating increased data flows.
  • The report highlights the need for both syntactic and semantic interoperability standards to ensure data can be reused across systems, both of which may be fostered by increased rights to data portability. Data intermediaries have an important role to play in the development of these standards, through initiatives like the Data Transfer Project, a collaboration which brought together Facebook, Google, Microsoft, and Twitter to create an open-source data portability platform.
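The Data Transfer Project's core design is a shared data model with per-provider adapters, so that any exporter can feed any importer. The toy sketch below illustrates that adapter pattern; the class and field names are invented for illustration and are not DTP's actual API.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class PhotoModel:
    """Shared ('semantic') data model both services agree on."""
    title: str
    url: str

class Exporter(Protocol):
    def export(self) -> list[PhotoModel]: ...

class ServiceAExporter:
    """Maps Service A's native records into the shared model."""
    def export(self) -> list[PhotoModel]:
        native = [{"caption": "cat", "href": "https://a.example/1"}]
        return [PhotoModel(title=r["caption"], url=r["href"]) for r in native]

class ServiceBImporter:
    """Consumes the shared model; never sees Service A's native format."""
    def __init__(self):
        self.stored: list[PhotoModel] = []

    def import_items(self, items: list[PhotoModel]) -> None:
        self.stored.extend(items)

importer = ServiceBImporter()
importer.import_items(ServiceAExporter().export())
print(importer.stored)
```

The shared model is where the report's distinction matters: syntactic interoperability gets the bytes across, but only an agreed semantic model like `PhotoModel` makes the transferred data meaningful to the receiving service.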

Personal Data Protection Commission Singapore Response to Feedback on the Public Consultation on Proposed Data Portability and Data Innovation Provisions (2020)

  • The report summarizes the findings of the 2019 PDPC public consultation on proposals to introduce provisions on data portability and data innovation in Singapore’s Personal Data Protection Act.
  • The proposed provision would oblige organizations to transmit an individual’s data to another organization in a commonly used machine-readable format, upon the individual’s request. The obligation does not extend to data intermediaries or organizations that do not have a presence in Singapore, although data holders may choose to honor those requests.
  • The obligation would apply to electronic data that is either provided by the individual or generated by the individual’s activities in using the organization’s service or product, but not derived data created by the processing of other data by the data holder. Respondents were concerned that including derived data could harm organizations’ competitiveness.
  • Respondents were concerned about how to honor data portability requests where the data of third parties was involved, as in the case of a joint account holder, for example. The PDPC opted for a “balanced, reasonable, and pragmatic approach,” allowing data involving third parties to be ported where it was under the requesting individual’s control, was to be used for domestic and personal purposes, and related only to the organization’s product or service.

Quinn, Paul, Is the GDPR and Its Right to Data Portability a Major Enabler of Citizen Science? (2018)

  • This article explores the potential of data portability to advance citizen science by enabling participants to port their personal data from one research project to another. Citizen science — the collection and contribution of large amounts of data by private individuals for scientific research — has grown rapidly in response to the development of new digital means to capture, store, organize, analyze and share data.
  • The GDPR right to data portability aids citizen science by requiring transfer of data in machine-readable format and allowing data subjects to request its transfer to another data controller. This requirement of interoperability does not amount to compatibility, however, and data thus transferred would probably still require cleaning to be usable, acting as a disincentive to reuse.
  • The GDPR’s limitation of transferability to personal data provided by the data subject excludes some forms of data that might possess significant scientific potential, such as secondary personal data derived from further processing or analysis.
  • The GDPR right to data portability also potentially limits citizen science by restricting the grounds for processing data to which the right applies to data obtained through a subject’s express consent or through the performance of a contract. This limitation excludes other forms of data processing described in the GDPR, such as data processing for preventive or occupational medicine, scientific research, or archiving for reasons of public or scientific interest. It is also not clear whether the GDPR compels data controllers to transfer data outside the European Union.
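The cleaning burden noted above follows from the fact that the GDPR mandates a machine-readable format but not a shared vocabulary: two projects can both be fully compliant yet export incompatible field names and date encodings. The sketch below normalizes two hypothetical citizen-science exports into one target schema; every field name, format, and value is invented for illustration.

```python
import datetime

# Two hypothetical exports of the same participant's step counts.
# Both are machine-readable, neither is directly usable by the other.
export_a = [{"participant": "p01", "recorded": "2023-05-01", "steps": "8421"}]
export_b = [{"user_id": "p01", "ts": 1682899200, "step_count": 8402}]

def normalize_a(rows):
    # Dates are already ISO 8601 strings; steps arrive as strings.
    return [{"subject": r["participant"],
             "date": r["recorded"],
             "steps": int(r["steps"])} for r in rows]

def normalize_b(rows):
    # Timestamps are Unix epoch seconds; convert to ISO 8601 dates.
    return [{"subject": r["user_id"],
             "date": datetime.datetime.fromtimestamp(
                 r["ts"], datetime.timezone.utc).date().isoformat(),
             "steps": int(r["step_count"])} for r in rows]

merged = normalize_a(export_a) + normalize_b(export_b)
print(merged)
```

Each new source requires another hand-written `normalize_*` adapter, which is precisely the disincentive to reuse that the article identifies.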

Wong, Janis and Tristan Henderson, How Portable is Portable? Exercising the GDPR’s Right to Data Portability (2018)

  • This paper presents the results of 230 real-world requests for data portability in order to assess how, and how well, the GDPR right to data portability is being implemented. The authors sought to establish the kinds of file formats returned in response to requests and to identify practical difficulties encountered in making and interpreting requests, over a three-month period beginning on the day the GDPR came into effect.
  • The findings revealed continuing problems around ensuring portability for both data controllers and data subjects. Of the 230 requests, only 163 were successfully completed.
  • Data controllers frequently had difficulty understanding the requirements of GDPR, providing data in incomplete or inappropriate formats: only 40 percent of the files supplied were in a fully compliant format. Additionally, some data controllers were confused between the right to data portability and other rights conferred by the GDPR, such as the right to access or erasure.

Netnography: The Essential Guide to Qualitative Social Media Research


Book by Robert Kozinets: “Netnography is an adaptation of ethnography for the online world, pioneered by Robert Kozinets, and is concerned with the study of online cultures and communities as distinct social phenomena, rather than isolated content. In this landmark third edition, Netnography: The Essential Guide provides the theoretical and methodological groundwork as well as the practical applications, helping students both understand and do netnographic research projects of their own.

Packed with enhanced learning features throughout, linking concepts to structured activities in a step-by-step way, the book is also now accompanied by a striking new visual design and further case studies, offering the essential student resource to conducting online ethnographic research. Real world examples provided demonstrate netnography in practice across the social sciences, in media and cultural studies, anthropology, education, nursing, travel and tourism, and others….(More)”.