Meta is giving researchers more access to Facebook and Instagram data


Article by Tate Ryan-Mosley: “Meta is releasing a new transparency product called the Meta Content Library and API, according to an announcement from the company today. The new tools will allow select researchers to access publicly available data on Facebook and Instagram in an effort to give a more overarching view of what’s happening on the platforms. 

The move comes as social media companies are facing public and regulatory pressure to increase transparency about how their products—specifically recommendation algorithms—work and what impact they have. Academic researchers have long been calling for better access to data from social media platforms, including Meta. This new library is a step toward increased visibility about what is happening on its platforms and the effect that Meta’s products have on online conversations, politics, and society at large. 

In an interview, Meta’s president of global affairs, Nick Clegg, said the tools “are really quite important” in that they provide, in a lot of ways, “the most comprehensive access to publicly available content across Facebook and Instagram of anything that we’ve built to date.” The Content Library will also help the company meet new regulatory requirements and obligations on data sharing and transparency, as the company notes in a blog post published Tuesday.

The library and associated API were first released as a beta version several months ago and allow researchers to access near-real-time data about pages, posts, groups, and events on Facebook and creator and business accounts on Instagram, as well as the associated numbers of reactions, shares, comments, and post view counts. While all this data is publicly available—as in, anyone can see public posts, reactions, and comments on Facebook—the new library makes it easier for researchers to search and analyze this content at scale…(More)”.
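To make concrete what "analyzing this content at scale" might look like, here is a minimal sketch of aggregating engagement counts across paged API results. The response shape and field names (`data`, `reaction_count`, etc.) are hypothetical illustrations, not the Content Library API's actual schema, which requires approved-researcher access:

```python
# Hypothetical response shape for illustration only; the real Meta Content
# Library API requires approved-researcher credentials and defines its own
# query schema and field names.
def summarise_engagement(pages_of_results):
    """Aggregate reaction/share/comment/view counts across paged results."""
    totals = {"reactions": 0, "shares": 0, "comments": 0, "views": 0, "posts": 0}
    for page in pages_of_results:
        for post in page.get("data", []):
            totals["posts"] += 1
            totals["reactions"] += post.get("reaction_count", 0)
            totals["shares"] += post.get("share_count", 0)
            totals["comments"] += post.get("comment_count", 0)
            totals["views"] += post.get("view_count", 0)
    return totals

# Two invented pages of results, standing in for paginated API responses.
sample = [{"data": [{"reaction_count": 10, "share_count": 2,
                     "comment_count": 3, "view_count": 150}]},
          {"data": [{"reaction_count": 5, "share_count": 1,
                     "comment_count": 0, "view_count": 80}]}]
print(summarise_engagement(sample))
# → {'reactions': 15, 'shares': 3, 'comments': 3, 'views': 230, 'posts': 2}
```

In practice a researcher would feed pages fetched from the API into such a function rather than hand-built dictionaries.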

Policy primer on non-personal data 


Primer by the International Chamber of Commerce: “Non-personal data plays a critical role in providing solutions to global challenges. Unlocking its full potential requires policymakers, businesses, and all other stakeholders to collaborate to construct policy environments that can capitalise on its benefits.  

This report gives insights into the different ways that non-personal data has a positive impact on society, with benefits including, but not limited to: 

  1. Tracking disease outbreaks; 
  2. Facilitating international scientific cooperation; 
  3. Understanding climate-related trends; 
  4. Improving agricultural practices for increased efficiency; 
  5. Optimising energy consumption; 
  6. Developing evidence-based policy; 
  7. Enhancing cross-border cybersecurity cooperation. 

In addition, businesses of all sizes benefit from the transfer of data across borders, allowing companies to establish and maintain international supply chains and smaller businesses to enter new markets or reduce operating costs. 

Despite these benefits, international flows of non-personal data are frequently limited by restrictions and data localisation measures. A growing patchwork of regulations can also create barriers to realising the potential of non-personal data. This report explores the impact of data flow restrictions including: 

  • Hindering global supply chains; 
  • Limiting the use of AI reliant on large datasets; 
  • Disincentivising data sharing amongst companies; 
  • Preventing companies from analysing the data they hold…(More)”.

Indigenous Peoples and Local Communities Are Using Satellite Data to Fight Deforestation


Article by Katie Reytar, Jessica Webb and Peter Veit: “Indigenous Peoples and local communities hold some of the most pristine and resource-rich lands in the world — areas highly coveted by mining and logging companies and other profiteers.  Land grabs and other threats are especially severe in places where the government does not recognize communities’ land rights, or where anti-deforestation and other laws are weak or poorly enforced. It’s the reason many Indigenous Peoples and local communities often take land monitoring into their own hands — and some are now using digital tools to do it. 

Freely available satellite imagery and data from sites like Global Forest Watch and LandMark provide near-real-time information that tracks deforestation and land degradation. Indigenous and local communities are increasingly using tools like this to gather evidence that deforestation and degradation are happening on their lands, build their case against illegal activities and take legal action to prevent it from continuing.  

Three examples from Suriname, Indonesia and Peru illustrate a growing trend in fighting land rights violations with data…(More)”.

The Time is Now: Establishing a Mutual Commitment Framework (MCF) to Accelerate Data Collaboratives


Article by Stefaan Verhulst, Andrew Schroeder and William Hoffman: “The key to unlocking the value of data lies in responsibly lowering the barriers and shared risks of data access, re-use, and collaboration in the public interest. Data collaboratives, which foster responsible access and re-use of data among diverse stakeholders, provide a solution to these challenges.

Today, however, setting up data collaboratives takes too much time and is prone to multiple delays, hindering our ability to understand and respond swiftly and effectively to urgent global crises. The readiness of data collaboratives during crises faces key obstacles in terms of data use agreements, technical infrastructure, vetted and reproducible methodologies, and a clear understanding of the questions which may be answered more effectively with additional data.

Organizations aiming to create data collaboratives often face additional challenges, as they often lack established operational protocols and practices which can streamline implementation, reduce costs, and save time. New regulations are emerging that should help drive the adoption of standard protocols and processes. In particular, the EU Data Governance Act and the forthcoming Data Act aim to enable responsible data collaboration. Concepts like data spaces and rulebooks seek to build trust and strike a balance between regulation and technological innovation.

This working paper advances the case for creating a Mutual Commitment Framework (MCF) in advance of a crisis that can serve as a necessary and practical means to break through chronic choke points and shorten response times. By accelerating the establishment of operational (and legally cognizable) data collaboratives, duties of care can be defined and a stronger sense of trust, clarity, and purpose can be instilled among participating entities. This structured approach ensures that data sharing and processing are conducted within well-defined, pre-authorized boundaries, thereby lowering shared risks and promoting a conducive environment for collaboration…(More)”.

Private UK health data donated for medical research shared with insurance companies


Article by Shanti Das: “Sensitive health information donated for medical research by half a million UK citizens has been shared with insurance companies despite a pledge that it would not be.

An Observer investigation has found that UK Biobank opened up its vast biomedical database to insurance sector firms several times between 2020 and 2023. The data was provided to insurance consultancy and tech firms for projects to create digital tools that help insurers predict a person’s risk of getting a chronic disease. The findings have raised concerns among geneticists, data privacy experts and campaigners over vetting and ethical checks at Biobank.

Set up in 2006 to help researchers investigating diseases, the database contains millions of blood, saliva and urine samples, collected regularly from about 500,000 adult volunteers – along with medical records, scans, wearable device data and lifestyle information.

Approved researchers around the world can pay £3,000 to £9,000 to access records ranging from medical history and lifestyle information to whole genome sequencing data. The resulting research has yielded major medical discoveries and led to Biobank being considered a “jewel in the crown” of British science.

Biobank said it strictly guarded access to its data, only allowing access by bona fide researchers for health-related projects in the public interest. It said this included researchers of all stripes, whether employed by academic, charitable or commercial organisations – including insurance companies – and that “information about data sharing was clearly set out to participants at the point of recruitment and the initial assessment”.

But evidence gathered by the Observer suggests Biobank did not explicitly tell participants it would share data with insurance companies – and made several public commitments not to do so.

When the project was announced, in 2002, Biobank promised that data would not be given to insurance companies after concerns were raised that it could be used in a discriminatory way, such as by the exclusion of people with a particular genetic makeup from insurance.

In an FAQ section on the Biobank website, participants were told: “Insurance companies will not be allowed access to any individual results nor will they be allowed access to anonymised data.” The statement remained online until February 2006, during which time the Biobank project was subject to public scrutiny and discussed in parliament.

The promise was also reiterated in several public statements by backers of Biobank, who said safeguards would be built in to ensure that “no insurance company or police force or employer will have access”.

This weekend, Biobank said the pledge – made repeatedly over four years – no longer applied. It said the commitment had been made before recruitment formally began in 2007 and that when Biobank volunteers enrolled they were given revised information.

This included leaflets and consent forms that contained a provision that anonymised Biobank data could be shared with private firms for “health-related” research, but did not explicitly mention insurance firms or correct the previous assurances…(More)”

A standardised differential privacy framework for epidemiological modeling with mobile phone data


Paper by Merveille Koissi Savi et al: “During the COVID-19 pandemic, the use of mobile phone data for monitoring human mobility patterns has become increasingly common, both to study the impact of travel restrictions on population movement and to inform epidemiological modeling. Despite the importance of these data, the use of location information to guide public policy can raise issues of privacy and ethical use. Studies have shown that simple aggregation does not protect the privacy of an individual, and there are no universal standards for aggregation that guarantee anonymity. Newer methods, such as differential privacy, can provide statistically verifiable protection against identifiability but have been largely untested as inputs for compartment models used in infectious disease epidemiology. Our study examines the application of differential privacy as an anonymisation tool in epidemiological models, studying the impact of adding quantifiable statistical noise to mobile phone-based location data on the bias of ten common epidemiological metrics. We find that many epidemiological metrics are preserved and remain close to their non-private values when the true noise state is less than 20 in a count transition matrix, which corresponds to a privacy loss parameter ϵ = 0.05 per release. We show that differential privacy offers a robust approach to preserving individual privacy in mobility data while providing useful population-level insights for public health. Importantly, we have built a modular software pipeline to facilitate the replication and expansion of our framework…(More)”.
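The basic mechanism underlying this kind of approach can be sketched in a few lines. This is a minimal illustration, not the paper's pipeline: it assumes the standard Laplace mechanism applied cell-wise to an origin-destination count matrix, with invented counts and a sensitivity of 1, so that at ϵ = 0.05 each count receives noise with scale 1/0.05 = 20:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative 3x3 origin-destination matrix: counts of phones moving
# between three regions in one time window (values are invented).
true_counts = np.array([[120.0, 35.0, 5.0],
                        [40.0, 200.0, 18.0],
                        [8.0, 22.0, 95.0]])

def laplace_privatise(counts, epsilon, sensitivity=1.0, rng=rng):
    """Add Laplace noise with scale sensitivity/epsilon to every cell,
    then clip at zero since negative movement counts are meaningless."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=counts.shape)
    return np.clip(counts + noise, 0.0, None)

# ϵ = 0.05 per release, matching the privacy loss parameter in the abstract.
private_counts = laplace_privatise(true_counts, epsilon=0.05)
```

A downstream compartment model would then consume `private_counts` in place of the raw matrix; the paper's question is how much the added noise biases the epidemiological metrics computed from it.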

Governing Urban Data for the Public Interest


Report by The New Hanse: “…This report represents the culmination of our efforts and offers actionable guidelines for European cities seeking to harness the power of data for the public good.

The key recommendations outlined in the report are:

1. Shift the Paradigm towards Democratic Control of Data: Advocate for a policy that defaults to making urban data accessible, requiring private data holders to share in the public interest.

2. Provide Legal Clarity in a Dynamic Environment: Address legal uncertainties by balancing privacy and confidentiality needs with the public interest in data accessibility, working collaboratively with relevant authorities at national and EU level.

3. Build a Data Commons Repository of Use cases: Streamline data sharing efforts by establishing a standardised use case repository with common technical frameworks, procedures, and contracts.

4. Set up an Urban Data Intermediary for the Public Interest: Institutionalise data sharing, by building urban data intermediaries to address complexities, following principles of public purpose, transparency, and accountability.

5. Learn from the Hamburg Experiment and Scale It across Europe: Embrace experimentation as a vital step, even if outcomes are uncertain, to adapt processes for future innovations. Experiments at the local level can inform policy and scale nationally and across Europe…(More)”.

Data collaboration to enable the EU Green Deal


Article by Justine Gangneux: “In the fight against climate change, local authorities are increasingly turning to cross-sectoral data sharing as a game-changing strategy.

This collaborative approach empowers cities and communities to harness a wealth of data from diverse sources, enabling them to pinpoint emission hotspots, tailor policies for maximum impact, and allocate resources wisely.

Data can also strengthen climate resilience by engaging local communities and facilitating real-time progress tracking…

In recent years, more and more local data initiatives aimed at tackling climate change have emerged, spanning from urban planning to mobility, adaptation and energy management.

Such is the case of Porto’s CityCatalyst – the project put five demonstrators in place to showcase smart city infrastructure and develop data standards and models, contributing to the efficient and integrated management of urban flows…

In Latvia, Riga is also exploring data solutions such as visualisations, aggregation or analytics, as part of the Positive Energy District strategy.  Driven by the national Energy Efficiency Law, the city is developing a project to monitor energy consumption based on building utility use data (heat, electricity, gas, or water), customer and billing data, and Internet of Things smart meter data from individual buildings…

As these examples show, it is not just public data that holds the key; private sector data, from utilities such as energy and water to telecoms, offers cities valuable insights in their efforts to tackle climate change…(More)”.

Facilitating Data Flows through Data Collaboratives


A Practical Guide “to Designing Valuable, Accessible, and Responsible Data Collaboratives” by Uma Kalkar, Natalia González Alarcón, Arturo Muente Kunigami and Stefaan Verhulst: “Data is an indispensable asset in today’s society, but its production and sharing are subject to well-known market failures. Among these: neither economic nor academic markets efficiently reward costly data collection and quality assurance efforts; data providers cannot easily supervise the appropriate use of their data; and, correspondingly, users have weak incentives to pay for, acknowledge, and protect data that they receive from providers. Data collaboratives are a potential non-market solution to this problem, bringing together data providers and users to address these market failures. The governance frameworks for these collaboratives are varied and complex and their details are not widely known. This guide proposes a methodology and a set of common elements that facilitate experimentation and creation of collaborative environments. It offers guidance to governments on implementing effective data collaboratives as a means to promote data flows in Latin America and the Caribbean, harnessing their potential to design more effective services and improve public policies…(More)”.

The Good and Bad of Anticipating Migration


Article by Sara Marcucci, Stefaan Verhulst, María Esther Cervantes, Elena Wüllhorst: “This blog is the first in a series that will be published weekly, dedicated to exploring innovative anticipatory methods for migration policy. Over the coming weeks, we will explore various aspects of these methods, examining their value, challenges, taxonomy, and practical applications. 

This first blog serves as an exploration of the value proposition and challenges inherent in innovative anticipatory methods for migration policy. We delve into the various reasons why these methods hold promise for informing more resilient, and proactive migration policies. These reasons include evidence-based policy development, enabling policymakers to ground their decisions in empirical evidence and future projections. Decision-takers, users, and practitioners can benefit from anticipatory methods for policy evaluation and adaptation, resource allocation, the identification of root causes, and the facilitation of humanitarian aid through early warning systems. However, it’s vital to acknowledge the challenges associated with the adoption and implementation of these methods, ranging from conceptual concerns such as fossilization, unfalsifiability, and the legitimacy of preemptive intervention, to practical issues like interdisciplinary collaboration, data availability and quality, capacity building, and stakeholder engagement. As we navigate through these complexities, we aim to shed light on the potential and limitations of anticipatory methods in the context of migration policy, setting the stage for deeper explorations in the coming blogs of this series…(More)”.