How an Open-Source Disaster Map Helped Thousands of Earthquake Survivors


Article by Eray Gündoğmuş: “On February 6, 2023, earthquakes of magnitude 7.7 and 7.6 hit the Kahramanmaraş region of Turkey, affecting 10 cities and resulting in more than 42,000 deaths and 120,000 injuries as of February 21.

In the hours following the earthquake, a group of programmers quickly came together on a Discord server called “Açık Yazılım Ağı” (Open Software Network), inviting IT professionals to volunteer and develop a project that could serve as a resource for rescue teams, earthquake survivors, and those who wanted to help: afetharita.com. The name literally means “disaster map”.

Because there was little preparedness in the first few days after such a huge earthquake, disaster victims in distress began posting urgent aid requests on social media. With the help of thousands of volunteers, we used technologies such as artificial intelligence and machine learning to transform these aid requests into structured, readable data and visualized them on afetharita.com. Later, we gathered critical disaster-related data from the relevant institutions and added it to the map.
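To make that pipeline concrete, here is a minimal, hypothetical sketch of the kind of processing described: extracting a coarse location from a free-text aid request and turning it into a mappable record. The regex matching, the tiny gazetteer, and all field names are illustrative assumptions, not the project’s actual code; the real system relied on ML models for address extraction and proper geocoding services.

```python
# Hypothetical sketch: free-text aid request -> GeoJSON point for a map layer.
# The gazetteer and matching logic are stand-ins for the ML-based address
# extraction and geocoding the afetharita.com team describes.
import json
import re

# Toy gazetteer mapping place names to (lat, lon); a real pipeline would call
# a geocoding service after an NER model extracts the address span.
GAZETTEER = {
    "antakya": (36.2025, 36.1606),
    "kahramanmaras": (37.5753, 36.9228),
}

def parse_aid_request(text: str) -> dict | None:
    """Extract a coarse location from a social-media post and geocode it."""
    for place, (lat, lon) in GAZETTEER.items():
        if re.search(place, text, flags=re.IGNORECASE):
            return {
                "type": "Feature",
                "geometry": {"type": "Point", "coordinates": [lon, lat]},
                "properties": {"raw_text": text, "place": place},
            }
    return None  # unresolved requests would be queued for human review

post = "We are under the rubble, please help! Antakya, Gazi Mahallesi"
print(json.dumps(parse_aid_request(post), ensure_ascii=False, indent=2))
```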

Disaster Map, which received a total of 35 million requests and 627,000 unique visitors, played a significant role in providing software support during the most urgent and critical period of the disaster, helping NGOs, volunteers, and disaster victims access important information. I wanted to share the process, our experiences, and the technical details of this project clearly in writing…(More)”.

COVID isn’t going anywhere, neither should our efforts to increase responsible access to data


Article by Andrew J. Zahuranec, Hannah Chafetz and Stefaan Verhulst: “…Moving forward, institutions will need to consider how to embed non-traditional data capacity into their decision-making to better understand the world around them and respond to it.

For example, wastewater surveillance programmes that emerged during the pandemic continue to provide valuable insights about outbreaks before they are reported by clinical testing and have the potential to be used for other emerging diseases.

We need these and other programmes now more than ever. Governments and their partners need to maintain and, in many cases, strengthen the collaborations they established through the pandemic.

To address future crises, we need to institutionalize new data capacities – particularly those involving non-traditional datasets that may capture digital information that traditional health surveys and statistical methods often miss.

[Figure: The types and sources of non-traditional data sources that stood out most during the COVID-19 response. Image: The GovLab]

In our report, we suggest four pathways to advance the responsible access to non-traditional data during future health crises…(More)”.

Data solidarity: why sharing is not always caring 


Essay by Barbara Prainsack: “To solve these problems, we need to think about data governance in new ways. It is no longer enough to assume that asking people to consent to how their data is used will prevent harm. In our example of telehealth, and in virtually all data-related scandals of the last decade, from Cambridge Analytica to Robodebt, informed consent did not avoid, or could not have avoided, the problem. We all regularly agree to data uses that we know are problematic – not because we do not care about privacy. We agree because this is the only way to get access to benefits, a mortgage, or teachers and health professionals. In a world where face-to-face assessments are unavailable or excessively expensive, opting out of digital practices would no longer be an option (Prainsack, 2017, pp. 126-131; see also Oudshoorn, 2011).

Solidarity-based data governance (in short: data solidarity) can help us to distribute the risks and the benefits of digital practices more equitably. The details of the framework are spelled out in full elsewhere (Prainsack et al., 2022a, b). In short, data solidarity seeks to facilitate data uses that create significant public value, and at the same time prevent and mitigate harm (McMahon et al., 2020). One important step towards both goals is to stop ascribing risks to data types, and to distinguish between different types of data use instead. In some situations, harm can be prevented by making sure that data is not used for harmful purposes, such as online tracking. In other contexts, however, harm prevention can require that we do not collect the data in the first place. Not recording something, making it invisible and uncountable to others, can be the most responsible way to act in some contexts.

This means that recording and sharing data should not become a default. More data is not always better. Instead, policymakers need to consider carefully – in a dialogue with the people and communities that have a stake in it – what should be recorded, where it will be stored and who governs the data once it has been collected – if it is to be collected at all (see also Kukutai and Taylor, 2016)…(More)”.

Researchers scramble as Twitter plans to end free data access


Article by Heidi Ledford: “Akin Ünver has been using Twitter data for years. He investigates some of the biggest issues in social science, including political polarization, fake news and online extremism. But earlier this month, he had to set aside time to focus on a pressing emergency: helping relief efforts in Turkey and Syria after the devastating earthquake on 6 February.

Aid workers in the region have been racing to rescue people trapped by debris and to provide health care and supplies to those displaced by the tragedy. Twitter has been invaluable for collecting real-time data and generating crucial maps to direct the response, says Ünver, a computational social scientist at Özyeğin University in Istanbul.

So when he heard that Twitter was about to end its policy of providing free access to its application programming interface (API) — a pivotal set of rules that allows people to extract and process large amounts of data from the platform — he was dismayed. “Couldn’t come at a worse time,” he tweeted. “Most analysts and programmers that are building apps and functions for Turkey earthquake aid and relief, and are literally saving lives, are reliant on Twitter API.”…

Twitter has long offered academics free access to its API, an unusual approach that has been instrumental in the rise of computational approaches to studying social media. So when the company announced on 2 February that it would end that free access in a matter of days, it sent the field into a tailspin. “Thousands of research projects running over more than a decade would not be possible if the API wasn’t free,” says Patty Kostkova, who specializes in digital health studies at University College London…(More)”.
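For context, “free access to the API” meant that a researcher with an academic bearer token could pull tweets with a few lines of code. The sketch below uses Twitter’s documented v2 recent-search endpoint via the `requests` library; the query string is an invented example, and under the new pricing such calls require a paid access tier.

```python
# Illustrative sketch of a Twitter API v2 recent-search call of the kind
# researchers relied on. The query is a made-up example; a valid bearer
# token (now tied to a paid tier) is required.
import os
import requests

BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]  # credential from the env

resp = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    params={
        "query": "deprem (yardim OR enkaz) -is:retweet",  # earthquake aid posts
        "max_results": 100,
        "tweet.fields": "created_at,geo",
    },
    timeout=30,
)
resp.raise_for_status()
for tweet in resp.json().get("data", []):
    print(tweet["created_at"], tweet["text"][:80])
```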

Data from satellites is starting to spur climate action


Miriam Kramer and Alison Snyder at Axios: “Data from space is being used to try to fight climate change by optimizing shipping lanes, adjusting rail schedules and pinpointing greenhouse gas emissions.

Why it matters: Satellite data has been used to monitor how human activities are changing Earth’s climate. Now it’s being used to attempt to alter those activities and take action against that change.

  • “Pixels are great but nobody really wants pixels except as a step to answering their questions about how the world is changing and how that should assess and inform their decision-making,” Steven Brumby, CEO and co-founder of Impact Observatory, which uses AI to create maps from satellite data, tells Axios in an email.

What’s happening: Several satellite companies are beginning to use their capabilities to guide on-the-ground actions that contribute to greenhouse gas emissions cuts.

  • UK-based satellite company Inmarsat, which provides telecommunications to the shipping and agriculture industries, is working with Brazilian railway operator Rumo to optimize train trips — and reduce fuel use.
  • Maritime shipping, which relies on heavy fuel oil, is another sector where satellites could help reduce emissions by routing ships more efficiently and preventing communications-caused delays, says Inmarsat’s CEO Rajeev Suri. The industry contributes 3% of global greenhouse gas emissions.
  • Carbon capture, innovations in steel and cement production and other inventions are important for addressing climate change, Suri says. But using satellites is “potentially low-hanging fruit because these technologies are already available.”

Other satellites are also tracking emissions of methane — a strong greenhouse gas — from landfills and oil and gas production.

  • “It’s a needle in a haystack problem. There are literally millions of potential leak points all over the world,” says Stéphane Germain, founder and CEO of GHGSat, which monitors methane emissions from its six satellites in orbit.
  • A satellite dedicated to homing in on carbon dioxide emissions is due to launch later this year…(More)”.

Federated machine learning in data-protection-compliant research


Paper by Alissa Brauneck et al.: “In recent years, interest in machine learning (ML) as well as in multi-institutional collaborations has grown, especially in the medical field. However, strict application of data-protection laws reduces the size of training datasets, hurts the performance of ML systems and, in the worst case, can prevent the implementation of research insights in clinical practice. Federated learning can help overcome this bottleneck through decentralised training of ML models within the local data environment, while maintaining the predictive performance of ‘classical’ ML. Thus, federated learning provides immense benefits for cross-institutional collaboration by avoiding the sharing of sensitive personal data (Fig. 1). Because existing regulations (especially the General Data Protection Regulation 2016/679 of the European Union, or GDPR) set stringent requirements for medical data and rather vague rules for ML systems, researchers are faced with uncertainty. In this comment, we provide recommendations for researchers who intend to use federated learning, a privacy-preserving ML technique, in their research. We also point to areas where regulations are lacking, discussing some fundamental conceptual problems with ML regulation through the GDPR, related especially to notions of transparency, fairness and error-free data. We then provide an outlook on how implications from data-protection laws can be directly incorporated into federated learning tools…(More)”.
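The core mechanism this line of work builds on, federated averaging, is easy to state in code: each institution trains on data that never leaves its site, and only model parameters are aggregated centrally. The sketch below is a minimal NumPy illustration with synthetic data and a linear model; it is not the authors’ implementation, and production tools add secure aggregation, differential privacy, and the GDPR safeguards the comment discusses.

```python
# Minimal illustration of federated averaging (FedAvg): local gradient-descent
# updates at each "institution", followed by central averaging of weights.
# Synthetic data and a linear model, for exposition only.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's training round on data that never leaves its site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

# Three "hospitals", each holding a private dataset from the same true model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    # Each site trains locally; only weight vectors are shared with the server.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # The server averages them (optionally weighted by local sample counts).
    global_w = np.mean(local_ws, axis=0)

print("recovered weights:", global_w)  # approaches [2.0, -1.0]
```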

Predicting Socio-Economic Well-being Using Mobile Apps Data: A Case Study of France


Paper by Rahul Goel, Angelo Furno, and Rajesh Sharma: “Socio-economic indicators provide context for assessing a country’s overall condition. These indicators contain information about education, gender, poverty, employment, and other factors. Therefore, reliable and accurate information is critical for social research and government policymaking. Most data sources available today, such as censuses, have sparse population coverage or are updated infrequently. Nonetheless, alternative data sources, such as call detail records (CDR) and mobile app usage, can serve as cost-effective and up-to-date sources for identifying socio-economic indicators.
This work investigates mobile app data to predict socio-economic features. We present a large-scale study using data that captures the traffic of thousands of mobile applications used by approximately 30 million users, distributed over 550,000 km² and served by over 25,000 base stations. The dataset covers the whole territory of France and spans more than 2.5 months, from 16 March 2019 to 6 June 2019. Using the app usage patterns, our best model can estimate socio-economic indicators (attaining an R-squared score of up to 0.66). Furthermore, using model explainability, we discover that mobile app usage patterns have the potential to reveal socio-economic disparities at the IRIS level (France’s smallest statistical areas). Insights from this study suggest several avenues for future interventions, including users’ temporal network analysis and exploration of alternative data sources…(More)”.
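The modelling setup is a standard supervised-regression problem, sketched below with synthetic data: per-area app-usage shares as features, a socio-economic indicator as the target, and R² on held-out areas as the quality metric the paper reports. Feature construction, model choice, and all numbers here are assumptions for illustration, not the authors’ pipeline.

```python
# Hypothetical sketch: predict a socio-economic indicator per area from
# app-usage shares. Synthetic stand-in for the paper's France-wide data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_areas, n_apps = 1000, 20

# Features: fraction of mobile traffic per app category in each area.
X = rng.dirichlet(np.ones(n_apps), size=n_areas)
# Target: synthetic indicator correlated with a few app categories plus noise.
y = 3.0 * X[:, 0] - 2.0 * X[:, 5] + rng.normal(scale=0.05, size=n_areas)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out areas: {r2_score(y_te, model.predict(X_te)):.2f}")
```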

Global Renewables Watch


About: “The Global Renewables Watch is a first-of-its-kind living atlas intended to map and measure all utility-scale solar and wind installations on Earth using artificial intelligence (AI) and satellite imagery, allowing users to evaluate clean energy transition progress and track trends over time. It also provides unique spatial data on land use trends to help achieve the dual aims of environmental protection and increasing renewable energy capacity…(More)”

Who owns the map? Data sovereignty and government spatial data collection, use, and dissemination


Paper by Peter A. Johnson and Teresa Scassa: “Maps, created through the collection, assembly, and analysis of spatial data, are used to support government planning and decision-making. Traditionally, spatial data used to create maps are collected, controlled, and disseminated by government, although over time, this role has shifted. This shift has been driven by the availability of alternate sources of data collected by private sector companies, and data contributed by volunteers to open mapping platforms, such as OpenStreetMap. In theorizing this shift, we provide examples of how governments use data sovereignty as a tool to shape spatial data collection, use, and sharing. We frame four models of how governments may navigate shifting spatial data sovereignty regimes: first, with government retaining complete control over data collection; second, with government contracting a third party to provide specific data collection services, but with data ownership and dissemination responsibilities resting with government; third, with government purchasing data under terms of access set by third-party data collectors, who disseminate data to several parties; and finally, with government retreating from or relinquishing data sovereignty altogether. Within this rapidly changing landscape of data providers, we propose that governments must consider how to address data sovereignty concerns to retain their ability to control data use in the public interest…(More)”.

Data Free Flow with Trust: Overcoming Barriers to Cross-Border Data Flows


Briefing Paper by the WEF: “The movement of data across country borders is essential to the global economy. When data flows across borders, it is possible to deliver more to more people and produce more benefits for people and planet. This briefing paper highlights the importance of such data flows and urges global leaders in the public and private sectors to take collective action towards a shared understanding of them, with a view to implementing “Data Free Flow with Trust” (DFFT) – an umbrella concept for facilitating trust-based data exchanges. The paper reviews the current challenges facing DFFT, takes stock of progress made so far, offers direction for policy mechanisms and concrete tools for businesses and, more importantly, promotes global discussion about how to realize DFFT from the perspectives of policy and business…(More)”.