Paper by Mahsa Moghadas, Alexander Fekete, Abbas Rajabifard & Theo Kötter: “Transformative disaster resilience in times of climate change underscores the importance of reflexive governance, facilitation of socio-technical advancement, co-creation of knowledge, and innovative and bottom-up approaches. However, implementing these capacity-building processes by relying on census-based datasets and nomothetic (or top-down) approaches remains challenging for many jurisdictions. Web 2.0 knowledge sharing via online social networks, in contrast, provides a unique opportunity and valuable data sources to complement existing approaches, understand dynamics within large communities of individuals, and incorporate collective intelligence into disaster resilience studies. Using Twitter data (passive crowdsourcing) and an online survey, this study draws on the wisdom of crowds and public judgment across near-real-time disaster phases as the flood disaster hit Germany in July 2021. Latent Dirichlet Allocation, an unsupervised machine learning technique for topic modeling, was applied to the corpora of the two data sources to identify topics associated with different disaster phases. In addition to semantic (textual) analysis, spatiotemporal patterns of online disaster communication were analyzed to determine the contribution patterns associated with the affected areas. Finally, the extracted topics discussed online were compiled into five themes related to disaster resilience capacities (preventive, anticipative, absorptive, adaptive, and transformative). The near-real-time collective sensing approach reflected a wide diversity and spectrum of people’s experiences and knowledge regarding flooding disasters and highlighted communities’ sociocultural characteristics.
This bottom-up approach could be an innovative alternative to the traditional participatory techniques of organizing meetings and workshops, enabling situational analysis as such events unfold, at a fraction of the cost, to inform disaster resilience initiatives…(More)”.
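The excerpt names the technique but not its implementation. Purely as a toy sketch of how Latent Dirichlet Allocation assigns tokens to latent topics, here is a minimal pure-Python collapsed Gibbs sampler: the four "tweets", the two-topic setting, and the hyperparameters below are invented for illustration and are not taken from the study.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics=2, n_iter=200, alpha=0.1, beta=0.01, seed=42):
    """Collapsed Gibbs sampling for LDA over tokenized documents."""
    rng = random.Random(seed)
    vocab = sorted({w for doc in docs for w in doc})
    V = len(vocab)
    n_tw = defaultdict(int)   # (topic, word) counts
    n_dt = defaultdict(int)   # (doc, topic) counts
    n_t = [0] * n_topics      # total tokens per topic
    assignments = []          # current topic of each token
    # Random initialization of topic assignments.
    for d, doc in enumerate(docs):
        z_doc = []
        for w in doc:
            z = rng.randrange(n_topics)
            n_tw[(z, w)] += 1; n_dt[(d, z)] += 1; n_t[z] += 1
            z_doc.append(z)
        assignments.append(z_doc)
    # Gibbs sweeps: remove a token's assignment, resample from the conditional.
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                z = assignments[d][i]
                n_tw[(z, w)] -= 1; n_dt[(d, z)] -= 1; n_t[z] -= 1
                weights = [
                    (n_dt[(d, k)] + alpha) * (n_tw[(k, w)] + beta) / (n_t[k] + V * beta)
                    for k in range(n_topics)
                ]
                z = rng.choices(range(n_topics), weights=weights)[0]
                n_tw[(z, w)] += 1; n_dt[(d, z)] += 1; n_t[z] += 1
                assignments[d][i] = z
    # Return the top three words per topic.
    return [
        sorted(vocab, key=lambda w: n_tw[(k, w)], reverse=True)[:3]
        for k in range(n_topics)
    ]

# Hypothetical pre-tokenized "tweets" from two disaster phases.
docs = [
    ["flood", "rescue", "helicopter", "rescue"],
    ["flood", "evacuate", "rescue", "shelter"],
    ["donate", "funds", "rebuild", "donate"],
    ["rebuild", "funds", "insurance", "donate"],
]
top_words = lda_gibbs(docs, n_topics=2)
print(top_words)
```

On a corpus like this, the sampler will often separate response-phase vocabulary from recovery-phase vocabulary, mirroring how the study maps topics onto disaster phases; real work would of course use a tested library implementation on a far larger corpus.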
Big data for whom? Data-driven estimates to prioritize the recovery needs of vulnerable populations after a disaster
Blog and paper by Sabine Loos and David Lallemant: “For years, international agencies have been effusing about the benefits of big data for sustainable development. Emerging technologies–such as crowdsourcing, satellite imagery, and machine learning–have the power to better inform decision-making, especially decisions that support the 17 Sustainable Development Goals. When a disaster occurs, overwhelming amounts of big data from emerging technology are produced with the intention to support disaster responders. We are seeing this now with the recent earthquakes in Turkey and Syria: space agencies are processing satellite imagery to map faults and building damage, while digital humanitarians are crowdsourcing baseline data like roads and buildings.
Eight years ago, the Nepal 2015 earthquake was no exception–emergency managers received maps of shaking or crowdsourced maps of affected people’s needs from diverse sources. A year later, I began research with a team of folks involved in the response to the earthquake, and I was determined to understand how the big data produced after disasters was connected to the long-term effects of the earthquake. Our research team found that a lot of the data used to guide the recovery focused on building damage, which was often viewed as a proxy for population needs. While building damage information is useful, it does not capture the full array of social, environmental, and physical factors that lead to disparities in long-term recovery. I assumed information would have been available immediately after the earthquake that was aimed at supporting vulnerable populations. However, as I spent time in Nepal during the years after the 2015 earthquake, speaking with government officials and nongovernmental organizations involved in the response and recovery, I found they lacked key information about the needs of the most vulnerable households–those who would face the greatest obstacles during the recovery from the earthquake. Governmental and nongovernmental actors prioritized the needs of vulnerable households as best they could with the information available, but I was inspired to pursue research that could provide better information more quickly after an earthquake, to inform recovery efforts.
In our paper published in Communications Earth and Environment [link], we develop a data-driven approach to rapidly estimate which areas are likely to fall behind during recovery due to physical, environmental, and social obstacles. This approach leverages survey data on recovery progress combined with geospatial datasets, readily available after an event, that represent factors expected to impede recovery. To identify communities with disproportionate needs long after a disaster, we propose focusing on those who fall behind in recovery over time, or non-recovery. We focus on non-recovery since it places attention on those who do not recover rather than delineating the characteristics of successful recovery. In addition, several groups in Nepal involved in the recovery told us they understood vulnerability–a concept that is place-based and can change over time–as describing those who would not be able to recover due to the earthquake…(More)”
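The excerpt does not give the model's form. Purely as a hedged illustration of the prioritization idea, one could imagine combining per-area geospatial indicators into a composite "non-recovery" score and ranking areas by it. The districts, factor names, values, and equal weighting below are all hypothetical; the actual paper fits its estimates to survey data on recovery progress.

```python
# Stylized per-area indicators (all values invented for illustration).
areas = {
    "District A": {"landslide_risk": 0.8, "road_access": 0.2, "poverty_rate": 0.6},
    "District B": {"landslide_risk": 0.3, "road_access": 0.9, "poverty_rate": 0.2},
    "District C": {"landslide_risk": 0.6, "road_access": 0.4, "poverty_rate": 0.9},
}

def non_recovery_score(factors):
    """Average of normalized impediments to recovery for one area."""
    # Road access helps recovery, so invert it; the other factors impede it.
    impediments = [
        factors["landslide_risk"],
        1.0 - factors["road_access"],
        factors["poverty_rate"],
    ]
    return sum(impediments) / len(impediments)

# Areas most likely to fall behind in recovery come first.
ranked = sorted(areas, key=lambda a: non_recovery_score(areas[a]), reverse=True)
print(ranked)
```

A real analysis would calibrate the weights against observed recovery outcomes rather than averaging equally; the sketch only shows how readily available layers could be folded into a single prioritization signal.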
Elon Musk Has Broken Disaster-Response Twitter
Article by Juliette Kayyem: “For years, Twitter was at its best when bad things happened. Before Elon Musk bought it last fall, before it was overrun with scammy ads, before it amplified fake personas, and before its engineers were told to get more eyeballs on the owner’s tweets, Twitter was useful in saving lives during natural disasters and man-made crises. Emergency-management officials have used the platform to relate timely information to the public—when to evacuate during Hurricane Ian, in 2022; when to hide from a gunman during the Michigan State University shootings earlier this month—while simultaneously allowing members of the public to transmit real-time data. The platform didn’t just provide a valuable communications service; it changed the way emergency management functions.
That’s why Musk-era Twitter alarms so many people in my field. The platform has been downgraded in multiple ways: Service is glitchier; efforts to contain misleading information are patchier; the person at the top seems largely dismissive of outside input. But now that the platform has embedded itself so deeply in the disaster-response world, it’s difficult to replace. The rapidly deteriorating situation raises questions about platforms’ obligation to society—questions that prickly tech execs generally don’t want to consider…(More)”
How an Open-Source Disaster Map Helped Thousands of Earthquake Survivors
Article by Eray Gündoğmuş: “On February 6, 2023, earthquakes measuring 7.7 and 7.6 hit the Kahramanmaraş region of Turkey, affecting 10 cities and resulting in more than 42,000 deaths and 120,000 injuries as of February 21.
In the hours following the earthquake, a group of programmers quickly came together on a Discord server called “Açık Yazılım Ağı” (Open Software Network), inviting IT professionals to volunteer and develop a project that could serve as a resource for rescue teams, earthquake survivors, and those who wanted to help: afetharita.com. The name literally means “disaster map”.
Because preparedness fell short in the first few days after such a huge earthquake, disaster victims in distress began posting urgent aid requests on social media. With the help of thousands of volunteers, we utilized technologies such as artificial intelligence and machine learning to transform these aid requests into structured, readable data and visualized them on afetharita.com. Later, we gathered critical disaster-related data from the relevant institutions and added it to the map.
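The article does not spell out the pipeline's internals, but its overall shape (free-text aid requests in, map-ready features out) can be sketched roughly as follows. The request texts, place names, and coordinates are all invented, and the lookup table stands in for the ML-based address extraction and geocoding a real system would need.

```python
import json

# Hypothetical free-text aid requests of the kind posted on social media.
RAW_REQUESTS = [
    "URGENT: family trapped, Atatürk Caddesi No:12, need rescue",
    "Water and blankets needed at Cumhuriyet Mahallesi No:4",
]

# Toy gazetteer standing in for a geocoding service (coordinates invented).
GAZETTEER = {
    "Atatürk Caddesi": (37.575, 36.937),
    "Cumhuriyet Mahallesi": (37.580, 36.920),
}

def to_geojson(requests):
    """Match known place names in each request and emit GeoJSON points."""
    features = []
    for text in requests:
        for place, (lat, lon) in GAZETTEER.items():
            if place in text:
                features.append({
                    "type": "Feature",
                    "geometry": {"type": "Point", "coordinates": [lon, lat]},
                    "properties": {"request": text},
                })
    return {"type": "FeatureCollection", "features": features}

# GeoJSON is what a web map front end (e.g. Leaflet or Mapbox) can render directly.
print(json.dumps(to_geojson(RAW_REQUESTS), ensure_ascii=False, indent=2))
```

In production, the string-matching step would be replaced by a trained address extractor and a geocoding API, with human review for ambiguous requests; the GeoJSON output format, however, is a standard, realistic interchange choice.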
Disaster Map, which received a total of 35 million requests and 627,000 unique visitors, played a significant role in providing software support during the most urgent and critical periods of the disaster, and helped NGOs, volunteers, and disaster victims to access important information. I wanted to share the process, our experiences, and technical details of this project clearly in writing…(More)”.
Data for emergencies
Report by the Royal Society: “As evidenced throughout the COVID-19 pandemic, scientists and decision-makers benefit from rapid access to high quality data in a fast-changing, emergency environment. Enabling this for future pandemics, and other events which threaten serious damage to human welfare or the environment, will require a robust data infrastructure and a continuous process of public engagement.
Creating resilient and trusted data systems (PDF) sets out five high-level recommendations for the UK Government to achieve this. The project, chaired by Professor Chris Dye FRS FMedSci, builds on a public dialogue commissioned by the Royal Society and a workshop held in October 2022; its recommendations call for action on public engagement, data protection, stress testing, standardisation, and trusted research environments.
The Royal Society commissioned the public facilitation agency Hopkins Van Mil to deliver a public dialogue to explore the public’s views on data systems during emergencies and non-emergencies. The dialogue format was chosen to facilitate an immersive and informed discussion, where a full range of viewpoints could be shared, exploring nuanced views, trade-offs and ‘least-regret’ options. The public dialogue addressed the following questions:
a) Do the current systems in place support a trusted and effective response to emergencies?
b) Have the systems been established in ways that enable them to be used in a trusted way outside of emergencies?
c) Are we any better placed to mount a data-led response to other emergencies?
There are seven key findings from the dialogue, covering the complexity of emotions, confidence in data protection enforcement, and expectations for emergency preparedness…(More)”.
Accelerate Aspirations: Moving Together to Achieve Systems Change
Report by Data.org: “To solve our greatest global challenges, we need to accelerate how we use data for good. But to truly make data-driven tools that serve society, we must re-imagine data for social impact more broadly, more inclusively, and in a more interdisciplinary way.
So, we face a choice. Business as usual can continue through funding and implementing under-resourced and siloed data projects that deliver incremental progress. Or we can think and act boldly to drive equitable and sustainable solutions.
Accelerate Aspirations: Moving Together to Achieve Systems Change is a comprehensive report on the key trends and tensions in the emerging field of data for social impact…(More)”.
Industry Data for Society Partnership
Press Release: “On Wednesday, a new Industry Data for Society Partnership (IDSP) was launched by GitHub, Hewlett Packard Enterprise (HPE), LinkedIn, Microsoft, Northumbrian Water Group, R2 Factory and UK Power Networks. The IDSP is a first-of-its-kind cross-industry partnership to help advance more open and accessible private-sector data for societal good. The founding members of the IDSP agree to provide greater access to their data, where appropriate, to help tackle some of the world’s most pressing challenges in areas such as sustainability and inclusive economic growth.
In the past few years, open data has played a critical role in enabling faster research and collaboration across industries and with the public sector. As we saw during COVID-19, pandemic data that was made more open enabled researchers to make faster progress and gave citizens more information to inform their day-to-day activities. The IDSP’s goal is to continue this model into new areas and help address other complex societal challenges. The IDSP will serve as a forum for the participating companies to foster collaboration, as well as a resource for other entities working on related issues.
IDSP members commit to the following:
- To open data or provide greater access to data, where appropriate, to help solve pressing societal problems in a usable, responsible and inclusive manner.
- To share knowledge and information for the effective use of open data and data collaboration for social benefit.
- To invest in skilling a broad class of professionals to use data effectively and responsibly for social impact.
- To protect individuals’ privacy in all these activities.
The IDSP will also bring in other organizations with expertise in societal issues. At launch, The GovLab’s Data Program based at New York University and the Open Data Institute will both be partnership Affiliates to provide guidance and expertise for partnership endeavors…(More)”.
Operationalizing Digital Self Determination
Paper by Stefaan G. Verhulst: “We live in an era of datafication, one in which life is increasingly quantified and transformed into intelligence for private or public benefit. When used responsibly, this offers new opportunities for public good. However, three key forms of asymmetry currently limit this potential, especially for already vulnerable and marginalized groups: data asymmetries, information asymmetries, and agency asymmetries. These asymmetries limit human potential, both in a practical and psychological sense, leading to feelings of disempowerment and eroding public trust in technology. Existing methods to limit asymmetries (e.g., consent) as well as some alternatives under consideration (data ownership, collective ownership, personal information management systems) have limitations to adequately address the challenges at hand. A new principle and practice of digital self-determination (DSD) is therefore required.
DSD is based on existing concepts of self-determination, as articulated in sources as varied as Kantian philosophy and the 1966 International Covenant on Economic, Social and Cultural Rights. Updated for the digital age, DSD contains several key characteristics, including the fact that it has both an individual and collective dimension; is designed to especially benefit vulnerable and marginalized groups; and is context-specific (yet also enforceable). Operationalizing DSD in this (and other) contexts so as to maximize the potential of data while limiting its harms requires a number of steps. In particular, a responsible operationalization of DSD would consider four key prongs or categories of action: processes, people and organizations, policies, and products and technologies…(More)”.
What is PeaceTech?
Report by Behruz Davletov, Uma Kalkar, Marine Ragnet, and Stefaan Verhulst: “From sensors to detect explosives to geographic data for disaster relief to artificial intelligence verifying misleading online content, data and technology are essential assets for peace efforts. Indeed, the ongoing Russia-Ukraine war is a direct example of how data, data science, and technology as a whole have been mobilized to assist and monitor conflict responses and support peacebuilding.
Yet understanding of the ways in which technology can be applied for peace, the kinds of peace promotion it can serve, and its associated risks remains muddled. Thus, a framework for the governance of these peace technologies—#PeaceTech—is needed at an international and transnational level to guide the responsible and purposeful use of technology and data to strengthen peace and justice initiatives.
Today, The GovLab is proud to announce the release of the “PeaceTech Topic Map: A Research Base for an Emerging Field,” an overview of the key themes and challenges of technologies used by and created for peace efforts…(More)”.
Data for Social Good: Non-Profit Sector Data Projects
Open Access Book by Jane Farmer, Anthony McCosker, Kath Albury & Amir Aryani: “In February 2020, just pre-COVID, a group of managers from community organisations met with us researchers about data for social good. “We want to collaborate with data,” said one CEO. “We want to find the big community challenges, work together to fix them and monitor the change we make over ten years.” The managers created a small, pooled fund and, through the 2020–2021 COVID lockdowns, used Zoom to workshop. Together we identified organisations’ datasets, probed their strengths and weaknesses, and found ways to share and visualise data. There were early frustrations about what data was available, its ‘granularity’ and whether new insights about the community could be found, but about half-way through the project, there was a tipping point, and something changed. While still focused on discovery from visualisations comparing their data by suburb, the group started to talk about other benefits. Through drawing in staff from across their organisations, they saw how the work of departments could be integrated by using data, and they developed new confidence in using analytics techniques. Together, the organisations developed an understanding of each other’s missions and services, while developing new relationships, trust and awareness of the possibilities of collaborating to address community needs. Managers completed the pilot having codesigned an interactive Community Resilience Dashboard, which enabled them to visualise their own organisations’ data and open public data to reveal new landscapes about community financial wellbeing and social determinants of health. They agreed they also had so much more: a collective data-capable partnership, internally and across organisations, with new potential to achieve community social justice driven by data.
We use this story to signify how right now is a special—indeed critical—time for non-profit organisations and communities to build their capability to work with data. Certainly, in high-income countries, there is pressure on non-profits to operate like commercial businesses—prioritising efficiency and using data about their outputs and impacts to compete for funding. However, beyond the immediate operational horizon, non-profits can use data analytics techniques to drive community social justice and potentially strengthen the institutional capability of the whole social welfare sector. Non-profits generate a lot of data, but innovating with technology is not a traditional competence, and it demands infrastructure investment and a specialist workforce. Given their meagre access to funding, this book examines how non-profits of different types and sizes can use data for social good and find a path to data capability. The aim is to inspire and give practical examples of how non-profits can make data useful. While there is an emerging range of novel data-for-social-good cases around the world, the case studies featured in this book exemplify our research and developing thinking in experimental data projects with diverse non-profits that harnessed various types of data. We outline a way to gain data capability through collaborating internally across departments and with other external non-profits and skilled data analytics partners. We term this way of working collaborative data action…(More)”.