Paper by Christopher Loynes, Jamal Ouenniche & Johannes De Smedt: “This paper provides the humanitarian community with an automated tool that can detect a disaster using tweets posted on Twitter, alongside a portal to identify local and regional Non-Governmental Organisations (NGOs) that are best-positioned to provide support to people adversely affected by a disaster. The proposed disaster detection tool uses a linear Support Vector Classifier (SVC) to detect man-made and natural disasters, and a density-based spatial clustering of applications with noise (DBSCAN) algorithm to accurately estimate a disaster’s geographic location. This paper provides two original contributions. The first is combining the automated disaster detection tool with the prototype portal for NGO identification. This unique combination could help reduce the time taken to raise awareness of the disaster detected, improve the coordination of aid, increase the amount of aid delivered as a percentage of initial donations and improve aid effectiveness. The second contribution is a general framework that categorises the different approaches that can be adopted for disaster detection. Furthermore, this paper uses responses obtained from an on-the-ground survey with NGOs in the disaster-hit region of Uttar Pradesh, India, to provide actionable insights into how the portal can be developed further…(More)”.
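The pipeline the abstract describes (a linear SVC over tweet text, then DBSCAN over the geotags of tweets flagged as disaster-related) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code; all tweet texts, labels, coordinates, and the `eps` value are invented for the example.

```python
# Illustrative sketch only, not the authors' implementation: a linear
# SVC flags disaster-related tweets, then DBSCAN clusters the geotags
# of the flagged tweets to estimate where the disaster is happening.
# All tweet texts, labels and coordinates are invented examples.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.cluster import DBSCAN

train_texts = [
    "massive earthquake buildings collapsed",
    "flood waters rising fast need rescue",
    "wildfire spreading toward the town",
    "great coffee this morning",
    "watching the football game tonight",
    "new phone arrived today",
]
train_labels = [1, 1, 1, 0, 0, 0]  # 1 = disaster-related

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_texts)
clf = LinearSVC().fit(X, train_labels)

new_texts = ["earthquake flood damage everywhere", "coffee and football tonight"]
preds = clf.predict(vectorizer.transform(new_texts))

# Geotags (lat, lon) of tweets flagged as disaster-related. DBSCAN
# finds the dense cluster and marks the stray point as noise (-1).
coords = np.array([[26.85, 80.95], [26.86, 80.94], [26.84, 80.96],
                   [10.00, 50.00]])  # last point is an outlier
clusters = DBSCAN(eps=0.1, min_samples=2).fit_predict(coords)
centre = coords[clusters == 0].mean(axis=0)  # estimated disaster location
```

On real data the classifier would be trained on a labelled tweet corpus, and `eps` would be tuned to the geographic scale of interest; DBSCAN's noise label (-1) is what lets the sketch discard stray geotags rather than averaging them into the location estimate.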
Disinformation Tracker
Press Release: “Today, Global Partners Digital (GPD), ARTICLE 19, the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), PROTEGE QV and the Centre for Human Rights of the University of Pretoria jointly launched an interactive map to track and analyse disinformation laws, policies and patterns of enforcement across Sub-Saharan Africa.
The map offers a bird’s-eye view of trends in state responses to disinformation across the region, as well as in-depth analysis of the state of play in individual countries, using a bespoke framework to assess whether laws, policies and other state responses are human rights-respecting.
Developed against a backdrop of rapidly accelerating state action on COVID-19 related disinformation, the map is an open, iterative product. At the time of launch, it covers 31 countries (see below for the full list), with an aim to expand this in the coming months. All data, analysis and insight on the map has been generated by groups and actors based in Africa….(More)”.
How Crowdsourcing Aided a Push to Preserve the Histories of Nazi Victims
Andrew Curry at the New York Times: “With people around the globe sheltering at home amid the pandemic, an archive of records documenting Nazi atrocities asked for help indexing them. Thousands joined the effort….
As the virus prompted lockdowns across Europe, the director of the Arolsen Archives — the world’s largest devoted to the victims of Nazi persecution — joined millions of others working remotely from home and spending lots more time in front of her computer.
“We thought, ‘Here’s an opportunity,’” said the director, Floriane Azoulay.
Two months later, the archive’s “Every Name Counts” project has attracted thousands of online volunteers to work as amateur archivists, indexing names from the archive’s enormous collection of papers. To date, they have added over 120,000 names, birth dates and prisoner numbers to the database.
“There’s been much more interest than we expected,” Ms. Azoulay said. “The fact that people were locked at home and so many cultural offerings have moved online has played a big role.”
It’s a big job: The Arolsen Archives are the largest collection of their kind in the world, with more than 30 million original documents. They contain information on the wartime experiences of as many as 40 million people, including Jews executed in extermination camps and forced laborers conscripted from across Nazi-occupied Europe.
The documents, which take up 16 miles of shelving, include things like train manifests, delousing records, work detail assignments and execution records…(More)”.
Saving Our Oceans: Scaling the Impact of Robust Action Through Crowdsourcing
Paper by Amanda J. Porter, Philipp Tuertscher, and Marleen Huysman: “One approach for tackling grand challenges that is gaining traction in recent management literature is robust action: by allowing diverse stakeholders to engage with novel ideas, initiatives can cultivate successful ideas that yield greater impact. However, a potential pitfall of robust action is the length of time it takes to generate momentum. Crowdsourcing, we argue, is a valuable tool that can scale the generation of impact from robust action.
We studied an award‐winning environmental sustainability crowdsourcing initiative and found that robust action principles were indeed successful in attracting a diverse stakeholder network to generate novel ideas and develop these into sustainable solutions. Yet we also observed that the momentum and novelty generated were at risk of getting lost as the actors and their roles changed frequently throughout the process. We show the vital importance of robust action principles for connecting ideas and actors across crowdsourcing phases. These observations allow us to make a contribution to extant theory by explaining the micro‐dynamics of scaling robust action’s impact over time…(More)”.
UK parliamentary select committees: crowdsourcing for evidence-based policy or grandstanding?
Paper by The LSE GV314 Group: “In the United Kingdom, the influence of parliamentary select committees on policy depends substantially on the ‘seriousness’ with which they approach the task of gathering and evaluating a wide range of evidence and producing reports and recommendations based on it. However, select committees are often charged with being concerned with ‘political theatre’ and ‘grandstanding’ rather than producing evidence-based policy recommendations. This study, based on a survey of 919 ‘discretionary’ witnesses, including those submitting written and oral evidence, examines the case for arguing that there is political bias and grandstanding in the way select committees go about selecting witnesses, interrogating them and using their evidence to put reports together. While the research finds some evidence of such ‘grandstanding’, it does not appear to be strong enough to suggest that select committees’ role as crowdsourcers of evidence is compromised….(More)”.
Adaptive social networks promote the wisdom of crowds
Paper by Abdullah Almaatouq et al: “Social networks continuously change as new ties are created and existing ones fade. It is widely acknowledged that our social embedding has a substantial impact on what information we receive and how we form beliefs and make decisions. However, most empirical studies on the role of social networks in collective intelligence have overlooked the dynamic nature of social networks and its role in fostering adaptive collective intelligence. Therefore, little is known about how groups of individuals dynamically modify their local connections and, accordingly, the topology of the network of interactions to respond to changing environmental conditions. In this paper, we address this question through a series of behavioral experiments and supporting simulations. Our results reveal that, in the presence of plasticity and feedback, social networks can adapt to biased and changing information environments and produce collective estimates that are more accurate than their best-performing member. To explain these results, we explore two mechanisms: 1) a global-adaptation mechanism where the structural connectivity of the network itself changes such that it amplifies the estimates of high-performing members within the group (i.e., the network “edges” encode the computation); and 2) a local-adaptation mechanism where accurate individuals are more resistant to social influence (i.e., adjustments to the attributes of the “node” in the network); therefore, their initial belief is disproportionately weighted in the collective estimate. Our findings substantiate the role of social-network plasticity and feedback as key adaptive mechanisms for refining individual and collective judgments….(More)”.
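The global-adaptation mechanism described in the abstract (the network amplifying the estimates of high-performing members) can be approximated numerically as an accuracy-weighted average of individual estimates. This is an invented toy model, not the authors' experimental design; the truth value, group sizes, and noise levels are all assumptions made for illustration.

```python
# Toy sketch of the "global adaptation" mechanism: amplifying the
# estimates of high-performing members. The truth value, group sizes
# and noise levels are invented assumptions, not the paper's data.
import numpy as np

rng = np.random.default_rng(0)
truth = 100.0

# A biased information environment: most agents over-estimate, while
# a small minority is well calibrated.
estimates = np.concatenate([
    truth + rng.normal(20, 10, 45),  # biased majority
    truth + rng.normal(0, 2, 5),     # accurate minority
])

# Feedback: performance on an earlier, similar task (inverse error).
accuracy = 1.0 / (np.abs(estimates - truth) + 1e-9)

plain_mean = estimates.mean()                      # static, unweighted crowd
adapted = np.average(estimates, weights=accuracy)  # feedback-weighted crowd
```

In this toy setup the accuracy-weighted estimate lands far closer to the truth than the unweighted mean, echoing the paper's broader point that plasticity plus feedback lets a crowd correct for a biased information environment rather than averaging the bias in.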
Using crowdsourcing for a safer society: When the crowd rules
Paper by Enrique Estellés-Arolas: “Neighbours sharing information about robberies in their district through social networking platforms, citizens and volunteers posting about the irregularities of political elections on the Internet, and internauts trying to identify a suspect of a crime: in all these situations, people who share different degrees of relationship collaborate through the Internet and other technologies to try to help with or solve an offence.
The crowd, which is sometimes seen as a threat, in these cases becomes an invaluable resource that can complement law enforcement through collective intelligence. Owing to the increasing growth of such initiatives, this article conducts a systematic review of the literature to identify the elements that characterize them and to find the conditions that make them work successfully….(More)”.
Crowdsourcing a crisis response for COVID-19 in oncology
Aakash Desai et al in Nature Medicine: “Crowdsourcing efforts are currently underway to collect and analyze data from patients with cancer who are affected by the COVID-19 pandemic. These community-led initiatives will fill key knowledge gaps to tackle crucial clinical questions on the complexities of infection with the causative coronavirus SARS-CoV-2 in the large, heterogeneous group of vulnerable patients with cancer…(More)”
Doctors Turn to Social Media to Develop Covid-19 Treatments in Real Time
Michael Smith and Michelle Fay Cortez at Bloomberg: “There is a classic process for treating respiratory problems: First, give the patient an oxygen mask, or slide a small tube into the nose to provide an extra jolt of oxygen. If that’s not enough, use a “Bi-Pap” machine, which pushes air into the lungs more forcefully. If that fails, move to a ventilator, which takes over the patient’s breathing.
But these procedures tend to fail with Covid-19 patients. Physicians found that by the time they reached that last step, it was often too late; the patient was already dying.
In past pandemics like the 2003 global SARS outbreak, doctors sought answers to such mysteries from colleagues in hospital lounges or maybe penned articles for medical journals. It could take weeks or months for news of a breakthrough to reach the broader community.
For Covid-19, a kind of medical hive mind is on the case. By the tens of thousands, doctors are joining specialized social media groups to develop answers in real time. One of them, a Facebook group called the PMG COVID19 Subgroup, has 30,000 members worldwide….
Doctors are trying to fill an information void online. Sabry, an emergency-room doctor in two hospitals outside Los Angeles, found that the 70,000-strong Physician Moms Group she started five years ago on Facebook was so overwhelmed by coronavirus threads that she created the Covid-19 offshoot. So many doctors tried to join the new subgroup that Facebook’s click-to-join code broke. Some 10,000 doctors waited in line as the social media company’s engineers devised a fix.
She’s not alone. The topic also consumed two Facebook groups started by Dr. Nisha Mehta, a 38-year-old radiologist from Charlotte, North Carolina. The 54,000-member Physician Side Gigs, intended for business discussions, and an 11,000-person group called Physician Community for more general topics, are also all coronavirus, all the time, with thousands waiting to join…(More)”.
Crowdsourcing hypothesis tests: making transparent how design choices shape research results
Paper by J.F. Landy and Leonid Tiokhin: “To what extent are research results influenced by subjective decisions that scientists make as they design studies?
Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from two separate large samples (total N > 15,000) were then randomly assigned to complete one version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: materials from different teams rendered statistically significant effects in opposite directions for four out of five hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for two hypotheses, and a lack of support for three hypotheses.
Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, while considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim….(More)”.
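The meta-analytic pooling the abstract mentions can be illustrated with a fixed-effect (inverse-variance) average over per-team effect sizes for a single hypothesis. The d values and standard errors below are invented for the example; they are not the study's data.

```python
# Fixed-effect (inverse-variance) meta-analysis over per-team effect
# sizes for one hypothesis. The d values and standard errors below are
# invented for illustration; they are not the study's data.
import numpy as np

d = np.array([-0.37, -0.10, 0.05, 0.12, 0.26])  # Cohen's d per team
se = np.array([0.08, 0.07, 0.09, 0.08, 0.07])   # standard errors

w = 1.0 / se**2                       # inverse-variance weights
d_pooled = np.sum(w * d) / np.sum(w)  # pooled effect size
se_pooled = np.sqrt(1.0 / np.sum(w))
z = d_pooled / se_pooled              # pooled z-statistic
```

Here the opposite-signed team effects nearly cancel, giving a pooled estimate close to zero. That illustrates how a single pooled number can mask exactly the design-driven variability the study documents, which is why per-hypothesis analyses across material sets matter.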