Saving Our Oceans: Scaling the Impact of Robust Action Through Crowdsourcing


Paper by Amanda J. Porter, Philipp Tuertscher, and Marleen Huysman: “One approach for tackling grand challenges that is gaining traction in recent management literature is robust action: by allowing diverse stakeholders to engage with novel ideas, initiatives can cultivate successful ideas that yield greater impact. However, a potential pitfall of robust action is the length of time it takes to generate momentum. Crowdsourcing, we argue, is a valuable tool that can scale the generation of impact from robust action.

We studied an award‐winning environmental sustainability crowdsourcing initiative and found that robust action principles were indeed successful in attracting a diverse stakeholder network to generate novel ideas and develop these into sustainable solutions. Yet we also observed that the momentum and novelty generated were at risk of being lost as the actors and their roles changed frequently throughout the process. We show the vital importance of robust action principles for connecting ideas and actors across crowdsourcing phases. These observations allow us to make a contribution to extant theory by explaining the micro‐dynamics of scaling robust action’s impact over time…(More)”.

UK parliamentary select committees: crowdsourcing for evidence-based policy or grandstanding?


Paper by The LSE GV314 Group: “In the United Kingdom, the influence of parliamentary select committees on policy depends substantially on the ‘seriousness’ with which they approach the task of gathering and evaluating a wide range of evidence and producing reports and recommendations based on it. However, select committees are often charged with being concerned with ‘political theatre’ and ‘grandstanding’ rather than producing evidence-based policy recommendations. This study, based on a survey of 919 ‘discretionary’ witnesses, including those submitting written and oral evidence, examines the case for arguing that there is political bias and grandstanding in the way select committees go about selecting witnesses, interrogating them and using their evidence to put reports together. While the research finds some evidence of such ‘grandstanding’ it does not appear to be strong enough to suggest that the role of select committees is compromised as a crowdsourcer of evidence….(More)”.

Adaptive social networks promote the wisdom of crowds


Paper by Abdullah Almaatouq et al: “Social networks continuously change as new ties are created and existing ones fade. It is widely acknowledged that our social embedding has a substantial impact on what information we receive and how we form beliefs and make decisions. However, most empirical studies on the role of social networks in collective intelligence have overlooked the dynamic nature of social networks and its role in fostering adaptive collective intelligence. Therefore, little is known about how groups of individuals dynamically modify their local connections and, accordingly, the topology of the network of interactions to respond to changing environmental conditions. In this paper, we address this question through a series of behavioral experiments and supporting simulations. Our results reveal that, in the presence of plasticity and feedback, social networks can adapt to biased and changing information environments and produce collective estimates that are more accurate than their best-performing member. To explain these results, we explore two mechanisms: 1) a global-adaptation mechanism where the structural connectivity of the network itself changes such that it amplifies the estimates of high-performing members within the group (i.e., the network “edges” encode the computation); and 2) a local-adaptation mechanism where accurate individuals are more resistant to social influence (i.e., adjustments to the attributes of the “node” in the network); therefore, their initial belief is disproportionately weighted in the collective estimate. Our findings substantiate the role of social-network plasticity and feedback as key adaptive mechanisms for refining individual and collective judgments….(More)”.
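To make the local-adaptation mechanism described above concrete, here is a minimal Python sketch (an illustration under assumed parameters, not the authors' experimental model): agents whose past estimates are more accurate resist social influence more, which amounts to weighting their beliefs more heavily in the collective estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0
n_agents, n_trials = 50, 2000

# Each agent has a fixed personal noise level; lower noise stands in for a
# history of more accurate estimates (an assumption for this sketch).
noise = rng.uniform(5, 40, n_agents)

# Local adaptation: resistance to social influence tracks past accuracy,
# so weights are taken proportional to 1/noise (again, an assumption).
weights = (1 / noise) / (1 / noise).sum()

err_unweighted, err_weighted = [], []
for _ in range(n_trials):
    estimates = true_value + rng.normal(0, noise)  # one estimation task
    err_unweighted.append(abs(estimates.mean() - true_value))
    err_weighted.append(abs(np.dot(weights, estimates) - true_value))

print("mean abs. error, unweighted crowd mean :", round(np.mean(err_unweighted), 2))
print("mean abs. error, accuracy-weighted mean:", round(np.mean(err_weighted), 2))
```

Across repeated trials the accuracy-weighted mean typically lands closer to the true value than the unweighted crowd mean, which is the intuition behind the adaptive gains the authors report.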

Using crowdsourcing for a safer society: When the crowd rules


Paper by Enrique Estellés-Arolas: “Neighbours sharing information about robberies in their district through social networking platforms, citizens and volunteers posting about the irregularities of political elections on the Internet, and internauts trying to identify a suspect of a crime: in all these situations, people who share different degrees of relationship collaborate through the Internet and other technologies to try to help with or solve an offence.

The crowd, which is sometimes seen as a threat, in these cases becomes an invaluable resource that can complement law enforcement through collective intelligence. Owing to the increasing growth of such initiatives, this article conducts a systematic review of the literature to identify the elements that characterize them and to find the conditions that make them work successfully….(More)”.

Crowdsourcing a crisis response for COVID-19 in oncology


Aakash Desai et al in Nature Medicine: “Crowdsourcing efforts are currently underway to collect and analyze data from patients with cancer who are affected by the COVID-19 pandemic. These community-led initiatives will fill key knowledge gaps to tackle crucial clinical questions on the complexities of infection with the causative coronavirus SARS-CoV-2 in the large, heterogeneous group of vulnerable patients with cancer…(More)”

Doctors Turn to Social Media to Develop Covid-19 Treatments in Real Time


Michael Smith and Michelle Fay Cortez at Bloomberg: “There is a classic process for treating respiratory problems: First, give the patient an oxygen mask, or slide a small tube into the nose to provide an extra jolt of oxygen. If that’s not enough, use a “Bi-Pap” machine, which pushes air into the lungs more forcefully. If that fails, move to a ventilator, which takes over the patient’s breathing.

But these procedures tend to fail with Covid-19 patients. Physicians found that by the time they reached that last step, it was often too late; the patient was already dying.

In past pandemics like the 2003 global SARS outbreak, doctors sought answers to such mysteries from colleagues in hospital lounges or maybe penned articles for medical journals. It could take weeks or months for news of a breakthrough to reach the broader community.

For Covid-19, a kind of medical hive mind is on the case. By the tens of thousands, doctors are joining specialized social media groups to develop answers in real time. One of them, a Facebook group called the PMG COVID19 Subgroup, has 30,000 members worldwide….

Doctors are trying to fill an information void online. Sabry, an emergency-room doctor in two hospitals outside Los Angeles, found that the 70,000-strong Physician Moms Group she started five years ago on Facebook was so overwhelmed by coronavirus threads that she created the Covid-19 offshoot. So many doctors tried to join the new subgroup that Facebook’s click-to-join code broke. Some 10,000 doctors waited in line as the social media company’s engineers devised a fix.

She’s not alone. The topic also consumed two Facebook groups started by Dr. Nisha Mehta, a 38-year-old radiologist from Charlotte, North Carolina. The 54,000-member Physician Side Gigs, intended for business discussions, and an 11,000-person group called Physician Community for more general topics, are also all coronavirus, all the time, with thousands waiting to join…(More)”.

Crowdsourcing hypothesis tests: making transparent how design choices shape research results


Paper by J.F. Landy and Leonid Tiokhin: “To what extent are research results influenced by subjective decisions that scientists make as they design studies?

Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from two separate large samples (total N > 15,000) were then randomly assigned to complete one version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: materials from different teams rendered statistically significant effects in opposite directions for four out of five hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for two hypotheses, and a lack of support for three hypotheses.

Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, while considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim….(More)”.
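For readers unfamiliar with the effect-size metric quoted above, the sketch below (simulated data only, not the study's) shows how a Cohen's d is computed for each team's materials and how different materials can yield statistically meaningful effects in opposite directions for the same hypothesis. The per-team "true" effects are assumptions chosen only to illustrate the spread.

```python
import numpy as np

rng = np.random.default_rng(1)

def cohens_d(treatment, control):
    """Standardized mean difference using a pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    pooled_var = ((n1 - 1) * treatment.var(ddof=1) +
                  (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
    return (treatment.mean() - control.mean()) / np.sqrt(pooled_var)

# Hypothetical: each team's materials induce a different underlying effect
# for the same hypothesis (values are illustrative assumptions).
material_effects = [-0.35, -0.10, 0.05, 0.15, 0.30]
for team, true_d in enumerate(material_effects, start=1):
    control = rng.normal(0, 1, 1500)
    treatment = rng.normal(true_d, 1, 1500)
    print(f"team {team}: d = {cohens_d(treatment, control):+.2f}")
```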

Crowdsourcing Geographic Information


Paper by Dieter Pfoser: “The crowdsourcing of geographic information addresses the collection of geospatial data contributed by non-expert users and the aggregation of these data into meaningful geospatial datasets. While crowdsourcing generally implies a coordinated bottom-up grass-roots effort to contribute information, in the context of geospatial data the term volunteered geographic information (VGI) specifically refers to a dedicated collection effort inviting non-expert users to contribute. A prominent example here is the OpenStreetMap effort focusing on map datasets. Crowdsourcing geospatial data is an evolving research area that covers efforts ranging from mining GPS tracking data to using social media content to profile population dynamics…(More)”.
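As a rough illustration of the kind of aggregation this research area surveys, the following sketch (an assumed workflow, not taken from the paper) bins volunteered GPS points into a coarse grid to see where contributions cluster; the sample points and grid resolution are placeholders.

```python
from collections import Counter

# Hypothetical volunteered points: (latitude, longitude) pairs.
points = [(52.370, 4.895), (52.372, 4.900), (40.713, -74.006), (52.368, 4.889)]

CELL = 0.01  # grid resolution in degrees (an assumption; roughly 1 km at mid-latitudes)

def cell_of(lat, lon, cell=CELL):
    """Snap a coordinate to the lower-left corner of its grid cell."""
    return (round(lat // cell * cell, 4), round(lon // cell * cell, 4))

# Count contributions per grid cell to profile activity density.
density = Counter(cell_of(lat, lon) for lat, lon in points)
for cell, count in density.most_common():
    print(cell, count)
```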

Mapping Wikipedia


Michael Mandiberg at The Atlantic: “Wikipedia matters. In a time of extreme political polarization, algorithmically enforced filter bubbles, and fact patterns dismissed as fake news, Wikipedia has become one of the few places where we can meet to write a shared reality. We treat it like a utility, and the U.S. and U.K. trust it about as much as the news.

But we know very little about who is writing the world’s encyclopedia. We do know that just because anyone can edit, doesn’t mean that everyone does: The site’s editors are disproportionately cis white men from the global North. We also know that, as with most of the internet, a small number of the editors do a large amount of the editing. But that’s basically it: In the interest of improving retention, the Wikimedia Foundation’s own research focuses on the motivations of people who do edit, not on those who don’t. The media, meanwhile, frequently focus on Wikipedia’s personality stories, even when covering the bigger questions. And Wikipedia’s own culture pushes back against granular data harvesting: The Wikimedia Foundation’s strong data-privacy rules guarantee users’ anonymity and limit the modes and duration of their own use of editor data.

But as part of my research in producing Print Wikipedia, I discovered a data set that can offer an entry point into the geography of Wikipedia’s contributors. Every time anyone edits Wikipedia, the software records the text added or removed, the time of the edit, and the username of the editor. (This edit history is part of Wikipedia’s ethos of radical transparency: Everyone is anonymous, and you can see what everyone is doing.) When an editor isn’t logged in with a username, the software records that user’s IP address. I parsed all of the 884 million edits to English Wikipedia to collect and geolocate the 43 million IP addresses that have edited English Wikipedia. I also counted 8.6 million username editors who have made at least one edit to an article.
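The pass Mandiberg describes (scanning edit metadata and separating anonymous IP edits from registered usernames before geolocation) can be sketched roughly as follows. This is a hedged illustration, not the author's actual pipeline: the input file name, its column layout, and the geolocate() stub are assumptions for the example.

```python
import csv
import ipaddress
from collections import Counter

def geolocate(ip: str) -> str:
    """Placeholder: a real pipeline would query an IP-to-location database here."""
    return "UNKNOWN"

ip_edits_by_country = Counter()
registered_editors = set()

with open("edit_history_sample.csv", newline="") as f:  # assumed export: timestamp,editor
    for row in csv.DictReader(f):
        editor = row["editor"]
        try:
            ipaddress.ip_address(editor)           # anonymous edits are recorded by IP
            ip_edits_by_country[geolocate(editor)] += 1
        except ValueError:                          # otherwise it is a registered username
            registered_editors.add(editor)

print("edits by location (IP editors):", ip_edits_by_country.most_common(10))
print("distinct registered editors   :", len(registered_editors))
```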

The result is a set of maps that offer, for the first time, insight into where the millions of volunteer editors who build and maintain English Wikipedia’s 5 million pages are—and, maybe more important, where they aren’t….

Like the Enlightenment itself, the modern encyclopedia has a history entwined with colonialism. Encyclopédie aimed to collect and disseminate all the world’s knowledge—but in the end, it could not escape the biases of its colonial context. Likewise, Napoleon’s Description de l’Égypte augmented an imperial military campaign with a purportedly objective study of the nation, which was itself an additional form of conquest. If Wikipedia wants to break from the past and truly live up to its goal to compile the sum of all human knowledge, it requires the whole world’s participation….(More)”.

Wisdom or Madness? Comparing Crowds with Expert Evaluation in Funding the Arts


Paper by Ethan R. Mollick and Ramana Nanda: “In fields as diverse as technology entrepreneurship and the arts, crowds of interested stakeholders are increasingly responsible for deciding which innovations to fund, a privilege that was previously reserved for a few experts, such as venture capitalists and grant‐making bodies. Little is known about the degree to which the crowd differs from experts in judging which ideas to fund, and, indeed, whether the crowd is even rational in making funding decisions. Drawing on a panel of national experts and comprehensive data from the largest crowdfunding site, we examine funding decisions for proposed theater projects, a category where expert and crowd preferences might be expected to differ greatly.

We instead find significant agreement between the funding decisions of crowds and experts. Where crowds and experts disagree, it is far more likely to be a case where the crowd is willing to fund projects that experts may not. Examining the outcomes of these projects, we find no quantitative or qualitative differences between projects funded by the crowd alone, and those that were selected by both the crowd and experts. Our findings suggest that crowdfunding can play an important role in complementing expert decisions, particularly in sectors where the crowds are end users, by allowing projects the option to receive multiple evaluations and thereby lowering the incidence of “false negatives.”…(More)”.