Journal tries crowdsourcing peer reviews, sees excellent results


Chris Lee at Ars Technica: “Peer review is supposed to act as a sanity check on science. A few learned scientists take a look at your work, and if it withstands their objective and entirely neutral scrutiny, a journal will happily publish your work. As those links indicate, however, there are some issues with peer review as it is currently practiced. Recently, Benjamin List, a researcher and journal editor in Germany, and his graduate assistant, Denis Höfler, have come up with a genius idea for improving matters: something called selected crowd-sourced peer review….

My central point: peer review is burdensome and sometimes barely functional. So how do we improve it? The main way is to experiment with different approaches to the reviewing process, which many journals have tried, albeit with limited success. Post-publication peer review, when scientists look over papers after they’ve been published, is also an option but depends on community engagement.

But if your paper is uninteresting, no one will comment on it after it is published. Pre-publication peer review is the only moment where we can be certain that someone will read the paper.

So, List (an editor for Synlett) and Höfler recruited 100 referees. For their trial, a forum-style commenting system was set up that allowed referees to comment anonymously both on submitted papers and on each other’s comments. To provide a comparison, the papers that went through this process also went through the traditional peer review process. The authors and editors compared the two sets of comments and (subjectively) evaluated the pros and cons. The 100-person crowd of researchers was deemed the more effective of the two.

The editors found that it took a bit more time to read and collate all the comments into a reviewers’ report. But the overall process was still faster, which the authors loved. Typically, the crowd took just a few days to complete its review, which compares very nicely with the usual four to six weeks of the traditional route (I’ve had papers languish for six months in peer review). And, perhaps most important, the responses were more substantive and useful than those of the typical two-to-four-person review.

So far, List has not published the trial results formally. Despite that, Synlett is moving to the new system for all its papers.

Why does crowdsourcing work?

Here we get back to something more editorial. I’d suggest that traditional peer review has a physical analog: noise. Noise is not just a constant background that must be overcome; noise is also generated by the very process that creates the signal. What matters is how the amplitude of the noise grows compared to the amplitude of the signal: typically, the signal grows in proportion to the size of the measurement while the noise grows only as its square root. For very low-amplitude signals, all you measure is noise, while for very high-intensity signals the noise is vanishingly small compared to the signal, even though it is huge compared to the noise of the low-amplitude signal.

Our esteemed peers, I would argue, are somewhat random in their responses, but weighted toward objectivity. Applying this admittedly inappropriate physics model, with noise growing as the square root of the number of reviewers, a review conducted by four reviewers can be expected (on average) to contain two responses that are basically noise, while a review by 100 reviewers may have only 10 responses that are noise: proportionally, a substantial improvement. So, adding the responses of a large number of peers together should produce a clearer picture of a scientific paper’s strengths and weaknesses.
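That scaling argument is easy to check numerically. Below is a minimal sketch (our illustration, not anything from the article or the Synlett trial) that models each reviewer’s judgment as the paper’s true quality plus independent, unit-variance noise and measures how much noise survives averaging:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model: each reviewer reports the paper's true quality plus
# independent, unit-variance noise (an assumption for illustration).
TRUE_QUALITY = 3.0

def residual_noise(n_reviewers, n_trials=10_000):
    """Average n_reviewers noisy judgments; return the spread of the mean."""
    scores = TRUE_QUALITY + rng.normal(0.0, 1.0, size=(n_trials, n_reviewers))
    return scores.mean(axis=1).std()

for n in (4, 100):
    print(f"{n:>3} reviewers: residual noise ~ {residual_noise(n):.2f} "
          f"(theory: 1/sqrt({n}) = {1 / np.sqrt(n):.2f})")
```

The residual noise in the aggregate falls as 1/√N, from about 0.50 with four reviewers to about 0.10 with 100, which is the same square-root arithmetic behind the 2-in-4 versus 10-in-100 figures above.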

Didn’t I just say that reviewers are overloaded? Doesn’t it seem that this will make the problem worse?

Well, no, as it turns out. When this approach was tested (with consent) on papers submitted to Synlett, review times went way down—from weeks to days. And authors reported getting more useful comments from their reviewers….(More)”.

Community Digital Storytelling for Collective Intelligence: towards a Storytelling Cycle of Trust


Sarah Copeland and Aldo de Moor in AI & SOCIETY: “Digital storytelling has become a popular method for curating community, organisational, and individual narratives. Since its beginnings over 20 years ago, projects have sprung up across the globe, where authentic voice is found in the narration of lived experiences. Contributing to a Collective Intelligence for the Common Good, the authors of this paper ask how shared stories can bring impetus to community groups to help identify what they seek to change, and how digital storytelling can be effectively implemented in community partnership projects to enable authentic voices to be carried to other stakeholders in society. The Community Digital Storytelling (CDST) method is introduced as a means for addressing community-of-place issues. There are five stages to this method: preparation, story telling, story digitisation, digital story sense-making, and digital story sharing. Additionally, a Storytelling Cycle of Trust framework is proposed. We identify four trust dimensions as being imperative foundations in implementing community digital media interventions for the common good: legitimacy, authenticity, synergy, and commons. This framework is concerned with increasing the impact that everyday stories can have on society; it is an engine driving prolonged storytelling. From this perspective, we consider the ability to scale up the scope and benefit of stories in civic contexts. To illustrate this framework, we use experiences from the CDST workshop in northern Britain and compare this with a social innovation project in the southern Netherlands….(More)”.

Citizen science volunteers driven by desire to learn


UoP News: “People who give up their time for online volunteering are mainly motivated by a desire to learn, a new study has found.

The research surveyed volunteers on ‘citizen science’ projects and suggests that this type of volunteering could be used to increase general knowledge of science within society.

The study, led by Dr Joe Cox from the Department of Economics and Finance, discovered that an appetite to learn more about the subject was the number one driver for online volunteers, followed by being part of a community. It also revealed that many volunteers are motivated by a desire for escapism.

Online volunteering and crowdsourcing projects typically involve input from large numbers of contributors working individually but towards a common goal. This study surveyed 2000 people who volunteer for ‘citizen science’ projects hosted by Zooniverse, a collection of research projects that rely on volunteers to help scientists with the challenge of interpreting massive amounts of data….“What was interesting was that characteristics such as age, gender and level of education had no correlation with the amount of time people give up and the length of time they stay on a project. These participants were relatively highly educated compared with the rest of the population, but those with the highest levels of education do not appear to contribute the most effort and information towards these projects.”

The study noticed pronounced changes in how people are motivated at different stages of the volunteer process. While a desire to learn is the most important motivation among contributors at the early stages, the opportunities for social interaction and escapism become more important motivations at later stages….

He suggests that online volunteering and citizen science projects could incentivise participation by offering clearly defined opportunities for learning, while representing an effective way of increasing scientific literacy and knowledge within society….(More)”.

Scientists Use Google Earth and Crowdsourcing to Map Uncharted Forests


Katie Fletcher, Tesfay Woldemariam and Fred Stolle at EcoWatch: “No single person could ever hope to count the world’s trees. But a crowd of them just counted the world’s drylands forests—and, in the process, charted forests never before mapped, cumulatively adding up to an area equivalent in size to the Amazon rainforest.

Current technology enables computers to detect forest area automatically from satellite data, which is adequate for mapping most of the world’s forests. But drylands, where trees are fewer and farther apart, stymied these modern methods. To measure the extent of forests in drylands, which make up more than 40 percent of the land surface on Earth, researchers from the UN Food and Agriculture Organization, the World Resources Institute and several universities and organizations had to come up with unconventional techniques. Foremost among these was turning to residents, who contributed their expertise through local map-a-thons….

Google Earth collects satellite data from several satellites with a variety of resolutions and technical capacities. The dryland satellite imagery collection compiled by Google from various providers, including DigitalGlobe, is of particularly high quality, as desert areas have little cloud cover to obstruct the views. And while non-dominant land cover is difficult for algorithms to detect, the human eye has no problem distinguishing trees in these landscapes. Using this advantage, the scientists decided to visually count trees in hundreds of thousands of high-resolution images to determine overall dryland tree cover….

Armed with the quality images from Google that allowed researchers to see objects as small as half a meter (about 20 inches) across, the team divided the global dryland images into 12 regions, each with a regional partner to lead the counting assessment. The regional partners in turn recruited local residents with practical knowledge of the landscape to identify content in the sample imagery. These volunteers would come together in participatory mapping workshops, known colloquially as “map-a-thons.”…

Utilizing local landscape knowledge not only improved the map quality but also created a sense of ownership within each region. The map-a-thon participants have access to the open source tools and can now use these data and results to better engage around land use changes in their communities. Local experts, including forestry offices, can also use this easily accessible application to continue monitoring in the future.

Global Forest Watch uses medium-resolution satellites (30 meters, or about 98 feet) and sophisticated algorithms to detect near-real-time deforestation in densely forested areas. The dryland tree cover maps complement Global Forest Watch by providing the capability to monitor non-dominant tree cover and small-scale, slower-moving events like degradation and restoration. Mapping forest change at this level of detail is critical both for guiding land decisions and for enabling government and business actors to demonstrate that their pledges are being fulfilled, even over short periods of time.

The data documented by local participants will enable scientists to do many more analyses on both natural and man-made land changes including settlements, erosion features and roads. Mapping the tree cover in drylands is just the beginning….(More)”.

Crowd Research: Open and Scalable University Laboratories


Paper by Rajan Vaish et al: “Research experiences today are limited to a privileged few at select universities. Providing open access to research experiences would enable global upward mobility and increased diversity in the scientific workforce. How can we coordinate a crowd of diverse volunteers on open-ended research? How could a PI have enough visibility into each person’s contributions to recommend them for further study? We present Crowd Research, a crowdsourcing technique that coordinates open-ended research through an iterative cycle of open contribution, synchronous collaboration, and peer assessment. To aid upward mobility and recognize contributions in publications, we introduce a decentralized credit system: participants allocate credits to each other, which a graph centrality algorithm translates into a collectively-created author order. Over 1,500 people from 62 countries have participated, 74% from institutions with low access to research. Over two years and three projects, this crowd has produced articles at top-tier Computer Science venues, and participants have gone on to leading graduate programs….(More)”.
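The paper names a graph centrality algorithm but the excerpt does not specify which one; the sketch below is a hypothetical PageRank-style reading of the idea, with invented names and credit amounts, showing how pairwise credit allocations could be turned into an author order:

```python
import numpy as np

# Hypothetical credit allocations (giver -> {receiver: credits}).
# All names and amounts are invented; the real Crowd Research system
# used its own data and its own centrality algorithm.
credits = {
    "ana":  {"ben": 30, "chen": 70},
    "ben":  {"ana": 50, "dia": 50},
    "chen": {"ana": 80, "ben": 20},
    "dia":  {"chen": 100},
}

people = sorted(set(credits) | {r for alloc in credits.values() for r in alloc})
idx = {p: i for i, p in enumerate(people)}
n = len(people)

# Column-stochastic matrix: column j holds giver j's credit shares.
M = np.zeros((n, n))
for giver, alloc in credits.items():
    total = sum(alloc.values())
    for receiver, amount in alloc.items():
        M[idx[receiver], idx[giver]] = amount / total

# PageRank-style power iteration with damping.
damping, rank = 0.85, np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - damping) / n + damping * (M @ rank)

# Author order: highest centrality first.
for person in sorted(people, key=lambda p: -rank[idx[p]]):
    print(f"{person}: {rank[idx[p]]:.3f}")
```

Because centrality flows through the graph, credit received from well-credited participants counts for more than credit from peripheral ones, which is what makes the resulting order collective rather than a simple vote count.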

Innovation@DFID: Crowdsourcing New Ideas at the UK’s Department for International Development


Paper by Anke Schwittay and Paul Braund: “Over the last decade, traditional development institutions have joined market-based actors in embracing inclusive innovation to ensure the sector’s relevance and impacts. In 2014, the UK’s Department for International Development (DFID) Innovation Hub launched Amplify as its own flagship initiative. The programme, which is managed by IDEO, a Silicon Valley-based design consultancy, aims to crowdsource new ideas for various development challenges from a broad and diverse group of actors, including poor people themselves. By examining the direction, diversity and distribution of Amplify’s work, we argue that while development innovation can generate more inclusive practices, its transformative potential is constrained by broader developmental logics and policy regimes….(More)”

America is not a true democracy. But it could be with the help of technology


Nicole Softness at Quartz: “Many Americans aren’t aware they don’t live in a direct democracy. But with a little digital assistance, they could be….Once completely cut off from the global community, Estonia is now considered a world leader for its efforts to integrate technology with government administration. While standing in line for coffee, you could file your tax return, confirm sensitive personal medical information, and register a new company in just a few swipes, all on Estonia’s free wifi.

What makes this possible without the risk of fraud? Digital trust. Using a technology called blockchain, which verifies online communications and transactions at every step (and essentially eliminates the possibility of online fraud), Estonian leadership has moved the majority of citizenship processes online. Startups have now created new channels for democratic participation, like Rahvaalgatus, an online crowdsourcing platform that allows users to discuss and digitally vote on policy proposals submitted to the Estonian parliament.

Brazil has also put this trust to good use. The country’s constitution, passed in 1988, stipulates that signatures from 1% of the population can force the Brazilian leadership to recognize any signed document as an official draft bill and put it to a vote. Until recently, the notion of getting sufficient signatures on paper would have been laughable: that’s just over 2 million physical signatures. However, votes can now be cast online, which makes gathering signatures far easier. As a result, Brazilians now have more control over the legislation brought before parliament.

Again, blockchain technology is key here, as it creates an immutable record of signatures tied to the identities of voters. The government knows which voters are legitimate citizens, and citizens can be sure their votes remain accurate. When Brazilians are able to participate in this manner, their democracy shifts towards the sort of “direct” democracy that, until now, seemed logistically impossible in modern society.
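The article does not describe the mechanics, but the property it leans on, an append-only record in which tampering with any earlier entry is detectable, can be sketched in a few lines. This is a toy hash chain for illustration only (not Estonia’s infrastructure or Brazil’s platform; real systems add digital-signature identity checks, distributed consensus, and much more):

```python
import hashlib
import json

def make_block(prev_hash: str, record: dict) -> dict:
    """Append one signature record, chained to the previous block's hash."""
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return {"prev": prev_hash, "record": record,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain: list) -> bool:
    """Recompute every hash; editing any earlier record breaks the chain."""
    for i, block in enumerate(chain):
        payload = json.dumps({"prev": block["prev"], "record": block["record"]},
                             sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Hypothetical signatures supporting a draft bill.
chain = []
for voter_id in ("citizen-001", "citizen-002", "citizen-003"):
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append(make_block(prev, {"voter": voter_id, "bill": "draft-42"}))

print(verify(chain))                      # True
chain[0]["record"]["voter"] = "intruder"  # tamper with an early signature
print(verify(chain))                      # False
```

Because each block’s hash covers the previous block’s hash, rewriting one signature invalidates every block after it, so tampering cannot go unnoticed unless the entire chain is regenerated.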

Australian citizens have engaged in a slightly different experiment, dubbed “Government 2.0.” In March 2016, technology experts convened a new political party called Flux, which they describe as “democracy for the information age.” The party platform argues that bureaucracy stymies key government functions because it cannot process the information required to govern.

If elected to government, members of Flux would vote on bills scheduled to appear before parliament according to the digital ballots of the supporters who voted them in. Voters could cast their vote on a given bill themselves or transfer it to trusted experts. Flux representatives in parliament would then vote entirely according to the results from these member participants. (The party has yet to win any seats, however.)
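Flux’s vote-yourself-or-delegate scheme is a form of delegative (sometimes called “liquid”) voting. Here is a minimal sketch of how such a tally could resolve delegation chains; the names, choices, and abstention rules are invented for illustration, and Flux’s actual mechanics are more elaborate:

```python
# Each member either casts a direct vote or delegates to another member.
# Names and choices are invented for illustration.
direct_votes = {"ana": "yes", "ben": "no"}
delegations = {"chen": "ana", "dia": "chen", "eli": "ben"}

def resolve(member, seen=frozenset()):
    """Follow a delegation chain to a direct vote; treat cycles or
    missing links as abstentions."""
    if member in direct_votes:
        return direct_votes[member]
    if member in seen or member not in delegations:
        return None
    return resolve(delegations[member], seen | {member})

tally = {}
for member in set(direct_votes) | set(delegations):
    choice = resolve(member)
    if choice is not None:
        tally[choice] = tally.get(choice, 0) + 1

print(tally)  # yes: 3 (ana, chen, dia), no: 2 (ben, eli)
```

The design choice worth noting is that delegation is transitive: dia’s ballot flows through chen to ana’s direct vote, which is what lets non-experts lend their voice to trusted experts without voting on every bill.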

These solutions show us that bureaucratic boundaries no longer have to limit our access to a true democracy. The technology is here to make direct democracy the reality that the Greeks once imagined.

What’s more, increasing democratic participation will have positive ripple effects beyond participation in a direct democracy: informed voting is the gateway to more active civic engagement and a more informed electorate, all of which raises the level of debate in a political environment desperately in need of participation….(More)”

The accuracy of farmer-generated data in an agricultural citizen science methodology


Jonathan Steinke, Jacob van Etten and Pablo Mejía Zelan in Agronomy for Sustainable Development: “Over the last decades, participatory approaches involving on-farm experimentation have become more prevalent in agricultural research. Nevertheless, these approaches remain difficult to scale because they usually require close attention from well-trained professionals. Novel large-N participatory trials, building on recent advances in citizen science and crowdsourcing methodologies, involve large numbers of participants and little researcher supervision. Reduced supervision may affect data quality, but the “Wisdom of Crowds” principle implies that many independent observations from a diverse group of people often lead to highly accurate results when taken together. In this study, we test whether farmer-generated data in agricultural citizen science are good enough to generate valid statements about the research topic. We experimentally assess the accuracy of farmer observations in trials of crowdsourced crop variety selection that use triadic comparisons of technologies (tricot). At five sites in Honduras, 35 farmers (women and men) participated in tricot experiments. They ranked three varieties of common bean (Phaseolus vulgaris L.) for plant vigor, plant architecture, pest resistance, and disease resistance. Furthermore, with a simulation approach using the empirical data, we did an order-of-magnitude estimation of the sample size of participants needed to produce relevant results. Reliability of farmers’ experimental observations was generally low (Kendall’s W 0.174 to 0.676). But aggregated observations contained information and had sufficient validity (Kendall’s tau coefficient 0.33 to 0.76) to identify the correct ranking orders of varieties by fitting Mallows-Bradley-Terry models to the data. Our sample size simulation shows that low reliability can be compensated by engaging higher numbers of observers to generate statistically meaningful results, demonstrating the usefulness of the Wisdom of Crowds principle in agricultural research. In this first study on data quality from a farmer citizen science methodology, we show that realistic numbers of less than 200 participants can produce meaningful results for agricultural research by tricot-style trials….(More)”.
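The reliability-versus-crowd-size trade-off is easy to reproduce in a toy simulation. The sketch below is ours, not the paper’s method (the authors fitted Mallows-Bradley-Terry models): each simulated farmer gets a noisy reading of three varieties’ true quality, and the individually unreliable rankings are aggregated by mean rank.

```python
import numpy as np

rng = np.random.default_rng(7)
TRUE_QUALITY = np.array([1.0, 0.5, 0.0])  # assume variety A > B > C

def crowd_order(n_farmers, noise=2.0):
    """Each farmer ranks three varieties from a noisy reading of true
    quality; aggregate the rankings by mean rank (best first)."""
    readings = TRUE_QUALITY + rng.normal(0.0, noise, size=(n_farmers, 3))
    ranks = (-readings).argsort(axis=1).argsort(axis=1)  # 0 = farmer's best
    return ranks.mean(axis=0).argsort()

for n in (5, 35, 200):
    order = "".join("ABC"[i] for i in crowd_order(n))
    print(f"{n:>3} farmers -> aggregate order: {order}")
```

With observation noise large relative to the true quality gaps, small crowds frequently mis-order the varieties, while crowds in the low hundreds usually recover the true order: the same pattern behind the paper’s finding that fewer than 200 participants can produce meaningful results.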

Crowdsourcing Expertise to Increase Congressional Capacity


Austin Seaborn at Beeck Center: “Members of Congress have close connections with their districts, and information arising from local organizations, such as professional groups, academia, and industry, as well as from constituents with relevant expertise (like retirees, veterans, or students), is highly valuable to them. Today, congressional staff capacity is at a historic low, while at the same time, constituents in districts are often well equipped to address the underlying policy questions that Congress seeks to solve….

In meetings we have had with House and Senate staffers, they repeatedly express both the difficulty of managing their substantial area-specific workloads and their interest in finding ways to substantively engage constituents to find good nuggets of information to help them in their roles as policymakers. At the same time, constituents are demanding more transparency and dialogue from their elected representatives. In many cases, our project brings these two together. It allows Members to tap the expertise in their districts while at the same time creating an avenue for constituents to contribute their knowledge and area expertise to the legislative process. It’s a win for constituents and a win for Members of Congress and their staffs.

It is important to note that the United States lags behind other democracies in experimenting with more inclusive methods in the policymaking process. In the United Kingdom, for example, Parliament has experimented with a variety of new digital tools to engage with constituents. These methods range from Twitter hashtags, which are now quite common given the rise in social media use by governments and elected officials, to web forums hosted on a range of platforms. Since June of 2015, Parliament has also run digital debates, in which questions from the general public are crowdsourced and later integrated into a parliamentary debate by the Member of Parliament leading it. Estonia, South Africa, Taiwan, France also…notable examples.

One promising new development we hope to explore more thoroughly is the U.S. Library of Congress’s recently announced legislative data App Challenge. This competition is distinct from the many hackathons that have been held on behalf of Congress in the past, in that this challenge seeks new methods not only to innovate, but also to integrate and legislate. In his announcement, the Library’s Chief Information Officer, Bernard A. Barton, Jr., stated, “An informed citizenry is better able to participate in our democracy, and this is a very real opportunity to contribute to a better understanding of the work being done in Washington.  It may even provide insights for the people doing the work around the clock, both on the Hill, and in state and district offices.  Your innovation and integration may ultimately benefit the way our elected officials legislate for our future.” We believe these sorts of new methods will play a crucial role in the future of engaging citizens in their democracies….(More)”.

Crowdsourcing Government: Lessons from Multiple Disciplines


Helen Liu in the Public Administration Review: “Crowdsourcing has proliferated across disciplines and professional fields. Implementers in the public sector face practical challenges, however, in the execution of crowdsourcing. This review synthesizes prior crowdsourcing research and practices from a variety of disciplines and focus areas to identify lessons for meeting the practical challenges of crowdsourcing in the public sector. It identifies three distinct categories of crowdsourcing: organizations, products and services, and holistic systems. Lessons about the fundamental logic of process design—alignment, motivation, and evaluation—identified across the three categories are discussed. Conclusions drawn from past studies and the resulting evidence can help public managers better design and implement crowdsourcing in the public sector.

Practitioner Points

  • Crowdsourcing studies in the public sector show that properly designed crowdsourcing platforms can empower citizens, create legitimacy for the government with the people, and enhance the effectiveness of public services and goods.
  • Research suggests that crowdsourcing decisions should be based on both solutions necessary to resolve public problems and appropriate tasks for participants who have knowledge or skills.
  • Evidence shows that prizes and rewards can increase participation rates, but opportunities for learning and skill building are essential for enhancing the quality of participants’ contributions.
  • Studies indicate that a crowdsourcing approach empowers participants through peer review by adopting constructive competition and supportive cooperation designs in the review process.
  • Studies illustrate that the establishment of an effective reputation system in the crowdsourcing process can ensure legitimate evaluation….(More)”.