Paper by C.B. Jackson, C. Østerlund, G. Mugar, and K.D.V. Hassman for the Proceedings of the Forty-eighth Hawai’i International Conference on System Science (HICSS-48): “The paper explores the motivations of volunteers in a large crowdsourcing project and contributes to our understanding of the motivational factors that lead to deeper engagement beyond initial participation. Drawing on the theory of legitimate peripheral participation (LPP) and the literature on motivation in crowdsourcing, we analyze interview and trace data from a large citizen science project. The analyses identify ways in which the technical features of the project may serve as motivational factors leading participants towards sustained participation. The results suggest volunteers first engage in activities to support knowledge acquisition, later share knowledge with other volunteers, and finally increase participation in Talk through a punctuated process of role discovery…(More)”
Turns Out the Internet Is Bad at Guessing How Many Coins Are in a Jar
Eric B. Steiner at Wired: “A few weeks ago, I asked the internet to guess how many coins were in a huge jar…The mathematical theory behind this kind of estimation game is apparently sound. That is, the mean of all the estimates will be uncannily close to the actual value, every time. James Surowiecki’s best-selling book, The Wisdom of Crowds, banks on this principle, and details several striking anecdotes of crowd accuracy. The most famous is a 1906 competition in Plymouth, England, to guess the weight of an ox. As reported by Sir Francis Galton in a letter to Nature, no one guessed the actual weight of the ox, but the average of all 787 submitted guesses came within a single pound of the beast’s actual weight….
So what happened to the collective intelligence supposedly buried in our disparate ignorance?
Most successful crowdsourcing projects are essentially the sum of many small parts: efficiently harvested resources (information, effort, money) courtesy of a large group of contributors. Think Wikipedia, Google search results, Amazon’s Mechanical Turk, and Kickstarter.
But a sum of parts does not wisdom make. When we try to produce collective intelligence, things get messy. Whether we are predicting the outcome of an election, betting on sporting contests, or estimating the value of coins in a jar, the crowd’s take is vulnerable to at least three major factors: skill, diversity, and independence.
A certain amount of skill or knowledge in the crowd is obviously required, while crowd diversity expands the number of possible solutions or strategies. Participant independence is important because it preserves the value of individual contributors, which is another way of saying that if everyone copies their neighbor’s guess, the data are doomed.
Failure to meet any one of these conditions can lead to wildly inaccurate answers, information echo, or herd-like behavior. (There is more than a little irony with the herding hazard: The internet makes it possible to measure crowd wisdom and maybe put it to use. Yet because people tend to base their opinions on the opinions of others, the internet ends up amplifying the social conformity effect, thereby preventing an accurate picture of what the crowd actually thinks.)
What’s more, even when these conditions—skill, diversity, independence—are reasonably satisfied, as they were in the coin jar experiment, humans exhibit a whole host of other cognitive biases and irrational thinking that can impede crowd wisdom. True, some bias can be positive; all that Gladwellian snap-judgment stuff. But most biases aren’t so helpful, and can too easily lead us to ignore evidence, overestimate probabilities, and see patterns where there are none. These biases are not vanquished simply by expanding sample size. On the contrary, they get magnified.
Given the last 60 years of research in cognitive psychology, I submit that Galton’s results with the ox weight data were outrageously lucky, and that the same is true of other instances of seemingly perfect “bean jar”-styled experiments….”
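The averaging-versus-herding argument above is easy to make concrete. In a minimal simulation sketch (all numbers illustrative, not drawn from Steiner’s experiment), independent guessers’ errors cancel and the crowd mean converges on the truth, while guessers who anchor on the running consensus let early errors propagate instead:

```python
import random

TRUE_COUNT = 7_500   # actual number of coins in the jar (illustrative)
N = 1_000            # crowd size

def noisy_guess():
    # Unbiased but noisy guesser: off by up to 40% in either direction.
    return TRUE_COUNT * random.uniform(0.6, 1.4)

# Independent crowd: individual errors cancel, the mean homes in on truth.
independent = [noisy_guess() for _ in range(N)]
print(round(sum(independent) / N))   # reliably near 7,500

# Herding crowd: each guesser anchors 80% on the running average, so the
# first few guesses dominate and errors no longer cancel.
herd = [noisy_guess()]
for _ in range(N - 1):
    anchor = sum(herd) / len(herd)
    herd.append(0.8 * anchor + 0.2 * noisy_guess())
print(round(sum(herd) / N))          # drifts wherever the early guesses led
```

Running the sketch repeatedly shows the independent mean staying within a few percent of the true count, while the herded mean lands more or less wherever the first guesser did, which is the conformity effect the article describes.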
Democratizing Inequalities: Dilemmas of the New Public Participation
Uncle Sam Wants You…To Crowdsource Science
Neal Ungerleider at Co.Labs: “It’s not just for the private sector anymore: Government scientists are embracing crowdsourcing. At a White House-sponsored workshop in late November, representatives from more than 20 different federal agencies gathered to figure out how to integrate crowdsourcing and citizen scientists into various government efforts. The workshop is part of a bigger effort with a lofty goal: Building a set of best practices for the thousands of citizens who are helping federal agencies gather data, from the Environmental Protection Agency (EPA) to NASA….Perhaps the best known federal government crowdsourcing project is Nature’s Notebook, a collaboration between the U.S. Geological Survey and the National Park Service which asks ordinary citizens to take notes on plant and animal species during different times of year. These notes are then cleansed and collated into a massive database on animal and plant phenology that’s used for decision-making by national and local governments. The bulk of the observations, recorded through smartphone apps, are made by ordinary people who spend a lot of time outdoors….Dozens of government agencies are now asking the public for help. The Centers for Disease Control and Prevention runs a student-oriented, Mechanical Turk-style “micro-volunteering” service called CDCology, the VA crowdsources the design of apps for homeless veterans, while the National Weather Service distributes a mobile app called mPING that asks ordinary citizens to help fine-tune public weather reports by giving information on local conditions. The Federal Communications Commission’s Measuring Broadband America app, meanwhile, allows citizens to volunteer information on their Internet broadband speeds, and the Environmental Protection Agency’s Air Sensor Toolbox asks users to track local air pollution….
As of now, however, when it comes to crowdsourcing data for government scientific research, there’s no unified set of standards or best practices. This can lead to wild variations in how various agencies collect data and use it. For officials hoping to implement citizen science projects within government, the roadblocks to crowdsourcing include factors that crowdsourcing is intended to avoid: limited budgets, heavy bureaucracy, and superiors who are skeptical about the value of relying on the crowd for data.
Benforado and Shanley also pointed out that government agencies are subject to additional regulations, such as the Paperwork Reduction Act, which can make implementation of crowdsourcing projects more challenging than they would be in academia or the private sector… (More)”
Making Futures – Marginal Notes on Innovation, Design, and Democracy
Book edited by Pelle Ehn, Elisabet M. Nilsson and Richard Topgaard: “Innovation and design need not be about the search for a killer app. Innovation and design can start in people’s everyday activities. They can encompass local services, cultural production, arenas for public discourse, or technological platforms. The approach is participatory, collaborative, and engaging, with users and consumers acting as producers and creators. It is concerned less with making new things than with making a socially sustainable future. This book describes experiments in innovation, design, and democracy, undertaken largely by grassroots organizations, non-governmental organizations, and multi-ethnic working-class neighborhoods.
These stories challenge the dominant perception of what constitutes successful innovations. They recount efforts at social innovation, opening the production process, challenging the creative class, and expanding the public sphere. The wide range of cases considered includes a collective of immigrant women who perform collaborative services, the development of an open-hardware movement, grassroots journalism, and hip-hop performances on city buses. They point to the possibility of democratized innovation that goes beyond solo entrepreneurship and crowdsourcing in the service of corporations to include multiple futures imagined and made locally by often-marginalized publics. (More)”
Gamifying Cancer Research Crowdsources the Race for the Cure
Jason Brick at PSFK: “Computer time and human hours are among the biggest obstacles to progress in the fight against cancer. Researchers have terabytes of data, but only so many processors and people with which to analyze it. Much like the SETI program (Search for Extraterrestrial Intelligence), it’s likely that big answers are already in the information we’ve collected. They’re just waiting for somebody to find them.
Reverse the Odds, a free mobile game from Cancer Research UK, accesses the combined resources of geeks and gamers worldwide. It’s a simple app game, the kind you play in line at the bank or while waiting at the dentist’s office, in which you complete mini puzzles and buy upgrades to save an imaginary world.
Each puzzle of the game is a repurposing of cancer data. Players find patterns in the data — the exact kind of analysis grad students and volunteers in a lab look for — and the results get compiled by Cancer Research UK for use in finding a cure. Errors are expected and accounted for: with thousands of players, the occasional mistake gets averaged out….(More)”
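The excerpt does not say how Cancer Research UK reconciles conflicting answers; a common aggregation scheme for redundant crowd classifications — and a minimal sketch of why thousands of players “round out” individual mistakes — is per-task majority voting (all names and data below are hypothetical):

```python
from collections import Counter

def aggregate(labels_per_task):
    """Majority vote: each task keeps the answer most players gave.

    labels_per_task maps a task id to the list of answers submitted
    by different players for that task (hypothetical data layout).
    """
    consensus = {}
    for task_id, labels in labels_per_task.items():
        winner, count = Counter(labels).most_common(1)[0]
        consensus[task_id] = {
            "label": winner,
            "agreement": count / len(labels),  # flags low-confidence tasks
        }
    return consensus

# Ten players classify one image patch; two make mistakes.
votes = {"patch-001": ["tumor"] * 8 + ["healthy"] * 2}
print(aggregate(votes))
# {'patch-001': {'label': 'tumor', 'agreement': 0.8}}
```

The agreement score matters as much as the winning label: tasks where players split evenly can be routed back into the game, or escalated to expert review, rather than trusted outright.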
Crowdsourcing Data to Fight Air Pollution
Jason Brick at PSFK: “Air pollution is among the most serious environmental problems of the modern age. Although pollution in developed nations like the USA and Germany has fallen since the 1980s, air quality in rapidly developing economies — especially the BRIC countries (Brazil, Russia, India, and China) — grows worse each year. In 2012, 3.7 million people died as a direct result of problems caused by chronic exposure to bad air, and tens of millions more were made ill.
There is no easy solution to such a complex and widespread problem, but Breathe offers a fix for one aspect and solves it in two ways.
The first way is the device itself: a portable plastic brick smaller than a bar of soap that monitors the presence and concentration of toxic gases and other harmful substances in the air, in real time throughout your day. It records air quality and, if pollution reaches unacceptably dangerous levels, warns you immediately with an emergency signal. Plug the device into your smartphone, and it keeps a record of air quality by time and location that you can use to avoid the most polluted times of day and places in your area.
The second solution is the truly innovative aspect of this project. Via the Breathe app, any user who wants to can add her data to a central database that keeps statistics worldwide. Individuals can then use that data to plan vacations, time outdoor activities or schedule athletic events. Given enough time, Breathe could accumulate enough data to be used to affect policy by identifying the most polluted areas in a city, county or nation so the authorities can work on a more robust solution….(More)”
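Breathe’s actual data model is not described in the excerpt, but the aggregation step it implies — pooling geotagged crowd readings in a central database and ranking areas by average pollution for policymakers — could look something like this sketch (field names and values are hypothetical):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical reading format: (area, pollutant_ppm)
readings = [
    ("downtown", 0.31), ("downtown", 0.42), ("riverside", 0.08),
    ("downtown", 0.38), ("riverside", 0.11), ("industrial", 0.55),
]

# Group crowd-submitted readings by area.
by_area = defaultdict(list)
for area, ppm in readings:
    by_area[area].append(ppm)

# Rank areas by mean pollutant level, worst first -- the view a
# policymaker would consult when targeting interventions.
ranking = sorted(by_area.items(), key=lambda kv: mean(kv[1]), reverse=True)
for area, values in ranking:
    print(f"{area:12s} mean={mean(values):.2f} ppm  samples={len(values)}")
```

Reporting the sample count alongside each mean is the kind of detail that matters here: areas with few volunteer readings yield noisy averages and would need more data before driving policy.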
Opening Government: Designing Open Innovation Processes to Collaborate With External Problem Solvers
New paper by Ines Mergel in Social Science Computer Review: “Open government initiatives in the U.S. government focus on three main aspects: transparency, participation, and collaboration. The collaboration mandate in particular is relatively unexplored in the literature. In practice, government organizations recognize the need to include external problem solvers in their internal innovation creation processes. This is partly derived from a sense of urgency to improve the efficiency and quality of government service delivery. Another formal driver is the America COMPETES Act, which instructs agencies to search for opportunities to meaningfully promote excellence in technology, education, and science. Government agencies are responding to these requirements by using open innovation (OI) approaches to invite citizens to crowdsource and peer produce solutions to public management problems. These distributed innovation processes occur at all levels of the U.S. government, and it is important to understand what design elements are used to create innovative public management ideas. This article systematically reviews existing government crowdsourcing and peer production initiatives and shows that after agencies have defined their public management problem, they go through four different phases of the OI process: (1) idea generation through crowdsourcing, (2) incubation of submitted ideas with peer voting and collaborative improvements of favorite solutions, (3) validation with a proof of concept of implementation possibilities, and (4) reveal of the selected solution and the (internal) implementation of the winning idea. Participation and engagement are incentivized both with monetary and nonmonetary rewards, which lead to tangible solutions as well as intangible innovation outcomes, such as increased public awareness.”
Designing a Citizen Science and Crowdsourcing Toolkit for the Federal Government
In the 2013 Second Open Government National Action Plan, President Obama called on Federal agencies to harness the ingenuity of the public by accelerating and scaling the use of open innovation methods, such as citizen science and crowdsourcing, to help address a wide range of scientific and societal problems.
Citizen science is a form of open collaboration in which members of the public participate in the scientific process, including identifying research questions, collecting and analyzing data, interpreting results, and solving problems. Crowdsourcing is a process in which individuals or organizations submit an open call for voluntary contributions from a large group of unknown individuals (“the crowd”) or, in some cases, a bounded group of trusted individuals or experts.
Citizen science and crowdsourcing are powerful tools that can help Federal agencies:
- Advance and accelerate scientific research through group discovery and co-creation of knowledge. For instance, engaging the public in data collection can provide information at resolutions that would be difficult for Federal agencies to obtain due to time, geographic, or resource constraints.
- Increase science literacy and provide students with skills needed to excel in science, technology, engineering, and math (STEM). Volunteers in citizen science or crowdsourcing projects gain hands-on experience doing real science, and take that learning outside of the classroom setting.
- Improve delivery of government services with significantly lower resource investments.
- Connect citizens to the missions of Federal agencies by promoting a spirit of open government and volunteerism.
To enable effective and appropriate use of these new approaches, the Open Government National Action Plan specifically commits the Federal government to “convene an interagency group to develop an Open Innovation Toolkit for Federal agencies that will include best practices, training, policies, and guidance on authorities related to open innovation, including approaches such as incentive prizes, crowdsourcing, and citizen science.”
On November 21, 2014, the Office of Science and Technology Policy (OSTP) kicked off development of the Toolkit with a human-centered design workshop. Human-centered design is a multi-stage process that requires product designers to engage with different stakeholders in creating, iteratively testing, and refining their product designs. The workshop was planned and executed in partnership with the Office of Personnel Management’s human-centered design practice known as “The Lab” and the Federal Community of Practice on Crowdsourcing and Citizen Science (FCPCCS), a growing network of more than 100 employees from more than 20 Federal agencies….
The Toolkit will help further the culture of innovation, learning, sharing, and doing in the Federal citizen science and crowdsourcing community: indeed, the development of the Toolkit is a collaborative and community-building activity in and of itself.
The following successful Federal projects illustrate the variety of possible citizen science and crowdsourcing applications:
- The Citizen Archivist Dashboard (NARA) coordinates crowdsourced archival record tagging and document transcription. Recently, more than 170,000 volunteers indexed 132 million names of the 1940 Census in only five months, which NARA could not have done alone.
- Through Measuring Broadband America (FCC), 2 million volunteers collected and provided the FCC with data on their Internet speeds, data that the agency used to create a National Broadband Map revealing digital divides.
- In 2014, Nature’s Notebook (USGS, NSF) volunteers recorded more than 1 million observations on plants and animals that scientists use to analyze environmental change.
- Did You Feel It? (USGS) has enabled more than 3 million people worldwide to share their experiences during and immediately after earthquakes. This information facilitates rapid damage assessments and scientific research, particularly in areas without dense sensor networks.
- The mPING (NOAA) mobile app has collected more than 600,000 ground-based observations that help verify weather models.
- USAID anonymized and opened its loan guarantee data to volunteer mappers. Volunteers mapped 10,000 data points in only 16 hours, compared to the 60 hours officials expected.
- The Air Sensor Toolbox (EPA), together with training workshops, scientific partners, technology evaluations, and a scientific instrumentation loan program, empowers communities to monitor and report local air pollution.
In early 2015, OSTP, in partnership with the Challenges and Prizes Community of Practice, will convene Federal practitioners to develop the other half of the Open Innovation Toolkit for prizes and challenges. Stay tuned!”
Crowdsourcing and Humanitarian Action: Analysis of the Literature
Patrick Meier: “Raphael Hörler from ETH Zurich has just completed his thesis on the role of crowdsourcing in humanitarian action. His valuable research offers one of the most up-to-date and comprehensive reviews of the principal players and humanitarian technologies in action today. In short, I highly recommend this important resource. Raphael’s full thesis is available here (PDF).”