Selected Readings on Crowdsourcing Tasks and Peer Production


The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of crowdsourcing was originally published in 2014.

Technological advances are creating a new paradigm by which institutions and organizations increasingly outsource tasks to an open community, assigning specific tasks to a flexible, willing and dispersed workforce. “Microtasking” platforms like Amazon’s Mechanical Turk are a burgeoning source of income for individuals who contribute their time, skills and knowledge on a per-task basis. In parallel, citizen science projects – task-based initiatives in which citizens of any background can contribute to scientific research – like Galaxy Zoo are demonstrating the ability of lay and expert citizens alike to make small, useful contributions to large, complex undertakings. As governing institutions seek to do more with less, the success of citizen science and microtasking initiatives could provide a blueprint for engaging citizens to help accomplish difficult, time-consuming objectives at little cost. Moreover, the success of peer-production projects – best exemplified by Wikipedia – instills optimism regarding the public’s willingness and ability to complete relatively small tasks that feed into a greater whole and benefit the public good. You can learn more about this new wave of “collective intelligence” by following the MIT Center for Collective Intelligence and its annual Collective Intelligence Conference.

Annotated Selected Reading List (in alphabetical order)

Benkler, Yochai. The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press, 2006. http://bit.ly/1aaU7Yb.

  • In this book, Benkler “describes how patterns of information, knowledge, and cultural production are changing – and shows that the way information and knowledge are made available can either limit or enlarge the ways people can create and express themselves.”
  • In his discussion on Wikipedia – one of many paradigmatic examples of people collaborating without financial reward – he calls attention to the notable ongoing cooperation taking place among a diversity of individuals. He argues that, “The important point is that Wikipedia requires not only mechanical cooperation among people, but a commitment to a particular style of writing and describing concepts that is far from intuitive or natural to people. It requires self-discipline. It enforces the behavior it requires primarily through appeal to the common enterprise that the participants are engaged in…”

Brabham, Daren C. Using Crowdsourcing in Government. Collaborating Across Boundaries Series. IBM Center for The Business of Government, 2013. http://bit.ly/17gzBTA.

  • In this report, Brabham categorizes government crowdsourcing cases into a “four-part, problem-based typology, encouraging government leaders and public administrators to consider these open problem-solving techniques as a way to engage the public and tackle difficult policy and administrative tasks more effectively and efficiently using online communities.”
  • The proposed four-part typology describes the following types of crowdsourcing in government:
    • Knowledge Discovery and Management
    • Distributed Human Intelligence Tasking
    • Broadcast Search
    • Peer-Vetted Creative Production
  • In his discussion on Distributed Human Intelligence Tasking, Brabham argues that Amazon’s Mechanical Turk and other microtasking platforms could be useful in a number of governance scenarios, including:
    • Governments and scholars transcribing historical document scans
    • Public health departments translating health campaign materials into foreign languages to benefit constituents who do not speak the native language
    • Governments translating tax documents, school enrollment and immunization brochures, and other important materials into minority languages
    • Helping governments predict citizens’ behavior, “such as for predicting their use of public transit or other services or for predicting behaviors that could inform public health practitioners and environmental policy makers”

Boudreau, Kevin J., Patrick Gaule, Karim Lakhani, Christoph Riedl, and Anita Williams Woolley. “From Crowds to Collaborators: Initiating Effort & Catalyzing Interactions Among Online Creative Workers.” Harvard Business School Technology & Operations Management Unit Working Paper No. 14-060, January 23, 2014. https://bit.ly/2QVmGUu.

  • In this working paper, the authors explore the “conditions necessary for eliciting effort from those affecting the quality of interdependent teamwork” and “consider the role of incentives versus social processes in catalyzing collaboration.”
  • The paper’s findings are based on an experiment involving 260 individuals randomly assigned to 52 teams working toward solutions to a complex problem.
  • The authors determined that the level of effort in such collaborative undertakings is sensitive to cash incentives. Collaboration within teams, however, was driven more by the active participation of teammates than by any monetary reward.

Franzoni, Chiara, and Henry Sauermann. “Crowd Science: The Organization of Scientific Research in Open Collaborative Projects.” Research Policy (August 14, 2013). http://bit.ly/HihFyj.

  • In this paper, the authors explore the concept of crowd science, which they define based on two important features: “participation in a project is open to a wide base of potential contributors, and intermediate inputs such as data or problem solving algorithms are made openly available.” The rationale for their study and conceptual framework is the “growing attention from the scientific community, but also policy makers, funding agencies and managers who seek to evaluate its potential benefits and challenges. Based on the experiences of early crowd science projects, the opportunities are considerable.”
  • Based on the study of a number of crowd science projects – including governance-related initiatives like PatientsLikeMe – the authors identify a number of potential benefits in the following categories:
    • Knowledge-related benefits
    • Benefits from open participation
    • Benefits from the open disclosure of intermediate inputs
    • Motivational benefits
  • The authors also identify a number of challenges, grouped into two categories:
    • Organizational challenges:
      • Matching projects and people
      • Division of labor and integration of contributions
      • Project leadership
    • Motivational challenges:
      • Sustaining contributor involvement
      • Supporting a broader set of motivations
      • Reconciling conflicting motivations

Kittur, Aniket, Ed H. Chi, and Bongwon Suh. “Crowdsourcing User Studies with Mechanical Turk.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 453–456. CHI ’08. New York, NY, USA: ACM, 2008. http://bit.ly/1a3Op48.

  • In this paper, the authors examine “[m]icro-task markets, such as Amazon’s Mechanical Turk, [which] offer a potential paradigm for engaging a large number of users for low time and monetary costs. [They] investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks.”
  • The authors conclude that in addition to providing a means for crowdsourcing small, clearly defined, often non-skill-intensive tasks, “Micro-task markets such as Amazon’s Mechanical Turk are promising platforms for conducting a variety of user study tasks, ranging from surveys to rapid prototyping to quantitative measures. Hundreds of users can be recruited for highly interactive tasks for marginal costs within a timeframe of days or even minutes. However, special care must be taken in the design of the task, especially for user measurements that are subjective or qualitative.”
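For readers who want to try the kind of remote micro user evaluation task the authors describe, the sketch below posts a single usability question to Mechanical Turk's requester sandbox using the boto3 SDK. The title, reward, and form contents are illustrative placeholders, not values from the paper.

```python
# Minimal sketch: posting one micro-task (HIT) to the Mechanical Turk
# *sandbox* with boto3. Title, reward, and form fields are illustrative
# placeholders; AWS credentials come from the usual environment/config.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A trivial one-question form, wrapped in MTurk's HTMLQuestion XML schema.
html_form = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body>
      <form name="mturk_form" method="post"
            action="https://workersandbox.mturk.com/mturk/externalSubmit">
        <input type="hidden" name="assignmentId" value="" id="assignmentId"/>
        <p>How easy was this website to navigate? (1 = very hard, 5 = very easy)</p>
        <input type="number" name="ease_rating" min="1" max="5"/>
        <input type="submit" value="Submit"/>
      </form>
      <script>
        document.getElementById('assignmentId').value =
          new URL(window.location).searchParams.get('assignmentId');
      </script>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

hit = mturk.create_hit(
    Title="Rate the usability of a website (illustrative example)",
    Description="Answer one question about a website's navigation.",
    Keywords="survey, usability, quick",
    Reward="0.05",                    # dollars, passed as a string
    MaxAssignments=30,                # number of distinct workers
    LifetimeInSeconds=3600,           # how long the HIT stays listed
    AssignmentDurationInSeconds=300,  # time allowed per worker
    Question=html_form,
)
print("HIT created:", hit["HIT"]["HITId"])
```

Running against the sandbox lets a requester pilot a task design – the “special care” the authors urge – before paying real workers; pointing at the production endpoint is a one-line change.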

Kittur, Aniket, Jeffrey V. Nickerson, Michael S. Bernstein, Elizabeth M. Gerber, Aaron Shaw, John Zimmerman, Matthew Lease, and John J. Horton. “The Future of Crowd Work.” In 16th ACM Conference on Computer Supported Cooperative Work (CSCW 2013), 2013. http://bit.ly/1c1GJD3.

  • In this paper, the authors discuss paid crowd work, which “offers remarkable opportunities for improving productivity, social mobility, and the global economy by engaging a geographically distributed workforce to complete complex tasks on demand and at scale.” However, they caution that, “it is also possible that crowd work will fail to achieve its potential, focusing on assembly-line piecework.”
  • The authors argue that six key challenges must be met to ensure that crowd work processes evolve and reach their full potential:
    • Designing workflows
    • Assigning tasks
    • Supporting hierarchical structure
    • Enabling real-time crowd work
    • Supporting synchronous collaboration
    • Controlling quality

Madison, Michael J. “Commons at the Intersection of Peer Production, Citizen Science, and Big Data: Galaxy Zoo.” In Convening Cultural Commons, 2013. http://bit.ly/1ih9Xzm.

  • This paper explores a “case of commons governance grounded in research in modern astronomy. The case, Galaxy Zoo, is a leading example of at least three different contemporary phenomena. In the first place, Galaxy Zoo is a global citizen science project, in which volunteer non-scientists have been recruited to participate in large-scale data analysis on the Internet. In the second place, Galaxy Zoo is a highly successful example of peer production, sometimes known as crowdsourcing…In the third place, [Galaxy Zoo] is a highly visible example of data-intensive science, sometimes referred to as e-science or Big Data science, by which scientific researchers develop methods to grapple with the massive volumes of digital data now available to them via modern sensing and imaging technologies.”
  • Madison concludes that the success of Galaxy Zoo has not been the result of the “character of its information resources (scientific data) and rules regarding their usage,” but rather, the fact that the “community was guided from the outset by a vision of a specific organizational solution to a specific research problem in astronomy, initiated and governed, over time, by professional astronomers in collaboration with their expanding universe of volunteers.”

Malone, Thomas W., Robert Laubacher and Chrysanthos Dellarocas. “Harnessing Crowds: Mapping the Genome of Collective Intelligence.” MIT Sloan Research Paper. February 3, 2009. https://bit.ly/2SPjxTP.

  • In this article, the authors describe and map the phenomenon of collective intelligence – also referred to as “radical decentralization, crowd-sourcing, wisdom of crowds, peer production, and wikinomics” – which they broadly define as “groups of individuals doing things collectively that seem intelligent.”
  • The article is derived from the authors’ work at MIT’s Center for Collective Intelligence, where they gathered nearly 250 examples of Web-enabled collective intelligence. To map the building blocks or “genes” of collective intelligence, the authors used two pairs of related questions:
    • Who is performing the task? Why are they doing it?
    • What is being accomplished? How is it being done?
  • The authors concede that much work remains to be done “to identify all the different genes for collective intelligence, the conditions under which these genes are useful, and the constraints governing how they can be combined,” but they believe that their framework provides a useful start and gives managers and other institutional decisionmakers looking to take advantage of collective intelligence activities the ability to “systematically consider many possible combinations of answers to questions about Who, Why, What, and How.”
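The Who/Why/What/How framework lends itself to a simple structured encoding. The sketch below codes two well-known cases as “genomes” so a collection of examples can be filtered and compared; the gene values are one plausible reading for illustration, not the authors' official coding.

```python
# Minimal sketch of Malone et al.'s "genome" idea as a data structure.
# Gene values below are an illustrative coding of two famous cases,
# not the authors' classification.
from dataclasses import dataclass

@dataclass(frozen=True)
class CIGenome:
    name: str
    who: str   # e.g. "crowd" or "hierarchy"
    why: str   # e.g. "money", "love", "glory"
    what: str  # e.g. "create" or "decide"
    how: str   # e.g. "collaboration", "collection", "voting", "averaging"

cases = [
    CIGenome("Wikipedia article writing", who="crowd", why="love",
             what="create", how="collaboration"),
    CIGenome("Threadless design scoring", who="crowd", why="glory",
             what="decide", how="voting"),
]

# Example query: which cases run on non-monetary motivation?
for c in cases:
    if c.why != "money":
        print(f"{c.name}: {c.who}/{c.why}/{c.what}/{c.how}")
```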

Mulgan, Geoff. “True Collective Intelligence? A Sketch of a Possible New Field.” Philosophy & Technology 27, no. 1. March 2014. http://bit.ly/1p3YSdd.

  • In this paper, Mulgan explores the concept of collective intelligence, a “much talked about but…very underdeveloped” field.
  • With a particular focus on health knowledge, Mulgan “sets out some of the potential theoretical building blocks, suggests an experimental and research agenda, shows how it could be analysed within an organisation or business sector and points to possible intellectual barriers to progress.”
  • He concludes that the “central message that comes from observing real intelligence is that intelligence has to be for something,” and that “turning this simple insight – the stuff of so many science fiction stories – into new theories, new technologies and new applications looks set to be one of the most exciting prospects of the next few years and may help give shape to a new discipline that helps us to be collectively intelligent about our own collective intelligence.”

Sauermann, Henry and Chiara Franzoni. “Participation Dynamics in Crowd-Based Knowledge Production: The Scope and Sustainability of Interest-Based Motivation.” SSRN Working Papers Series. November 28, 2013. http://bit.ly/1o6YB7f.

  • In this paper, Sauermann and Franzoni explore the issue of interest-based motivation in crowd-based knowledge production – in particular on the crowd science platform Zooniverse – by drawing on “research in psychology to discuss important static and dynamic features of interest and deriv[ing] a number of research questions.”
  • The authors find that interest-based motivation is often tied to a “particular object (e.g., task, project, topic)” rather than being a “general trait of the person or a general characteristic of the object.” As such, they find that “most members of the installed base of users on the platform do not sign up for multiple projects, and most of those who try out a project do not return.”
  • They conclude that “interest can be a powerful motivator of individuals’ contributions to crowd-based knowledge production…However, both the scope and sustainability of this interest appear to be rather limited for the large majority of contributors…At the same time, some individuals show a strong and more enduring interest to participate both within and across projects, and these contributors are ultimately responsible for much of what crowd science projects are able to accomplish.”

Schmitt-Sands, Catherine E. and Richard J. Smith. “Prospects for Online Crowdsourcing of Social Science Research Tasks: A Case Study Using Amazon Mechanical Turk.” SSRN Working Papers Series. January 9, 2014. http://bit.ly/1ugaYja.

  • In this paper, the authors describe an experiment involving the nascent use of Amazon’s Mechanical Turk as a social science research tool. “While researchers have used crowdsourcing to find research subjects or classify texts, [they] used Mechanical Turk to conduct a policy scan of local government websites.”
  • Schmitt-Sands and Smith found that crowdsourcing “worked well” for conducting an online policy scan. The microtasked workers were helpful in screening out local governments that either did not have websites or did not have the types of policies and services for which the researchers were looking. However, “if the task is complicated such that it requires ongoing supervision, then crowdsourcing is not the best solution.”

Shirky, Clay. Here Comes Everybody: The Power of Organizing Without Organizations. New York: Penguin Press, 2008. https://bit.ly/2QysNif.

  • In this book, Shirky explores our current era in which, “For the first time in history, the tools for cooperating on a global scale are not solely in the hands of governments or institutions. The spread of the Internet and mobile phones are changing how people come together and get things done.”
  • Discussing Wikipedia’s “spontaneous division of labor,” Shirky argues that “the process is more like creating a coral reef, the sum of millions of individual actions, than creating a car. And the key to creating those individual actions is to hand as much freedom as possible to the average user.”

Silvertown, Jonathan. “A New Dawn for Citizen Science.” Trends in Ecology & Evolution 24, no. 9 (September 2009): 467–471. http://bit.ly/1iha6CR.

  • This article discusses the move from “Science for the people,” a slogan adopted by activists in the 1970s, to “Science by the people,” which is “a more inclusive aim, and is becoming a distinctly 21st century phenomenon.”
  • Silvertown identifies three factors that are responsible for the explosion of activity in citizen science, each of which could be similarly related to the crowdsourcing of skills by governing institutions:
    • “First is the existence of easily available technical tools for disseminating information about products and gathering data from the public.
    • A second factor driving the growth of citizen science is the increasing realisation among professional scientists that the public represent a free source of labour, skills, computational power and even finance.
    • Third, citizen science is likely to benefit from the condition that research funders such as the National Science Foundation in the USA and the Natural Environment Research Council in the UK now impose upon every grantholder to undertake project-related science outreach. This is outreach as a form of public accountability.”

Szkuta, Katarzyna, Roberto Pizzicannella, and David Osimo. “Collaborative approaches to public sector innovation: A scoping study.” Telecommunications Policy, 2014. http://bit.ly/1oBg9GY.

  • In this article, the authors explore cases where government collaboratively delivers online public services, with a focus on success factors and “incentives for services providers, citizens as users and public administration.”
  • The authors focus on five types of collaborative governance projects:
    • Services initiated by government built on government data;
    • Services initiated by government and making use of citizens’ data;
    • Services initiated by civil society built on open government data;
    • Collaborative e-government services; and
    • Services run by civil society and based on citizen data.
  • The cases explored “are all designed in the way that effectively harnesses the citizens’ potential. Services susceptible to collaboration are those that require computing efforts, i.e. many non-complicated tasks (e.g. citizen science projects – Zooniverse) or citizens’ free time in general (e.g. time banks). Those services also profit from unique citizens’ skills and their propensity to share their competencies.”

CollaborativeScience.org: Sustaining Ecological Communities Through Citizen Science and Online Collaboration


David Mellor at CommonsLab: “In any endeavor, there can be a tradeoff between intimacy and impact. The same is true for science in general and citizen science in particular. Large projects with thousands of collaborators can have incredible impact and robust, global implications. On the other hand, locally based projects can foster close-knit ties that encourage collaboration and learning, but face an uphill battle when it comes to creating rigorous and broadly relevant investigations. Online collaboration has the potential to harness the strengths of both of these strategies if a space can be created that allows for the easy sharing of complex ideas and conservation strategies.
CollaborativeScience.org was created by researchers from five different universities to train Master Naturalists in ecology, scientific modeling and adaptive management, and then give these capable volunteers a space to put their training to work and create conservation plans in collaboration with researchers and land managers.
We are focusing on scientific modeling throughout this process because environmental managers and ecologists have been trained to intuitively create explanations based on a very large number of related observations. As new data are collected, these explanations are revised and are put to use in generating new, testable hypotheses. The modeling tools that we are providing to our volunteers allow them to formalize this scientific reasoning by adding information, sources and connections, then making predictions based on possible changes to the system. We integrate their projects into the well-established citizen science tools at CitSci.org and guide them through the creation of an adaptive management plan, a proven conservation project framework…”
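The kind of volunteer-built model Mellor describes – concepts connected by positive or negative influences that can be perturbed to generate testable predictions – can be sketched as a signed influence graph. The species and links below are invented for illustration and are not drawn from CollaborativeScience.org's actual tools.

```python
# Illustrative sketch of qualitative ecological modeling: a signed
# influence graph where a change at one node propagates along edges.
# Nodes and signs are invented examples, not real project data.
influences = {            # (source, target): +1 promotes, -1 suppresses
    ("rainfall", "wetland_area"): +1,
    ("wetland_area", "frog_population"): +1,
    ("invasive_fish", "frog_population"): -1,
    ("frog_population", "heron_population"): +1,
}

def predict(change_node, direction, depth=3):
    """Propagate an increase (+1) or decrease (-1) through the graph."""
    effects = {change_node: direction}
    frontier = [change_node]
    for _ in range(depth):
        next_frontier = []
        for (src, dst), sign in influences.items():
            if src in frontier and dst not in effects:
                effects[dst] = effects[src] * sign
                next_frontier.append(dst)
        frontier = next_frontier
    return effects

# What does the model predict if invasive fish increase?
for node, effect in predict("invasive_fish", +1).items():
    print(f"{node}: {'up' if effect > 0 else 'down'}")
```

Each predicted “up” or “down” is a testable hypothesis, which is the adaptive-management loop the project describes: model, predict, monitor, revise.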

The Weird, Wild World of Citizen Science Is Already Here


David Lang in Wired: “Up and down the west coast of North America, countless numbers of starfish are dying. The affliction, known as Sea Star Wasting Syndrome, is already being called the biggest die-off of sea stars in recorded history, and we’re still in the dark as to what’s causing it or what it means. It remains an unsolved scientific mystery. The situation is also shaping up as a case study of an unsung scientific opportunity: the rise of citizen science and exploration.
The sea star condition was first noticed by Laura James, a diver and underwater videographer based in Seattle. As they began washing up on the shore near her home with lesions and missing limbs, she became concerned and notified scientists. Similar sightings started cropping up all along the West Coast, with gruesome descriptions of sea stars that were disintegrating in a matter of days, and populations that had been decimated. As scientists race to understand what’s happening, they’ve enlisted the help of amateurs like James, to move faster. Pete Raimondi’s lab at UC Santa Cruz has created the Sea Star Wasting Map, the baseline for monitoring the issue, to capture the diverse set of contributors and collaborators.
The map is one of many new models of citizen-powered science–a blend of amateurs and professionals, looking and learning together–that are beginning to emerge. Just this week, NASA endorsed a group of amateur astronomers to attempt to rescue a vintage U.S. spacecraft. NASA doesn’t have the money to do it, and this passionate group of citizen scientists can handle it.
Unfortunately, the term “citizen science” is terrible. It’s vague enough to be confusing, yet specific enough to seem exclusive. It’s too bad, too, because the idea of citizen science is thrilling. I love the notion that I can participate in the expanding pool of human knowledge and understanding, even though the extent of my formal science education is a high school biology class. To me, it seemed a genuine invitation to be curious. A safe haven for beginners. A license to explore.
Not everyone shares my romantic perspective, though. If you ask a university researcher, they’re likely to explain citizen science as a way for the public to contribute data points to larger, professionally run studies, like participating in the galaxy-spotting website Zooniverse or taking part in the annual Christmas Bird Count with the Audubon Society. It’s a model on the scientific fringes; using broad participation to fill the gaps in necessary data.
There’s power in this diffuse definition, though, as long as new interpretations are welcomed and encouraged. By inviting and inspiring people to ask their own questions, citizen science can become much more than a way of measuring bird populations. From the drone-wielding conservationists in South Africa to the makeshift biolabs in Brooklyn, a widening circle of participants are wearing the amateur badge with honor. And all of these groups–the makers, the scientists, the hobbyists–are converging to create a new model for discovery. In other words, the maker movement and the traditional science world are on a collision course.
To understand the intersection, it helps to know where each of those groups is coming from….”

Paying Farmers to Welcome Birds


Jim Robbins in The New York Times: “The Central Valley was once one of North America’s most productive wildlife habitats, a 450-mile-long expanse marbled with meandering streams and lush wetlands that provided an ideal stop for migratory shorebirds on their annual journeys from South America and Mexico to the Arctic and back.

Farmers and engineers have long since tamed the valley. Of the wetlands that existed before the valley was settled, about 95 percent are gone, and the number of migratory birds has declined drastically. But now an unusual alliance of conservationists, bird watchers and farmers have joined in an innovative plan to restore essential habitat for the migrating birds.

The program, called BirdReturns, starts with data from eBird, the pioneering citizen science project that asks birders to record sightings on a smartphone app and send the information to the Cornell Lab of Ornithology in upstate New York.

By crunching data from the Central Valley, eBird can generate maps showing where virtually every species congregates in the remaining wetlands. Then, by overlaying those maps on aerial views of existing surface water, it can determine where the birds’ need for habitat is greatest….

BirdReturns is an example of the growing movement called reconciliation ecology, in which ecosystems dominated by humans are managed to increase biodiversity.

“It’s a new ‘Moneyball,’ ” said Eric Hallstein, an economist with the Nature Conservancy and a designer of the auctions, referring to the book and movie about the Oakland Athletics’ data-driven approach to baseball. “We’re disrupting the conservation industry by taking a new kind of data, crunching it differently and contracting differently.”
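The overlay step Robbins describes is, at heart, grid arithmetic: a predicted bird-density surface combined with a surface-water map highlights cells where habitat demand most exceeds supply. A toy sketch follows, with random arrays standing in for eBird's real density maps and for the program's actual methodology.

```python
# Toy version of a BirdReturns-style overlay: combine a species-density
# grid with a surface-water grid to rank cells by unmet habitat need.
# Random arrays stand in for eBird maps and aerial water surveys.
import numpy as np

rng = np.random.default_rng(42)
bird_density = rng.random((20, 20))   # predicted shorebird density, 0..1
surface_water = rng.random((20, 20))  # fraction of each cell now flooded

# Need is highest where many birds are expected but little water exists.
habitat_need = bird_density * (1.0 - surface_water)

# Top five candidate cells to offer farmers payment for temporary flooding.
top_cells = np.argsort(habitat_need.ravel())[::-1][:5]
for idx in top_cells:
    r, c = divmod(int(idx), habitat_need.shape[1])
    print(f"cell ({r},{c}): need score {habitat_need[r, c]:.2f}")
```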

Building a More Open Government


Corinna Zarek at the White House: “It’s Sunshine Week again—a chance to celebrate transparency and participation in government and freedom of information. Every year in mid-March, we take stock of our progress and where we are headed to make our government more open for the benefit of citizens.
In December 2013, the Administration announced 23 ambitious commitments to further open up government over the next two years in the U.S. Government’s second Open Government National Action Plan. Those commitments are now all underway or in development, including:
  • Launching an improved Data.gov: The updated Data.gov debuted in January 2014, and continues to grow with thousands of updated or new government data sets being proactively made available to the public.
  • Increasing public collaboration: Through crowdsourcing, citizen science, and other methods, Federal agencies continue to expand the ways they collaborate with the public. For example, the National Aeronautics and Space Administration recently launched its third Asteroid Grand Challenge, a broad call to action seeking the best and brightest ideas from non-traditional partners to enhance and accelerate the work NASA is already doing for planetary defense.
  • Improving We the People: The online petition platform We the People gives the public a direct way to participate in their government and is currently incorporating improvements to make it easier for the public to submit petitions and signatures.”
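For readers who want to explore the improved Data.gov mentioned above, the catalog is queryable through its public CKAN search API. A minimal sketch follows; the search term is arbitrary, and the endpoint is assumed unchanged from the time of writing.

```python
# Minimal sketch: searching the Data.gov catalog via its public CKAN API.
# The query term is arbitrary; no API key is needed for read access.
import requests

resp = requests.get(
    "https://catalog.data.gov/api/3/action/package_search",
    params={"q": "citizen science", "rows": 5},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()["result"]

print(f"{result['count']} matching datasets; first {len(result['results'])}:")
for dataset in result["results"]:
    print("-", dataset["title"])
```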

The Open Data/Environmental Justice Connection


Jeffrey Warren for Wilson’s Commons Lab: “… Open data initiatives seem to assume that all data is born in the hallowed halls of government, industry and academia, and that open data is primarily about convincing such institutions to share it to the public.
It is laudable when institutions with important datasets — such as campaign finance, pollution or scientific data — see the benefit of opening it to the public. But why do we assume unilateral control over data production?
The revolution in user-generated content shows the public has a great deal to contribute – and to gain – from the open data movement. Likewise, citizen science projects that solicit submissions or “task completion” from the public rarely invite higher-level participation in research – let alone true collaboration.
This has to change. Data isn’t just something you’re given if you ask nicely, or a kind of community service we perform to support experts. Increasingly, new technologies make it possible for local groups to generate and control data themselves — especially in environmental health. Communities on the front line of pollution’s effects have the best opportunities to monitor it and the most to gain by taking an active role in the research process.
DIY Data
Luckily, an emerging alliance between the maker/Do-It-Yourself (DIY) movement and watchdog groups is starting to challenge the conventional model.
The Smart Citizen project, the Air Quality Egg and a variety of projects in the Public Lab network are recasting members of the general public as actors in the framing of new research questions and designers of a new generation of data tools.
The Riffle, a sub-$100 water quality sensor built inside hardware-store pipe, can be left in a creek near an industrial site to collect data around the clock for weeks or months. In the near future, when pollution happens – like the ash spill in North Carolina or the chemical spill in West Virginia – the public will be alerted and able to track its effects without depending on expensive equipment or distant labs.
This emerging movement is recasting environmental issues not as intractably large problems, but up-close-and-personal health issues — just what environmental justice (EJ) groups have been arguing for years. The difference is that these new initiatives hybridize such EJ community organizers and the technology hackers of the open hardware movement. Just as the Homebrew Computer Club’s tinkering with early prototypes led to the personal computer, a new generation of tinkerers sees that their affordable, accessible techniques can make an immediate difference in investigating lead in their backyard soil, nitrates in their tap water and particulate pollution in the air they breathe.
These practitioners see that environmental data collection is not a distant problem in a developing country, but an issue that anyone in a major metropolitan area, or an area affected by oil and gas extraction, faces on a daily basis. Though underserved communities are often disproportionally affected, these threats often transcend socioeconomic boundaries…”
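A Riffle-style logger like the one Warren describes reduces to a read, timestamp, append, sleep loop. The sketch below is a generic Python stand-in with a hypothetical read_conductivity() in place of real sensor I/O; the actual Riffle runs firmware on a low-power microcontroller rather than desktop Python.

```python
# Generic stand-in for a DIY water-quality logger loop: read a sensor,
# timestamp the reading, append it to a CSV file, sleep, repeat.
# read_conductivity() is a hypothetical placeholder for real sensor I/O.
import csv
import random
import time
from datetime import datetime, timezone

def read_conductivity():
    # Placeholder: a real logger would sample an ADC here.
    return 450 + random.gauss(0, 15)  # microsiemens/cm, synthetic

LOG_PATH = "creek_log.csv"
INTERVAL_SECONDS = 60 * 15            # one sample every 15 minutes

while True:                           # runs until the battery (or user) quits
    reading = read_conductivity()
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), f"{reading:.1f}"]
        )
    time.sleep(INTERVAL_SECONDS)
```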

Coordinating the Commons: Diversity & Dynamics in Open Collaborations


Dissertation by Jonathan T. Morgan: “The success of Wikipedia demonstrates that open collaboration can be an effective model for organizing geographically-distributed volunteers to perform complex, sustained work at a massive scale. However, Wikipedia’s history also demonstrates some of the challenges that large, long-term open collaborations face: the core community of Wikipedia editors – the volunteers who contribute most of the encyclopedia’s content and ensure that articles are correct and consistent – has been gradually shrinking since 2007, in part because Wikipedia’s social climate has become increasingly inhospitable for newcomers, female editors, and editors from other underrepresented demographics. Previous studies of change over time within other work contexts, such as corporations, suggest that incremental processes such as bureaucratic formalization can make organizations more rule-bound and less adaptable – in effect, less open – as they grow and age. There has been little research on how open collaborations like Wikipedia change over time, and on the impact of those changes on the social dynamics of the collaborating community and the way community members prioritize and perform work. Learning from Wikipedia’s successes and failures can help researchers and designers understand how to support open collaborations in other domains – such as Free/Libre Open Source Software, Citizen Science, and Citizen Journalism.

In this dissertation, I examine the role of openness, and the potential antecedents and consequences of formalization, within Wikipedia through an analysis of three distinct but interrelated social structures: community-created rules within the Wikipedia policy environment, coordination work and group dynamics within self-organized open teams called WikiProjects, and the socialization mechanisms that Wikipedia editors use to teach new community members how to participate. To inquire further, I have designed a new editor peer support space, the Wikipedia Teahouse, based on the findings from my empirical studies. The Teahouse is a volunteer-driven project that provides a welcoming and engaging environment in which new editors can learn how to be productive members of the Wikipedia community, with the goal of increasing the number and diversity of newcomers who go on to make substantial contributions to Wikipedia …”

A City’s Intelligence: Its Citizens


Michel Dumais: “Tic toc! disions-nous. Bientôt la centième. Et avec la cent-unième, de nouveaux défis. Ville intelligente, disiez-vous? Je subodore le traditionnel appel de pied aux trois lettres et à une logique administrative archaïque. Et si on faisait plutôt appel à l’intelligence de ceux qui connaissent le plus leur ville, ses citoyens?

Pour régler un problème (et même à l’occasion, un «pas d’problème»), les administrations regardent du côté de ces logiciels mammouth qui, sur papier, sont censés faire tout, qui engloutissent des centaines de millions de dollars, mais qui, finalement, font les manchettes des médias parce qu’il faut y injecter encore plus d’argent. Et qui permettent aux TI d’asseoir encore plus leur contrôle sur une administration.

Bref, lorsque l’on parle de ville intelligente, plusieurs y voient le pactole. Ah! Reste que ce qui était «acceptable», hier, ne l’est plus aujourd’hui. Et que la réalisation d’une ville intelligente n’est surtout pas un défi technologique, loin de là.

LA QUESTION DU SANS-FIL
Il y a des années de cela, la simple logique eut voulu que la Ville cesse de penser «big telcos» afin de conclure rapidement une alliance avec l’organisme communautaire «Île sans fil» et ainsi favoriser le déploiement rapide sur l’île de la technologie sans fil.

Une telle alliance, un modèle dans le genre, existe.

Mais pas à Montréal. Plutôt à Québec, alors que la Ville et l’organisme communautaire «Zap Québec» travaillent main dans la main pour le plus grand bénéfice des citoyens de Québec et des touristes. Et à Montréal? On jase, on jase.

Donc, une ville intelligente. C’est une ville qui sait, à l’aide des technologies, comment harnacher ses infrastructures et les mettre au service de ses citoyens tout en réalisant des économies et en favorisant le développement durable.

C’est aussi une ville qui sait écouter et mobiliser ses citoyens, ses militants et ses entrepreneurs, tout en leur donnant des outils (comme des données utilisables) afin qu’ils puissent eux aussi créer des services destinés à leur organisation et à tous les citoyens de la ville. Sans compter que tous ces outils facilitent la prise de décisions chez les maires d’arrondissement et le comité exécutif.

Bref, une ville intelligente selon le professeur Rudolf Giffinger, c’est ça: «une économie intelligente, une mobilité intelligente, un environnement intelligent, des habitants intelligents, un mode de vie intelligent et, enfin, une administration intelligente».

J’invite le lecteur à regarder LifeApps, une extraordinaire série télé diffusée sur le site de la chaîne AlJazeera. Le sujet: des jeunes et de moins jeunes militants, bidouilleurs, qui s’impliquent et créent des services pour leur communauté.”

NatureNet: a model for crowdsourcing the design of citizen science systems


Paper in CSCW Companion ’14, the companion publication of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing: “NatureNet is a citizen science system designed for collecting biodiversity data in nature park settings. Park visitors are encouraged to participate in the design of the system in addition to collecting biodiversity data. Our goal is to increase the motivation to participate in citizen science via crowdsourcing: the hypothesis is that when the crowd plays a role in the design and development of the system, they become stakeholders in the project and work to ensure its success. This paper presents a model for crowdsourcing design and citizen science data collection, and the results from early trials with users that illustrate the potential of this approach.”

House Bill Raises Questions about Crowdsourcing


Anne Bowser for the Commons Lab (Wilson Center): “A new bill in the House is raising some key questions about how crowdsourcing is understood by scientists, government agencies, policymakers and the public at large.
Robin Bravender’s recent article in Environment & Energy Daily, “House Republicans Push Crowdsourcing on Agency Science” (subscription required), neatly summarizes the debate around H.R. 4012, a bill introduced to the House of Representatives earlier this month. The House Science, Space and Technology Committee held a hearing on the bill earlier this week, and the bill could see a committee vote as early as next month.
Dubbed the “Secret Science Reform Act of 2014,” the bill prohibits the Environmental Protection Agency (EPA) from “proposing, finalizing, or disseminating regulations or assessments based upon science that is not transparent or reproducible.” If the bill is passed, EPA would be unable to base assessments or regulations on any information not “publicly available in a manner that is sufficient for independent analysis.” This would include all information published in scholarly journals based on data that is not available as open source.
The bill is based on the premise that forcing EPA to use public data will inspire greater transparency by allowing “the crowd” to conduct independent analysis and interpretation. While the premise of involving the public in scientific research is sound, this characterization of crowdsourcing as a process separate from traditional scientific research is deeply problematic.
This division contrasts with the current practices of many researchers, who use crowdsourcing to directly involve the public in scientific processes. Galaxy Zoo, for example, enlists digital volunteers (called “citizen scientists”) to help classify more than 40 million photographs of galaxies taken by the Hubble Telescope. These crowdsourced morphological classifications are a powerful form of data analysis, a key aspect of the scientific process. Galaxy Zoo then publishes a catalogue of these classifications as an open-source data set. And the data reduction techniques and measures of confidence and bias for the data catalogue are documented in MNRAS, a peer-reviewed journal. A recent Google Scholar search shows that the data set published in MNRAS has been cited a remarkable 121 times.
As this example illustrates, crowdsourcing is often embedded in the process of formal scientific research. But prior to being published in a scientific journal, the crowdsourced contributions of non-professional volunteers are subject to the scrutiny of professional scientists through the rigorous process of peer review. Because peer review was designed as an institution to ensure objective and unbiased research, peer-reviewed scientific work is widely accepted as the best source of information for any science-based decision.
Separating crowdsourcing from the peer review process, as this legislation intends, means that there will be no formal filters in place to ensure that open data will not be abused by special interests. Ellen Silbergeld, a professor at Johns Hopkins University who testified at the hearing this week, made exactly this point when she cited data manipulation commonly practiced by tobacco lobbyists in the United States.
Contributing to scientific research is one goal of crowdsourcing for science. Involving the public in scientific research also increases volunteer understanding of research topics and the scientific process and inspires heightened community engagement. These goals are supported by President Obama’s Second Open Government National Action Plan, which calls for “increased crowdsourcing and citizen science programs” to support “an informed and active citizenry.” But H.R. 4012 does not support these goals. Rather, this legislation could further degrade the public’s understanding of science by encouraging the public to distrust professional scientists rather than collaborate with them.
Crowdsourcing benefits organizations by bringing in the unique expertise held by external volunteers, which can augment and enhance the traditional scientific process. In return, these volunteers benefit from exposure to new and exciting processes, such as scientific research. This mutually beneficial relationship depends on collaboration, not opposition. Supporting an antagonistic relationship between science-based organizations like the EPA and members of “the crowd” will benefit neither institutions, nor volunteers, nor the country as a whole.”
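As an aside on the “data reduction” step mentioned above: reducing redundant volunteer classifications starts with consensus-building over votes. The toy sketch below computes a majority label and a simple confidence fraction per galaxy; Galaxy Zoo's published pipeline additionally weights volunteers and debiases classifications by image properties, which this sketch does not attempt.

```python
# Toy reduction of redundant volunteer classifications into a consensus
# label plus a confidence fraction. Real pipelines also weight volunteers
# and correct for bias; this sketch uses invented data and majority vote.
from collections import Counter

votes = {  # galaxy_id -> labels from independent volunteers (toy data)
    "G1": ["spiral", "spiral", "spiral", "elliptical", "spiral"],
    "G2": ["elliptical", "spiral", "elliptical", "elliptical"],
}

for galaxy, labels in votes.items():
    (label, count), = Counter(labels).most_common(1)  # top label and tally
    confidence = count / len(labels)
    print(f"{galaxy}: {label} (confidence {confidence:.2f}, n={len(labels)})")
```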