Commons at the Intersection of Peer Production, Citizen Science, and Big Data: Galaxy Zoo

New paper by Michael J. Madison: “The knowledge commons research framework is applied to a case of commons governance grounded in research in modern astronomy. The case, Galaxy Zoo, is a leading example of at least three different contemporary phenomena. In the first place Galaxy Zoo is a global citizen science project, in which volunteer non-scientists have been recruited to participate in large-scale data analysis via the Internet. In the second place Galaxy Zoo is a highly successful example of peer production, sometimes known colloquially as crowdsourcing, by which data are gathered, supplied, and/or analyzed by very large numbers of anonymous and pseudonymous contributors to an enterprise that is centrally coordinated or managed. In the third place Galaxy Zoo is a highly visible example of data-intensive science, sometimes referred to as e-science or Big Data science, by which scientific researchers develop methods to grapple with the massive volumes of digital data now available to them via modern sensing and imaging technologies. This chapter synthesizes these three perspectives on Galaxy Zoo via the knowledge commons framework.”

The Contours of Crowd Capability

New paper by Prashant Shukla and John Prpić: “The existence of dispersed knowledge has been a subject of inquiry for more than six decades. Despite the longevity of this rich research tradition, the “knowledge problem” has remained largely unresolved both in research and practice, and remains “the central theoretical problem of all social science”. However, in the 21st century, organizations are presented with opportunities through technology to potentially benefit from the dispersed knowledge problem to some extent. One such opportunity is represented by the recent emergence of a variety of crowd-engaging information systems (IS).
In this vein, Crowdsourcing is being widely studied in numerous contexts, and the knowledge generated from these IS phenomena is well-documented. At the same time, other organizations are leveraging dispersed knowledge by putting in place IS-applications such as Prediction Markets to gather large sample-size forecasts from within and without the organization. Similarly, we are also observing many organizations using IS-tools such as “Wikis” to access the knowledge of dispersed populations within the boundaries of the organization. Further still, other organizations are applying gamification techniques to accumulate Citizen Science knowledge from the public at large through IS.
Among these seemingly disparate phenomena, a complex ecology of crowd-engaging IS has emerged, involving millions of people all around the world generating knowledge for organizations through IS. However, despite the obvious scale and reach of this emerging crowd-engagement paradigm, there are no examples of research (as far as we know) that systematically compare and contrast a large variety of these existing crowd-engaging IS-tools in one work. Understanding this current state of affairs, we seek to address this significant research void by comparing and contrasting a number of the crowd-engaging forms of IS currently available for organizational use.

To achieve this goal, we employ the Theory of Crowd Capital as a lens to systematically structure our investigation of crowd-engaging IS. Employing this parsimonious lens, we first explain how Crowd Capital is generated through Crowd Capability in organizations. Taking this conceptual platform as a point of departure, in Section 3, we offer an array of examples of IS currently in use in modern practice to generate Crowd Capital. We compare and contrast these emerging IS techniques using the Crowd Capability construct, therein highlighting some important choices that organizations face when entering the crowd-engagement fray. This comparison, which we term “The Contours of Crowd Capability”, can be used by decision-makers and researchers alike, to differentiate among the many extant methods of Crowd Capital generation. At the same time, our comparison also illustrates some important differences to be found in the internal organizational processes that accompany each form of crowd-engaging IS. In Section 4, we conclude with a discussion of the limitations of our work.”

Citizen science versus NIMBY?

Ethan Zuckerman’s latest blog: “Safecast is a remarkable project born out of a desire to understand the health and safety implications of the release of radiation from the Fukushima Daiichi nuclear power plant in the wake of the March 11, 2011 earthquake and tsunami. Unsatisfied with limited and questionable information about radiation released by the Japanese government, Joi Ito, Peter, Sean and others worked to design, build and deploy GPS-enabled geiger counters which could be used by concerned citizens throughout Japan to monitor alpha, beta and gamma radiation and understand what parts of Japan have been most affected by the Fukushima disaster.

The Safecast project has produced an elegant map that shows how complicated the Fukushima disaster will be for the Japanese government to recover from. While there are predictably elevated levels of radiation immediately around the Fukushima plant and in the 18 mile exclusion zones, there is a “plume” of increased radiation south and west of the reactors. The map is produced from millions of radiation readings collected by volunteers, who generally take readings while driving – Safecast’s bGeigie meter automatically takes readings every few seconds and stores them along with associated GPS coordinates for later upload to the server.
This long and thoughtful blog post about the progress of government decontamination efforts, the cost-benefit of those efforts, and the government’s transparency or opacity around cleanup gives a sense of what Safecast is trying to do: provide ways for citizens to check and verify government efforts and understand the complexity of decisions about radiation exposure. This is especially important in Japan, as there’s been widespread frustration over the failures of TEPCO to make progress on cleaning up the reactor site, leading to anger and suspicion about the larger cleanup process.
For me, Safecast raises two interesting questions:
– If you’re not getting trustworthy or sufficient information from your government, can you use crowdsourcing, citizen science or other techniques to generate that data?
– How does collecting data relate to civic engagement? Is it a path towards increased participation as an engaged and effective citizen?
To have some time to reflect on these questions, I decided I wanted to try some of my own radiation monitoring. I borrowed Joi Ito’s bGeigie and set off for my local Spent Nuclear Fuel and Greater-Than-Class C Low Level Radioactive Waste dry cask storage facility…

Projects like Safecast – and the projects I’m exploring this coming year under the heading of citizen infrastructure monitoring – have a challenge. Most participants aren’t going to uncover Ed Snowden-calibre information by driving around with a geiger counter or mapping wells in their communities. Lots of data collected is going to reveal that governments and corporations are doing their jobs, as my data suggests. It’s easy to track a path between collecting groundbreaking data and getting involved with deeper civic and political issues – will collecting data that the local nuclear plant is apparently safe get me more involved with issues of nuclear waste disposal?
It just might. One of the great potentials of citizen science and citizen infrastructure monitoring is the possibility of reducing the exotic to the routine….”
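The data-collection scheme Zuckerman describes, a meter that bundles each geiger sample with a GPS fix every few seconds and stores the batch for later upload, can be sketched in a few lines. This is an illustrative sketch only: the actual bGeigie firmware and Safecast log format differ, and the field names below are assumptions for clarity, not the real schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Reading:
    """One geiger sample paired with the GPS fix taken at the same moment."""
    timestamp: str  # UTC time of the sample, ISO 8601
    lat: float      # GPS latitude, decimal degrees
    lon: float      # GPS longitude, decimal degrees
    cpm: int        # geiger counts per minute

def record_reading(cpm: int, lat: float, lon: float) -> Reading:
    """Bundle one sample with its location for later bulk upload."""
    return Reading(
        timestamp=datetime.now(timezone.utc).isoformat(timespec="seconds"),
        lat=lat,
        lon=lon,
        cpm=cpm,
    )

# A drive produces a list of such readings; volunteers upload the whole
# log to the server afterward, where readings are aggregated into the map.
log = [record_reading(cpm=38, lat=35.6895, lon=139.6917)]
print(log[0].cpm)  # 38
```

The design point is simply that location and measurement are captured together at sampling time, which is what lets millions of independent drive-by readings be merged into a single coherent map.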

How to do scientific research without even trying (much)

Ars Technica: “To some extent, scientific research requires expensive or specialized equipment—some work just requires a particle accelerator or a virus containment facility. But plenty of other research has very simple requirements: a decent camera, a bit of patience, or being in the right place at the right time. Since that sort of work is open to anyone, getting the public involved can be a huge win for scientists, who can then obtain much more information than they could have gathered on their own.
A group of Spanish researchers has now written an article that is a mixture of praise for this sort of citizen science, a resource list for people hoping to get involved, and a how-to guide for anyone inspired to join in. The researchers focus on their own area of interest—insects, specifically the Hemiptera or “true bugs”—but a lot of what they say applies to other areas of research.

The paper also lists a variety of region-specific sites that focus on insect identification and tracking, such as ones for the UK, Belgium, and Slovenia. But a dedicated system isn’t required for this sort of resource. In the researchers’ home base on the Iberian Peninsula, insects are tracked via a Flickr group. (If you’re interested in insect research and based in the US, you can also find dozens of projects at the SciStarter site.) We’ve uploaded some of the most amazing images into a gallery that accompanies this article.
ZooKeys, 2013. DOI: 10.3897/zookeys.319.4342

Citizen Science Profile: SeaSketch

Blog entry from the Commons Lab within the Science and Technology Innovation Program of the Woodrow Wilson International Center for Scholars: “As part of the Commons Lab’s ongoing initiative to highlight the intersection of emerging technologies and citizen science, we present a profile of SeaSketch, marine management software that makes complex spatial planning tools accessible to everyone. This was prepared with the gracious assistance of Will McClintock, director of the McClintock Lab.
The SeaSketch initiative highlights key components of successful citizen science projects. The end product is a result of an iterative process where the developers applied previous successes and learned from mistakes. The tool was designed to allow people without technical training to participate, expanding access to stakeholders. MarineMap had a quantifiable impact on California marine protected areas, increasing their size from 1 percent to 16 percent of the coastline. The subsequent version, SeaSketch, is uniquely suited to scale out worldwide, addressing coastal and land management challenges. By emphasizing iterative development, non-expert accessibility and scalability, SeaSketch offers a model of successful citizen science….
SeaSketch succeeded as a citizen science initiative by focusing on three project priorities:

  • Iterative Development: The current version of SeaSketch’s PGIS software is the result of seven years of trial and error. Doris and MarineMap helped the project team learn what worked and adjust accordingly. The final result would have been impossible without a sustained commitment to the project and regular product assessments.
  • Non-Expert Accessibility: GIS software is traditionally limited to those with technical expertise. SeaSketch was developed anticipating that stakeholders without GIS training would use the software. New features allow users to contribute spatial surveys, sharing their knowledge of the area to better inform planning. This ease of use means the project is outward facing: More people can participate, meaning the analyses better reflect community priorities.
  • Scalability: Although MarineMap was built specifically to guide the MLPA process, the concept is highly flexible. SeaSketch is being used to support oceanic management issues worldwide, including in areas of international jurisdiction. The software can support planning with legal implications as well as cooperative agreements. SeaSketch’s project team believes it can also be used for freshwater and terrestrial management issues.”