Neal Ungerleider at Co-Labs: “It’s not just for the private sector anymore: Government scientists are embracing crowdsourcing. At a White House-sponsored workshop in late November, representatives from more than 20 different federal agencies gathered to figure out how to integrate crowdsourcing and citizen scientists into various government efforts. The workshop is part of a bigger effort with a lofty goal: Building a set of best practices for the thousands of citizens who are helping federal agencies gather data, from the Environmental Protection Agency (EPA) to NASA….Perhaps the best known federal government crowdsourcing project is Nature’s Notebook, a collaboration between the U.S. Geological Survey and the National Park Service which asks ordinary citizens to take notes on plant and animal species during different times of year. These notes are then cleansed and collated into a massive database on animal and plant phenology that’s used for decision-making by national and local governments. The bulk of the observations, recorded through smartphone apps, are made by ordinary people who spend a lot of time outdoors….Dozens of government agencies are now asking the public for help. The Centers for Disease Control and Prevention runs a student-oriented, Mechanical Turk-style “micro-volunteering” service called CDCology, the VA crowdsources design of apps for homeless veterans, while the National Weather Service distributes a mobile app called mPING that asks ordinary citizens to help fine-tune public weather reports by giving information on local conditions. The Federal Communications Commission’s Measuring Broadband America app, meanwhile, allows citizens to volunteer information on their Internet broadband speeds, and the Environmental Protection Agency’s Air Sensor Toolbox asks users to track local air pollution….
As of now, however, when it comes to crowdsourcing data for government scientific research, there’s no unified set of standards or best practices. This can lead to wild variations in how various agencies collect data and use it. For officials hoping to implement citizen science projects within government, the roadblocks to crowdsourcing include factors that crowdsourcing is intended to avoid: limited budgets, heavy bureaucracy, and superiors who are skeptical about the value of relying on the crowd for data.
Benforado and Shanley also pointed out that government agencies are subject to additional regulations, such as the Paperwork Reduction Act, which can make implementation of crowdsourcing projects more challenging than they would be in academia or the private sector… (More)”
Making Futures – Marginal Notes on Innovation, Design, and Democracy
Book edited by Pelle Ehn, Elisabet M. Nilsson and Richard Topgaard: “Innovation and design need not be about the search for a killer app. Innovation and design can start in people’s everyday activities. They can encompass local services, cultural production, arenas for public discourse, or technological platforms. The approach is participatory, collaborative, and engaging, with users and consumers acting as producers and creators. It is concerned less with making new things than with making a socially sustainable future. This book describes experiments in innovation, design, and democracy, undertaken largely by grassroots organizations, non-governmental organizations, and multi-ethnic working-class neighborhoods.
These stories challenge the dominant perception of what constitutes successful innovations. They recount efforts at social innovation, opening the production process, challenging the creative class, and expanding the public sphere. The wide range of cases considered include a collective of immigrant women who perform collaborative services, the development of an open-hardware movement, grassroots journalism, and hip-hop performances on city buses. They point to the possibility of democratized innovation that goes beyond solo entrepreneurship and crowdsourcing in the service of corporations to include multiple futures imagined and made locally by often-marginalized publics. (More)”
Gamifying Cancer Research Crowdsources the Race for the Cure
Jason Brick at PSFK: “Computer time and human hours are among the biggest obstacles to progress in the fight against cancer. Researchers have terabytes of data, but only so many processors and people with which to analyze it. Much like the SETI program (Search for Extraterrestrial Intelligence), it’s likely that big answers are already in the information we’ve collected. They’re just waiting for somebody to find them.
Reverse the Odds, a free mobile game from Cancer Research UK, accesses the combined resources of geeks and gamers worldwide. It’s a simple app game, the kind you play in line at the bank or while waiting at the dentist’s office, in which you complete mini puzzles and buy upgrades to save an imaginary world.
Each puzzle of the game is a repurposing of cancer data. Players find patterns in the data — the exact kind of analysis grad students and volunteers in a lab look for — and the results get compiled by Cancer Research UK for use in finding a cure. Errors are expected and accounted for: with thousands of players analyzing overlapping data, the occasional mistake is smoothed out by the volume of correct answers….(More)”
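Cancer Research UK has not published its exact aggregation method, but the redundancy-based error correction described above (many players classifying the same sample so that occasional mistakes wash out) is commonly implemented as majority voting over repeated classifications. A minimal sketch, with all names and data hypothetical:

```python
from collections import Counter

def aggregate_labels(responses):
    """Combine redundant crowd classifications into consensus labels.

    responses: dict mapping sample_id -> list of labels from different players.
    Returns dict mapping sample_id -> (majority label, agreement fraction).
    """
    consensus = {}
    for sample_id, labels in responses.items():
        counts = Counter(labels)
        label, votes = counts.most_common(1)[0]
        consensus[sample_id] = (label, votes / len(labels))
    return consensus

# One player mislabels each sample; the majority still recovers the answer.
crowd = {
    "s1": ["positive", "positive", "positive", "negative"],
    "s2": ["negative", "negative", "positive", "negative"],
}
result = aggregate_labels(crowd)
print(result["s1"])  # ('positive', 0.75)
print(result["s2"])  # ('negative', 0.75)
```

The agreement fraction also gives project scientists a confidence signal: low-agreement samples can be routed back to more players or to expert review.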
Crowdsourcing Data to Fight Air Pollution
Jason Brick at PSFK: “Air pollution is among the most serious environmental problems of the modern age. Although pollution in developed nations like the USA and Germany has fallen since the 1980s, air quality in rapidly industrializing countries — especially the BRIC group (Brazil, Russia, India and China) — grows worse with each year. In 2012, 3.7 million people died as a direct result of problems caused by chronic exposure to bad air, and tens of millions more were made ill.
There is no easy solution to such a complex and widespread problem, but Breathe offers a fix for one aspect and solves it in two ways.
The first way is the device itself: a portable plastic brick smaller than a bar of soap that monitors the presence and concentration of toxic gases and other harmful substances in the air, in real time throughout your day. It records air quality and, if pollution reaches unacceptably dangerous levels, warns you immediately with an emergency signal. Plug the device into your smartphone, and it keeps a record of air quality by time and location you can use to avoid the most polluted times of day and places in your area.
The second solution is the truly innovative aspect of this project. Via the Breathe app, any user who wants to can add her data to a central database that keeps statistics worldwide. Individuals can then use that data to plan vacations, time outdoor activities or schedule athletic events. Given enough time, Breathe could accumulate enough data to be used to affect policy by identifying the most polluted areas in a city, county or nation so the authorities can work on a more robust solution….(More)”
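The article does not describe how Breathe’s central database is organized; one plausible way crowdsourced readings could support planning by time and place is to pool them by location and hour, as in this illustrative sketch (all names and numbers hypothetical):

```python
from collections import defaultdict

# Each crowdsourced reading: (location, hour of day, pollutant index).
readings = [
    ("downtown", 8, 180), ("downtown", 8, 210),
    ("downtown", 14, 90), ("riverside", 8, 40),
]

# Pool readings by (location, hour) so users can compare times and places.
pooled = defaultdict(list)
for location, hour, level in readings:
    pooled[(location, hour)].append(level)

# Average the pooled readings; lower numbers mean cleaner air.
averages = {key: sum(levels) / len(levels) for key, levels in pooled.items()}
print(averages[("downtown", 8)])   # 195.0
```

With enough contributors, the same pooled statistics could back the policy use the article envisions: ranking the most polluted (location, hour) pairs across a city or region.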
Opening Government: Designing Open Innovation Processes to Collaborate With External Problem Solvers
New paper by Ines Mergel in Social Science Computer Review: “Open government initiatives in the U.S. government focus on three main aspects: transparency, participation, and collaboration. Especially the collaboration mandate is relatively unexplored in the literature. In practice, government organizations recognize the need to include external problem solvers into their internal innovation creation processes. This is partly derived from a sense of urgency to improve the efficiency and quality of government service delivery. Another formal driver is the America Competes Act that instructs agencies to search for opportunities to meaningfully promote excellence in technology, education, and science. Government agencies are responding to these requirements by using open innovation (OI) approaches to invite citizens to crowdsource and peer produce solutions to public management problems. These distributed innovation processes occur at all levels of the U.S. government and it is important to understand what design elements are used to create innovative public management ideas. This article systematically reviews existing government crowdsourcing and peer production initiatives and shows that after agencies have defined their public management problem, they go through four different phases of the OI process: (1) idea generation through crowdsourcing, (2) incubation of submitted ideas with peer voting and collaborative improvements of favorite solutions, (3) validation with a proof of concept of implementation possibilities, and (4) reveal of the selected solution and the (internal) implementation of the winning idea. Participation and engagement are incentivized both with monetary and nonmonetary rewards, which lead to tangible solutions as well as intangible innovation outcomes, such as increased public awareness.”
Designing a Citizen Science and Crowdsourcing Toolkit for the Federal Government
In the 2013 Second Open Government National Action Plan, President Obama called on Federal agencies to harness the ingenuity of the public by accelerating and scaling the use of open innovation methods, such as citizen science and crowdsourcing, to help address a wide range of scientific and societal problems.
Citizen science is a form of open collaboration in which members of the public participate in the scientific process, including identifying research questions, collecting and analyzing data, interpreting results, and solving problems. Crowdsourcing is a process in which individuals or organizations submit an open call for voluntary contributions from a large group of unknown individuals (“the crowd”) or, in some cases, a bounded group of trusted individuals or experts.
Citizen science and crowdsourcing are powerful tools that can help Federal agencies:
- Advance and accelerate scientific research through group discovery and co-creation of knowledge. For instance, engaging the public in data collection can provide information at resolutions that would be difficult for Federal agencies to obtain due to time, geographic, or resource constraints.
- Increase science literacy and provide students with skills needed to excel in science, technology, engineering, and math (STEM). Volunteers in citizen science or crowdsourcing projects gain hands-on experience doing real science, and take that learning outside of the classroom setting.
- Improve delivery of government services with significantly lower resource investments.
- Connect citizens to the missions of Federal agencies by promoting a spirit of open government and volunteerism.
To enable effective and appropriate use of these new approaches, the Open Government National Action Plan specifically commits the Federal government to “convene an interagency group to develop an Open Innovation Toolkit for Federal agencies that will include best practices, training, policies, and guidance on authorities related to open innovation, including approaches such as incentive prizes, crowdsourcing, and citizen science.”
On November 21, 2014, the Office of Science and Technology Policy (OSTP) kicked off development of the Toolkit with a human-centered design workshop. Human-centered design is a multi-stage process that requires product designers to engage with different stakeholders in creating, iteratively testing, and refining their product designs. The workshop was planned and executed in partnership with the Office of Personnel Management’s human-centered design practice known as “The Lab” and the Federal Community of Practice on Crowdsourcing and Citizen Science (FCPCCS), a growing network of more than 100 employees from more than 20 Federal agencies….
The Toolkit will help further the culture of innovation, learning, sharing, and doing in the Federal citizen science and crowdsourcing community: indeed, the development of the Toolkit is a collaborative and community-building activity in and of itself.
The following successful Federal projects illustrate the variety of possible citizen science and crowdsourcing applications:
- The Citizen Archivist Dashboard (NARA) coordinates crowdsourced archival record tagging and document transcription. Recently, more than 170,000 volunteers indexed 132 million names of the 1940 Census in only five months, which NARA could not have done alone.
- Through Measuring Broadband America (FCC), 2 million volunteers collected and provided the FCC with data on their Internet speeds, data that FCC used to create a National Broadband Map revealing digital divides.
- In 2014, Nature’s Notebook (USGS, NSF) volunteers recorded more than 1 million observations on plants and animals that scientists use to analyze environmental change.
- Did You Feel It? (USGS) has enabled more than 3 million people worldwide to share their experiences during and immediately after earthquakes. This information facilitates rapid damage assessments and scientific research, particularly in areas without dense sensor networks.
- The mPING (NOAA) mobile app has collected more than 600,000 ground-based observations that help verify weather models.
- USAID anonymized and opened its loan guarantee data to volunteer mappers. Volunteers mapped 10,000 data points in only 16 hours, compared to the 60 hours officials expected.
- The Air Sensor Toolbox (EPA), together with training workshops, scientific partners, technology evaluations, and a scientific instrumentation loan program, empowers communities to monitor and report local air pollution.
In early 2015, OSTP, in partnership with the Challenges and Prizes Community of Practice, will convene Federal practitioners to develop the other half of the Open Innovation Toolkit for prizes and challenges. Stay tuned!”
Crowdsourcing and Humanitarian Action: Analysis of the Literature
Patrick Meier: “Raphael Hörler from Zurich’s ETH University has just completed his thesis on the role of crowdsourcing in humanitarian action. His valuable research offers one of the most up-to-date and comprehensive reviews of the principal players and humanitarian technologies in action today. In short, I highly recommend this important resource. Raphael’s full thesis is available here (PDF).”
We’re All Pirates Now
Book Review by Edward Kosner of “Information Doesn’t Want to Be Free” in the Wall Street Journal: “Do you feel like a thief when you click on a website link and find yourself reading an article or listening to a song you haven’t paid for? Should you? Are you annoyed when you can’t copy a movie you’ve paid for onto your computer’s hard drive? Should you be? Should copyright, conceived in England three centuries ago to protect writers from unscrupulous printers, apply the same way to creators and consumers in the digital age?
The sci-fi writer, blogger and general man-about-the-Web Cory Doctorow tries to answer some of these questions—and introduces others—in “Information Doesn’t Want to Be Free.” Billed as a guide for perplexed creators about how to make a living in the Internet Era, the book is actually a populist manifesto for the information revolution.
Mr. Doctorow is a confident and aphoristic writer—his book is like one long TED talk—and his basic advice to creators is easy to grasp: Aspiring novelists, journalists, musicians and other artists and would-be artists should recognize the Web as an unprecedented promotional medium rather than a revenue source. Creators, writes Mr. Doctorow, need to get known before they can expect to profit from their work. So they should welcome having their words, music or images reproduced online without permission to pave the way for a later payoff.
Even if they manage to make a name, he warns, they’re likely to be ripped off by the entertainment-industrial complex—big book publishers, record companies, movie studios, Google, Apple and Microsoft. But they can monetize their creativity by, among other things, selling tickets to public shows, peddling “swag”—T-shirts, ball caps, posters and recordings—and taking commissions for new work.
He cites the example of a painter named Molly Crabapple, who, inspired by the Occupy Wall Street movement, raised $55,000 on the crowdsourcing site Kickstarter, rented a storefront and created nine huge canvases, seven of which she sold for $8,000 each. Not the easiest way to become the next Jeff Koons, Taylor Swift or Gillian Flynn.
But Mr. Doctorow turns out to be less interested in mentoring unrealized talent than in promulgating a new regime for copyright regulation on the Internet. Copyright has been enshrined in American law since 1790, but computer technology, he argues, has rendered the concept obsolete: “We can’t stop copying on the Internet because the Internet is a copying machine.” And the whole debate, he complains, “is filled with lies, damn lies and piracy statistics.”
There’s lots of technical stuff here about digital locks—he calls devices like the Kindle “roach motels” that allow content to be loaded but never offloaded elsewhere—as well as algorithms, embedded keys and such. And the book is clotted with acronyms: A diligent reader who finishes this slim volume should be able to pass a test on the meaning of ACTA, WIPO, WPPT, WCT, DMCA, DNS, SOPA and PIPA, not to mention NaTD (techspeak for “Notice and Take Down”).
The gist of Mr. Doctorow’s argument is that the bad guys of the content game use copyright protection and antipiracy protocols not to help creators but to enrich themselves at the expense of the talent and the consumers of content. Similarly, he contends that the crusade against “net neutrality”—the principle that Internet carriers must treat all data and users the same way—is actually a ploy to elevate big players in the digital world by turning the rest of us into second-class Netizens.
“The future of the Internet,” he writes, “should not be a fight about whether Google (or Apple or Microsoft) gets to be in charge or whether Hollywood gets to be in charge. Left to their own devices, Big Tech and Big Content are perfectly capable of coming up with a position that keeps both ‘sides’ happy at the expense of everyone else.”…”
Measuring the Impact of Public Innovation in the Wild
Beth Noveck at Governing: “With complex, seemingly intractable problems such as inequality, climate change and affordable access to health care plaguing contemporary society, traditional institutions such as government agencies and nonprofit organizations often lack strategies for tackling them effectively and legitimately. For this reason, this year the MacArthur Foundation launched its Research Network on Opening Governance.
The Network, which I chair and which also is supported by Google.org, is what MacArthur calls a “research institution without walls.” It brings together a dozen researchers across universities and disciplines, with an advisory network of academics, technologists, and current and former government officials, to study new ways of addressing public problems using advances in science and technology.
Through regular meetings and collaborative projects, the Network is exploring, for example, the latest techniques for more open and transparent decision-making, the uses of data to transform how we govern, and the identification of an individual’s skills and experiences to improve collaborative problem-solving between government and citizen.
One of the central questions we are grappling with is how to accelerate the pace of research so we can learn better and faster when an innovation in governance works — for whom, in which contexts and under which conditions. With better methods for doing fast-cycle research in collaboration with government — in the wild, not in the lab — our hope is to be able to predict with accuracy, not just know after the fact, whether innovations such as opening up an agency’s data or consulting with citizens using a crowdsourcing platform are likely to result in real improvements in people’s lives.
An example of such an experiment is the work that members of the Network are undertaking with the Food and Drug Administration. As one of its duties, the FDA manages the process of pre-market approval of medical devices to ensure that patients and providers have timely access to safe, effective and high-quality technology, as well as the post-market review of medical devices to ensure that unsafe ones are identified and recalled from the market. In both of these contexts, the FDA seeks to provide the medical-device industry with productive, consistent, transparent and efficient regulatory pathways.
With thousands of devices, many of them employing cutting-edge technology, to examine each year, the FDA is faced with the challenge of finding the right internal and external expertise to help it quickly study a device’s safety and efficacy. Done right, lives can be saved and companies can prosper from bringing innovations quickly to market. Done wrong, bad devices can kill…”
Code of Conduct: Cyber Crowdsourcing for Good
Patrick Meier at iRevolution: “There is currently no unified code of conduct for digital crowdsourcing efforts in the development, humanitarian or human rights space. As such, we propose the following principles (displayed below) as a way to catalyze a conversation on these issues and to improve and/or expand this Code of Conduct as appropriate.
This initial draft was put together by Kate Chapman, Brooke Simons and myself. The link above points to this open, editable Google Doc. So please feel free to contribute your thoughts by inserting comments where appropriate. Thank you.
An organization that launches a digital crowdsourcing project must:
- Provide clear volunteer guidelines on how to participate in the project so that volunteers are able to contribute meaningfully.
- Test their crowdsourcing platform prior to any project or pilot to ensure that the system will not crash due to obvious bugs.
- Disclose the purpose of the project, exactly which entities will be using and/or have access to the resulting data, to what end exactly, over what period of time and what the expected impact of the project is likely to be.
- Disclose whether volunteer contributions to the project will or may be used as training data in subsequent machine learning research.
- ….
An organization that launches a digital crowdsourcing project should:
- Share as much of the resulting data with volunteers as possible without violating data privacy or the principle of Do No Harm.
- Enable volunteers to opt out of having their contributions used in subsequent machine learning research.
- … “