Every citizen can be a scientist
The SOCIENTIZE project is helping to usher in new advances in everything from astronomy to social science.
‘One breakthrough is our increased capacity to reproduce, analyse and understand complex issues thanks to the engagement of large groups of volunteers,’ says Mr Fermin Serrano Sanz, researcher at the University of Zaragoza and Project Coordinator of SOCIENTIZE. ‘And everyone can be a neuron in our digitally-enabled brain.’
But how can ordinary citizens help with such extraordinary science? The key, says Mr Serrano Sanz, is in harnessing the efforts of thousands of volunteers to collect and classify data. ‘We are already gathering huge amounts of user-generated data from the participants using their mobile phones and surrounding knowledge,’ he says.
For example, the experiment ‘SavingEnergy@Home’ asks users to submit data about the temperatures in their homes and neighbourhoods in order to build up a clearer picture of temperatures in cities across the EU, while in Spain, GripeNet.es asks citizens to report when they catch the flu in order to monitor outbreaks and predict possible epidemics.
Many Hands Make Light Work
But citizens can also help analyse data. Even the most advanced computers are not very good at recognising things like sunspots or cells, whereas people can tell the difference between living and dying cells very easily after only a short training.
The SOCIENTIZE projects ‘Sun4All’ and ‘Cell Spotting’ ask volunteers to label images of solar activity and cancer cells from an application on their phone or computer. With Cell Spotting, for instance, participants can observe cell cultures being studied with a microscope in order to determine their state and the effectiveness of medicines. Analysing this data would take years and cost hundreds of thousands of euros if left to a small team of scientists – but with thousands of volunteers helping the effort, researchers can make important breakthroughs quickly and more cheaply than ever before.
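The article does not describe how Cell Spotting reconciles disagreement among volunteers, but projects of this kind typically collect several independent labels per image and keep only the consensus. Below is a minimal sketch of that idea; the labels, vote threshold and function names are illustrative, not the project's actual pipeline:

```python
from collections import Counter

def aggregate_labels(votes_by_image, min_votes=5):
    """Combine redundant volunteer labels by simple majority vote.

    votes_by_image: dict mapping an image id to the list of labels
    volunteers assigned it. Returns image id -> (consensus, agreement);
    images with fewer than min_votes labels are held back for more voting.
    """
    consensus = {}
    for image_id, votes in votes_by_image.items():
        if len(votes) < min_votes:
            continue  # not enough volunteer opinions yet
        label, count = Counter(votes).most_common(1)[0]
        consensus[image_id] = (label, count / len(votes))
    return consensus

votes = {  # invented sample data
    "culture-17": ["living", "living", "dying", "living", "living"],
    "culture-18": ["dying", "dying"],  # held back: fewer than 5 votes
}
print(aggregate_labels(votes))  # {'culture-17': ('living', 0.8)}
```

Requiring a minimum number of votes trades throughput for reliability; production platforms often go further and weight volunteers by their past accuracy.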
But in addition to bringing citizens closer to science, SOCIENTIZE also brings science closer to citizens. On 12-14 June, the project participated in the SONAR festival with ‘A Collective Music Experiment’ (CME). ‘Two hundred people joined professional DJs and created musical patterns using a web tool; participants shared their creations and re-used other parts in real time. The activity in the festival also included a live show of RdeRumba and Mercadal playing amateurs’ rhythms,’ Mr Serrano Sanz explains.
The experiment – which will be presented in a mini-documentary to raise awareness about citizen science – is expected to help researchers understand other innovation processes observed in emergent social, technological, economic or political transformations. ‘This kind of event brings together a really diverse set of participants. The diversity does not only enrich the data; it improves the dialogue between professionals and volunteers. As a result, we see some new and innovative approaches to research.’
The EUR 0.7 million project brings together 6 partners from 4 countries: Spain (University of Zaragoza and TECNARA), Portugal (Museu da Ciência-Coimbra, MUSC; Universidade de Coimbra), Austria (Zentrum für Soziale Innovation) and Brazil (Universidade Federal de Campina Grande, UFCG).
SOCIENTIZE will end in October 2014, after bringing together 12,000 citizens in different phases of its research activities over 24 months.
Government, Foundations Turn to Cash Prizes to Generate Solutions
Megan O’Neil at the Chronicle of Philanthropy: “Government agencies and philanthropic organizations are increasingly staging competitions as a way to generate interest in solving difficult technological, social, and environmental problems, according to a new report.
“The Craft of Prize Design: Lessons From the Public Sector” found that well-designed competitions backed by cash incentives can help organizations attract new ideas, mobilize action, and stimulate markets.
“Incentive prizes have transformed from an exotic open innovation tool to a proven innovation strategy for the public, private, and philanthropic sectors,” the report says.
Produced by Deloitte Consulting’s innovation practice, the report was financially supported by Bloomberg Philanthropies and the Case; Joyce; John S. and James L. Knight; Kresge; and Rockefeller foundations.
The federal government has staged more than 350 prize competitions during the past five years to stimulate innovation and crowdsource solutions, according to the report. And philanthropic organizations are also fronting prizes for competitions promoting innovative responses to questions such as how to strengthen communities and encourage sustainable energy consumption.
One example cited by the report is the Talent Dividend Prize, sponsored by CEOs for Cities and the Kresge Foundation, which awards $1 million to the city that most increases its college graduation rate during a four-year period. A second example is the MIT Clean Energy Prize, co-sponsored by the U.S. Department of Energy, which offered a total of $1 million in prize money. Submissions generated $85 million in capital and research grants, according to the report.
A prize-based project should not be adopted when an established approach to solve a problem already exists or if potential participants don’t have the interest or time to work on solving a problem, the report concludes. Instead, prize designers must gauge the capacity of potential participants before announcing a prize, and make sure that it will spur the discovery of new solutions.”
How collective intelligence emerges: knowledge creation process in Wikipedia from microscopic viewpoint
Kyungho Lee for the 2014 International Working Conference on Advanced Visual Interfaces: “The Wikipedia, one of the richest human knowledge repositories on the Internet, has been developed by collective intelligence. To gain insight into Wikipedia, one asks how initial ideas emerge and develop to become a concrete article through the online collaborative process. Led by this question, the author performed a microscopic observation of the knowledge creation process on the recent article, “Fukushima Daiichi nuclear disaster.” The author collected not only the revision history of the article but also investigated interactions between collaborators by making a user-paragraph network to reveal the intellectual interventions of multiple authors. The knowledge creation process on the Wikipedia article was categorized into 4 major steps and 6 phases from the beginning to the intellectual balance point where only revisions were made. To represent this phenomenon, the author developed a visaphor (digital visual metaphor) to digitally represent the article’s evolving concepts and characteristics. Then the author created a dynamic digital information visualization using particle effects and network graph structures. The visaphor reveals the interaction between users and their collaborative efforts as they created and revised paragraphs and debated aspects of the article.”
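The excerpt does not include the author’s code, so the sketch below only illustrates how such a user-paragraph network might be assembled from a revision history; the record fields (`user`, `paragraphs`) are hypothetical stand-ins for whatever was actually extracted from the article’s edit log.

```python
import networkx as nx  # assumed dependency for the graph structure

def build_user_paragraph_network(revisions):
    """Build a bipartite user-paragraph graph from a revision history.

    revisions: iterable of dicts with hypothetical fields
    {"user": editor name, "paragraphs": ids of paragraphs touched}.
    Edge weights count how often each user edited each paragraph.
    """
    g = nx.Graph()
    for rev in revisions:
        user = ("user", rev["user"])
        g.add_node(user, bipartite=0)
        for pid in rev["paragraphs"]:
            para = ("para", pid)
            g.add_node(para, bipartite=1)
            weight = g.get_edge_data(user, para, {"weight": 0})["weight"]
            g.add_edge(user, para, weight=weight + 1)
    return g

history = [  # invented sample revisions
    {"user": "EditorA", "paragraphs": [1, 2]},
    {"user": "EditorB", "paragraphs": [2]},
    {"user": "EditorA", "paragraphs": [2]},
]
g = build_user_paragraph_network(history)
print(g[("user", "EditorA")][("para", 2)]["weight"])  # 2
```

Projecting such a graph onto its user side (linking two editors whenever they touched the same paragraph) is one way to surface the collaborator interactions the paper visualizes.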
Crowdsourcing moving beyond the fringe
Bob Brown in Network World: “Depending upon how you look at it, crowdsourcing is all the rage these days — think Wikipedia, X Prize and Kickstarter — or at the other extreme, greatly underused.
To the team behind the new “insight network” Yegii, crowdsourcing has not nearly reached its potential despite having its roots as far back as the early 1700s and a famous case of the British Government seeking a solution to “The Longitude Problem” in order to make sailing less life-threatening. (I get the impression that mention of this example is obligatory at any crowdsourcing event.)
This angel-funded startup, headed by an MIT Sloan School of Management senior lecturer and operating from a Boston suburb, is looking to exploit crowdsourcing’s potential through a service that connects financial, healthcare, technology and other organizations seeking knowledge with experts who can provide it – and fairly fast. To CEO Trond Undheim, crowdsourcing is “no longer for fringe freelance work,” and the goal is to get more organizations and smart individuals involved.
“Yegii is essentially a network of networks, connecting people, organizations, and knowledge in new ways,” says Undheim, who explains that the name Yegii is Korean for “talk” or “discussion”. “Our focus is laser sharp: we only rank and rate knowledge that says something essential about what I see as the four forces of industry disruption: technology, policy, user dynamics and business models. We tackle challenging business issues across domains, from life sciences to energy to finance. The point is that today’s industry classification is falling apart. We need more specific insight than in-house strategizing or generalist consulting advice.”
Undheim attempted to drum up interest in the new business last week at an event at Babson College during which a handful of crowdsourcing experts spoke. Harvard Business School adjunct professor Alan MacCormack discussed the X Prize, Netflix Prize and other examples of spurring competition through crowdsourcing. MIT’s Peter Gloor extolled the virtue of collaborative and smart swarms of people vs. stupid crowds (such as football hooligans). A couple of advertising/marketing execs shared stories of how clients and other brands are increasingly tapping into their customer base and the general public for new ideas from slogans to products, figuring that potential new customers are more likely to trust their peers than corporate ads. Another speaker dove into more details about how to run a crowdsourcing challenge, which includes identifying motivation that goes beyond money.
All of this was to frame Yegii’s crowdsourcing plan, which is at the beta stage with about a dozen clients (including Akamai and Santander bank) and is slated for mass production later this year. Yegii’s team consists of five part-timers, plus a few interns, who are building a web-based platform that consists of “knowledge assets,” that is, market research, news reports and datasets from free and paid sources. That content – on topics that range from Bitcoin’s impact on banks to telecom bandwidth costs – is reviewed and ranked through a combination of machine learning and human peers. Information seekers would pay Yegii up to hundreds of dollars per month or up to tens of thousands of dollars per project, and then multidisciplinary teams would accept the challenge of answering their questions via customized reports within staged deadlines.
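The article does not say how the machine and human judgments are combined, so the following is only a plausible sketch: a weighted blend of a model’s relevance score with an average peer rating. Every name, scale and weight here is invented for illustration.

```python
def asset_score(ml_relevance, peer_ratings, ml_weight=0.4):
    """Blend a model's 0-1 relevance score with 1-5 human peer ratings.

    ml_relevance: hypothetical output of a learned ranking model.
    peer_ratings: star ratings (1-5) from human reviewers.
    The 0.4/0.6 split between machine and peers is arbitrary.
    """
    if not peer_ratings:
        return ml_relevance  # no reviews yet: fall back to the model alone
    peer = (sum(peer_ratings) / len(peer_ratings) - 1) / 4  # rescale to 0-1
    return ml_weight * ml_relevance + (1 - ml_weight) * peer

# Rank two hypothetical knowledge assets for an information seeker.
assets = {
    "bitcoin-impact-on-banks-report": asset_score(0.82, [4, 5, 4]),
    "telecom-bandwidth-cost-dataset": asset_score(0.91, [2, 3]),
}
print(sorted(assets, key=assets.get, reverse=True))
```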
“We are focused on building partnerships with other expert networks and associations that have access to smart people with spare capacity, wherever they are,” Undheim says.
One reason organizations can benefit from crowdsourcing, Undheim says, is because of the “ephemeral nature of expertise in today’s society.” In other words, people within your organization might think of themselves as experts in this or that, but when they really think about it, they might realize their level of expertise has faded. Yegii will strive to narrow down the best sources of information for those looking to come up to speed on a subject over a weekend, whereas hunting for that information across a vast search engine would not be nearly as efficient….”
Let's amplify California's collective intelligence
Gavin Newsom and Ken Goldberg at SFGate: “Although the results of last week’s primary election are still being certified, we already know that voter turnout was among the lowest in California’s history. Pundits will rant about the “cynical electorate” and wag a finger at disengaged voters shirking their democratic duties, but we see the low turnout as a symptom of broader forces that affect how people and government interact.
The methods used to find out what citizens think and believe are limited to elections, opinion polls, surveys and focus groups. These methods may produce valuable information, but they are costly, infrequent and often conducted at the convenience of government or special interests.
We believe that new technology has the potential to increase public engagement by tapping the collective intelligence of Californians every day, not just on election day.
While most politicians already use e-mail and social media, these channels are easily dominated by extreme views and tend to regurgitate material from mass media outlets.
We’re exploring an alternative.
The California Report Card is a mobile-friendly web-based platform that streamlines and organizes public input for the benefit of policymakers and elected officials. The report card allows participants to assign letter grades to key issues and to suggest new ideas for consideration; public officials then can use that information to inform their decisions.
In an experimental version of the report card released earlier this year, residents from all 58 counties assigned more than 20,000 grades to the state of California and also suggested issues they feel deserve priority at the state level. As one participant noted: “This platform allows us to have our voices heard. The ability to review and grade what others suggest is important. It enables elected officials to hear directly how Californians feel.”
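The op-ed does not explain how those 20,000 letter grades are summarized, but a natural reading is a GPA-style average per issue. Here is a toy sketch of that aggregation, with invented grades for the two issue areas the authors mention:

```python
# Hypothetical sketch: rolling letter grades up into a GPA-style average.
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def average_grade(letter_grades):
    """Average letter grades on a 4-point scale; return (letter, points)."""
    points = sum(GRADE_POINTS[g] for g in letter_grades) / len(letter_grades)
    # report the letter whose point value is closest to the average
    closest = min(GRADE_POINTS, key=lambda g: abs(GRADE_POINTS[g] - points))
    return closest, points

grades_by_issue = {  # invented sample data
    "Implementation of the Affordable Care Act": ["A", "B", "B", "A"],
    "K-12 education": ["C", "D", "C", "B"],
}
for issue, grades in grades_by_issue.items():
    letter, points = average_grade(grades)
    print(f"{issue}: {letter} ({points:.2f})")
```

A production system would presumably also need to guard against regional skew so one county’s enthusiasts cannot dominate a statewide grade; the op-ed does not say how the Report Card handles this.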
Initial data confirm that Californians approve of our state’s rollout of Obamacare, but are very concerned about the future of our schools and universities.
There was also a surprise. California Report Card suggestions for top state priorities revealed consistently strong interest and support for more attention to disaster preparedness. Issues related to this topic were graded as highly important by a broad cross section of participants across the state. In response, we’re testing new versions of the report card that can focus on topics related to wildfires and earthquakes.
The report card is part of an ongoing collaboration between the CITRIS Data and Democracy Initiative at UC Berkeley and the Office of the Lieutenant Governor to explore how technology can improve public communication and bring the government closer to the people. Our hunch is that engineering concepts can be adapted for public policy to rapidly identify real insights from constituents and resist gaming by special interests.
You don’t have to wait for the next election to have your voice heard by officials in Sacramento. The California Report Card is now accessible from cell phones, desktops and tablet computers. We encourage you to contribute your own ideas to amplify California’s collective intelligence. It’s easy – just click “participate” at CaliforniaReportCard.org.”
Crowdsourcing and social search
Lyndsey Gilpin at TechCrunch: “When we think of the sharing economy, what often comes to mind are sites like Airbnb, Lyft, or Feastly — the platforms that allow us to meet people for a specific reason, whether that’s a place to stay, a ride, or a meal.
But what about sharing something much simpler than that, like answers to our questions about the world around us? Sharing knowledge with strangers can offer us insight into a place we are curious about or trying to navigate, and in a more personal, efficient way than using traditional web searches.
“Sharing an answer or response to a question, that is true sharing. There’s no financial or monetary exchange based on that. It’s the true meaning of [the word],” said Maxime Leroy, co-founder and CEO of a new app called Enquire.
Enquire is a new question-and-answer app, but it is unlike others in the space. You don’t have to log in via Facebook or Twitter, use SMS messaging like on Quest, or upload an image like you do on Jelly. None of these apps have taken off yet, which could be good or bad for Enquire just entering the space.
With Enquire, simply log in with a username and password and it will unlock the neighborhood you are in (the app only works in San Francisco, New York, and Paris right now). There are lists of answers to other questions, or you can post your own. If 200 people in a city sign up, the app will become available to them, which is an effort to make sure there is a strong community to gather answers from.
Leroy, who recently made a documentary about the sharing economy, realized there was “one tool missing for local communities” in the space, and decided to create this app.
“We want to build a more local-based network, and empower and increase trust without having people share all their identity,” he said.
Different social channels look at search in different ways, but the trend is definitely moving toward more social searching or location-based searching, according to Altimeter social media analyst Rebecca Lieb. Arguably, she said, Yelp, Groupon, and even Google Maps are vertical search engines. If you want to find a nearby restaurant, pharmacy, or deal, you look to these platforms.
However, she credits Aardvark as one of the first in the space, which was a social search engine founded in 2007 that used instant messaging and email to get answers from your existing contacts. Google bought the company in 2010. It shows the idea of crowdsourcing answers isn’t new, but the engines have become “appified,” she said.
“Now it’s geo-local specific,” she said. “We’re asking a lot more of those geo-local questions because of location-based immediacy [that we want].”
Think Seamless, with which you find the food nearby that most satisfies your appetite. Even Tinder and Grindr are social search engines, Lieb said. You want to meet up with the people that are closest to you, geographically….
His challenge is to offer rewards to entice people to sign up for the app. Eventually, Leroy would like to strengthen the networks and scale Enquire to cities and neighborhoods all over the world. Once that’s in place, people can start creating their own neighborhoods — around a school or workplace, where they hang out regularly — instead of using the existing constraints.
“I may be an expert in one area, and a newbie in another. I want to emphasize the activity and content from users to give them credit to other users and build that trust,” he said.
Usually, our first instinct is to open Yelp to find the best sushi restaurant or Google to search for the closest concert venue, and it will probably stay that way for some time. But the idea that the opinions and insights of other human beings, even strangers, are becoming much more valuable because of the internet is not far-fetched.
Admit it: haven’t you had a fleeting thought of starting a Kickstarter campaign for an idea? Looked for a cheaper place to stay on Airbnb than that hotel you normally book in New York? Or considered financing someone’s business idea across the world using Kiva? If so, then you’ve engaged in social search.
Suddenly, crowdsourcing answers for the things that pique your interest on your morning walk may not seem so strange after all.”
Selected Readings on Crowdsourcing Tasks and Peer Production
The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of crowdsourcing was originally published in 2014.
Technological advances are creating a new paradigm by which institutions and organizations are increasingly outsourcing tasks to an open community, allocating specific needs to a flexible, willing and dispersed workforce. “Microtasking” platforms like Amazon’s Mechanical Turk are a burgeoning source of income for individuals who contribute their time, skills and knowledge on a per-task basis. In parallel, citizen science projects – task-based initiatives in which citizens of any background can help contribute to scientific research – like Galaxy Zoo are demonstrating the ability of lay and expert citizens alike to make small, useful contributions to aid large, complex undertakings. As governing institutions seek to do more with less, looking to the success of citizen science and microtasking initiatives could provide a blueprint for engaging citizens to help accomplish difficult, time-consuming objectives at little cost. Moreover, the incredible success of peer-production projects – best exemplified by Wikipedia – instills optimism regarding the public’s willingness and ability to complete relatively small tasks that feed into a greater whole and benefit the public good. You can learn more about this new wave of “collective intelligence” by following the MIT Center for Collective Intelligence and their annual Collective Intelligence Conference.
Selected Reading List (in alphabetical order)
- Yochai Benkler — The Wealth of Networks: How Social Production Transforms Markets and Freedom — a book on the ways commons-based peer-production is transforming modern society.
- Daren C. Brabham — Using Crowdsourcing in Government — a report describing the diversity of methods by which crowdsourcing could be better utilized by governments, including through the leveraging of microtasking platforms.
- Kevin J. Boudreau, Patrick Gaule, Karim Lakhani, Christoph Riedl, Anita Williams Woolley – From Crowds to Collaborators: Initiating Effort & Catalyzing Interactions Among Online Creative Workers – a working paper exploring the conditions, including incentives, that affect online collaboration.
- Chiara Franzoni and Henry Sauermann — Crowd Science: The Organization of Scientific Research in Open Collaborative Projects — a paper describing the potential advantages of deploying crowd science in a variety of contexts.
- Aniket Kittur, Ed H. Chi and Bongwon Suh — Crowdsourcing User Studies with Mechanical Turk — a paper proposing potential benefits beyond simple task completion for microtasking platforms like Mechanical Turk.
- Aniket Kittur, Jeffrey V. Nickerson, Michael S. Bernstein, Elizabeth M. Gerber, Aaron Shaw, John Zimmerman, Matthew Lease, and John J. Horton — The Future of Crowd Work — a paper describing how increased and evolved crowd work could affect the global economy.
- Michael J. Madison — Commons at the Intersection of Peer Production, Citizen Science, and Big Data: Galaxy Zoo — an in-depth case study of the Galaxy Zoo containing insights regarding the importance of clear objectives and institutional and/or professional collaboration in citizen science initiatives.
- Thomas W. Malone, Robert Laubacher and Chrysanthos Dellarocas – Harnessing Crowds: Mapping the Genome of Collective Intelligence – an article proposing a framework for understanding collective intelligence efforts.
- Geoff Mulgan – True Collective Intelligence? A Sketch of a Possible New Field – a paper proposing theoretical building blocks and an experimental and research agenda around the field of collective intelligence.
- Henry Sauermann and Chiara Franzoni – Participation Dynamics in Crowd-Based Knowledge Production: The Scope and Sustainability of Interest-Based Motivation – a paper exploring the role of interest-based motivation in collaborative knowledge production.
- Catherine E. Schmitt-Sands and Richard J. Smith – Prospects for Online Crowdsourcing of Social Science Research Tasks: A Case Study Using Amazon Mechanical Turk – an article describing an experiment using Mechanical Turk to crowdsource public policy research microtasks.
- Clay Shirky — Here Comes Everybody: The Power of Organizing Without Organizations — a book exploring the ways largely unstructured collaboration is remaking practically all sectors of modern life.
- Jonathan Silvertown — A New Dawn for Citizen Science — a paper examining the diverse factors influencing the emerging paradigm of “science by the people.”
- Katarzyna Szkuta, Roberto Pizzicannella, David Osimo – Collaborative approaches to public sector innovation: A scoping study – an article studying success factors and incentives around the collaborative delivery of online public services.
Annotated Selected Reading List (in alphabetical order)
Benkler, Yochai. The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press, 2006. http://bit.ly/1aaU7Yb.
- In this book, Benkler “describes how patterns of information, knowledge, and cultural production are changing – and shows that the way information and knowledge are made available can either limit or enlarge the ways people can create and express themselves.”
- In his discussion on Wikipedia – one of many paradigmatic examples of people collaborating without financial reward – he calls attention to the notable ongoing cooperation taking place among a diversity of individuals. He argues that, “The important point is that Wikipedia requires not only mechanical cooperation among people, but a commitment to a particular style of writing and describing concepts that is far from intuitive or natural to people. It requires self-discipline. It enforces the behavior it requires primarily through appeal to the common enterprise that the participants are engaged in…”
Brabham, Daren C. Using Crowdsourcing in Government. Collaborating Across Boundaries Series. IBM Center for The Business of Government, 2013. http://bit.ly/17gzBTA.
- In this report, Brabham categorizes government crowdsourcing cases into a “four-part, problem-based typology, encouraging government leaders and public administrators to consider these open problem-solving techniques as a way to engage the public and tackle difficult policy and administrative tasks more effectively and efficiently using online communities.”
- The proposed four-part typology describes the following types of crowdsourcing in government:
- Knowledge Discovery and Management
- Distributed Human Intelligence Tasking
- Broadcast Search
- Peer-Vetted Creative Production
- In his discussion on Distributed Human Intelligence Tasking, Brabham argues that Amazon’s Mechanical Turk and other microtasking platforms could be useful in a number of governance scenarios, including:
- Governments and scholars transcribing historical document scans
- Public health departments translating health campaign materials into foreign languages to benefit constituents who do not speak the native language
- Governments translating tax documents, school enrollment and immunization brochures, and other important materials into minority languages
- Helping governments predict citizens’ behavior, “such as for predicting their use of public transit or other services or for predicting behaviors that could inform public health practitioners and environmental policy makers”
Boudreau, Kevin J., Patrick Gaule, Karim Lakhani, Christoph Riedl, Anita Williams Woolley. “From Crowds to Collaborators: Initiating Effort & Catalyzing Interactions Among Online Creative Workers.” Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 14-060. January 23, 2014. https://bit.ly/2QVmGUu.
- In this working paper, the authors explore the “conditions necessary for eliciting effort from those affecting the quality of interdependent teamwork” and “consider the role of incentives versus social processes in catalyzing collaboration.”
- The paper’s findings are based on an experiment involving 260 individuals randomly assigned to 52 teams working toward solutions to a complex problem.
- The authors determined that the level of effort in such collaborative undertakings is sensitive to cash incentives. However, collaboration among teams was driven more by the active participation of teammates than by any monetary reward.
Franzoni, Chiara, and Henry Sauermann. “Crowd Science: The Organization of Scientific Research in Open Collaborative Projects.” Research Policy (August 14, 2013). http://bit.ly/HihFyj.
- In this paper, the authors explore the concept of crowd science, which they define based on two important features: “participation in a project is open to a wide base of potential contributors, and intermediate inputs such as data or problem solving algorithms are made openly available.” The rationale for their study and conceptual framework is the “growing attention from the scientific community, but also policy makers, funding agencies and managers who seek to evaluate its potential benefits and challenges. Based on the experiences of early crowd science projects, the opportunities are considerable.”
- Based on the study of a number of crowd science projects – including governance-related initiatives like PatientsLikeMe – the authors identify a number of potential benefits in the following categories:
- Knowledge-related benefits
- Benefits from open participation
- Benefits from the open disclosure of intermediate inputs
- Motivational benefits
- The authors also identify a number of challenges:
- Organizational challenges
- Matching projects and people
- Division of labor and integration of contributions
- Project leadership
- Motivational challenges
- Sustaining contributor involvement
- Supporting a broader set of motivations
- Reconciling conflicting motivations
Kittur, Aniket, Ed H. Chi, and Bongwon Suh. “Crowdsourcing User Studies with Mechanical Turk.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 453–456. CHI ’08. New York, NY, USA: ACM, 2008. http://bit.ly/1a3Op48.
- In this paper, the authors examine “[m]icro-task markets, such as Amazon’s Mechanical Turk, [which] offer a potential paradigm for engaging a large number of users for low time and monetary costs. [They] investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks.”
- The authors conclude that in addition to providing a means for crowdsourcing small, clearly defined, often non-skill-intensive tasks, “Micro-task markets such as Amazon’s Mechanical Turk are promising platforms for conducting a variety of user study tasks, ranging from surveys to rapid prototyping to quantitative measures. Hundreds of users can be recruited for highly interactive tasks for marginal costs within a timeframe of days or even minutes. However, special care must be taken in the design of the task, especially for user measurements that are subjective or qualitative.”
Kittur, Aniket, Jeffrey V. Nickerson, Michael S. Bernstein, Elizabeth M. Gerber, Aaron Shaw, John Zimmerman, Matthew Lease, and John J. Horton. “The Future of Crowd Work.” In 16th ACM Conference on Computer Supported Cooperative Work (CSCW 2013), 2012. http://bit.ly/1c1GJD3.
- In this paper, the authors discuss paid crowd work, which “offers remarkable opportunities for improving productivity, social mobility, and the global economy by engaging a geographically distributed workforce to complete complex tasks on demand and at scale.” However, they caution that, “it is also possible that crowd work will fail to achieve its potential, focusing on assembly-line piecework.”
- The authors argue that a number of key challenges must be met to ensure that crowd work processes evolve and reach their full potential:
- Designing workflows
- Assigning tasks
- Supporting hierarchical structure
- Enabling real-time crowd work
- Supporting synchronous collaboration
- Controlling quality
Madison, Michael J. “Commons at the Intersection of Peer Production, Citizen Science, and Big Data: Galaxy Zoo.” In Convening Cultural Commons, 2013. http://bit.ly/1ih9Xzm.
- This paper explores a “case of commons governance grounded in research in modern astronomy. The case, Galaxy Zoo, is a leading example of at least three different contemporary phenomena. In the first place, Galaxy Zoo is a global citizen science project, in which volunteer non-scientists have been recruited to participate in large-scale data analysis on the Internet. In the second place, Galaxy Zoo is a highly successful example of peer production, sometimes known as crowdsourcing…In the third place, Galaxy Zoo is a highly visible example of data-intensive science, sometimes referred to as e-science or Big Data science, by which scientific researchers develop methods to grapple with the massive volumes of digital data now available to them via modern sensing and imaging technologies.”
- Madison concludes that the success of Galaxy Zoo has not been the result of the “character of its information resources (scientific data) and rules regarding their usage,” but rather, the fact that the “community was guided from the outset by a vision of a specific organizational solution to a specific research problem in astronomy, initiated and governed, over time, by professional astronomers in collaboration with their expanding universe of volunteers.”
Malone, Thomas W., Robert Laubacher and Chrysanthos Dellarocas. “Harnessing Crowds: Mapping the Genome of Collective Intelligence.” MIT Sloan Research Paper. February 3, 2009. https://bit.ly/2SPjxTP.
- In this article, the authors describe and map the phenomenon of collective intelligence – also referred to as “radical decentralization, crowd-sourcing, wisdom of crowds, peer production, and wikinomics” – which they broadly define as “groups of individuals doing things collectively that seem intelligent.”
- The article is derived from the authors’ work at MIT’s Center for Collective Intelligence, where they gathered nearly 250 examples of Web-enabled collective intelligence. To map the building blocks or “genes” of collective intelligence, the authors used two pairs of related questions:
- Who is performing the task? Why are they doing it?
- What is being accomplished? How is it being done?
- The authors concede that much work remains to be done “to identify all the different genes for collective intelligence, the conditions under which these genes are useful, and the constraints governing how they can be combined,” but they believe that their framework provides a useful start and gives managers and other institutional decisionmakers looking to take advantage of collective intelligence activities the ability to “systematically consider many possible combinations of answers to questions about Who, Why, What, and How.”
Mulgan, Geoff. “True Collective Intelligence? A Sketch of a Possible New Field.” Philosophy & Technology 27, no. 1. March 2014. http://bit.ly/1p3YSdd.
- In this paper, Mulgan explores the concept of collective intelligence, a “much talked about but…very underdeveloped” field.
- With a particular focus on health knowledge, Mulgan “sets out some of the potential theoretical building blocks, suggests an experimental and research agenda, shows how it could be analysed within an organisation or business sector and points to possible intellectual barriers to progress.”
- He concludes that the “central message that comes from observing real intelligence is that intelligence has to be for something,” and that “turning this simple insight – the stuff of so many science fiction stories – into new theories, new technologies and new applications looks set to be one of the most exciting prospects of the next few years and may help give shape to a new discipline that helps us to be collectively intelligent about our own collective intelligence.”
Sauermann, Henry and Chiara Franzoni. “Participation Dynamics in Crowd-Based Knowledge Production: The Scope and Sustainability of Interest-Based Motivation.” SSRN Working Papers Series. November 28, 2013. http://bit.ly/1o6YB7f.
- In this paper, Sauermann and Franzoni explore the issue of interest-based motivation in crowd-based knowledge production – in particular the use of the crowd science platform Zooniverse – by drawing on “research in psychology to discuss important static and dynamic features of interest and deriv[ing] a number of research questions.”
- The authors find that interest-based motivation is often tied to a “particular object (e.g., task, project, topic)” rather than being a “general trait of the person or a general characteristic of the object.” As such, they find that “most members of the installed base of users on the platform do not sign up for multiple projects, and most of those who try out a project do not return.”
- They conclude that “interest can be a powerful motivator of individuals’ contributions to crowd-based knowledge production…However, both the scope and sustainability of this interest appear to be rather limited for the large majority of contributors…At the same time, some individuals show a strong and more enduring interest to participate both within and across projects, and these contributors are ultimately responsible for much of what crowd science projects are able to accomplish.”
Schmitt-Sands, Catherine E. and Richard J. Smith. “Prospects for Online Crowdsourcing of Social Science Research Tasks: A Case Study Using Amazon Mechanical Turk.” SSRN Working Papers Series. January 9, 2014. http://bit.ly/1ugaYja.
- In this paper, the authors describe an experiment involving the nascent use of Amazon’s Mechanical Turk as a social science research tool. “While researchers have used crowdsourcing to find research subjects or classify texts, [they] used Mechanical Turk to conduct a policy scan of local government websites.”
- Schmitt-Sands and Smith found that “crowdsourcing worked well for conducting an online policy program and scan.” The microtasked workers were helpful in screening out local governments that either did not have websites or did not have the types of policies and services for which the researchers were looking. However, “if the task is complicated such that it requires ongoing supervision, then crowdsourcing is not the best solution.”
Shirky, Clay. Here Comes Everybody: The Power of Organizing Without Organizations. New York: Penguin Press, 2008. https://bit.ly/2QysNif.
- In this book, Shirky explores our current era in which, “For the first time in history, the tools for cooperating on a global scale are not solely in the hands of governments or institutions. The spread of the Internet and mobile phones are changing how people come together and get things done.”
- Discussing Wikipedia’s “spontaneous division of labor,” Shirky argues that “the process is more like creating a coral reef, the sum of millions of individual actions, than creating a car. And the key to creating those individual actions is to hand as much freedom as possible to the average user.”
Silvertown, Jonathan. “A New Dawn for Citizen Science.” Trends in Ecology & Evolution 24, no. 9 (September 2009): 467–471. http://bit.ly/1iha6CR.
- This article discusses the move from “Science for the people,” a slogan adopted by activists in the 1970s, to “Science by the people,” which is “a more inclusive aim, and is becoming a distinctly 21st century phenomenon.”
- Silvertown identifies three factors that are responsible for the explosion of activity in citizen science, each of which could be similarly related to the crowdsourcing of skills by governing institutions:
- “First is the existence of easily available technical tools for disseminating information about products and gathering data from the public.
- A second factor driving the growth of citizen science is the increasing realisation among professional scientists that the public represent a free source of labour, skills, computational power and even finance.
- Third, citizen science is likely to benefit from the condition that research funders such as the National Science Foundation in the USA and the Natural Environment Research Council in the UK now impose upon every grantholder to undertake project-related science outreach. This is outreach as a form of public accountability.”
Szkuta, Katarzyna, Roberto Pizzicannella, David Osimo. “Collaborative approaches to public sector innovation: A scoping study.” Telecommunications Policy. 2014. http://bit.ly/1oBg9GY.
- In this article, the authors explore cases where government collaboratively delivers online public services, with a focus on success factors and “incentives for services providers, citizens as users and public administration.”
- The authors focus on the following types of collaborative governance projects:
- Services initiated by government built on government data;
- Services initiated by government and making use of citizens’ data;
- Services initiated by civil society built on open government data;
- Collaborative e-government services; and
- Services run by civil society and based on citizen data.
- The cases explored “are all designed in the way that effectively harnesses the citizens’ potential. Services susceptible to collaboration are those that require computing efforts, i.e. many non-complicated tasks (e.g. citizen science projects – Zooniverse) or citizens’ free time in general (e.g. time banks). Those services also profit from unique citizens’ skills and their propensity to share their competencies.”
Why Governments Should Adopt a Digital Engagement Strategy
Lindsay Crudele at StateTech: “Government agencies increasingly value digital engagement as a way to transform a complaint-based relationship into one of positive, proactive constituent empowerment. An engaged community is a stronger one.
Creating a culture of participatory government, as we strive to do in Boston, requires a data-driven infrastructure supported by IT solutions. Data management and analytics solutions translate a huge stream of social media data, drive conversations and creative crowdsourcing, and support transparency.
More than 50 departments across Boston host public conversations using a multichannel, multidisciplinary portfolio of accounts. We integrate these using an enterprise digital engagement management tool that connects and organizes them to break down silos and boost collaboration. Moreover, the technology provides a lens into ways to expedite workflow and improve service delivery.
A Vital Link in Times of Need
Committed and creative daily engagement builds trusting collaboration that, in turn, is vital in an inevitable crisis. As we saw during the tragic events of the 2013 Boston Marathon bombings and recent major weather events, rapid response through digital media clarifies the situation, provides information about safety and manages constituent expectations.
Boston’s enterprise model supports coordinated external communication and organized monitoring, intake and response. This provides a superadmin with access to all accounts for governance and the ability to easily amplify central messaging across a range of cultivated communities. These communities will later serve in recovery efforts.
The conversations must be seeded by a keen, creative and data-driven content strategy. For an agency to determine the correct strategy for the organization and the community it serves, a growing crop of social analytics tools can provide efficient insight into performance factors: type of content, deployment schedule, sentiment, service-based response time and team performance, to name a few. For example, in February, the city of Boston learned that tweets from our mayor with video saw 300 percent higher engagement than those without.
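Neither the analytics product nor its method is named in the article, but the arithmetic behind a figure like that 300 percent one is simple: compare engagement rates for posts with and without a given content feature. A toy sketch with made-up numbers:

```python
def engagement_rate(posts):
    """Total engagements per impression across a set of posts."""
    engagements = sum(p["engagements"] for p in posts)
    impressions = sum(p["impressions"] for p in posts)
    return engagements / impressions

def lift(posts, has_feature):
    """Percent lift in engagement rate for posts matching has_feature
    versus those that do not (e.g. tweets with video vs. without)."""
    matching = [p for p in posts if has_feature(p)]
    rest = [p for p in posts if not has_feature(p)]
    return 100 * (engagement_rate(matching) / engagement_rate(rest) - 1)

tweets = [  # invented sample data
    {"has_video": True,  "engagements": 120, "impressions": 1000},
    {"has_video": True,  "engagements": 90,  "impressions": 800},
    {"has_video": False, "engagements": 30,  "impressions": 1000},
    {"has_video": False, "engagements": 25,  "impressions": 900},
]
print(f"video lift: {lift(tweets, lambda p: p['has_video']):.0f}%")  # ~303%
```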
These insights can inform resource deployment, eliminating guesswork to more directly reach constituents by their preferred methods. Being truly present in a conversation demonstrates care and awareness and builds trust. This increased positivity can be measured through sentiment analysis, including change over time, and should be monitored for fluctuation.
During a major event, engagement managers may see activity reach new peaks in volume. IT solutions can interpret Big Data and bring a large-scale digital conversation back into perspective, identifying public safety alerts and emerging trends, needs and community influencers who can be engaged as amplifying partners.
Running Strong One Year Later
Throughout the 2014 Boston Marathon, we used three monitoring tools to deliver smart alerts to key partners across the organization:
• An engagement management tool organized conversations for account performance and monitoring.
• A brand listening tool scanned for emerging trends across the city and uncovered related conversations.
• A location-based predictive tool identified early alerts to discover potential problems along the marathon route.
With the team and tools in place, policy-based training supports the sustained growth and operation of these conversation channels. A data-driven engagement strategy unearths all of our stories, where we, as public servants and neighbors, build better communities together….”
Lessons in Mass Collaboration
Elizabeth Walker, Ryan Siegel, Todd Khozein, Nick Skytland, Ali Llewellyn, Thea Aldrich, and Michael Brennan in the Stanford Social Innovation Review: “Significant advances in technology in the last two decades have opened possibilities to engage the masses in ways impossible to imagine centuries ago. Beyond coordination, today’s technological capability permits organizations to leverage and focus public interest, talent, and energy through mass collaborative engagement to better understand and solve today’s challenges. And given the rising public awareness of a variety of social, economic, and environmental problems, organizations have seized the opportunity to leverage and lead mass collaborations in the form of hackathons.
Hackathons emerged in the mid-2000s as a popular approach to leverage the expertise of large numbers of individuals to address social issues, often through the creation of online technological solutions. Having led hundreds of mass collaboration initiatives for organizations around the world in diverse cultural contexts, we at SecondMuse offer the following lessons as a starting point for others interested in engaging the masses, as well as challenges others may face.
What Mass Collaboration Looks Like
An early example of a mass collaborative endeavor was Random Hacks of Kindness (RHoK), which formed in 2009. RHoK was initially developed in collaboration with Google, Microsoft, Yahoo!, NASA, the World Bank, and later, HP as a volunteer mobilization effort; it aimed to build technology that would enable communities to respond better to crises such as natural disasters. In 2012, nearly 1,000 participants attended 30 events around the world to address 176 well-defined problems.
In 2013, NASA and SecondMuse led the International Space Apps Challenge, which engaged six US federal agencies, 400 partner institutions, and 9,000 global citizens through a variety of local and global team configurations; it aimed to address 58 different challenges to improve life on Earth and in space. In Athens, Greece, for example, in direct response to the challenge of creating a space-deployable greenhouse, a team developed a modular spinach greenhouse designed to survive the harsh Martian climate. Two months later, 11,000 citizens across 95 events participated in the National Day of Civic Hacking in 83 different US cities, ultimately contributing about 150,000 person-hours and addressing 31 federal and several state and local challenges over a single weekend. One result was Keep Austin Fed from Austin, Texas, which leveraged local data to coordinate food donations for those in need.
Strong interest on the part of institutions and an enthusiastic international community has paved the way for follow-up events in 2014.
Benefits of Mass Collaboration
The benefits of this approach to problem-solving are many, including:
- Incentivizing the use of government data. As institutions push to make data available to the public, mass collaboration can increase the usefulness of that data by creating products from it, as well as inform and streamline future data collection processes.
- Increasing transparency. Engaging citizens in the process of addressing public concerns educates them about the work that institutions do and advances efforts to meet public expectations of transparency.
- Increasing outcome ownership. When people engage in a collaborative process of problem solving, they naturally have a greater stake in the outcome. Put simply, the more people who participate in the process, the greater the sense of community ownership. Also, when spearheading new policies or initiatives, the support of a knowledgeable community can be important to long-term success.
- Increasing awareness. Engaging the populace in addressing challenges of public concern increases awareness of issues and helps develop an active citizenry. As a result, improved public perception and license to operate bolster governmental and non-governmental efforts to address challenges.
- Saving money. By providing data and structures to the public, and allowing them to build and iterate on plans and prototypes, mass collaboration gives agencies a chance to harness the power of open innovation with minimal time and funds.
- Harnessing cognitive surplus. The advent of online tools allowing for distributed collaboration enables citizens to use their free time incrementally toward collective endeavors that benefit local communities and the nation.
Challenges of Mass Collaboration
Although the benefits can be significant, agencies planning to lead mass collaborations should be aware of several challenges:
- Investing time and effort. A mass collaboration is most effective when it is not a one-time event. The up-front investment in building a collaboration of supporting partner organizations, creating a robust framework for action, developing the necessary tools and defining the challenges, and investing in implementation and scaling of the most promising results all require substantial time to secure long-term commitment and strong relationships.
- Forging an institution-community relationship. Throughout the course of most engagements, the power dynamic between the organization providing the frameworks and challenges and the groupings of individuals responding to the call to action can shift dramatically as the community incorporates the endeavor into their collective identity. Everyone involved should embrace this as they lay the foundation for self-sustaining mass collaboration communities. Once participants develop a firmly entrenched collective identity and sense of ownership, the convening organization can fully tap into the community’s collective genius, as the two can work together on the basis of trust and a shared vision. Without community ownership, organizers need to allot more time, energy, and resources to keep their initiative moving forward, and to battle against volunteer fatigue, diminished productivity, and substandard output.
- Focusing follow-up. Turning a massive infusion of creative ideas, concepts, and prototypes into concrete solutions requires a process of focused follow-up. Identifying and nurturing the most promising seeds to fruition requires time, discrete skills, insight, and—depending on the solutions you scale—support from a variety of external organizations.
- Understanding ROI. Any resource-intensive endeavor where only a few of numerous resulting products ever see the light of day demands deep consideration of what constitutes a reasonable return on investment. For mass collaborations, this means having an initial understanding of the potential tangible and intangible outcomes, and making a frank assessment of whether those outcomes meet the needs of the collaborators.
Technological developments in the last century have enabled relationships between individuals and institutions to blossom into a rich and complex tapestry…”
CollaborativeScience.org: Sustaining Ecological Communities Through Citizen Science and Online Collaboration
David Mellor at CommonsLab: “In any endeavor, there can be a tradeoff between intimacy and impact. The same is true for science in general and citizen science in particular. Large projects with thousands of collaborators can have incredible impact and robust, global implications. On the other hand, locally based projects can foster close-knit ties that encourage collaboration and learning, but face an uphill battle when it comes to creating rigorous and broadly relevant investigations. Online collaboration has the potential to harness the strengths of both of these strategies if a space can be created that allows for the easy sharing of complex ideas and conservation strategies.
CollaborativeScience.org was created by researchers from five different universities to train Master Naturalists in ecology, scientific modeling and adaptive management, and then give these capable volunteers a space to put their training to work and create conservation plans in collaboration with researchers and land managers.
We are focusing on scientific modeling throughout this process because environmental managers and ecologists have been trained to intuitively create explanations based on a very large number of related observations. As new data are collected, these explanations are revised and are put to use in generating new, testable hypotheses. The modeling tools that we are providing to our volunteers allow them to formalize this scientific reasoning by adding information, sources and connections, then making predictions based on possible changes to the system. We integrate their projects into the well-established citizen science tools at CitSci.org and guide them through the creation of an adaptive management plan, a proven conservation project framework…”
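The excerpt does not name the modeling tools the volunteers use, but the workflow it describes (add factors, sources and connections, then predict the effect of a change) resembles a simple signed-influence model. Below is a toy sketch under that assumption, with an invented three-factor conservation example:

```python
# Hypothetical signed-influence model: edge (src, dst) -> strength in [-1, 1].
influences = {
    ("management effort", "invasive species"): -0.7,
    ("invasive species", "native plants"): -0.6,
    ("native plants", "pollinators"): 0.8,
}

def predict(intervention, steps=3):
    """Propagate a hypothetical change through the influence graph.

    intervention: dict of factor -> change, held fixed at each step.
    Returns the predicted direction and rough size of knock-on effects.
    """
    state = dict(intervention)
    for _ in range(steps):
        nxt = dict(intervention)  # the intervention is held fixed
        for (src, dst), strength in influences.items():
            nxt[dst] = nxt.get(dst, 0.0) + strength * state.get(src, 0.0)
        state = nxt
    return state

# Predict the knock-on effects of increased invasive-species management.
print(predict({"management effort": 1.0}))
# {'management effort': 1.0, 'invasive species': -0.7,
#  'native plants': 0.42, 'pollinators': 0.336}
```

The point of such a formalization is the one the excerpt makes: it turns intuitive ecological explanations into explicit, testable predictions that volunteers, researchers and land managers can revise together as new data arrive.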