Paper by Erfan Davami and Gita Sukthankar, available at ASE@360: “Participatory sensing is a specialized form of crowdsourcing for mobile devices in which the users act as sensors to report on local environmental conditions. This poster describes the process of prototyping a mobile phone crowdsourcing app for monitoring parking availability on a large university campus. We present a case study of how an agent-based urban model can be used to perform a sensitivity analysis of the comparative susceptibility of different data fusion paradigms to potentially troublesome user behaviors: 1. poor user enrollment, 2. infrequent usage, 3. a preponderance of untrustworthy users.”
Selected Readings on Crowdsourcing Expertise
The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of crowdsourcing was originally published in 2014.
Crowdsourcing enables leaders and citizens to work together to solve public problems in new and innovative ways. New tools and platforms enable citizens with differing levels of knowledge, expertise, experience and abilities to collaborate and solve problems together. Identifying experts, or individuals with specialized skills, knowledge or abilities with regard to a specific topic, and incentivizing their participation in crowdsourcing information, knowledge or experience to achieve a shared goal can enhance the efficiency and effectiveness of problem solving.
Selected Reading List (in alphabetical order)
- Katy Börner, Michael Conlon, Jon Corson-Rikert, and Ying Ding — VIVO: A Semantic Approach to Scholarly Networking and Discovery — an introduction to VIVO, a tool for representing information about researchers’ expertise and organizational relationships.
- Alessandro Bozzon, Marco Brambilla, Stefano Ceri, Matteo Silvestri, and Giuliano Vesci — Choosing the Right Crowd: Expert Finding in Social Networks — a paper exploring the challenge of identifying the expertise needed for a given problem through the use of social networks.
- Daren C. Brabham — The Myth of Amateur Crowds — a paper arguing that, contrary to popular belief, experts are more prevalent in crowdsourcing projects than hobbyists and amateurs.
- William H. Dutton — Networking Distributed Public Expertise: Strategies for Citizen Sourcing Advice to Government — a paper arguing for more structured and well-managed crowdsourcing efforts within government to help harness the distributed expertise of citizens.
- Gagan Goel, Afshin Nikzad and Adish Singla – Matching Workers with Tasks: Incentives in Heterogeneous Crowdsourcing Markets – a paper exploring the intelligent tasking of Mechanical Turk workers based on varying levels of expertise.
- D. Gubanov, N. Korgin, D. Novikov and A. Kalkov – E-Expertise: Modern Collective Intelligence – an ebook focusing on the organization and mechanisms of expert decision-making.
- Cathrine Holst – Expertise and Democracy – a collection of papers on the role of knowledge and expertise in modern democracies.
- Andrew King and Karim R. Lakhani — Using Open Innovation to Identify the Best Ideas — a paper examining different methods for opening innovation and tapping the “ideas cloud” of external expertise.
- Chengjiang Long, Gang Hua and Ashish Kapoor – Active Visual Recognition with Expertise Estimation in Crowdsourcing – a paper proposing a mechanism for identifying experts in a Mechanical Turk project.
- Beth Simone Noveck — “Peer to Patent”: Collective Intelligence, Open Review, and Patent Reform — a law review article introducing the idea of crowdsourcing expertise to mitigate the challenge of patent processing.
- Josiah Ober — Democracy’s Wisdom: An Aristotelian Middle Way for Collective Judgment — a paper discussing the Relevant Expertise Aggregation (REA) model for improving democratic decision-making.
- Max H. Sims, Jeffrey Bigham, Henry Kautz and Marc W. Halterman – Crowdsourcing medical expertise in near real time – a paper describing the development of a mobile application that gives healthcare providers better access to expertise.
- Alessandro Spina – Scientific Expertise and Open Government in the Digital Era: Some Reflections on EFSA and Other EU Agencies – a paper proposing increased crowdsourcing of expertise within the European Food Safety Authority.
Annotated Selected Reading List (in alphabetical order)
Börner, Katy, Michael Conlon, Jon Corson-Rikert, and Ying Ding. “VIVO: A Semantic Approach to Scholarly Networking and Discovery.” Synthesis Lectures on the Semantic Web: Theory and Technology 2, no. 1 (October 17, 2012): 1–178. http://bit.ly/17huggT.
- This e-book “provides an introduction to VIVO…a tool for representing information about research and researchers — their scholarly works, research interests, and organizational relationships.”
- VIVO is a response to the fact that, “Information for scholars — and about scholarly activity — has not kept pace with the increasing demands and expectations. Information remains siloed in legacy systems and behind various access controls that must be licensed or otherwise negotiated before access. Information representation is in its infancy. The raw material of scholarship — the data and information regarding previous work — is not available in common formats with common semantics.”
- Providing access to structured information on the work and experience of a diversity of scholars enables improved expert finding: “identifying and engaging experts whose scholarly works is of value to one’s own. To find experts, one needs rich data regarding one’s own work and the work of potential related experts.” The authors argue that expert finding is of increasing importance since “[m]ulti-disciplinary and inter-disciplinary investigation is increasingly required to address complex problems.”
Bozzon, Alessandro, Marco Brambilla, Stefano Ceri, Matteo Silvestri, and Giuliano Vesci. “Choosing the Right Crowd: Expert Finding in Social Networks.” In Proceedings of the 16th International Conference on Extending Database Technology, 637–648. EDBT ’13. New York, NY, USA: ACM, 2013. http://bit.ly/18QbtY5.
- This paper explores the challenge of selecting experts within the population of social networks by considering the following problem: “given an expertise need (expressed for instance as a natural language query) and a set of social network members, who are the most knowledgeable people for addressing that need?”
- The authors come to the following conclusions:
- “profile information is generally less effective than information about resources that they directly create, own or annotate;
- resources which are produced by others (resources appearing on the person’s Facebook wall or produced by people that she follows on Twitter) help increasing the assessment precision;
- Twitter appears the most effective social network for expertise matching, as it very frequently outperforms all other social networks (either combined or alone);
- Twitter appears as well very effective for matching expertise in domains such as computer engineering, science, sport, and technology & games, but Facebook is also very effective in fields such as locations, music, sport, and movies & tv;
- surprisingly, LinkedIn appears less effective than other social networks in all domains (including computer science) and overall.”
Brabham, Daren C. “The Myth of Amateur Crowds.” Information, Communication & Society 15, no. 3 (2012): 394–410. http://bit.ly/1hdnGJV.
- Unlike most of the related literature, this paper focuses on bringing attention to the expertise already being tapped by crowdsourcing efforts rather than determining ways to identify more dormant expertise to improve the results of crowdsourcing.
- Brabham comes to two central conclusions: “(1) crowdsourcing is discussed in the popular press as a process driven by amateurs and hobbyists, yet empirical research on crowdsourcing indicates that crowds are largely self-selected professionals and experts who opt-in to crowdsourcing arrangements; and (2) the myth of the amateur in crowdsourcing ventures works to label crowds as mere hobbyists who see crowdsourcing ventures as opportunities for creative expression, as entertainment, or as opportunities to pass the time when bored. This amateur/hobbyist label then undermines the fact that large amounts of real work and expert knowledge are exerted by crowds for relatively little reward and to serve the profit motives of companies.”
Dutton, William H. Networking Distributed Public Expertise: Strategies for Citizen Sourcing Advice to Government. One of a Series of Occasional Papers in Science and Technology Policy, Science and Technology Policy Institute, Institute for Defense Analyses, February 23, 2011. http://bit.ly/1c1bpEB.
- In this paper, a case is made for more structured and well-managed crowdsourcing efforts within government. Specifically, the paper “explains how collaborative networking can be used to harness the distributed expertise of citizens, as distinguished from citizen consultation, which seeks to engage citizens — each on an equal footing.” Instead of looking for answers from an undefined crowd, Dutton proposes “networking the public as advisors” by seeking to “involve experts on particular public issues and problems distributed anywhere in the world.”
- Dutton argues that expert-based crowdsourcing can be used successfully by government for a number of reasons:
- Direct communication with a diversity of independent experts
- The convening power of government
- Compatibility with open government and open innovation
- Synergy with citizen consultation
- Building on experience with paid consultants
- Speed and urgency
- Centrality of documents to policy and practice.
- He also proposes a nine-step process for government to foster bottom-up collaboration networks:
- Do not reinvent the technology
- Focus on activities, not the tools
- Start small, but capable of scaling up
- Modularize
- Be open and flexible in finding and going to communities of experts
- Do not concentrate on one approach to all problems
- Cultivate the bottom-up development of multiple projects
- Experience networking and collaborating — be a networked individual
- Capture, reward, and publicize success.
Goel, Gagan, Afshin Nikzad and Adish Singla. “Matching Workers with Tasks: Incentives in Heterogeneous Crowdsourcing Markets.” Under review by the International World Wide Web Conference (WWW). 2014. http://bit.ly/1qHBkdf
- Combining the notions of crowdsourcing expertise and crowdsourcing tasks, this paper focuses on the challenge within platforms like Mechanical Turk related to intelligently matching tasks to workers.
- The authors’ call for more strategic assignment of tasks in crowdsourcing markets is based on the understanding that “each worker has certain expertise and interests which define the set of tasks she can and is willing to do.”
- Focusing on developing meaningful incentives based on varying levels of expertise, the authors sought to create a mechanism that, “i) is incentive compatible in the sense that it is truthful for agents to report their true cost, ii) picks a set of workers and assigns them to the tasks they are eligible for in order to maximize the utility of the requester, iii) makes sure total payments made to the workers doesn’t exceed the budget of the requester.” A simplified sketch of the budget-constrained selection step appears below.
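To make requirement (iii) concrete, here is a minimal sketch of budget-constrained worker selection: a greedy pass over workers ranked by utility per unit of reported cost. This only illustrates the budget constraint; the names (`Worker`, `select_workers`) are hypothetical, and a greedy rule like this does not by itself provide the incentive-compatibility guarantee the authors' mechanism is designed for.

```python
# Hypothetical sketch: greedy, budget-feasible worker selection.
# Illustrates constraint (iii) above; it is NOT the authors' mechanism
# and makes no truthfulness guarantees about reported costs.
from dataclasses import dataclass, field

@dataclass
class Worker:
    name: str
    cost: float                 # the worker's reported cost
    utility: float              # requester's value if the worker does a task
    eligible_tasks: set = field(default_factory=set)

def select_workers(workers, budget):
    """Pick eligible workers by utility-per-cost until the budget is exhausted."""
    chosen, spent = [], 0.0
    for w in sorted(workers, key=lambda w: w.utility / w.cost, reverse=True):
        if w.eligible_tasks and spent + w.cost <= budget:
            chosen.append(w)
            spent += w.cost
    return chosen, spent

workers = [Worker("a", 4.0, 10.0, {"label"}),
           Worker("b", 2.0, 7.0, {"translate"}),
           Worker("c", 5.0, 6.0, {"label"})]
chosen, spent = select_workers(workers, budget=7.0)
print([w.name for w in chosen], spent)   # ['b', 'a'] 6.0
```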
Gubanov, D., N. Korgin, D. Novikov and A. Kalkov. E-Expertise: Modern Collective Intelligence. Springer, Studies in Computational Intelligence 558, 2014. http://bit.ly/U1sxX7
- In this book, the authors focus on “organization and mechanisms of expert decision-making support using modern information and communication technologies, as well as information analysis and collective intelligence technologies (electronic expertise or simply e-expertise).”
- The book, which “addresses a wide range of readers interested in management, decision-making and expert activity in political, economic, social and industrial spheres,” is broken into five chapters:
- Chapter 1 (E-Expertise) discusses the role of e-expertise in decision-making processes. The procedures of e-expertise are classified, their benefits and shortcomings are identified, and the efficiency conditions are considered.
- Chapter 2 (Expert Technologies and Principles) provides a comprehensive overview of modern expert technologies. A special emphasis is placed on the specifics of e-expertise. Moreover, the authors study the feasibility and reasonability of employing well-known methods and approaches in e-expertise.
- Chapter 3 (E-Expertise: Organization and Technologies) describes some examples of up-to-date technologies to perform e-expertise.
- Chapter 4 (Trust Networks and Competence Networks) deals with the problems of expert finding and grouping by information and communication technologies.
- Chapter 5 (Active Expertise) treats the problem of expertise stability against any strategic manipulation by experts or coordinators pursuing individual goals.
Holst, Cathrine. “Expertise and Democracy.” ARENA Report No 1/14, Center for European Studies, University of Oslo. http://bit.ly/1nm3rh4
- This report contains a set of 16 papers focused on the concept of “epistocracy,” meaning the “rule of knowers.” The papers inquire into the role of knowledge and expertise in modern democracies and especially in the European Union (EU). Major themes are: expert-rule and democratic legitimacy; the role of knowledge and expertise in EU governance; and the European Commission’s use of expertise.
- Expert-rule and democratic legitimacy
- Papers within this theme concentrate on issues such as the “implications of modern democracies’ knowledge and expertise dependence for political and democratic theory.” Topics include the accountability of experts, the legitimacy of expert arrangements within democracies, the role of evidence in policy-making, how expertise can be problematic in democratic contexts, and “ethical expertise” and its place in epistemic democracies.
- The role of knowledge and expertise in EU governance
- Papers within this theme concentrate on “general trends and developments in the EU with regard to the role of expertise and experts in political decision-making, the implications for the EU’s democratic legitimacy, and analytical strategies for studying expertise and democratic legitimacy in an EU context.”
- The European Commission’s use of expertise
- Papers within this theme concentrate on how the European Commission uses expertise and in particular the European Commission’s “expert group system.” Topics include the European Citizens’ Initiative, analytic-deliberative processes in EU food safety, the operation of EU environmental agencies, and the autonomy of various EU agencies.
King, Andrew and Karim R. Lakhani. “Using Open Innovation to Identify the Best Ideas.” MIT Sloan Management Review, September 11, 2013. http://bit.ly/HjVOpi.
- In this paper, King and Lakhani examine different methods for opening innovation, where, “[i]nstead of doing everything in-house, companies can tap into the ideas cloud of external expertise to develop new products and services.”
- The three types of open innovation discussed are: opening the idea-creation process (competitions where prizes are offered and designers bid with possible solutions); opening the idea-selection process (“approval contests” in which outsiders vote to determine which entries should be pursued); and opening both idea generation and selection (an option used especially by organizations focused on quickly changing needs).
Long, Chengjiang, Gang Hua and Ashish Kapoor. “Active Visual Recognition with Expertise Estimation in Crowdsourcing.” 2013 IEEE International Conference on Computer Vision. December 2013. http://bit.ly/1lRWFur.
- This paper is focused on improving the crowdsourced labeling of visual datasets from platforms like Mechanical Turk. The authors note that, “Although it is cheap to obtain large quantity of labels through crowdsourcing, it has been well known that the collected labels could be very noisy. So it is desirable to model the expertise level of the labelers to ensure the quality of the labels. The higher the expertise level a labeler is at, the lower the label noises he/she will produce.”
- Based on the need for identifying expert labelers upfront, the authors developed an “active classifier learning system which determines which users to label which unlabeled examples” from collected visual datasets.
- The researchers’ experiments in identifying expert visual dataset labelers led to findings demonstrating that the “active selection” of expert labelers is beneficial in cutting through the noise of crowdsourcing platforms.
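The core intuition described above, estimating each labeler's expertise and letting accurate labelers count for more, can be sketched with a simple iterative reweighting loop. This is a simplified, hypothetical aggregation scheme (the function names and Laplace smoothing are illustrative), not the authors' active-learning system, which also decides which labeler should label which example.

```python
# Simplified, hypothetical expertise-weighted label aggregation.
import math

def aggregate(labels, rounds=5):
    """labels: dict[(item, labeler)] -> 0/1 vote; returns (consensus, accuracy)."""
    items = {i for i, _ in labels}
    labelers = {l for _, l in labels}
    # Start from per-item majority vote.
    consensus = {i: round(sum(v for (j, _), v in labels.items() if j == i) /
                          sum(1 for (j, _) in labels if j == i))
                 for i in items}
    accuracy = {}
    for _ in range(rounds):
        # Estimate each labeler's accuracy against the current consensus
        # (Laplace smoothing keeps the log-odds weight finite).
        for l in labelers:
            hits = [labels[(i, m)] == consensus[i] for (i, m) in labels if m == l]
            accuracy[l] = (sum(hits) + 1) / (len(hits) + 2)
        # Re-estimate consensus, weighting each vote by labeler expertise.
        for i in items:
            score = sum((1 if labels[(j, l)] == 1 else -1) *
                        math.log(accuracy[l] / (1 - accuracy[l]))
                        for (j, l) in labels if j == i)
            consensus[i] = 1 if score > 0 else 0
    return consensus, accuracy

votes = {("img1", "ann1"): 1, ("img1", "ann2"): 1, ("img1", "ann3"): 0,
         ("img2", "ann1"): 0, ("img2", "ann2"): 0, ("img2", "ann3"): 1}
print(aggregate(votes))  # ann3 disagrees with consensus and is down-weighted
```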
Noveck, Beth Simone. “’Peer to Patent’: Collective Intelligence, Open Review, and Patent Reform.” Harvard Journal of Law & Technology 20, no. 1 (Fall 2006): 123–162. http://bit.ly/HegzTT.
- This law review article introduces the idea of crowdsourcing expertise to mitigate the challenge of patent processing. Noveck argues that, “access to information is the crux of the patent quality problem. Patent examiners currently make decisions about the grant of a patent that will shape an industry for a twenty-year period on the basis of a limited subset of available information. Examiners may neither consult the public, talk to experts, nor, in many cases, even use the Internet.”
- Peer-to-Patent, which launched three years after this article, is based on the idea that, “The new generation of social software might not only make it easier to find friends but also to find expertise that can be applied to legal and policy decision-making. This way, we can improve upon the Constitutional promise to promote the progress of science and the useful arts in our democracy by ensuring that only worthy ideas receive that ‘odious monopoly’ of which Thomas Jefferson complained.”
Ober, Josiah. “Democracy’s Wisdom: An Aristotelian Middle Way for Collective Judgment.” American Political Science Review 107, no. 1 (2013): 104–122. http://bit.ly/1cgf857.
- In this paper, Ober argues that, “A satisfactory model of decision-making in an epistemic democracy must respect democratic values, while advancing citizens’ interests, by taking account of relevant knowledge about the world.”
- Ober describes an approach to decision-making that aggregates expertise across multiple domains. This “Relevant Expertise Aggregation (REA) enables a body of minimally competent voters to make superior choices among multiple options, on matters of common interest.”
Sims, Max H., Jeffrey Bigham, Henry Kautz and Marc W. Halterman. “Crowdsourcing medical expertise in near real time.” Journal of Hospital Medicine 9, no. 7, July 2014. http://bit.ly/1kAKvq7.
- In this article, the authors discuss the development of a mobile application called DocCHIRP, which was created because, “although the Internet creates unprecedented access to information, gaps in the medical literature and inefficient searches often leave healthcare providers’ questions unanswered.”
- The DocCHIRP pilot project used a “system of point-to-multipoint push notifications designed to help providers problem solve by crowdsourcing from their peers.”
- Healthcare providers (HCPs) sought to gain intelligence from the crowd, which included 85 registered users, on questions related to medication, complex medical decision making, standard of care, administrative issues, testing and referrals.
- The authors believe that, “if future iterations of the mobile crowdsourcing applications can address…adoption barriers and support the organic growth of the crowd of HCPs,” then “the approach could have a positive and transformative effect on how providers acquire relevant knowledge and care for patients.”
Spina, Alessandro. “Scientific Expertise and Open Government in the Digital Era: Some Reflections on EFSA and Other EU Agencies.” in Foundations of EU Food Law and Policy, eds. A. Alemmano and S. Gabbi. Ashgate, 2014. http://bit.ly/1k2EwdD.
- In this paper, Spina “presents some reflections on how the collaborative and crowdsourcing practices of Open Government could be integrated in the activities of EFSA [European Food Safety Authority] and other EU agencies,” with a particular focus on “highlighting the benefits of the Open Government paradigm for expert regulatory bodies in the EU.”
- Spina argues that the “crowdsourcing of expertise and the reconfiguration of the information flows between European agencies and the public could represent a concrete possibility of modernising the role of agencies with a new model that has a low financial burden and an almost immediate effect on the legal governance of agencies.”
- He concludes that, “It is becoming evident that in order to guarantee that the best scientific expertise is provided to EU institutions and citizens, EFSA should strive to use the best organisational models to source science and expertise.”
Is Crowdsourcing the Future for Legislation?
Brian Heaton in GovTech: “…While drafting legislation is traditionally the job of elected officials, an increasing number of lawmakers are using digital platforms such as Wikispaces and GitHub to give constituents a bigger hand in molding the laws they’ll be governed by. The practice has been used this year in both California and New York City, and shows no signs of slowing down anytime soon, experts say.
Trond Undheim, crowdsourcing expert and founder of Yegii Inc., a startup company that provides and ranks advanced knowledge assets in the areas of health care, technology, energy and finance, said crowdsourcing was “certainly viable” as a tool to help legislators understand what constituents are most passionate about.
“I’m a big believer in asking a wide variety of people the same question and crowdsourcing has become known as the long-tail of answers,” Undheim said. “People you wouldn’t necessarily think of have something useful to say.”
California Assemblyman Mike Gatto, D-Los Angeles, agreed. He’s spearheaded an effort this year to let residents craft legislation regarding probate law — a measure designed to allow a court to assign a guardian to a deceased person’s pet. Gatto used the online Wikispaces platform — which allows for Wikipedia-style editing and content contribution — to let anyone with an Internet connection collaborate on the legislation over a period of several months.
The topic of the bill may not have been headline news, but Gatto was encouraged by the media attention his experiment received. As a result, he’s committed to running another crowdsourced bill next year — just on a bigger, more mainstream public issue.
New York City Council Member Ben Kallos has a plethora of technology-related legislation being considered in the Big Apple. Many of the bills are open for public comment and editing on GitHub. In an interview with Government Technology last month, Kallos said he believes using crowdsourcing to comment on and edit legislation is empowering and creates a different sense of democracy where people can put forward their ideas.
County governments also are joining the crowdsourcing trend. The Catawba Regional Council of Governments in South Carolina and the Centralina Council of Governments in North Carolina are gathering opinions on how county leaders should plan for future growth in the region.
At a public forum earlier this year, attendees were given iPads to go online and review four growth options and record their views on which they preferred. The priorities outlined by citizens will be taken back to decision-makers in each of the counties to see how well existing plans match up with what the public wants.
Gatto said he’s encouraged by how quickly the crowdsourcing of policy has spread throughout the U.S. He said there’s a disconnect between governments and their constituencies who believe elected officials don’t listen. But that could change as crowdsourcing continues to make its impact on lawmakers.
“When you put out a call like I did and others have done and say ‘I’m going to let the public draft a law and whatever you draft, I’m committed to introducing it … I think that’s a powerful message,” Gatto said. “I think the public appreciates it because it makes them understand that the government still belongs to them.”
Protecting the Process
Despite the benefits crowdsourcing brings to the legislative process, there remain some question marks about whether it truly provides insight into the public’s feelings on an issue. For example, because many political issues are driven by the influence of special interest groups, what’s preventing those groups from manipulating the bill-drafting process?
Not much, according to Undheim. He cautioned policymakers to be aware of the motivations from people taking part in crowdsourcing efforts to write and edit laws. Gatto shared Undheim’s concerns, but noted that the platform he used for developing his probate law – Wikispaces – has safeguards in place so that a member of his staff can revert language of a crowdsourced bill back to a previous version if it’s determined that someone was trying to unduly influence the drafting process….”
Crowdsourcing moving beyond the fringe
Bob Brown in Network World: “Depending on how you look at it, crowdsourcing is all the rage these days — think Wikipedia, X Prize and Kickstarter — or at the other extreme, greatly underused.
To the team behind the new “insight network” Yegii, crowdsourcing has not nearly reached its potential despite having its roots as far back as the early 1700s and a famous case of the British Government seeking a solution to “The Longitude Problem” in order to make sailing less life threatening. (I get the impression that mention of this example is obligatory at any crowdsourcing event.)
This angel-funded startup, headed by an MIT Sloan School of Management senior lecturer and operating from a Boston suburb, is looking to exploit crowdsourcing’s potential through a service that connects financial, healthcare, technology and other organizations seeking knowledge with experts who can provide it – and fairly fast. To CEO Trond Undheim, crowdsourcing is “no longer for fringe freelance work,” and the goal is to get more organizations and smart individuals involved.
“Yegii is essentially a network of networks, connecting people, organizations, and knowledge in new ways,” says Undheim, who explains that the name Yegii is Korean for “talk” or “discussion”. “Our focus is laser sharp: we only rank and rate knowledge that says something essential about what I see as the four forces of industry disruption: technology, policy, user dynamics and business models. We tackle challenging business issues across domains, from life sciences to energy to finance. The point is that today’s industry classification is falling apart. We need more specific insight than in-house strategizing or generalist consulting advice.”
Undheim attempted to drum up interest in the new business last week at an event at Babson College during which a handful of crowdsourcing experts spoke. Harvard Business School adjunct professor Alan MacCormack discussed the X Prize, Netflix Prize and other examples of spurring competition through crowdsourcing. MIT’s Peter Gloor extolled the virtue of collaborative and smart swarms of people vs. stupid crowds (such as football hooligans). A couple of advertising/marketing execs shared stories of how clients and other brands are increasingly tapping into their customer base and the general public for new ideas from slogans to products, figuring that potential new customers are more likely to trust their peers than corporate ads. Another speaker dove into more details about how to run a crowdsourcing challenge, which includes identifying motivation that goes beyond money.
All of this was to frame Yegii’s crowdsourcing plan, which is at the beta stage with about a dozen clients (including Akamai and Santander bank) and is slated for mass production later this year. Yegii’s team consists of five part-timers, plus a few interns, who are building a web-based platform that consists of “knowledge assets,” that is market research, news reports and datasets from free and paid sources. That content – on topics that range from Bitcoin’s impact on banks to telecom bandwidth costs — is reviewed and ranked through a combination of machine learning and human peers. Information seekers would pay Yegii up to hundreds of dollars per month or up to tens of thousands of dollars per project, and then multidisciplinary teams would accept the challenge of answering their questions via customized reports within staged deadlines.
“We are focused on building partnerships with other expert networks and associations that have access to smart people with spare capacity, wherever they are,” Undheim says.
One reason organizations can benefit from crowdsourcing, Undheim says, is because of the “ephemeral nature of expertise in today’s society.” In other words, people within your organization might think of themselves as experts in this or that, but when they really think about it, they might realize their level of expertise has faded. Yegii will strive to narrow down the best sources of information for those looking to come up to speed on a subject over a weekend, whereas hunting for that information across a vast search engine would not be nearly as efficient….”
Crowdsourcing and social search
Lyndsey Gilpin at Techcrunch: “When we think of the sharing economy, what often comes to mind are sites like Airbnb, Lyft, or Feastly — the platforms that allow us to meet people for a specific reason, whether that’s a place to stay, a ride, or a meal.
But what about sharing something much simpler than that, like answers to our questions about the world around us? Sharing knowledge with strangers can offer us insight into a place we are curious about or trying to navigate, and in a more personal, efficient way than using traditional web searches.
“Sharing an answer or response to question, that is true sharing. There’s no financial or monetary exchange based on that. It’s the true meaning of [the word],” said Maxime Leroy, co-founder and CEO of a new app called Enquire.
Enquire is a new question-and-answer app, but it is unlike others in the space. You don’t have to log in via Facebook or Twitter, use SMS messaging like on Quest, or upload an image like you do on Jelly. None of these apps have taken off yet, which could be good or bad for Enquire just entering the space.
With Enquire, simply log in with a username and password and it will unlock the neighborhood you are in (the app only works in San Francisco, New York, and Paris right now). There are lists of answers to other questions, or you can post your own. If 200 people in a city sign up, the app will become available to them, which is an effort to make sure there is a strong community to gather answers from.
Leroy, who recently made a documentary about the sharing economy, realized there was “one tool missing for local communities” in the space, and decided to create this app.
“We want to build a more local-based network, and empower and increase trust without having people share all their identity,” he said.
Different social channels look at search in different ways, but the trend is definitely moving toward more social or location-based searching, according to Altimeter social media analyst Rebecca Lieb. Arguably, she said, Yelp, Groupon, and even Google Maps are vertical search engines. If you want to find a nearby restaurant, pharmacy, or deal, you look to these platforms.
However, she credits Aardvark as one of the first in the space, which was a social search engine founded in 2007 that used instant messaging and email to get answers from your existing contacts. Google bought the company in 2010. It shows the idea of crowdsourcing answers isn’t new, but the engines have become “appified,” she said.
“Now it’s geo-local specific,” she said. “We’re asking a lot more of those geo-local questions because of location-based immediacy [that we want].”
Think Seamless, which helps you find the nearby food that most satisfies your appetite. Even Tinder and Grindr are social search engines, Lieb said. You want to meet up with the people who are closest to you, geographically….
His challenge is to offer rewards that entice people to sign up for the app. Eventually, Leroy would like to strengthen the networks and scale Enquire to cities and neighborhoods all over the world. Once that’s in place, people can start creating their own neighborhoods — around a school or workplace, where they hang out regularly — instead of using the existing constraints.
“I may be an expert in one area, and a newbie in another. I want to emphasize the activity and content from users to give them credit to other users and build that trust,” he said.
Usually, our first instinct is to open Yelp to find the best sushi restaurant or Google to search for the closest concert venue, and it will probably stay that way for some time. But the idea that the opinions and insights of other human beings, even strangers, are becoming much more valuable because of the internet is not far-fetched.
Admit it: haven’t you had a fleeting thought of starting a Kickstarter campaign for an idea? Looked for a cheaper place to stay on Airbnb than that hotel you normally book in New York? Or considered financing someone’s business idea across the world using Kiva? If so, then you’ve engaged in social search.
Suddenly, crowdsourcing answers for the things that pique your interest on your morning walk may not seem so strange after all.”
Selected Readings on Crowdsourcing Tasks and Peer Production
The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of crowdsourcing was originally published in 2014.
Technological advances are creating a new paradigm by which institutions and organizations are increasingly outsourcing tasks to an open community, allocating specific needs to a flexible, willing and dispersed workforce. “Microtasking” platforms like Amazon’s Mechanical Turk are a burgeoning source of income for individuals who contribute their time, skills and knowledge on a per-task basis. In parallel, citizen science projects – task-based initiatives in which citizens of any background can help contribute to scientific research – like Galaxy Zoo are demonstrating the ability of lay and expert citizens alike to make small, useful contributions to aid large, complex undertakings. As governing institutions seek to do more with less, looking to the success of citizen science and microtasking initiatives could provide a blueprint for engaging citizens to help accomplish difficult, time-consuming objectives at little cost. Moreover, the incredible success of peer-production projects – best exemplified by Wikipedia – instills optimism regarding the public’s willingness and ability to complete relatively small tasks that feed into a greater whole and benefit the public good. You can learn more about this new wave of “collective intelligence” by following the MIT Center for Collective Intelligence and their annual Collective Intelligence Conference.
Selected Reading List (in alphabetical order)
- Yochai Benkler — The Wealth of Networks: How Social Production Transforms Markets and Freedom — a book on the ways commons-based peer-production is transforming modern society.
- Daren C. Brabham — Using Crowdsourcing in Government — a report describing the diverse methods by which crowdsourcing could be better utilized by governments, including through the leveraging of micro-tasking platforms.
- Kevin J. Boudreau, Patrick Gaule, Karim Lakhani, Christoph Reidl, Anita Williams Woolley – From Crowds to Collaborators: Initiating Effort & Catalyzing Interactions Among Online Creative Workers – a working paper exploring the conditions, including incentives, that affect online collaboration.
- Chiara Franzoni and Henry Sauermann — Crowd Science: The Organization of Scientific Research in Open Collaborative Projects — a paper describing the potential advantages of deploying crowd science in a variety of contexts.
- Aniket Kittur, Ed H. Chi and Bongwon Suh — Crowdsourcing User Studies with Mechanical Turk — a paper proposing potential benefits beyond simple task completion for microtasking platforms like Mechanical Turk.
- Aniket Kittur, Jeffrey V. Nickerson, Michael S. Bernstein, Elizabeth M. Gerber, Aaron Shaw, John Zimmerman, Matthew Lease, and John J. Horton — The Future of Crowd Work — a paper describing the promise that expanded and evolved crowd work holds for the global economy.
- Michael J. Madison — Commons at the Intersection of Peer Production, Citizen Science, and Big Data: Galaxy Zoo — an in-depth case study of the Galaxy Zoo containing insights regarding the importance of clear objectives and institutional and/or professional collaboration in citizen science initiatives.
- Thomas W. Malone, Robert Laubacher and Chrysanthos Dellarocas – Harnessing Crowds: Mapping the Genome of Collective Intelligence – an article proposing a framework for understanding collective intelligence efforts.
- Geoff Mulgan – True Collective Intelligence? A Sketch of a Possible New Field – a paper proposing theoretical building blocks and an experimental and research agenda around the field of collective intelligence.
- Henry Sauermann and Chiara Franzoni – Participation Dynamics in Crowd-Based Knowledge Production: The Scope and Sustainability of Interest-Based Motivation – a paper exploring the role of interest-based motivation in collaborative knowledge production.
- Catherine E. Schmitt-Sands and Richard J. Smith – Prospects for Online Crowdsourcing of Social Science Research Tasks: A Case Study Using Amazon Mechanical Turk – an article describing an experiment using Mechanical Turk to crowdsource public policy research microtasks.
- Clay Shirky — Here Comes Everybody: The Power of Organizing Without Organizations — a book exploring the ways largely unstructured collaboration is remaking practically all sectors of modern life.
- Jonathan Silvertown — A New Dawn for Citizen Science — a paper examining the diverse factors influencing the emerging paradigm of “science by the people.”
- Katarzyna Szkuta, Roberto Pizzicannella, David Osimo – Collaborative approaches to public sector innovation: A scoping study – an article studying success factors and incentives around the collaborative delivery of online public services.
Annotated Selected Reading List (in alphabetical order)
Benkler, Yochai. The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press, 2006. http://bit.ly/1aaU7Yb.
- In this book, Benkler “describes how patterns of information, knowledge, and cultural production are changing – and shows that the way information and knowledge are made available can either limit or enlarge the ways people can create and express themselves.”
- In his discussion on Wikipedia – one of many paradigmatic examples of people collaborating without financial reward – he calls attention to the notable ongoing cooperation taking place among a diversity of individuals. He argues that, “The important point is that Wikipedia requires not only mechanical cooperation among people, but a commitment to a particular style of writing and describing concepts that is far from intuitive or natural to people. It requires self-discipline. It enforces the behavior it requires primarily through appeal to the common enterprise that the participants are engaged in…”
Brabham, Daren C. Using Crowdsourcing in Government. Collaborating Across Boundaries Series. IBM Center for The Business of Government, 2013. http://bit.ly/17gzBTA.
- In this report, Brabham categorizes government crowdsourcing cases into a “four-part, problem-based typology, encouraging government leaders and public administrators to consider these open problem-solving techniques as a way to engage the public and tackle difficult policy and administrative tasks more effectively and efficiently using online communities.”
- The proposed four-part typology describes the following types of crowdsourcing in government:
- Knowledge Discovery and Management
- Distributed Human Intelligence Tasking
- Broadcast Search
- Peer-Vetted Creative Production
- In his discussion on Distributed Human Intelligence Tasking, Brabham argues that Amazon’s Mechanical Turk and other microtasking platforms could be useful in a number of governance scenarios, including:
- Governments and scholars transcribing historical document scans
- Public health departments translating health campaign materials into foreign languages to benefit constituents who do not speak the native language
- Governments translating tax documents, school enrollment and immunization brochures, and other important materials into minority languages
- Helping governments predict citizens’ behavior, “such as for predicting their use of public transit or other services or for predicting behaviors that could inform public health practitioners and environmental policy makers”
Boudreau, Kevin J., Patrick Gaule, Karim Lakhani, Christoph Reidl, Anita Williams Woolley. “From Crowds to Collaborators: Initiating Effort & Catalyzing Interactions Among Online Creative Workers.” Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 14-060. January 23, 2014. https://bit.ly/2QVmGUu.
- In this working paper, the authors explore the “conditions necessary for eliciting effort from those affecting the quality of interdependent teamwork” and “consider the role of incentives versus social processes in catalyzing collaboration.”
- The paper’s findings are based on an experiment involving 260 individuals randomly assigned to 52 teams working toward solutions to a complex problem.
- The authors determined that the level of effort in such collaborative undertakings is sensitive to cash incentives. However, collaboration within teams was driven more by the active participation of teammates than by any monetary reward.
Franzoni, Chiara, and Henry Sauermann. “Crowd Science: The Organization of Scientific Research in Open Collaborative Projects.” Research Policy (August 14, 2013). http://bit.ly/HihFyj.
- In this paper, the authors explore the concept of crowd science, which they define based on two important features: “participation in a project is open to a wide base of potential contributors, and intermediate inputs such as data or problem solving algorithms are made openly available.” The rationale for their study and conceptual framework is the “growing attention from the scientific community, but also policy makers, funding agencies and managers who seek to evaluate its potential benefits and challenges. Based on the experiences of early crowd science projects, the opportunities are considerable.”
- Based on the study of a number of crowd science projects – including governance-related initiatives like Patients Like Me – the authors identify a number of potential benefits in the following categories:
- Knowledge-related benefits
- Benefits from open participation
- Benefits from the open disclosure of intermediate inputs
- Motivational benefits
- The authors also identify a number of challenges:
- Organizational challenges
- Matching projects and people
- Division of labor and integration of contributions
- Project leadership
- Motivational challenges
- Sustaining contributor involvement
- Supporting a broader set of motivations
- Reconciling conflicting motivations
Kittur, Aniket, Ed H. Chi, and Bongwon Suh. “Crowdsourcing User Studies with Mechanical Turk.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 453–456. CHI ’08. New York, NY, USA: ACM, 2008. http://bit.ly/1a3Op48.
- In this paper, the authors examine “[m]icro-task markets, such as Amazon’s Mechanical Turk, [which] offer a potential paradigm for engaging a large number of users for low time and monetary costs. [They] investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks.”
- The authors conclude that in addition to providing a means for crowdsourcing small, clearly defined, often non-skill-intensive tasks, “Micro-task markets such as Amazon’s Mechanical Turk are promising platforms for conducting a variety of user study tasks, ranging from surveys to rapid prototyping to quantitative measures. Hundreds of users can be recruited for highly interactive tasks for marginal costs within a timeframe of days or even minutes. However, special care must be taken in the design of the task, especially for user measurements that are subjective or qualitative.”
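For a concrete flavor of how such a micro-task is posted programmatically today, here is a hedged sketch using the AWS boto3 MTurk client (tooling that postdates the paper). The sandbox endpoint avoids real payments; the task parameters and the question file are illustrative assumptions.

```python
# Hypothetical sketch: posting a small user-study HIT to the MTurk sandbox.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint: HITs here cost nothing and reach no real workers.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An MTurk question XML document; contents omitted here for brevity.
question_xml = open("survey_question.xml").read()

hit = mturk.create_hit(
    Title="Rate the readability of a short interface description",
    Description="A two-minute user-study micro-task",
    Keywords="survey, usability, rating",
    Reward="0.05",                     # USD, passed as a string
    MaxAssignments=100,                # distinct workers to recruit
    LifetimeInSeconds=3 * 24 * 3600,   # how long the HIT stays listed
    AssignmentDurationInSeconds=600,   # time allowed per worker
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])
```

As the authors caution, the design of the task itself (clear instructions, verifiable answers, checks against random clicking) matters more than the mechanics of posting it.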
Kittur, Aniket, Jeffrey V. Nickerson, Michael S. Bernstein, Elizabeth M. Gerber, Aaron Shaw, John Zimmerman, Matthew Lease, and John J. Horton. “The Future of Crowd Work.” In 16th ACM Conference on Computer Supported Cooperative Work (CSCW 2013), 2012. http://bit.ly/1c1GJD3.
- In this paper, the authors discuss paid crowd work, which “offers remarkable opportunities for improving productivity, social mobility, and the global economy by engaging a geographically distributed workforce to complete complex tasks on demand and at scale.” However, they caution that, “it is also possible that crowd work will fail to achieve its potential, focusing on assembly-line piecework.”
- The authors argue that several key challenges must be met to ensure that crowd work processes evolve and reach their full potential:
- Designing workflows
- Assigning tasks
- Supporting hierarchical structure
- Enabling real-time crowd work
- Supporting synchronous collaboration
- Controlling quality
Madison, Michael J. “Commons at the Intersection of Peer Production, Citizen Science, and Big Data: Galaxy Zoo.” In Convening Cultural Commons, 2013. http://bit.ly/1ih9Xzm.
- This paper explores a “case of commons governance grounded in research in modern astronomy. The case, Galaxy Zoo, is a leading example of at least three different contemporary phenomena. In the first place, Galaxy Zoo is a global citizen science project, in which volunteer non-scientists have been recruited to participate in large-scale data analysis on the Internet. In the second place, Galaxy Zoo is a highly successful example of peer production, sometimes known as crowdsourcing…In the third place, [Galaxy Zoo] is a highly visible example of data-intensive science, sometimes referred to as e-science or Big Data science, by which scientific researchers develop methods to grapple with the massive volumes of digital data now available to them via modern sensing and imaging technologies.”
- Madison concludes that the success of Galaxy Zoo has not been the result of the “character of its information resources (scientific data) and rules regarding their usage,” but rather, the fact that the “community was guided from the outset by a vision of a specific organizational solution to a specific research problem in astronomy, initiated and governed, over time, by professional astronomers in collaboration with their expanding universe of volunteers.”
Malone, Thomas W., Robert Laubacher and Chrysanthos Dellarocas. “Harnessing Crowds: Mapping the Genome of Collective Intelligence.” MIT Sloan Research Paper. February 3, 2009. https://bit.ly/2SPjxTP.
- In this article, the authors describe and map the phenomenon of collective intelligence – also referred to as “radical decentralization, crowd-sourcing, wisdom of crowds, peer production, and wikinomics” – which they broadly define as “groups of individuals doing things collectively that seem intelligent.”
- The article is derived from the authors’ work at MIT’s Center for Collective Intelligence, where they gathered nearly 250 examples of Web-enabled collective intelligence. To map the building blocks or “genes” of collective intelligence, the authors used two pairs of related questions:
- Who is performing the task? Why are they doing it?
- What is being accomplished? How is it being done?
- The authors concede that much work remains to be done “to identify all the different genes for collective intelligence, the conditions under which these genes are useful, and the constraints governing how they can be combined,” but they believe that their framework provides a useful start and gives managers and other institutional decisionmakers looking to take advantage of collective intelligence activities the ability to “systematically consider many possible combinations of answers to questions about Who, Why, What, and How.”
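As a toy illustration of the mapping exercise the authors describe, the two question pairs can be encoded as a small data structure and used to tag an initiative with its “genes.” The class and example below are hypothetical; the gene values loosely follow the paper’s vocabulary (crowd vs. hierarchy, money/love/glory, create vs. decide, collection vs. collaboration).

```python
# Hypothetical encoding of the Who/Why/What/How "genome" questions.
from dataclasses import dataclass

@dataclass
class Genome:
    who: str    # Who is performing the task? e.g. "crowd" or "hierarchy"
    why: str    # Why are they doing it? e.g. "money", "love", "glory"
    what: str   # What is being accomplished? e.g. "create" or "decide"
    how: str    # How is it being done? e.g. "collection" or "collaboration"

# Example: tagging Wikipedia-style article writing with its genes.
wikipedia_articles = Genome(who="crowd", why="love",
                            what="create", how="collaboration")
print(wikipedia_articles)
```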
Mulgan, Geoff. “True Collective Intelligence? A Sketch of a Possible New Field.” Philosophy & Technology 27, no. 1. March 2014. http://bit.ly/1p3YSdd.
- In this paper, Mulgan explores the concept of collective intelligence, a “much talked about but…very underdeveloped” field.
- With a particular focus on health knowledge, Mulgan “sets out some of the potential theoretical building blocks, suggests an experimental and research agenda, shows how it could be analysed within an organisation or business sector and points to possible intellectual barriers to progress.”
- He concludes that the “central message that comes from observing real intelligence is that intelligence has to be for something,” and that “turning this simple insight – the stuff of so many science fiction stories – into new theories, new technologies and new applications looks set to be one of the most exciting prospects of the next few years and may help give shape to a new discipline that helps us to be collectively intelligent about our own collective intelligence.”
Sauermann, Henry and Chiara Franzoni. “Participation Dynamics in Crowd-Based Knowledge Production: The Scope and Sustainability of Interest-Based Motivation.” SSRN Working Papers Series. November 28, 2013. http://bit.ly/1o6YB7f.
- In this paper, Sauermann and Franzoni explore the issue of interest-based motivation in crowd-based knowledge production – in particular the use of the crowd science platform Zooniverse – by drawing on “research in psychology to discuss important static and dynamic features of interest and deriv[ing] a number of research questions.”
- The authors find that interest-based motivation is often tied to a “particular object (e.g., task, project, topic)” not based on a “general trait of the person or a general characteristic of the object.” As such, they find that “most members of the installed base of users on the platform do not sign up for multiple projects, and most of those who try out a project do not return.”
- They conclude that “interest can be a powerful motivator of individuals’ contributions to crowd-based knowledge production…However, both the scope and sustainability of this interest appear to be rather limited for the large majority of contributors…At the same time, some individuals show a strong and more enduring interest to participate both within and across projects, and these contributors are ultimately responsible for much of what crowd science projects are able to accomplish.”
Schmitt-Sands, Catherine E. and Richard J. Smith. “Prospects for Online Crowdsourcing of Social Science Research Tasks: A Case Study Using Amazon Mechanical Turk.” SSRN Working Papers Series. January 9, 2014. http://bit.ly/1ugaYja.
- In this paper, the authors describe an experiment involving the nascent use of Amazon’s Mechanical Turk as a social science research tool. “While researchers have used crowdsourcing to find research subjects or classify texts, [they] used Mechanical Turk to conduct a policy scan of local government websites.”
- Schmitt-Sands and Smith found that “crowdsourcing worked well for conducting an online policy scan.” The microtasked workers were helpful in screening out local governments that either did not have websites or did not have the types of policies and services for which the researchers were looking. However, “if the task is complicated such that it requires ongoing supervision, then crowdsourcing is not the best solution.”
Shirky, Clay. Here Comes Everybody: The Power of Organizing Without Organizations. New York: Penguin Press, 2008. https://bit.ly/2QysNif.
- In this book, Shirky explores our current era in which, “For the first time in history, the tools for cooperating on a global scale are not solely in the hands of governments or institutions. The spread of the Internet and mobile phones are changing how people come together and get things done.”
- Discussing Wikipedia’s “spontaneous division of labor,” Shirky argues that “the process is more like creating a coral reef, the sum of millions of individual actions, than creating a car. And the key to creating those individual actions is to hand as much freedom as possible to the average user.”
Silvertown, Jonathan. “A New Dawn for Citizen Science.” Trends in Ecology & Evolution 24, no. 9 (September 2009): 467–471. http://bit.ly/1iha6CR.
- This article discusses the move from “science for the people,” a slogan adopted by activists in the 1970s, to “science by the people,” which is “a more inclusive aim, and is becoming a distinctly 21st century phenomenon.”
- Silvertown identifies three factors that are responsible for the explosion of activity in citizen science, each of which could be similarly related to the crowdsourcing of skills by governing institutions:
- “First is the existence of easily available technical tools for disseminating information about products and gathering data from the public.
- A second factor driving the growth of citizen science is the increasing realisation among professional scientists that the public represent a free source of labour, skills, computational power and even finance.
- Third, citizen science is likely to benefit from the condition that research funders such as the National Science Foundation in the USA and the Natural Environment Research Council in the UK now impose upon every grantholder to undertake project-related science outreach. This is outreach as a form of public accountability.”
Szkuta, Katarzyna, Roberto Pizzicannella, David Osimo. “Collaborative approaches to public sector innovation: A scoping study.” Telecommunications Policy. 2014. http://bit.ly/1oBg9GY.
- In this article, the authors explore cases where government collaboratively delivers online public services, with a focus on success factors and “incentives for services providers, citizens as users and public administration.”
- The authors focus on five types of collaborative governance projects:
- Services initiated by government built on government data;
- Services initiated by government and making use of citizens’ data;
- Services initiated by civil society built on open government data;
- Collaborative e-government services; and
- Services run by civil society and based on citizen data.
- The cases explored “are all designed in the way that effectively harnesses the citizens’ potential. Services susceptible to collaboration are those that require computing efforts, i.e. many non-complicated tasks (e.g. citizen science projects – Zooniverse) or citizens’ free time in general (e.g. time banks). Those services also profit from unique citizens’ skills and their propensity to share their competencies.”
Why Governments Should Adopt a Digital Engagement Strategy
Lindsay Crudele at StateTech: “Government agencies increasingly value digital engagement as a way to transform a complaint-based relationship into one of positive, proactive constituent empowerment. An engaged community is a stronger one.
Creating a culture of participatory government, as we strive to do in Boston, requires a data-driven infrastructure supported by IT solutions. Data management and analytics solutions translate a huge stream of social media data, drive conversations and creative crowdsourcing, and support transparency.
More than 50 departments across Boston host public conversations using a multichannel, multidisciplinary portfolio of accounts. We integrate these using an enterprise digital engagement management tool that connects and organizes them to break down silos and boost collaboration. Moreover, the technology provides a lens into ways to expedite workflow and improve service delivery.
A Vital Link in Times of Need
Committed and creative daily engagement builds trusting collaboration that, in turn, is vital in an inevitable crisis. As we saw during the tragic events of the 2013 Boston Marathon bombings and recent major weather events, rapid response through digital media clarifies the situation, provides information about safety and manages constituent expectations.
Boston’s enterprise model supports coordinated external communication and organized monitoring, intake and response. This provides a superadmin with access to all accounts for governance and the ability to easily amplify central messaging across a range of cultivated communities. These communities will later serve in recovery efforts.
The conversations must be seeded by a keen, creative and data-driven content strategy. For an agency to determine the correct strategy for the organization and the community it serves, a growing crop of social analytics tools can provide efficient insight into performance factors: type of content, deployment schedule, sentiment, service-based response time and team performance, to name a few. For example, in February, the city of Boston learned that tweets from our mayor with video saw 300 percent higher engagement than those without.
These insights can inform resource deployment, eliminating guesswork to more directly reach constituents by their preferred methods. Being truly present in a conversation demonstrates care and awareness and builds trust. This increased positivity can be measured through sentiment analysis, including change over time, and should be monitored for fluctuation.
During a major event, engagement managers may see activity reach new peaks in volume. IT solutions can interpret Big Data and bring a large-scale digital conversation back into perspective, identifying public safety alerts and emerging trends, needs and community influencers who can be engaged as amplifying partners.
Running Strong One Year Later
Throughout the 2014 Boston Marathon, we used three monitoring tools to deliver smart alerts to key partners across the organization:
• An engagement management tool organized conversations for account performance and monitoring.
• A brand listening tool scanned for emerging trends across the city and uncovered related conversations.
• A location-based predictive tool identified early alerts to discover potential problems along the marathon route.
With the team and tools in place, policy-based training supports the sustained growth and operation of these conversation channels. A data-driven engagement strategy unearths all of our stories, where we, as public servants and neighbors, build better communities together….”
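The analytics workflow Crudele describes (measuring which content performs best and surfacing emerging trends during an event) can be made concrete with a short sketch. The Python fragment below is purely illustrative: the post records, keyword windows, and thresholds are invented for this example, and the article does not specify the city’s actual tooling. It shows two calculations in miniature: engagement uplift for posts with video (note that “300 percent higher” means the mean is four times the baseline), and a naive spike detector for candidate alert terms.

```python
# Illustrative sketch only: hypothetical data and thresholds, not Boston's
# actual engagement tooling (the article does not specify it).
from collections import Counter
from statistics import mean

# Hypothetical post records: (has_video, engagement_count) pairs.
posts = [(True, 480), (True, 520), (False, 110), (False, 140), (False, 125)]

with_video = [n for has_video, n in posts if has_video]
without_video = [n for has_video, n in posts if not has_video]

# "300 percent higher" engagement means the mean with video is 4x the
# baseline: uplift = (mean_with - mean_without) / mean_without.
uplift = (mean(with_video) - mean(without_video)) / mean(without_video)
print(f"Engagement uplift for video posts: {uplift:.0%}")  # 300%

def emerging_terms(current, baseline, min_count=5, ratio=3.0):
    """Flag terms whose frequency in the current window has spiked
    sharply relative to a trailing baseline window."""
    now, past = Counter(current), Counter(baseline)
    return [term for term, n in now.items()
            if n >= min_count and n / max(past[term], 1) >= ratio]

# Toy keyword windows standing in for streams of incoming posts.
baseline_window = ["marathon", "route", "weather"] * 4
current_window = ["marathon"] * 4 + ["road", "closure"] * 6 + ["weather"]
print(emerging_terms(current_window, baseline_window))  # ['road', 'closure']
```

In a production setting the windows would come from a listening tool’s data feed, and flagged terms would be routed to engagement managers as alerts rather than printed.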
Closing the Feedback Loop: Can Technology Bridge the Accountability Gap?
(World Bank) Book edited by Björn-Sören Gigler and Savita Bailur: “This book is a collection of articles, written by both academics and practitioners as an evidence base for citizen engagement through information and communication technologies (ICTs). In it, the authors ask: how do ICTs empower through participation, transparency and accountability? Specifically, the authors examine two principal questions: Are technologies an accelerator to closing the “accountability gap” – the space between the supply (governments, service providers) and demand (citizens, communities, civil society organizations or CSOs) that requires bridging for open and collaborative governance? And under what conditions does this occur?

The introductory chapters lay the theoretical groundwork for understanding the potential of technologies to achieve intended goals. Chapter 1 takes us through the theoretical linkages between empowerment, participation, transparency and accountability. In Chapter 2, the authors devise an informational capability framework, relating human abilities and well-being to the use of ICTs. The chapters that follow highlight practical examples that operationalize ICT-led initiatives. Chapter 3 reviews a sample of projects targeting the goals of transparency and accountability in governance to make preliminary conclusions around what evidence exists to date, and where to go from here. In Chapter 4, the author reviews the process of interactive community mapping (ICM) with examples that support general local development and others that mitigate natural disasters. Chapter 5 examines crowdsourcing in fragile states to track aid flows, report on incitement or organize grassroots movements. In Chapter 6, the author reviews Check My School (CMS), a community monitoring project in the Philippines designed to track the provision of services in public schools. Chapter 7 introduces four key ICT-led, citizen-governance initiatives in primary health care in Karnataka, India. Chapter 8 analyzes the World Bank Institute’s use of ICTs in expanding citizen project input to understand the extent to which technologies can either engender a new “feedback loop” or ameliorate a “broken loop”. The authors’ analysis of the evidence signals ICTs as an accelerator to closing the “accountability gap”. In Chapter 9, the authors conclude with the Loch Ness model to illustrate how technologies contribute to shrinking the gap, why the gap remains open in many cases, and what can be done to help close it.

This collection is a critical addition to existing literature on ICTs and citizen engagement for two main reasons: first, it is expansive, covering initiatives that leverage a wide range of technology tools, from mobile phone reporting to crowdsourcing to interactive mapping; second, it is the first of its kind to offer concrete recommendations on how to close feedback loops.”
User motivation and knowledge sharing in idea crowdsourcing
Citizen participation and technology
ICTlogy: “The recent, rapid rise in the use of digital technology is changing relationships between citizens, organizations and public institutions, and expanding political participation. But while technology has the potential to amplify citizens’ voices, it must be accompanied by clear political goals and other factors to increase their clout.
Those are among the conclusions of a new NDI study, “Citizen Participation and Technology,” that examines the role digital technologies – such as social media, interactive websites and SMS systems – play in increasing citizen participation and fostering accountability in government. The study was driven by the recognition that better insights are needed into the relationship between new technologies, citizen participation programs and the outcomes they aim to achieve.
Using case studies from countries such as Burma, Mexico and Uganda, the study explores whether the use of technology in citizen participation programs amplifies citizen voices and increases government responsiveness and accountability, and whether the use of digital technology increases the political clout of citizens.
The research shows that while more people are using technology for political participation (such as social media for mobile organizing, interactive websites and text messaging systems that enable direct communication between constituents and elected officials, and the crowdsourcing of election day experiences), the type and quality of that participation, and therefore its impact on democratization, varies. It also suggests that, in order to leverage technology’s potential, there is a need to focus on non-technological areas such as political organizing, leadership skills and political analysis.
For example, the “2% and More Women in Politics” coalition led by Mexico’s National Institute for Women (INMUJERES) used a social media campaign and an online petition to successfully call for reforms that would allocate two percent of political party funding for women’s leadership training. Technology helped the activists reach a wider audience, but women from the different political parties who made up the coalition might not have come together without NDI’s role as a neutral convener.
The study, which was conducted with support from the National Endowment for Democracy, provides an overview of NDI’s approach to citizen participation, and examines how the integration of technologies affects its programs in order to inform the work of NDI, other democracy assistance practitioners, donors, and civic groups.
Key findings:
- Technology can be used to readily create spaces and opportunities for citizens to express their voices, but making these voices politically stronger and the spaces more meaningful is a harder challenge that is political and not technological in nature.
- Technology that was used to purposefully connect citizens’ groups and amplify their voices had more political impact.
- There is a scarcity of data on specific demographic groups’ use of, and barriers to, technology for political participation. Programs seeking to close the digital divide as a means of narrowing the political divide should be informed by more research into barriers to accessing both politics and technology.
- There is a blurring of the meaning between the technologies of open government data and the politics of open government that clouds program strategies and implementation.
- Attempts to simply crowdsource public inputs will not result in users self-organizing into politically influential groups, since citizens lack the opportunities to develop leadership, unity, and commitment around a shared vision necessary for meaningful collective action.
- Political will and the technical capacity to engage citizens in policy making, or to provide accurate data on government performance, are lacking in many emerging democracies. Technology may have changed institutions’ ability to respond to citizen demands, but its mere presence has not fundamentally changed actual government responsiveness.”