Crowdsourcing and social search


at Techcrunch: “When we think of the sharing economy, what often comes to mind are sites like Airbnb, Lyft, or Feastly — the platforms that allow us to meet people for a specific reason, whether that’s a place to stay, a ride, or a meal.
But what about sharing something much simpler than that, like answers to our questions about the world around us? Sharing knowledge with strangers can offer us insight into a place we are curious about or trying to navigate, and in a more personal, efficient way than using traditional web searches.
“Sharing an answer or response to a question, that is true sharing. There’s no financial or monetary exchange based on that. It’s the true meaning of [the word],” said Maxime Leroy, co-founder and CEO of a new app called Enquire.
Enquire is a new question-and-answer app, but it is unlike others in the space. You don’t have to log in via Facebook or Twitter, use SMS messaging like on Quest, or upload an image like you do on Jelly. None of these apps have taken off yet, which could be good or bad for Enquire just entering the space.
With Enquire, simply log in with a username and password and it will unlock the neighborhood you are in (the app only works in San Francisco, New York, and Paris right now). There are lists of answers to other questions, or you can post your own. If 200 people in a city sign up, the app will become available to them, which is an effort to make sure there is a strong community to gather answers from.
Leroy, who recently made a documentary about the sharing economy, realized there was “one tool missing for local communities” in the space, and decided to create this app.
“We want to build a more local-based network, and empower and increase trust without having people share all their identity,” he said.
Different social channels look at search in different ways, but the trend is definitely moving to more social searching or location-based searching, according to Altimeter social media analyst Rebecca Lieb. Arguably, she said, Yelp, Groupon, and even Google Maps are vertical search engines. If you want to find a nearby restaurant, pharmacy, or deal, you look to these platforms.
However, she credits Aardvark, a social search engine founded in 2007 that used instant messaging and email to get answers from your existing contacts, as one of the first in the space. Google bought the company in 2010. It shows the idea of crowdsourcing answers isn’t new, but the engines have become “appified,” she said.
“Now it’s geo-local specific,” she said. “We’re asking a lot more of those geo-local questions because of location-based immediacy [that we want].”
Think Seamless, with which you find the food nearby that most satisfies your appetite. Even Tinder and Grindr are social search engines, Lieb said. You want to meet up with the people that are closest to you, geographically….
Leroy’s challenge is to offer rewards that entice people to sign up for the app. Eventually, he would like to strengthen the networks and scale Enquire to cities and neighborhoods all over the world. Once that’s in place, people can start creating their own neighborhoods — around a school or workplace, where they hang out regularly — instead of using the existing constraints.
“I may be an expert in one area, and a newbie in another. I want to emphasize the activity and content from users to give them credit to other users and build that trust,” he said.
Usually, our first instinct is to open Yelp to find the best sushi restaurant or Google to search for the closest concert venue, and it will probably stay that way for some time. But the idea that the opinions and insights of other human beings, even strangers, are becoming much more valuable because of the internet is not far-fetched.
Admit it: haven’t you had a fleeting thought of starting a Kickstarter campaign for an idea? Looked for a cheaper place to stay on Airbnb than that hotel you normally book in New York? Or considered financing someone’s business idea across the world using Kiva? If so, then you’ve engaged in social search.
Suddenly, crowdsourcing answers for the things that pique your interest on your morning walk may not seem so strange after all.”

Selected Readings on Crowdsourcing Tasks and Peer Production


The Living Library’s Selected Readings series seeks to build a knowledge base on innovative approaches for improving the effectiveness and legitimacy of governance. This curated and annotated collection of recommended works on the topic of crowdsourcing was originally published in 2014.

Technological advances are creating a new paradigm by which institutions and organizations are increasingly outsourcing tasks to an open community, allocating specific needs to a flexible, willing and dispersed workforce. “Microtasking” platforms like Amazon’s Mechanical Turk are a burgeoning source of income for individuals who contribute their time, skills and knowledge on a per-task basis. In parallel, citizen science projects – task-based initiatives in which citizens of any background can help contribute to scientific research – like Galaxy Zoo are demonstrating the ability of lay and expert citizens alike to make small, useful contributions to aid large, complex undertakings. As governing institutions seek to do more with less, looking to the success of citizen science and microtasking initiatives could provide a blueprint for engaging citizens to help accomplish difficult, time-consuming objectives at little cost. Moreover, the incredible success of peer-production projects – best exemplified by Wikipedia – instills optimism regarding the public’s willingness and ability to complete relatively small tasks that feed into a greater whole and benefit the public good. You can learn more about this new wave of “collective intelligence” by following the MIT Center for Collective Intelligence and their annual Collective Intelligence Conference.

Annotated Selected Reading List (in alphabetical order)

Benkler, Yochai. The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press, 2006. http://bit.ly/1aaU7Yb.

  • In this book, Benkler “describes how patterns of information, knowledge, and cultural production are changing – and shows that the way information and knowledge are made available can either limit or enlarge the ways people can create and express themselves.”
  • In his discussion on Wikipedia – one of many paradigmatic examples of people collaborating without financial reward – he calls attention to the notable ongoing cooperation taking place among a diversity of individuals. He argues that, “The important point is that Wikipedia requires not only mechanical cooperation among people, but a commitment to a particular style of writing and describing concepts that is far from intuitive or natural to people. It requires self-discipline. It enforces the behavior it requires primarily through appeal to the common enterprise that the participants are engaged in…”

Brabham, Daren C. Using Crowdsourcing in Government. Collaborating Across Boundaries Series. IBM Center for The Business of Government, 2013. http://bit.ly/17gzBTA.

  • In this report, Brabham categorizes government crowdsourcing cases into a “four-part, problem-based typology, encouraging government leaders and public administrators to consider these open problem-solving techniques as a way to engage the public and tackle difficult policy and administrative tasks more effectively and efficiently using online communities.”
  • The proposed four-part typology describes the following types of crowdsourcing in government:
    • Knowledge Discovery and Management
    • Distributed Human Intelligence Tasking
    • Broadcast Search
    • Peer-Vetted Creative Production
  • In his discussion on Distributed Human Intelligence Tasking, Brabham argues that Amazon’s Mechanical Turk and other microtasking platforms could be useful in a number of governance scenarios, including:
    • Governments and scholars transcribing historical document scans
    • Public health departments translating health campaign materials into foreign languages to benefit constituents who do not speak the native language
    • Governments translating tax documents, school enrollment and immunization brochures, and other important materials into minority languages
    • Helping governments predict citizens’ behavior, “such as for predicting their use of public transit or other services or for predicting behaviors that could inform public health practitioners and environmental policy makers”

Boudreau, Kevin J., Patrick Gaule, Karim Lakhani, Christoph Riedl, and Anita Williams Woolley. “From Crowds to Collaborators: Initiating Effort & Catalyzing Interactions Among Online Creative Workers.” Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 14-060. January 23, 2014. https://bit.ly/2QVmGUu.

  • In this working paper, the authors explore the “conditions necessary for eliciting effort from those affecting the quality of interdependent teamwork” and “consider the role of incentives versus social processes in catalyzing collaboration.”
  • The paper’s findings are based on an experiment involving 260 individuals randomly assigned to 52 teams working toward solutions to a complex problem.
  • The authors determined that the level of effort in such collaborative undertakings is sensitive to cash incentives. However, collaboration within teams was driven more by the active participation of teammates than by any monetary reward.

Franzoni, Chiara, and Henry Sauermann. “Crowd Science: The Organization of Scientific Research in Open Collaborative Projects.” Research Policy (August 14, 2013). http://bit.ly/HihFyj.

  • In this paper, the authors explore the concept of crowd science, which they define based on two important features: “participation in a project is open to a wide base of potential contributors, and intermediate inputs such as data or problem solving algorithms are made openly available.” The rationale for their study and conceptual framework is the “growing attention from the scientific community, but also policy makers, funding agencies and managers who seek to evaluate its potential benefits and challenges. Based on the experiences of early crowd science projects, the opportunities are considerable.”
  • Based on the study of a number of crowd science projects – including governance-related initiatives like Patients Like Me – the authors identify a number of potential benefits in the following categories:
    • Knowledge-related benefits
    • Benefits from open participation
    • Benefits from the open disclosure of intermediate inputs
    • Motivational benefits
  • The authors also identify a number of challenges:
    • Organizational challenges
    • Matching projects and people
    • Division of labor and integration of contributions
    • Project leadership
    • Motivational challenges
    • Sustaining contributor involvement
    • Supporting a broader set of motivations
    • Reconciling conflicting motivations

Kittur, Aniket, Ed H. Chi, and Bongwon Suh. “Crowdsourcing User Studies with Mechanical Turk.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 453–456. CHI ’08. New York, NY, USA: ACM, 2008. http://bit.ly/1a3Op48.

  • In this paper, the authors examine “[m]icro-task markets, such as Amazon’s Mechanical Turk, [which] offer a potential paradigm for engaging a large number of users for low time and monetary costs. [They] investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks.”
  • The authors conclude that in addition to providing a means for crowdsourcing small, clearly defined, often non-skill-intensive tasks, “Micro-task markets such as Amazon’s Mechanical Turk are promising platforms for conducting a variety of user study tasks, ranging from surveys to rapid prototyping to quantitative measures. Hundreds of users can be recruited for highly interactive tasks for marginal costs within a timeframe of days or even minutes. However, special care must be taken in the design of the task, especially for user measurements that are subjective or qualitative.”

Kittur, Aniket, Jeffrey V. Nickerson, Michael S. Bernstein, Elizabeth M. Gerber, Aaron Shaw, John Zimmerman, Matthew Lease, and John J. Horton. “The Future of Crowd Work.” In 16th ACM Conference on Computer Supported Cooperative Work (CSCW 2013), 2013. http://bit.ly/1c1GJD3.

  • In this paper, the authors discuss paid crowd work, which “offers remarkable opportunities for improving productivity, social mobility, and the global economy by engaging a geographically distributed workforce to complete complex tasks on demand and at scale.” However, they caution that, “it is also possible that crowd work will fail to achieve its potential, focusing on assembly-line piecework.”
  • The authors argue that a number of key challenges must be met to ensure that crowd work processes evolve and reach their full potential:
    • Designing workflows
    • Assigning tasks
    • Supporting hierarchical structure
    • Enabling real-time crowd work
    • Supporting synchronous collaboration
    • Controlling quality

Madison, Michael J. “Commons at the Intersection of Peer Production, Citizen Science, and Big Data: Galaxy Zoo.” In Convening Cultural Commons, 2013. http://bit.ly/1ih9Xzm.

  • This paper explores a “case of commons governance grounded in research in modern astronomy. The case, Galaxy Zoo, is a leading example of at least three different contemporary phenomena. In the first place, Galaxy Zoo is a global citizen science project, in which volunteer non-scientists have been recruited to participate in large-scale data analysis on the Internet. In the second place, Galaxy Zoo is a highly successful example of peer production, sometimes known as crowdsourcing…In the third place, [Galaxy Zoo] is a highly visible example of data-intensive science, sometimes referred to as e-science or Big Data science, by which scientific researchers develop methods to grapple with the massive volumes of digital data now available to them via modern sensing and imaging technologies.”
  • Madison concludes that the success of Galaxy Zoo has not been the result of the “character of its information resources (scientific data) and rules regarding their usage,” but rather, the fact that the “community was guided from the outset by a vision of a specific organizational solution to a specific research problem in astronomy, initiated and governed, over time, by professional astronomers in collaboration with their expanding universe of volunteers.”

Malone, Thomas W., Robert Laubacher and Chrysanthos Dellarocas. “Harnessing Crowds: Mapping the Genome of Collective Intelligence.” MIT Sloan Research Paper. February 3, 2009. https://bit.ly/2SPjxTP.

  • In this article, the authors describe and map the phenomenon of collective intelligence – also referred to as “radical decentralization, crowd-sourcing, wisdom of crowds, peer production, and wikinomics” – which they broadly define as “groups of individuals doing things collectively that seem intelligent.”
  • The article is derived from the authors’ work at MIT’s Center for Collective Intelligence, where they gathered nearly 250 examples of Web-enabled collective intelligence. To map the building blocks or “genes” of collective intelligence, the authors used two pairs of related questions:
    • Who is performing the task? Why are they doing it?
    • What is being accomplished? How is it being done?
  • The authors concede that much work remains to be done “to identify all the different genes for collective intelligence, the conditions under which these genes are useful, and the constraints governing how they can be combined,” but they believe that their framework provides a useful start and gives managers and other institutional decisionmakers looking to take advantage of collective intelligence activities the ability to “systematically consider many possible combinations of answers to questions about Who, Why, What, and How.”
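Malone et al.’s framework above is combinatorial: every pairing of answers to the Who, Why, What, and How questions is a candidate “genome” a manager can consider. The sketch below enumerates those combinations; the example gene values are illustrative shorthand drawn from the article’s framing, not the authors’ complete taxonomy.

```python
from itertools import product

# Illustrative (not exhaustive) answer sets for the four questions.
genes = {
    "who": ["crowd", "hierarchy"],
    "why": ["money", "love", "glory"],
    "what": ["create", "decide"],
    "how": ["collection", "collaboration", "voting", "consensus"],
}

def enumerate_genomes(genes):
    """Yield every combination of answers to Who/Why/What/How."""
    keys = list(genes)
    for values in product(*(genes[k] for k in keys)):
        yield dict(zip(keys, values))

genomes = list(enumerate_genomes(genes))
# 2 * 3 * 2 * 4 = 48 candidate "genomes" to consider systematically.
print(len(genomes))
```

Even with these toy answer sets, 48 distinct designs fall out, which is the point of the framework: it turns “how should we harness the crowd?” into a bounded design space that can be searched systematically.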

Mulgan, Geoff. “True Collective Intelligence? A Sketch of a Possible New Field.” Philosophy & Technology 27, no. 1. March 2014. http://bit.ly/1p3YSdd.

  • In this paper, Mulgan explores the concept of collective intelligence, a “much talked about but…very underdeveloped” field.
  • With a particular focus on health knowledge, Mulgan “sets out some of the potential theoretical building blocks, suggests an experimental and research agenda, shows how it could be analysed within an organisation or business sector and points to possible intellectual barriers to progress.”
  • He concludes that the “central message that comes from observing real intelligence is that intelligence has to be for something,” and that “turning this simple insight – the stuff of so many science fiction stories – into new theories, new technologies and new applications looks set to be one of the most exciting prospects of the next few years and may help give shape to a new discipline that helps us to be collectively intelligent about our own collective intelligence.”

Sauermann, Henry and Chiara Franzoni. “Participation Dynamics in Crowd-Based Knowledge Production: The Scope and Sustainability of Interest-Based Motivation.” SSRN Working Papers Series. November 28, 2013. http://bit.ly/1o6YB7f.

  • In this paper, Sauermann and Franzoni explore the issue of interest-based motivation in crowd-based knowledge production – in particular the use of the crowd science platform Zooniverse – by drawing on “research in psychology to discuss important static and dynamic features of interest and deriv[ing] a number of research questions.”
  • The authors find that interest-based motivation is often tied to a “particular object (e.g., task, project, topic)” rather than being a “general trait of the person or a general characteristic of the object.” As such, they find that “most members of the installed base of users on the platform do not sign up for multiple projects, and most of those who try out a project do not return.”
  • They conclude that “interest can be a powerful motivator of individuals’ contributions to crowd-based knowledge production…However, both the scope and sustainability of this interest appear to be rather limited for the large majority of contributors…At the same time, some individuals show a strong and more enduring interest to participate both within and across projects, and these contributors are ultimately responsible for much of what crowd science projects are able to accomplish.”

Schmitt-Sands, Catherine E. and Richard J. Smith. “Prospects for Online Crowdsourcing of Social Science Research Tasks: A Case Study Using Amazon Mechanical Turk.” SSRN Working Papers Series. January 9, 2014. http://bit.ly/1ugaYja.

  • In this paper, the authors describe an experiment involving the nascent use of Amazon’s Mechanical Turk as a social science research tool. “While researchers have used crowdsourcing to find research subjects or classify texts, [they] used Mechanical Turk to conduct a policy scan of local government websites.”
  • Schmitt-Sands and Smith found that “crowdsourcing worked well for conducting an online policy scan.” The microtask workers were helpful in screening out local governments that either did not have websites or did not have the types of policies and services for which the researchers were looking. However, “if the task is complicated such that it requires ongoing supervision, then crowdsourcing is not the best solution.”
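Screening tasks like this are typically assigned redundantly to several workers and the answers aggregated. A minimal sketch of that aggregation step, using majority vote over hypothetical yes/no judgments (the town names and votes are invented for illustration, not the authors’ data or pipeline):

```python
from collections import Counter

def majority_vote(judgments):
    """Return the most common label among workers' judgments.

    Ties fall back to Counter's insertion order, so in practice an
    odd number of redundant assignments per task is preferred.
    """
    counts = Counter(judgments)
    label, _ = counts.most_common(1)[0]
    return label

# Hypothetical screening results: does this local government have a website?
tasks = {
    "springfield": ["yes", "yes", "no"],
    "shelbyville": ["no", "no", "no"],
    "ogdenville": ["yes", "no", "yes"],
}

screened = {town: majority_vote(votes) for town, votes in tasks.items()}
print(screened)
```

Redundancy plus simple aggregation is what makes loosely supervised microtasking viable; once a task needs ongoing supervision rather than per-item vote pooling, this model breaks down, which matches the authors’ caveat.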

Shirky, Clay. Here Comes Everybody: The Power of Organizing Without Organizations. New York: Penguin Press, 2008. https://bit.ly/2QysNif.

  • In this book, Shirky explores our current era in which, “For the first time in history, the tools for cooperating on a global scale are not solely in the hands of governments or institutions. The spread of the Internet and mobile phones are changing how people come together and get things done.”
  • Discussing Wikipedia’s “spontaneous division of labor,” Shirky argues that “the process is more like creating a coral reef, the sum of millions of individual actions, than creating a car. And the key to creating those individual actions is to hand as much freedom as possible to the average user.”

Silvertown, Jonathan. “A New Dawn for Citizen Science.” Trends in Ecology & Evolution 24, no. 9 (September 2009): 467–471. http://bit.ly/1iha6CR.

  • This article discusses the move from “Science for the people,” a slogan adopted by activists in the 1970s, to “Science by the people,” which is “a more inclusive aim, and is becoming a distinctly 21st century phenomenon.”
  • Silvertown identifies three factors that are responsible for the explosion of activity in citizen science, each of which could be similarly related to the crowdsourcing of skills by governing institutions:
    • “First is the existence of easily available technical tools for disseminating information about products and gathering data from the public.
    • A second factor driving the growth of citizen science is the increasing realisation among professional scientists that the public represent a free source of labour, skills, computational power and even finance.
    • Third, citizen science is likely to benefit from the condition that research funders such as the National Science Foundation in the USA and the Natural Environment Research Council in the UK now impose upon every grantholder to undertake project-related science outreach. This is outreach as a form of public accountability.”

Szkuta, Katarzyna, Roberto Pizzicannella, David Osimo. “Collaborative approaches to public sector innovation: A scoping study.” Telecommunications Policy. 2014. http://bit.ly/1oBg9GY.

  • In this article, the authors explore cases where government collaboratively delivers online public services, with a focus on success factors and “incentives for services providers, citizens as users and public administration.”
  • The authors focus on the following types of collaborative governance projects:
    • Services initiated by government built on government data;
    • Services initiated by government and making use of citizens’ data;
    • Services initiated by civil society built on open government data;
    • Collaborative e-government services; and
    • Services run by civil society and based on citizen data.
  • The cases explored “are all designed in the way that effectively harnesses the citizens’ potential. Services susceptible to collaboration are those that require computing efforts, i.e. many non-complicated tasks (e.g. citizen science projects – Zooniverse) or citizens’ free time in general (e.g. time banks). Those services also profit from unique citizens’ skills and their propensity to share their competencies.”

Why Governments Should Adopt a Digital Engagement Strategy


Lindsay Crudele at StateTech: “Government agencies increasingly value digital engagement as a way to transform a complaint-based relationship into one of positive, proactive constituent empowerment. An engaged community is a stronger one.
Creating a culture of participatory government, as we strive to do in Boston, requires a data-driven infrastructure supported by IT solutions. Data management and analytics solutions translate a huge stream of social media data, drive conversations and creative crowdsourcing, and support transparency.
More than 50 departments across Boston host public conversations using a multichannel, multidisciplinary portfolio of accounts. We integrate these using an enterprise digital engagement management tool that connects and organizes them to break down silos and boost collaboration. Moreover, the technology provides a lens into ways to expedite workflow and improve service delivery.

A Vital Link in Times of Need

Committed and creative daily engagement builds trusting collaboration that, in turn, is vital in an inevitable crisis. As we saw during the tragic events of the 2013 Boston Marathon bombings and recent major weather events, rapid response through digital media clarifies the situation, provides information about safety and manages constituent expectations.
Boston’s enterprise model supports coordinated external communication and organized monitoring, intake and response. This provides a superadmin with access to all accounts for governance and the ability to easily amplify central messaging across a range of cultivated communities. These communities will later serve in recovery efforts.
The conversations must be seeded by a keen, creative and data-driven content strategy. For an agency to determine the correct strategy for the organization and the community it serves, a growing crop of social analytics tools can provide efficient insight into performance factors: type of content, deployment schedule, sentiment, service-based response time and team performance, to name a few. For example, in February, the city of Boston learned that tweets from our mayor with video saw 300 percent higher engagement than those without.
These insights can inform resource deployment, eliminating guesswork to more directly reach constituents by their preferred methods. Being truly present in a conversation demonstrates care and awareness and builds trust. This increased positivity can be measured through sentiment analysis, including change over time, and should be monitored for fluctuation.
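The kind of comparison behind Boston’s video-versus-no-video finding is a simple engagement-lift calculation. A toy sketch with invented per-post numbers (not Boston’s actual data):

```python
# Hypothetical per-post engagement counts (likes + retweets + replies).
posts = [
    {"has_video": True, "engagement": 120},
    {"has_video": True, "engagement": 80},
    {"has_video": False, "engagement": 30},
    {"has_video": False, "engagement": 20},
]

def mean_engagement(posts, has_video):
    """Average engagement for posts with or without video."""
    vals = [p["engagement"] for p in posts if p["has_video"] == has_video]
    return sum(vals) / len(vals)

with_video = mean_engagement(posts, True)
without_video = mean_engagement(posts, False)

# Percentage lift of video posts over non-video posts.
lift = (with_video - without_video) / without_video * 100
print(f"{lift:.0f}% higher engagement")
```

In practice a social analytics platform computes this across content type, posting schedule, and sentiment at once, but the underlying arithmetic that informs resource deployment is no more complicated than this.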
During a major event, engagement managers may see activity reach new peaks in volume. IT solutions can interpret Big Data and bring a large-scale digital conversation back into perspective, identifying public safety alerts and emerging trends, needs and community influencers who can be engaged as amplifying partners.

Running Strong One Year Later

Throughout the 2014 Boston Marathon, we used three monitoring tools to deliver smart alerts to key partners across the organization:
• An engagement management tool organized conversations for account performance and monitoring.
• A brand listening tool scanned for emerging trends across the city and uncovered related conversations.
• A location-based predictive tool identified early alerts to discover potential problems along the marathon route.
With the team and tools in place, policy-based training supports the sustained growth and operation of these conversation channels. A data-driven engagement strategy unearths all of our stories, where we, as public servants and neighbors, build better communities together….”

Closing the Feedback Loop: Can Technology Bridge the Accountability Gap?


(WorldBank) Book edited by Björn-Sören Gigler and Savita Bailur:  “This book is a collection of articles, written by both academics and practitioners as an evidence base for citizen engagement through information and communication technologies (ICTs). In it, the authors ask: how do ICTs empower through participation, transparency and accountability? Specifically, the authors examine two principal questions: Are technologies an accelerator to closing the “accountability gap” – the space between the supply (governments, service providers) and demand (citizens, communities, civil society organizations or CSOs) that requires bridging for open and collaborative governance? And under what conditions does this occur? The introductory chapters lay the theoretical groundwork for understanding the potential of technologies to achieving intended goals. Chapter 1 takes us through the theoretical linkages between empowerment, participation, transparency and accountability. In Chapter 2, the authors devise an informational capability framework, relating human abilities and well-being to the use of ICTs. The chapters to follow highlight practical examples that operationalize ICT-led initiatives. Chapter 3 reviews a sample of projects targeting the goals of transparency and accountability in governance to make preliminary conclusions around what evidence exists to date, and where to go from here. In chapter 4, the author reviews the process of interactive community mapping (ICM) with examples that support general local development and others that mitigate natural disasters. Chapter 5 examines crowdsourcing in fragile states to track aid flows, report on incitement or organize grassroots movements. In chapter 6, the author reviews Check My School (CMS), a community monitoring project in the Philippines designed to track the provision of services in public schools. Chapter 7 introduces four key ICT-led, citizen-governance initiatives in primary health care in Karnataka, India. 
Chapter 8 analyzes the World Bank Institute’s use of ICTs in expanding citizen project input to understand the extent to which technologies can either engender a new “feedback loop” or ameliorate a “broken loop”. The authors’ analysis of the evidence signals ICTs as an accelerator to closing the “accountability gap”. In Chapter 9, the authors conclude with the Loch Ness model to illustrate how technologies contribute to shrinking the gap, why the gap remains open in many cases, and what can be done to help close it. This collection is a critical addition to existing literature on ICTs and citizen engagement for two main reasons: first, it is expansive, covering initiatives that leverage a wide range of technology tools, from mobile phone reporting to crowdsourcing to interactive mapping; second, it is the first of its kind to offer concrete recommendations on how to close feedback loops.”

User motivation and knowledge sharing in idea crowdsourcing


Miia Kosonen et al., Int. J. Innov. Mgt.: “We investigate how the propensity to trust, intrinsic motivation, and extrinsic motivation drive the intentions of individuals to share knowledge in idea crowdsourcing. Building on motivation theories and Uses & Gratifications (U&G) approach, we conducted a web-based survey within IdeasProject, an open innovation and brainstorming community dedicated to harvesting ideas. Based on a sample of 244 users, our research shows that the key driver of knowledge-sharing intentions is made up of two intrinsic motivations — social benefits and learning benefits. We also found that recognition from the host company affects intention to share knowledge. From the management point of view, the relative importance of social integrative benefits calls for better facilities available for users to be able to help each other in formulating and developing their ideas. Learning and creativity could be inspired by feedback from professionals and experts, while providing insight into technological advances and features dealing with the current tasks.”

Citizen participation and technology


ICTlogy: “The recent, rapid rise in the use of digital technology is changing relationships between citizens, organizations and public institutions, and expanding political participation. But while technology has the potential to amplify citizens’ voices, it must be accompanied by clear political goals and other factors to increase their clout.
Those are among the conclusions of a new NDI study, “Citizen Participation and Technology,” that examines the role digital technologies – such as social media, interactive websites and SMS systems – play in increasing citizen participation and fostering accountability in government. The study was driven by the recognition that better insights are needed into the relationship between new technologies, citizen participation programs and the outcomes they aim to achieve.
Using case studies from countries such as Burma, Mexico and Uganda, the study explores whether the use of technology in citizen participation programs amplifies citizen voices and increases government responsiveness and accountability, and whether the use of digital technology increases the political clout of citizens.
The research shows that while more people are using technology — such as social media for mobile organizing, interactive websites and text messaging systems that enable direct communication between constituents and elected officials, or crowdsourcing of election day experiences — the type and quality of their political participation, and therefore its impact on democratization, varies. It also suggests that, in order to leverage technology’s potential, there is a need to focus on non-technological areas such as political organizing, leadership skills and political analysis.
For example, the “2% and More Women in Politics” coalition led by Mexico’s National Institute for Women (INMUJERES) used a social media campaign and an online petition to call successfully for reforms that would allocate two percent of political party funding for women’s leadership training. Technology helped the activists reach a wider audience, but women from the different political parties who made up the coalition might not have come together without NDI’s role as a neutral convener.
The study, which was conducted with support from the National Endowment for Democracy, provides an overview of NDI’s approach to citizen participation, and examines how the integration of technologies affects its programs in order to inform the work of NDI, other democracy assistance practitioners, donors, and civic groups.

Key findings:

  1. Technology can be used to readily create spaces and opportunities for citizens to express their voices, but making these voices politically stronger and the spaces more meaningful is a harder challenge that is political and not technological in nature.
  2. Technology that was used to purposefully connect citizens’ groups and amplify their voices had more political impact.
  3. There is a scarcity of data on specific demographic groups’ use of, and barriers to technology for political participation. Programs seeking to close the digital divide as an instrument of narrowing the political divide should be informed by more research into barriers to access to both politics and technology.
  4. There is a blurring of the meaning between the technologies of open government data and the politics of open government that clouds program strategies and implementation.
  5. Attempts to simply crowdsource public inputs will not result in users self-organizing into politically influential groups, since citizens lack the opportunities to develop leadership, unity, and commitment around a shared vision necessary for meaningful collective action.
  6. Political will and the technical capacity to engage citizens in policy making, or to provide accurate data on government performance, are lacking in many emerging democracies. Technology may have changed institutions’ ability to respond to citizen demands, but its mere presence has not fundamentally changed actual government responsiveness.”

Crowdsourcing for public safety


Paper presented by A. Goncalves, C. Silva, P. Morreale, and J. Bonafide at the Systems Conference (SysCon), 2014: “With advances in mobile technology, the ability to get real-time, geographically accurate data, including photos and videos, becomes integrated into daily activities. Businesses use this technology edge to stay ahead of their competitors. Social media has made photo and video sharing a widely accepted and adopted behavior. This real-time data and information exchange, crowdsourcing, can be used to help first responders and personnel in emergency situations caused by extreme weather such as earthquakes, hurricanes, floods, and snow storms. Using smartphones, civilians can contribute data and images to the recovery process and make it more efficient, which can ultimately save lives and decrease the economic impact caused by extreme weather conditions.”
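The citizen-contributed, geotagged reports the paper describes can be modeled very simply. A minimal sketch (all names and fields are illustrative, not taken from the paper) that filters incoming smartphone reports to those near an incident, so responders see only relevant submissions, newest first:

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class CitizenReport:
    lat: float          # latitude, degrees
    lon: float          # longitude, degrees
    timestamp: float    # unix seconds
    media_url: str      # photo/video uploaded from a smartphone

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def reports_near(reports, lat, lon, radius_km):
    """Return reports within radius_km of an incident, newest first."""
    nearby = [r for r in reports if haversine_km(r.lat, r.lon, lat, lon) <= radius_km]
    return sorted(nearby, key=lambda r: r.timestamp, reverse=True)
```

In practice a real system would add deduplication, moderation, and verification of submissions before routing them to first responders.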

Crowdsourcing platform for museums


Thesis by Kræn Vesterberg Hansen: “This thesis addresses a strategic challenge at the National Museum of Denmark: engaging external people interested in contributing information about its collection of more than half a million coins and medals. This approach of getting outsiders to help complete many small tasks is popularly known as crowdsourcing. It entails the transcription of handwritten protocols and the establishment of references between entries in the protocols and photographs of coins; the coins also reference both structured and unstructured metadata.
Does a digital platform for crowd engagement, in the museum’s context, exist? And how would such a platform integrate with the existing infrastructure of the museum? The report considers MediaWiki, Amazon’s Mechanical Turk, and Zooniverse’s Scribe transcription interface, and finds that MediaWiki fits approximately 70% of the requirements.
Existing cases of successful crowdsourcing projects, national as well as international, are mentioned, and the solution builds upon the APIs of existing infrastructure components (such as the existing collection management system GenReg Mønt and the Canto Cumulus digital asset management system) in a modular and reusable architecture.
The report approaches the challenge in a three-part process, greatly inspired by the “reuse-oriented software engineering” process model proposed by Ian Sommerville, Professor of Software Engineering at the University of St Andrews.”
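The thesis’s platform comparison (“MediaWiki fits approximately 70% of the requirements”) amounts to a simple coverage score over a requirements checklist. A sketch of that calculation — the requirement names and feature sets below are hypothetical stand-ins, not taken from the thesis:

```python
# Score each candidate platform by the fraction of requirements it satisfies.
REQUIREMENTS = {
    "transcription_ui", "image_linking", "structured_metadata",
    "user_accounts", "api_integration", "revision_history",
    "task_assignment", "export_to_cms", "open_source", "multilingual",
}

CANDIDATES = {
    "MediaWiki": {"transcription_ui", "user_accounts", "api_integration",
                  "revision_history", "open_source", "multilingual",
                  "structured_metadata"},
    "Mechanical Turk": {"task_assignment", "user_accounts", "api_integration"},
}

def coverage(supported, required=REQUIREMENTS):
    """Fraction of the requirement set a platform covers."""
    return len(supported & required) / len(required)

for name, feats in CANDIDATES.items():
    print(f"{name}: {coverage(feats):.0%}")
```

With the illustrative sets above, the loop reports MediaWiki at 70% coverage, mirroring the figure quoted in the abstract.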

The Golden Record 2.0 Will Crowdsource A Selfie of Human Culture


Helen Thompson in the Smithsonian: “In 1977, NASA launched the Voyager 1 and 2 spacecraft, each carrying a “Golden Record” — a gold-plated phonograph record containing analogue images, greetings, and music from Earth. It was meant to be a snapshot of humanity. On the small chance that an alien lifeform encountered Voyager, it could get a sense of who made it.
“This record represents our hope and our determination and our goodwill in a vast and awesome universe,” said Carl Sagan, who led the six-member team that created the Golden Record.
No spacecraft has left our solar system since Voyager, but in the next few years, NASA’s New Horizons probe, launched in 2006, will reach Pluto and then pass into the far edges of the solar system and beyond. A new project aims to create a “Golden Record 2.0”. Just like the original record, this new version will represent a sampling of human culture for NASA to transmit to New Horizons just before it soars off into the rest of the universe.
The genesis of the project came from Jon Lomberg, a scientific artist and the designer of the original Golden Record. Over the last year he has recruited experts in a variety of fields to back the project. To convince NASA of public support, he launched a website and put together a petition, signed by over 10,000 people in 140 countries. When Lomberg presented the idea to NASA earlier this year, the agency was receptive and will be releasing a statement with further details on the project on August 25. In the meantime, he and his colleague Albert Yu-Min Lin, a research scientist at the University of California, San Diego, gave a preview of their plan at Smithsonian’s Future Is Here event in Washington, DC, today.
New Horizons will likely only have a small amount of memory space available for the content, so what should make the cut? Photos of landscapes and animals (including humans), sound bites of great speakers, popular music, or even videos could end up on the digital record. Lin is developing a platform where people will be able to explore and critique the submissions on the site. “We wanted to make this a democratic discussion,” says Lin. “How do we make this not a conversation about cute cats and Justin Bieber?” One can only guess what aliens might make of the Earth’s YouTube video fodder.
What sets this new effort apart from the original is that the content will be crowdsourced. “We thought this time why not let the people of Earth speak for themselves,” says Lomberg. “Why not figure out a way to crowdsource this message so that people would be able to decide what they wanted to say?” Lomberg has teamed up with Lin, who specializes in crowdsourcing technology, to create a platform where people from all over the world can submit content to be included on the record…”

A New Approach to Research


Clayton M. Christensen and Derek van Bever in Harvard Business Review: “In writing “The Capitalist’s Dilemma,” we asked students and alumni of our Harvard Business School course “Building and Sustaining a Successful Enterprise” to collaborate with us. Presented here is a map of that collaboration—how hundreds of contributors helped shape the seven core ideas in the article. The crowdsourcing of this article took place on the OI Engine platform (as used on OpenIDEO), which alumnus Tom Hulme helped develop, and was made possible through the leadership of the HBS Digital Initiative under the direction of Karim Lakhani and Matt Tucker. This effort represents the first attempt at creating a community of lifelong collaboration with HBS alumni.
The map illustrates how ideas build and flow, merge, and then diverge again over time. Diverse paths are taken to arrive at the final ideas in the article. It also shows how metrics we might presume are meaningful—comments on a post, for example—don’t always correlate with actual influence. We felt the approach we were taking to writing the article was different and disruptive. This visualization confirmed that for us, and helped us learn about crowdsourcing ideas, too.”