Towards an information systems perspective and research agenda on crowdsourcing for innovation
New paper by A. Majchrzak and A. Malhotra in The Journal of Strategic Information Systems: “Recent years have seen an increasing emphasis on open innovation by firms to keep pace with the growing intricacy of products and services and the ever-changing needs of the markets. Much has been written about open innovation and its manifestation in the form of crowdsourcing. Unfortunately, most management research has taken the information system (IS) as a given. In this essay, we contend that IS is not just an enabler but rather can be a shaper that optimizes open innovation in general and crowdsourcing in particular. This essay is intended to frame crowdsourcing for innovation in a manner that makes more apparent the issues that require research from an IS perspective. In doing so, we delineate the contributions that the IS field can make to the field of crowdsourcing.
- Reviews participation architectures supporting current crowdsourcing, finding them inadequate for innovation development by the crowd.
- Identifies three tensions that explain why a participation architecture for crowdsourced innovation is difficult to achieve.
- Identifies affordances for participation architectures that may help to manage these tensions.
- Uses the tensions and possible affordances to identify research questions for IS scholars.”
Commons at the Intersection of Peer Production, Citizen Science, and Big Data: Galaxy Zoo
Are Some Tweets More Interesting Than Others? #HardQuestion
New paper by Microsoft Research (Omar Alonso, Catherine C. Marshall, and Marc Najork): “Twitter has evolved into a significant communication nexus, coupling personal and highly contextual utterances with local news, memes, celebrity gossip, headlines, and other microblogging subgenres. If we take Twitter as a large and varied dynamic collection, how can we predict which tweets will be interesting to a broad audience in advance of lagging social indicators of interest such as retweets? The telegraphic form of tweets, coupled with the subjective notion of interestingness, makes it difficult for human judges to agree on which tweets are indeed interesting.
In this paper, we address two questions: Can we develop a reliable strategy that results in high-quality labels for a collection of tweets, and can we use this labeled collection to predict a tweet’s interestingness?
To answer the first question, we performed a series of studies using crowdsourcing to reach a diverse set of workers who served as a proxy for an audience with variable interests and perspectives. This method allowed us to explore different labeling strategies, including varying the judges, the labels they applied, the datasets, and other aspects of the task.
To address the second question, we used crowdsourcing to assemble a set of tweets rated as interesting or not; we scored these tweets using textual and contextual features; and we used these scores as inputs to a binary classifier. We were able to achieve moderate agreement (kappa = 0.52) between the best classifier and the human assessments, a figure which reflects the challenges of the judgment task.”
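The paper’s exact features and model aren’t given in this excerpt, so the sketch below is only illustrative: it collapses crowd judgments into a majority-vote label, scores tweets with a few toy textual and contextual features, trains a logistic regression classifier (a stand-in for the paper’s binary classifier), and reports Cohen’s kappa against the held-out crowd labels.

```python
# A minimal sketch of the pipeline described above; the feature set, the
# majority-vote aggregation, and the logistic regression model are all
# illustrative assumptions, not the authors' actual method.
from collections import Counter

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

def majority_label(judgments):
    """Collapse several crowd judgments ("interesting"/"not") into one label."""
    return Counter(judgments).most_common(1)[0][0]

def features(tweet):
    """Toy textual and contextual features for a tweet dict."""
    text = tweet["text"]
    return [
        len(text),                  # textual: tweet length
        text.count("#"),            # textual: hashtag count
        text.count("http"),         # textual: link count, a rough proxy for news content
        tweet.get("followers", 0),  # contextual: author audience size
    ]

def agreement_with_crowd(tweets):
    """tweets: dicts with "text", "followers", and a list of crowd "judgments"."""
    X = [features(t) for t in tweets]
    y = [majority_label(t["judgments"]) == "interesting" for t in tweets]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    # Cohen's kappa: chance-corrected agreement between classifier output and
    # crowd labels, the statistic the paper reports (kappa = 0.52).
    return cohen_kappa_score(y_te, clf.predict(X_te))
```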
New crowdsourcing platform links tech-skilled volunteers with charities
Charity Digital News: “The Atlassian Foundation today previewed its innovative crowdsourcing platform, MakeaDiff.org, which will allow nonprofits to coordinate with technically skilled volunteers who want to help convert ideas into successful projects…
Once vetted, nonprofits will be able to list their volunteer jobs on the site. Skilled volunteers such as developers, designers, business analysts and project managers will then be able to go online and quickly search the site for opportunities relevant and convenient to them.
Atlassian Foundation manager, Melissa Beaumont Lee, said: “We started hearing from nonprofits that what they valued even more than donations was access to Atlassian’s technology expertise. Similarly, we had lots of employees who were keen to volunteer, but didn’t know how to get involved; coordinating volunteers for all these amazing projects was just not scalable. Thus, MakeaDiff.org was born to benefit both nonprofits and volunteers. We wanted to reduce the friction in coordinating efforts so more time can be spent doing really meaningful work.”
Best Practices for Government Crowdsourcing Programs
Anton Root: “Crowdsourcing helps communities connect and organize, so it makes sense that governments are increasingly making use of crowd-powered technologies and processes.
Just recently, for instance, we wrote about the Malaysian government’s initiative to crowdsource the national budget. Closer to home, we’ve seen government agencies from USAID to NASA make use of the crowd.
Daren Brabham, a professor at the University of Southern California, recently published a report titled “Using Crowdsourcing In Government” that introduces readers to the basics of crowdsourcing, highlights effective use cases, and establishes best practices for governments opening up to the crowd. Below, we take a look at a few of the suggestions Brabham makes to those considering crowdsourcing.
Brabham splits his ten best practices into three phases: planning, implementation, and post-implementation. The first suggestion he makes in the planning phase may be the most critical of all: “Clearly define the problem and solution parameters.” If the community isn’t absolutely clear on what the problem is, the ideas and solutions that users submit will be equally vague and largely useless.
This applies not only to government agencies, but also to SMEs and large enterprises making use of crowdsourcing. At Massolution NYC 2013, for instance, we heard again and again the importance of meticulously defining a problem. And open innovation platform InnoCentive’s CEO Andy Zynga stressed the big role his company plays in helping organizations do away with the “curse of knowledge.”
Brabham also has advice for projects in their implementation phase, the key bit being: “Launch a promotional plan and a plan to grow and sustain the community.” Simply put, crowdsourcing cannot work without a crowd, so it’s important to build up the community before launching a campaign. It does take some balance, however, as a community that’s too large by the time a campaign launches can turn off newcomers who “may not feel welcome or may be unsure how to become initiated into the group or taken seriously.”
Brabham’s key advice for the post-implementation phase is: “Assess the project from many angles.” The author suggests tracking website traffic patterns, asking users to volunteer information about themselves when registering, and doing original research through surveys and interviews. The results of such follow-up research can help organizations better understand the responses submitted and make it easier to demonstrate the successes of the crowdsourcing campaign. This is especially important for organizations engaged in ongoing crowdsourcing efforts.”
The role of task difficulty in the effectiveness of collective intelligence
New article by Christian Wagner: “The article presents a framework and empirical investigation to demonstrate the role of task difficulty in the effectiveness of collective intelligence. The research contends that collective intelligence, a form of community engagement to address problem solving tasks, can be superior to individual judgment and choice, but only when the addressed tasks are in a range of appropriate difficulty, which we label the “collective range”. Outside of that difficulty range, collectives will perform about as poorly as individuals for high difficulty tasks, or only marginally better than individuals for low difficulty tasks. An empirical investigation with subjects randomly recruited online supports our conjecture. Our findings qualify prior research on the strength of collective intelligence in general and offer preliminary insights into the mechanisms that enable individuals and collectives to arrive at good solutions. Within the framework of digital ecosystems, the paper argues that collective intelligence has more survival strength than individual intelligence, with the highest sustainability for tasks of medium difficulty.”
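The excerpt doesn’t describe the experimental tasks, but the core claim has a simple probabilistic intuition in the spirit of the Condorcet jury theorem. Below is a toy simulation, with an assumed accuracy-versus-difficulty curve and majority voting as the aggregation rule (both assumptions, not Wagner’s design), showing why the collective’s advantage concentrates at medium difficulty:

```python
# Toy model of a "collective range": individual accuracy declines with task
# difficulty, and the collective answer is a majority vote. Both assumptions
# are illustrative, not taken from Wagner's study.
import random

def individual_accuracy(difficulty):
    """Assume per-person accuracy falls linearly from 0.95 (easy) to 0.35 (hard)."""
    return 0.95 - 0.6 * difficulty  # difficulty ranges over [0, 1]

def collective_accuracy(difficulty, crowd_size=25, trials=10_000):
    """Accuracy of a majority vote among independent judges on a binary task."""
    p = individual_accuracy(difficulty)
    wins = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p for _ in range(crowd_size))
        wins += correct_votes > crowd_size // 2
    return wins / trials

for d in (0.1, 0.5, 0.9):
    print(f"difficulty={d}: individual={individual_accuracy(d):.2f}, "
          f"collective={collective_accuracy(d):.2f}")
```

Under these assumptions the gain over an individual is modest on easy tasks (individuals are already near ceiling), largest at medium difficulty (majority voting amplifies a reliable individual edge), and vanishes or even reverses on hard tasks once individual accuracy drops below 0.5, which is one plausible mechanism behind the “collective range”.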
Using Participatory Crowdsourcing in South Africa to Create a Safer Living Environment
“The study illustrates how participatory crowdsourcing (specifically humans as sensors) can be used as a Smart City initiative focusing on public safety, by showing what is required to contribute to the Smart City and by developing a roadmap in the form of a model to assist decision making when selecting an optimal crowdsourcing initiative. Public safety data quality criteria were developed to assess and identify the problems affecting data quality.
This study is guided by design science methodology and applies three driving theories: the Data Information Knowledge Action Result (DIKAR) model, the characteristics of a Smart City, and a credible Data Quality Framework. Four critical success factors were developed to ensure high quality public safety data is collected through participatory crowdsourcing utilising voice technologies.”
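The excerpt mentions public safety data quality criteria without listing them, so the sketch below is purely illustrative: it screens crowd-submitted safety reports against three generic criteria (completeness, timeliness, plausible coordinates) that stand in for whatever criteria the study actually developed.

```python
# Illustrative quality screening for crowd-sourced public safety reports.
# The specific criteria and thresholds are assumptions, not the study's.
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = ("reporter_id", "timestamp", "latitude", "longitude", "description")
MAX_REPORT_AGE = timedelta(hours=24)  # stale safety reports lose actionable value

def quality_issues(report):
    """Return the list of quality criteria a report fails (empty if it passes)."""
    issues = []
    # Completeness: every required field must be present and non-empty.
    missing = [f for f in REQUIRED_FIELDS if report.get(f) in (None, "")]
    if missing:
        issues.append(f"incomplete: missing {missing}")
    # Timeliness: public safety data decays quickly (timestamps assumed UTC-aware).
    ts = report.get("timestamp")
    if ts is not None and datetime.now(timezone.utc) - ts > MAX_REPORT_AGE:
        issues.append("stale: older than 24 hours")
    # Plausibility: coordinates must be valid latitude/longitude values.
    lat, lon = report.get("latitude"), report.get("longitude")
    if lat is not None and not -90 <= lat <= 90:
        issues.append("implausible latitude")
    if lon is not None and not -180 <= lon <= 180:
        issues.append("implausible longitude")
    return issues
```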
New book: “Crowdsourcing”
New book by Jean-Fabrice Lebraty, Katia Lobre-Lebraty on Crowdsourcing: “Crowdsourcing is a relatively recent phenomenon that only appeared in 2006, but it continues to grow and diversify (crowdfunding, crowdcontrol, etc.). This book aims to review this concept and show how it leads to the creation of value and new business opportunities.
Chapter 1 is based on four examples: the online-banking sector, a news television channel, the postal sector and the higher education sector. It shows that in the current context, for a company facing challenges, the crowd remains an untapped resource. The next chapter presents crowdsourcing as a new form of externalization and offers definitions of crowdsourcing. In Chapter 3, the authors attempt to explain how a company can create value by means of a crowdsourcing operation. To do this, they use a model linking types of value, types of crowd, and the means by which these crowds are accessed.
Chapter 4 examines in detail various forms that crowdsourcing may take, by presenting and discussing ten types of crowdsourcing operation. In Chapter 5, the authors imagine and explore the ways in which the dark side of crowdsourcing might manifest itself, and Chapter 6 offers some insight into the future of crowdsourcing.
Contents
1. A Turbulent and Paradoxical Environment.
2. Crowdsourcing: A New Form of Externalization.
3. Crowdsourcing and Value Creation.
4. Forms of Crowdsourcing.
5. The Dangers of Crowdsourcing.
6. The Future of Crowdsourcing.”
The Contours of Crowd Capability
“Among these seemingly disparate phenomena, a complex ecology of crowd-engaging IS has emerged, involving millions of people around the world generating knowledge for organizations through IS. However, despite the obvious scale and reach of this emerging crowd-engagement paradigm, there is, to our knowledge, no research that systematically compares and contrasts a large variety of these existing crowd-engaging IS tools in one work. Recognizing this state of affairs, we seek to address this significant research void by comparing and contrasting a number of the crowd-engaging forms of IS currently available for organizational use.
To achieve this goal, we employ the Theory of Crowd Capital as a lens to systematically structure our investigation of crowd-engaging IS. Employing this parsimonious lens, we first explain how Crowd Capital is generated through Crowd Capability in organizations. Taking this conceptual platform as a point of departure, in Section 3 we offer an array of examples of IS currently in use in modern practice to generate Crowd Capital. We compare and contrast these emerging IS techniques using the Crowd Capability construct, therein highlighting some important choices that organizations face when entering the crowd-engagement fray. This comparison, which we term “The Contours of Crowd Capability”, can be used by decision-makers and researchers alike to differentiate among the many extant methods of Crowd Capital generation. At the same time, our comparison also illustrates some important differences to be found in the internal organizational processes that accompany each form of crowd-engaging IS. In Section 4, we conclude with a discussion of the limitations of our work.”