The move toward 'crowdsourcing' public safety


PhysOrg: “Earlier this year, Martin Dias, assistant professor in the D’Amore-McKim School of Business, presented research for the National Law Enforcement Telecommunications System in which he examined Nlets’ network and how its governance and technology helped enable inter-agency information sharing. This work builds on his research aimed at understanding design principles for these public safety “social networks” and other collaborative networks. We asked Dias to discuss how information sharing around public safety has evolved in recent years and the benefits and challenges of what he describes as “crowdsourcing public safety.” …

What is “crowdsourcing public safety” and why are public safety agencies moving toward this trend?
Crowdsourcing—the term coined by our own assistant professor of journalism Jeff Howe—involves taking a task or job traditionally performed by a distinct agent, or employee, and having that activity be executed by an “undefined, generally large group of people in an open call.” Crowdsourcing public safety involves engaging and enabling private citizens to assist public safety professionals in addressing natural disasters, terror attacks, organized crime incidents, and large-scale industrial accidents.
Public safety agencies have long recognized the need for citizen involvement. Tip lines and missing persons bulletins have been used to engage citizens for years, but with advances in mobile applications and big data analytics, the ability to receive, process, and make use of high volumes of tips and leads makes crowdsourcing searches and investigations more feasible. You saw this in the FBI’s web-based tip line for the Boston Marathon bombing. You see it in the “See Something Say Something” initiatives throughout the country. You see it in AMBER Alerts or even remote search and rescue efforts. You even see it in more routine instances like Washington State’s HERO program to reduce traffic violations.
Have these efforts been successful, and what challenges remain?
There are a number of issues to overcome with regard to crowdsourcing public safety, such as maintaining privacy rights, ensuring data quality, and improving trust between citizens and officers. Controversies over the National Security Agency’s surveillance program and neighborhood watch programs (particularly the shooting death of teenager Trayvon Martin by neighborhood watch captain George Zimmerman) reflect some of these challenges. Research has not yet established a precise set of success criteria, but the efforts that appear successful so far have tended to center on a particular crisis incident, such as a specific attack or missing person. But as more crowdsourcing public safety mobile applications are developed, adoption and use are likely to increase. One trend to watch is whether national public safety programs are able to tap into the existing social networks of community-based responders like American Red Cross volunteers, Community Emergency Response Teams, and United Way mentors.
The move toward crowdsourcing is part of an overall trend toward improving community resilience, which refers to a system’s ability to bounce back after a crisis or disturbance. Stephen Flynn and his colleagues at Northeastern’s George J. Kostas Research Institute for Homeland Security are playing a key role in driving a national conversation in this area. Community resilience is inherently multi-disciplinary, so you see research being done regarding transportation infrastructure, social media use after a crisis event, and designing sustainable urban environments. Northeastern is a place where use-inspired research is addressing real-world problems. It will take a village to improve community resilience capabilities, and our institution is a vital part of thought leadership for that village.”
 

Towards an effective framework for building smart cities: Lessons from Seoul and San Francisco


New paper by JH Lee, MG Hancock, MC Hu in Technological Forecasting and Social Change: “This study aims to shed light on the process of building an effective smart city by integrating various practical perspectives with a consideration of smart city characteristics taken from the literature. We developed a framework for conducting case studies examining how smart cities were being implemented in San Francisco and Seoul Metropolitan City. The study’s empirical results suggest that effective, sustainable smart cities emerge as a result of dynamic processes in which public and private sector actors coordinate their activities and resources on an open innovation platform. The different yet complementary linkages formed by these actors must further be aligned with respect to their developmental stage and embedded cultural and social capabilities. Our findings point to eight ‘stylized facts’, based on both quantitative and qualitative empirical results that underlie the facilitation of an effective smart city. In elaborating these facts, the paper offers useful insights to managers seeking to improve the delivery of smart city developmental projects.”
 

From Collective Intelligence to Collective Intelligence Systems


New Paper by A. Kornrumpf and U. Baumol in the International Journal of Cooperative Information Systems: “Collective intelligence (CI) has become a popular research topic over the past few years. However, the CI debate suffers from several problems, such as the lack of a unanimously agreed-upon definition of CI that clearly differentiates between CI and related terms such as swarm intelligence (SI) and collective intelligence systems (CIS). Furthermore, a model of such CIS is lacking for purposes of research and the design of new CIS. This paper aims at untangling the definitions of CI and other related terms, especially CIS, and at providing a semi-structured model of CIS as a first step towards more structured research. The authors of this paper argue that CI can be defined as the ability of sufficiently large groups of individuals to create an emergent solution for a specific class of problems or tasks. The authors show that other alleged properties of CI which are not covered by this definition are, in fact, properties of CIS and can be understood by regarding CIS as complex socio-technical systems (STS) that enable the realization of CI. The model defined in this article serves as a means to structure open questions in CIS research and helps to understand which research methodology is adequate for different aspects of CIS.”

Towards an information systems perspective and research agenda on crowdsourcing for innovation


New paper by A Majchrzak and A Malhotra in The Journal of Strategic Information Systems: “Recent years have seen an increasing emphasis on open innovation by firms to keep pace with the growing intricacy of products and services and the ever-changing needs of the markets. Much has been written about open innovation and its manifestation in the form of crowdsourcing. Unfortunately, most management research has taken the information system (IS) as a given. In this essay we contend that IS is not just an enabler but rather can be a shaper that optimizes open innovation in general and crowdsourcing in particular. This essay is intended to frame crowdsourcing for innovation in a manner that makes more apparent the issues that require research from an IS perspective. In doing so, we delineate the contributions that the IS field can make to the field of crowdsourcing.

  • Reviews participation architectures supporting current crowdsourcing, finding them inadequate for innovation development by the crowd.

  • Identifies 3 tensions for explaining why a participation architecture for crowdsourced innovation is difficult.

  • Identifies affordances for the participation architectures that may help to manage the tension.

  • Uses the tensions and possible affordances to identify research questions for IS scholars.”

Commons at the Intersection of Peer Production, Citizen Science, and Big Data: Galaxy Zoo


New paper by Michael J. Madison: “The knowledge commons research framework is applied to a case of commons governance grounded in research in modern astronomy. The case, Galaxy Zoo, is a leading example of at least three different contemporary phenomena. In the first place Galaxy Zoo is a global citizen science project, in which volunteer non-scientists have been recruited to participate in large-scale data analysis via the Internet. In the second place Galaxy Zoo is a highly successful example of peer production, sometimes known colloquially as crowdsourcing, by which data are gathered, supplied, and/or analyzed by very large numbers of anonymous and pseudonymous contributors to an enterprise that is centrally coordinated or managed. In the third place Galaxy Zoo is a highly visible example of data-intensive science, sometimes referred to as e-science or Big Data science, by which scientific researchers develop methods to grapple with the massive volumes of digital data now available to them via modern sensing and imaging technologies. This chapter synthesizes these three perspectives on Galaxy Zoo via the knowledge commons framework.”

Are Some Tweets More Interesting Than Others? #HardQuestion


New paper by Microsoft Research (Omar Alonso, Catherine C. Marshall, and Marc Najork): “Twitter has evolved into a significant communication nexus, coupling personal and highly contextual utterances with local news, memes, celebrity gossip, headlines, and other microblogging subgenres. If we take Twitter as a large and varied dynamic collection, how can we predict which tweets will be interesting to a broad audience in advance of lagging social indicators of interest such as retweets? The telegraphic form of tweets, coupled with the subjective notion of interestingness, makes it difficult for human judges to agree on which tweets are indeed interesting.
In this paper, we address two questions: Can we develop a reliable strategy that results in high-quality labels for a collection of tweets, and can we use this labeled collection to predict a tweet’s interestingness?
To answer the first question, we performed a series of studies using crowdsourcing to reach a diverse set of workers who served as a proxy for an audience with variable interests and perspectives. This method allowed us to explore different labeling strategies, including varying the judges, the labels they applied, the datasets, and other aspects of the task.
To address the second question, we used crowdsourcing to assemble a set of tweets rated as interesting or not; we scored these tweets using textual and contextual features; and we used these scores as inputs to a binary classifier. We were able to achieve moderate agreement (kappa = 0.52) between the best classifier and the human assessments, a figure which reflects the challenges of the judgment task.”
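The agreement figure quoted above is Cohen’s kappa, which corrects raw agreement between two raters for the agreement expected by chance. As a minimal illustration of how such a figure is computed (the label sequences below are invented for the example, not taken from the paper’s data):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two label sequences of equal length."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement if each rater labeled at random with their own marginals
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical classifier output vs. human labels (1 = "interesting")
clf   = [1, 0, 1, 1, 0, 0, 1, 0]
human = [1, 0, 0, 1, 0, 1, 1, 0]
print(round(cohens_kappa(clf, human), 2))  # → 0.5
```

A kappa of 1 means perfect agreement and 0 means chance-level agreement, so the paper’s 0.52 sits in the range conventionally described as moderate.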

New crowdsourcing platform links tech-skilled volunteers with charities


Charity Digital News: “The Atlassian Foundation today previewed its innovative crowdsourcing platform, MakeaDiff.org, which will allow nonprofits to coordinate with technically-skilled volunteers who want to help convert ideas into successful projects…
Once vetted, nonprofits will be able to list their volunteer jobs on the site. Skilled volunteers such as developers, designers, business analysts and project managers will then be able to go online and quickly search the site for opportunities relevant and convenient to them.
Atlassian Foundation manager, Melissa Beaumont Lee, said: “We started hearing from nonprofits that what they valued even more than donations was access to Atlassian’s technology expertise. Similarly, we had lots of employees who were keen to volunteer, but didn’t know how to get involved; coordinating volunteers for all these amazing projects was just not scalable. Thus, MakeaDiff.org was born to benefit both nonprofits and volunteers. We wanted to reduce the friction in coordinating efforts so more time can be spent doing really meaningful work.”
 

Best Practices for Government Crowdsourcing Programs


Anton Root: “Crowdsourcing helps communities connect and organize, so it makes sense that governments are increasingly making use of crowd-powered technologies and processes.
Just recently, for instance, we wrote about the Malaysian government’s initiative to crowdsource the national budget. Closer to home, we’ve seen government agencies from USAID to NASA make use of the crowd.
Daren Brabham, professor at the University of Southern California, recently published a report titled “Using Crowdsourcing In Government” that introduces readers to the basics of crowdsourcing, highlights effective use cases, and establishes best practices when it comes to governments opening up to the crowd. Below, we take a look at a few of the suggestions Brabham makes to those considering crowdsourcing.
Brabham splits up his ten best practices into three phases: planning, implementation, and post-implementation. The first suggestion in the planning phase he makes may be the most critical of all: “Clearly define the problem and solution parameters.” If the community isn’t absolutely clear on what the problem is, the ideas and solutions that users submit will be equally vague and largely useless.
This applies not only to government agencies, but also to SMEs and large enterprises making use of crowdsourcing. At Massolution NYC 2013, for instance, we heard again and again the importance of meticulously defining a problem. And open innovation platform InnoCentive’s CEO Andy Zynga stressed the big role his company plays in helping organizations do away with the “curse of knowledge.”
Brabham also has advice for projects in their implementation phase, the key bit being: “Launch a promotional plan and a plan to grow and sustain the community.” Simply put, crowdsourcing cannot work without a crowd, so it’s important to build up the community before launching a campaign. It does take some balance, however, as a community that’s too large by the time a campaign launches can turn off newcomers who “may not feel welcome or may be unsure how to become initiated into the group or taken seriously.”
Brabham’s key advice for the post-implementation phase is: “Assess the project from many angles.” The author suggests tracking website traffic patterns, asking users to volunteer information about themselves when registering, and doing original research through surveys and interviews. The results of follow-up research can help to better understand the responses submitted, and also make it easier to show the successes of the crowdsourcing campaign. This is especially important for organizations partaking in ongoing crowdsourcing efforts.”

The role of task difficulty in the effectiveness of collective intelligence


New article by Christian Wagner: “The article presents a framework and empirical investigation to demonstrate the role of task difficulty in the effectiveness of collective intelligence. The research contends that collective intelligence, a form of community engagement to address problem solving tasks, can be superior to individual judgment and choice, but only when the addressed tasks are in a range of appropriate difficulty, which we label the “collective range”. Outside of that difficulty range, collectives will perform about as poorly as individuals for high difficulty tasks, or only marginally better than individuals for low difficulty tasks. An empirical investigation with subjects randomly recruited online supports our conjecture. Our findings qualify prior research on the strength of collective intelligence in general and offer preliminary insights into the mechanisms that enable individuals and collectives to arrive at good solutions. Within the framework of digital ecosystems, the paper argues that collective intelligence has more survival strength than individual intelligence, with highest sustainability for tasks of medium difficulty.”
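One intuition behind the “collective range” can be sketched with a toy simulation (this is not the article’s experiment; the function, parameters, and noise model are illustrative assumptions): when individual estimates are noisy but unbiased, an aggregate such as the median tracks the true answer far better than a typical lone judge.

```python
import random
import statistics

random.seed(42)

def simulate(true_value, noise_sd, n_judges=101, trials=1000):
    """Compare mean absolute error of a lone judge vs. the crowd median."""
    solo_err, crowd_err = 0.0, 0.0
    for _ in range(trials):
        # Each judge's estimate: the truth plus independent Gaussian noise
        estimates = [random.gauss(true_value, noise_sd) for _ in range(n_judges)]
        solo_err += abs(estimates[0] - true_value)
        crowd_err += abs(statistics.median(estimates) - true_value)
    return solo_err / trials, crowd_err / trials

solo, crowd = simulate(true_value=100, noise_sd=20)
print(solo > crowd)  # the aggregate beats a typical individual
```

For very hard tasks the unbiased-noise assumption breaks down: if estimates cluster around a wrong value (replace `true_value` inside `random.gauss` with a biased center), the median inherits that bias and the crowd does about as poorly as individuals, consistent with the article’s claim.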

Using Participatory Crowdsourcing in South Africa to Create a Safer Living Environment


New Paper by Bhaveer Bhana, Stephen Flowerday, and Aharon Satt in the International Journal of Distributed Sensor Networks: “The increase in urbanisation is making the management of city resources a difficult task. Data collected through observations (utilising humans as sensors) of the city surroundings can be used to improve decision making in terms of managing these resources. However, the data collected must be of a certain quality in order to ensure that effective and efficient decisions are made. This study is focused on the improvement of emergency and non-emergency services (city resources) through the use of participatory crowdsourcing (humans as sensors) as a data collection method (collect public safety data), utilising voice technology in the form of an interactive voice response (IVR) system.
The study illustrates how participatory crowdsourcing (specifically humans as sensors) can be used as a Smart City initiative focusing on public safety by illustrating what is required to contribute to the Smart City, and developing a roadmap in the form of a model to assist decision making when selecting an optimal crowdsourcing initiative. Public safety data quality criteria were developed to assess and identify the problems affecting data quality.
This study is guided by design science methodology and applies three driving theories: the Data Information Knowledge Action Result (DIKAR) model, the characteristics of a Smart City, and a credible Data Quality Framework. Four critical success factors were developed to ensure high quality public safety data is collected through participatory crowdsourcing utilising voice technologies.”