An open-science crowdsourcing approach for producing community noise maps using smartphones


Judicaël Picaut et al. in Building and Environment: “An alternative method is proposed for the assessment of the noise environment, on the basis of a crowdsourcing approach. For this purpose, a smartphone application and a spatial data infrastructure have been specifically developed to collect physical data (noise indicators, GPS positions, etc.) and perceptual data (pleasantness) on the sound environment, without territorial limits.

As the project is developed within an Open Science framework, all source codes, methodologies, tools and raw data are freely available, and if necessary, can be duplicated for any specific use. In particular, the collected data can be used by the scientific community, cities, associations, or any institution, which would like to develop new tools for the evaluation and representation of sound environments. In this paper, all the methodological and technical issues are detailed, and a first analysis of the collected data is proposed….(More)”.
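As a rough illustration of the kind of record such an application might upload to the spatial data infrastructure, here is a minimal sketch in Python; the field names and the pleasantness scale are assumptions for illustration, not the project's actual schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class NoiseMeasurement:
    """One crowdsourced observation: physical indicators plus a perceptual rating."""
    device_id: str     # anonymous identifier of the contributing smartphone
    timestamp: str     # ISO 8601, UTC
    latitude: float    # GPS position
    longitude: float
    laeq_db: float     # equivalent continuous sound level over the measurement window
    pleasantness: int  # perceptual rating, e.g. 0 (unpleasant) to 100 (pleasant)

# Example record, serialized as JSON before being sent to the data infrastructure.
record = NoiseMeasurement(
    device_id="anon-42",
    timestamp=datetime.now(timezone.utc).isoformat(),
    latitude=47.2184,
    longitude=-1.5536,
    laeq_db=62.5,
    pleasantness=35,
)
print(json.dumps(asdict(record), indent=2))
```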

Creating and Capturing Value through Crowdsourcing


Book edited by Allan Afuah, Christopher L. Tucci, and Gianluigi Viscusi: “Examples of the value that can be created and captured through crowdsourcing go back to at least 1714, when the UK used crowdsourcing to solve the Longitude Problem, obtaining a solution that would enable the UK to become the dominant maritime force of its time. Today, Wikipedia uses crowds to provide entries for the world’s largest free encyclopedia. Partly fueled by the value that can be created and captured through crowdsourcing, interest in researching the phenomenon has been remarkable.

Despite this – or perhaps because of it – research into crowdsourcing has been conducted in different research silos, within the fields of management (from strategy to finance to operations to information systems), biology, communications, computer science, economics, political science, among others. In these silos, crowdsourcing takes names such as broadcast search, innovation tournaments, crowdfunding, community innovation, distributed innovation, collective intelligence, open source, crowdpower, and even open innovation. This book aims to assemble chapters from many of these silos, since the ultimate potential of crowdsourcing research is likely to be attained only by bridging them. Chapters provide a systematic overview of the research on crowdsourcing from different fields based on a more encompassing definition of the concept, its difference for innovation, and its value for both the private and public sectors….(More)”.

The Public Mapping Project: How Public Participation Can Revolutionize Redistricting


Book by Micah Altman and Michael P. McDonald: “… unveil the Public Mapping Project, which developed DistrictBuilder, an open-source software redistricting application designed to give the public transparent, accessible, and easy-to-use online mapping tools. As they show, the goal is for all citizens to have access to the same information that legislators use when drawing congressional maps—and use that data to create maps of their own….(More)”.

The role of blockchain, cryptoeconomics, and collective intelligence in building the future of justice


Blog by Federico Ast at Thomson Reuters: “Human communities of every era have had to solve the problem of social order. For this, they developed governance and legal systems. They did it with the technologies and systems of belief of their time….

A better justice system may not come from further streamlining existing processes but from fundamentally rethinking them from a first principles perspective.

In the last decade, we have witnessed how collective intelligence could be leveraged to produce an encyclopaedia like Wikipedia, a transport system like Uber, a restaurant rating system like Yelp!, and a hotel system like Airbnb. These companies innovated by crowdsourcing value creation. Instead of relying on an in-house team of restaurant critics like the Michelin Guide, Yelp! crowdsourced ratings from its users.

Satoshi Nakamoto’s invention of Bitcoin (and the underlying blockchain technology) may be seen as the next step in the rise of the collaborative economy. The Bitcoin Network proved that, given the right incentives, anonymous users could cooperate in creating and updating a distributed ledger which could act as a monetary system. A nationless system, inherently global, and native to the Internet Age.

Cryptoeconomics is a new field of study that leverages cryptography, computer science and game theory to build secure distributed systems. It is the science that underlies the incentive system of open distributed ledgers. But its potential goes well beyond cryptocurrencies.

Kleros is a dispute resolution system that relies on cryptoeconomics. It uses a system of incentives based on “focal points”, a concept developed by game theorist Thomas Schelling, winner of the 2005 Nobel Prize in Economics. Using a clever mechanism design, it seeks to produce a set of incentives for randomly selected users to adjudicate different types of disputes in a fast, affordable and secure way. Users who adjudicate disputes honestly will make money. Users who try to abuse the system will lose money.

Kleros does not seek to compete with governments or traditional arbitration systems, but to provide a new method that will leverage the wisdom of the crowd to resolve many disputes of the global digital economy for which existing methods fall short: e-commerce, crowdfunding and many types of small claims are among the early adopters….(More)”.
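The blog post does not spell out the mechanism, but the incentive logic it describes (jurors who vote with the majority, the focal outcome, gain; jurors who deviate forfeit their stake) can be sketched in a toy form. The stake amount and the redistribution rule below are illustrative assumptions, not Kleros's actual parameters:

```python
from collections import Counter

def settle_dispute(votes, stake=1.0):
    """Toy Schelling-point payout: jurors voting with the majority (the focal
    outcome) split the stakes forfeited by jurors who voted otherwise.

    votes: dict mapping juror id -> chosen ruling (e.g. "refund" / "no refund")
    Returns a dict mapping juror id -> net payoff.
    """
    tally = Counter(votes.values())
    winning_ruling, _ = tally.most_common(1)[0]
    coherent = [j for j, v in votes.items() if v == winning_ruling]
    incoherent = [j for j, v in votes.items() if v != winning_ruling]

    pot = stake * len(incoherent)               # stakes lost by deviating jurors
    reward = pot / len(coherent) if coherent else 0.0

    payoffs = {}
    for j in coherent:
        payoffs[j] = reward                     # coherent jurors gain
    for j in incoherent:
        payoffs[j] = -stake                     # deviating jurors lose their stake
    return payoffs

print(settle_dispute({"a": "refund", "b": "refund", "c": "no refund"}))
# {'a': 0.5, 'b': 0.5, 'c': -1.0}
```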

Crowdsourcing reliable local data


Paper by Jane Lawrence Sumner, Emily M. Farris, and Mirya R. Holman: “The adage “All politics is local” is largely true in the United States: of the country’s 90,106 governments, 99.9% are local governments. Despite variation across these governments in institutional features, descriptive representation, and policy-making power, political scientists have been slow to take advantage of it. One obstacle is that comprehensive data on local politics is often extremely difficult to obtain; as a result, data is unavailable or costly, hard to replicate, and rarely updated.

We provide an alternative: crowdsourcing this data. We demonstrate and validate crowdsourcing data on local politics, using two different data collection projects. We evaluate different measures of consensus across coders and validate the crowd’s work against elite and professional datasets. In doing so, we show that crowdsourced data is both highly accurate and easy to use, and that non-experts can be used to collect, validate, or update local data….All data from the project is available at https://dataverse.harvard.edu/dataverse/2chainz …(More)”.
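The paper's own consensus measures are not reproduced in this excerpt; as a simple illustration of the general idea, the sketch below accepts a crowd answer only when the modal response clears an agreement threshold. The threshold value is an assumption for illustration:

```python
from collections import Counter

def crowd_answer(codings, threshold=0.7):
    """Return (modal answer, agreement rate, accepted?) for one item coded by several workers.

    codings: list of answers from independent coders, e.g. the party of a local official.
    The item is accepted only if the modal answer reaches the agreement threshold;
    otherwise it could be sent back for more coding or expert review.
    """
    counts = Counter(codings)
    answer, n = counts.most_common(1)[0]
    agreement = n / len(codings)
    return answer, agreement, agreement >= threshold

# Five coders report the party of the same local official.
print(crowd_answer(["Democratic", "Democratic", "Democratic", "Republican", "Democratic"]))
# ('Democratic', 0.8, True)
```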

Crowdsourcing the vote: New horizons in citizen forecasting


Article by Mickael Temporão, Yannick Dufresne, Justin Savoie, and Clifton van der Linden in International Journal of Forecasting: “People do not know much about politics. This is one of the most robust findings in political science and is backed by decades of research. Most of this research has focused on people’s ability to know about political issues and party positions on these issues. But can people predict elections? Our research uses a very large dataset (n>2,000,000) collected during ten provincial and federal elections in Canada to test whether people can predict the electoral victor and the closeness of the race in their district throughout the campaign. The results show that they can. This paper also contributes to the emerging literature on citizen forecasting by developing a scaling method that allows us to compare the closeness of races and that can be applied to multiparty contexts with varying numbers of parties. Finally, we assess the accuracy of citizen forecasting in Canada when compared to voter expectations weighted by past votes and political competency….(More)”.
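The paper's scaling method is not detailed in this excerpt. As a rough sketch of the underlying idea, the snippet below aggregates respondents' district-level expectations into a predicted winner and a simple closeness score; the closeness measure is a hypothetical proxy, not the authors' scale:

```python
from collections import Counter

def district_forecast(expected_winners):
    """Aggregate citizen expectations for one district.

    expected_winners: list of party names, one per respondent, naming the party
    each respondent expects to win the district.
    Returns the crowd's predicted winner and a simple closeness score in [0, 1]:
    0 when everyone names the same party, approaching 1 as the top two parties
    are named equally often.
    """
    counts = Counter(expected_winners)
    ranked = counts.most_common()
    winner, first = ranked[0]
    second = ranked[1][1] if len(ranked) > 1 else 0
    closeness = second / first          # ratio of runner-up to leader mentions
    return winner, closeness

print(district_forecast(["LPC", "LPC", "CPC", "LPC", "NDP", "CPC"]))
# ('LPC', 0.6666666666666666)
```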

Innovation Spaces: New Places for Collective Intelligence?


Chapter by Laure Morel, Laurent Dupont and Marie‐Reine Boudarel in Collective Innovation Processes: Principles and Practices: “Innovation is a complex and multifaceted notion, sometimes difficult to explain. The category of innovation spaces includes co‐working spaces, third places, Living Labs, open labs, incubators, accelerators, hothouses, canteens, FabLabs, MakerSpaces, Tech Shops, hackerspaces, design factories, and so on. Working from communities’ needs and motivations is a key stage in overcoming the obstacles of collective innovation and laying favorable foundations for the emergence of shared actions that can be converted into collective innovation projects. Organizations are multiplying the opportunities of creating collective intelligence at the service of innovation. Consequently, an innovation space must favor creativity and sharing. It must also promote individual and collective learning. Collective intelligence involves the networking of multiple types of intelligence, the combination of knowledge and competences, as well as cooperation and collaboration between them….(More)”.

How data helped visualize the family separation crisis


Chava Gourarie at StoryBench: “Early this summer, at the height of the family separation crisis – when children were being forcibly separated from their parents at our nation’s border – a team of scholars pooled their skills to address the issue. The group of researchers – from a variety of humanities departments at multiple universities – spent a week of non-stop work mapping the immigration detention network that spans the United States. They named the project “Torn Apart/Separados” and published it online, to support the efforts of locating and reuniting the separated children with their parents.

The project utilizes the methods of the digital humanities, an emerging discipline that applies computational tools to fields within the humanities, like literature and history. It was led by members of Columbia University’s Group for Experimental Methods in the Humanities, which had previously used methods such as rapid deployment to respond to natural disasters.

The group has since expanded the project, publishing a second volume that focuses on the $5 billion immigration industry, based largely on public data about companies that contract with the Immigration and Customs Enforcement agency. The visualizations highlight the astounding growth in investment in ICE infrastructure (from $475 million in 2014 to $5.1 billion in 2018), as well as who benefits from these contracts and how the money is spent.

Storybench spoke with Columbia University’s Alex Gil, who worked on both phases of the project, about the process of building “Torn Apart/Separados,” about the design and messaging choices that were made, and about the ways in which methods of the digital humanities can cross-pollinate with those of journalism…(More)”.

Translating science into business innovation: The case of open food and nutrition data hackathons


Paper by Christopher Tucci, Gianluigi Viscusi, and Heidi Gautschi: “In this article, we explore the use of hackathons and open data in corporations’ open innovation portfolios, addressing a new way for companies to tap into the creativity and innovation of early-stage startup culture, in this case applied to the food and nutrition sector. We study the first Open Food Data Hackdays, held on 10-11 February 2017 in Lausanne and Zurich. The Hackdays event was part of a broader project whose aim was to use open food and nutrition data as a driver for business innovation. We see hackathons as a new tool in the innovation manager’s toolkit, a kind of live crowdsourcing exercise that goes beyond traditional ideation and develops a variety of prototypes and new ideas for business innovation. Companies then have the option of working with entrepreneurs and taking some of the ideas forward….(More)”.

Crowdsourced social media data for disaster management: Lessons from the PetaJakarta.org project


Paper by R.I. Ogie, R.J. Clarke, H. Forehead and P. Perez in Computers, Environment and Urban Systems: “The application of crowdsourced social media data in flood mapping and other disaster management initiatives is a burgeoning field of research, but not one that is without challenges. In identifying these challenges and in making appropriate recommendations for future direction, it is vital that we learn from the past by taking a constructively critical appraisal of highly-praised projects in this field, which through real-world implementations have pioneered the use of crowdsourced geospatial data in modern disaster management. These real-world applications represent natural experiments, each with myriads of lessons that cannot be easily gained from computer-confined simulations.

This paper reports on lessons learnt from a 3-year implementation of a highly-praised project- the PetaJakarta.org project. The lessons presented derive from the key success factors and the challenges associated with the PetaJakarta.org project. To contribute in addressing some of the identified challenges, desirable characteristics of future social media-based disaster mapping systems are discussed. It is envisaged that the lessons and insights shared in this study will prove invaluable within the broader context of designing socio-technical systems for crowdsourcing and harnessing disaster-related information….(More)”.