Social Sensing and Crowdsourcing: the future of connected sensors


Conference Paper by C. Geijer, M. Larsson, M. Stigelid: “Social sensing is becoming an alternative to static sensors. It is a way to crowdsource data collection where sensors can be placed on frequently used objects, such as mobile phones or cars, to gather important information. Increasing availability of technology, such as cheap sensors being added to cell phones, creates an opportunity to build bigger sensor networks that are capable of collecting larger quantities of more complex data. The purpose of this paper is to highlight problems in the field, as well as their solutions. The focus lies on the use of physical sensors and not on the use of social media to collect data. Research papers were reviewed based on implemented or suggested implementations of social sensing. The discovered problems are contrasted with possible solutions, and used to reflect upon the future of the field. We found issues such as privacy, noise and trustworthiness to be problems when using a distributed network of sensors. Furthermore, we discovered models for determining the accuracy as well as truthfulness of gathered data that can effectively combat these problems. The topic of privacy remains an open-ended problem, since it is based upon ethical considerations that may differ from person to person, but there exist methods for addressing this as well. The reviewed research suggests that social sensing will become more and more useful in the future….(More).”

New Journal: Citizen Science: Theory and Practice


“Citizen Science: Theory and Practice is an open-access, peer-reviewed journal published by Ubiquity Press on behalf of the Citizen Science Association. It focuses on advancing the field of citizen science by providing a venue for citizen science researchers and practitioners – scientists, information technologists, conservation biologists, community health organizers, educators, evaluators, urban planners, and more – to share best practices in conceiving, developing, implementing, evaluating, and sustaining projects that facilitate public participation in scientific endeavors in any discipline.”

Do Experts or Collective Intelligence Write with More Bias? Evidence from Encyclopædia Britannica and Wikipedia.


Working Paper by Shane Greenstein and Feng Zhu: “Which source of information contains greater bias and slant—text written by an expert or that constructed via collective intelligence? Do the costs of acquiring, storing, displaying and revising information shape those differences? We evaluate these questions empirically by examining slanted and biased phrases in content on US political issues from two sources — Encyclopædia Britannica and Wikipedia. Our overall slant measure is less (more) than zero when an article leans towards Democrat (Republican) viewpoints, while bias is the absolute value of the slant. Using a matched sample of pairs of articles from Britannica and Wikipedia, we show that, overall, Wikipedia articles are more slanted towards Democrat viewpoints than Britannica articles, as well as more biased. Slanted Wikipedia articles tend to become less biased than Britannica articles on the same topic as they become substantially revised, and the bias on a per-word basis hardly differs between the sources. These results have implications for the segregation of readers and the allocation of editorial resources in online sources using collective intelligence…Key concepts include:

  • The costs of producing, storing, and distributing knowledge shape different biases and slants in the collective intelligence (Wikipedia) and the expert-based model (Britannica).
  • Many of the differences between Wikipedia and Britannica arise because Wikipedia faces insignificant storage, production, and distribution costs. This leads to longer articles with greater coverage of more points of view. The number of revisions of Wikipedia articles results in a more neutral point of view. In the best cases, it reduces slant and bias to a negligible difference with an expert-based model.
  • As the world moves from reliance on expert-based production of knowledge to collectively-produced intelligence, it is unwise to blindly trust the properties of knowledge produced by the crowd. Their slants and biases are not widely appreciated, nor are the properties of the production model as yet fully understood.”…(More)
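The paper's sign convention (negative slant leans Democrat, positive leans Republican, bias is the absolute value of slant) can be illustrated with a toy sketch. This is not the authors' actual procedure, which builds on a phrase-based method over coded political language; the phrase scores and article below are entirely hypothetical.

```python
# Toy phrase-based slant measure. Scores are hypothetical:
# negative leans Democrat, positive leans Republican.
PHRASE_SLANT = {
    "estate tax": -1.0,   # phrasing more common in Democrat-leaning text
    "death tax": +1.0,    # phrasing more common in Republican-leaning text
    "tax relief": +0.5,
}

def article_slant(phrases):
    """Average slant score of the coded phrases found in an article."""
    scores = [PHRASE_SLANT[p] for p in phrases if p in PHRASE_SLANT]
    return sum(scores) / len(scores) if scores else 0.0

def article_bias(phrases):
    """Bias is the absolute value of slant."""
    return abs(article_slant(phrases))

article = ["estate tax", "estate tax", "tax relief"]
# slant = (-1.0 - 1.0 + 0.5) / 3 = -0.5, i.e. the article leans Democrat
# bias  = 0.5
```

On this convention, two articles can be equally biased (same absolute value) while slanting in opposite directions, which is why the paper reports the two measures separately.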

Is Transparency a Recipe for Innovation?


Paper by Dr. Bastiaan Heemsbergen: “Innovation is a key driver in organizational sustainability, and yes, openness and transparency are a recipe for innovation. But, according to Tapscott and Williams, “when it comes to innovation, competitive advantage and organizational success, ‘openness’ is rarely the first word one would use to describe companies and other societal organizations like government agencies or medical institutions. For many, words like ‘insular,’ ‘bureaucratic,’ ‘hierarchical,’ ‘secretive’ and ‘closed’ come to mind instead.”1 And yet a few months ago, the Tesla Model S became the world’s first open-source car. Elon Musk, CEO of Tesla Motors, shared all the patents on Tesla’s electric car technology, allowing anyone — including competitors — to use them without fear of litigation. Musk wrote in his post: “Yesterday, there was a wall of Tesla patents in the lobby of our Palo Alto headquarters. That is no longer the case. They have been removed, in the spirit of the open source movement, for the advancement of electric vehicle technology.”2
In the public sector, terms such as open government, citizen sourcing, and wiki government are also akin to the notion of open innovation and transparency. As Hilgers and Ihl report, “a good example of this approach is the success of the Future Melbourne program, a Wiki and blog-based approach to shaping the future urban landscape of Australia’s second largest city. The program allowed citizens to directly edit and comment on the plans for the future development of the city. It attracted more than 30,000 individuals, who submitted hundreds of comments and suggestions (futuremelbourne.com.au). Basically, problems concerning design and creativity, future strategy and local culture, and even questions of management and service innovation can be broadcasted on such web-platforms.”3 The authors suggest that there are three dimensions to applying the concept of open innovation to the public sector: citizen ideation and innovation (tapping knowledge and creativity), collaborative administration (user generated new tasks and processes), and collaborative democracy (improve public participation in the policy process)….(More)”.

VoXup


Nesta: “Does your street feel safe? Would you like to change something in your neighbourhood? Is there enough for young people to do?
All basic questions, but how many local councillors have the time to put these issues to their constituents? A new web app aims to make it easier for councillors and council officers to talk to residents – and it’s all based around a series of simple questions.
Now, just a year after VoXup was created in a north London pub, Camden Council is using it to consult residents on its budget proposals.
One of VoXup’s creators, Peter Lewis, hit upon the idea after meeting an MP and being reminded of how hard it can be to get involved in decision-making….

Now VoXup is being used by Camden Council to engage with residents about its spending plans.
“They’ve got to cut a lot of money and they want to know which services people would prioritise,” Lewis explains.
“So we’ve created a custom community, and most popular topics have got about 200 votes. About 650 people have taken part at some level, and it’s only just begun. We’ve seen a lot of activity – of the people who look at the web page, almost half give an opinion on something.”

‘No need for smartphone app’
What does the future hold for VoXup? Lewis, who is working on the project full-time, says one thing the team won’t be doing is building a smartphone app.
“One of the things we thought about doing was creating a mobile app, but that’s been really unnecessary – we built VoXup as a responsive web app,” he says…. (More)”.

Coop’s Citizen Sci Scoop: Try it, you might like it


Response by Caren Cooper at PLOS: “Margaret Mead, the world-famous anthropologist, said, “never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever has.”
The sentiment rings true for citizen science.
Yet, recent news in the citizen science world has been headlined “Most participants in citizen science projects give up almost immediately.” This was based on a study of participation in seven different projects within the crowdsourcing hub called Zooniverse. Most participants tried a project once, very briefly, and never returned.
What’s unusual about Zooniverse projects is not the high turnover of quitters. Rather, it’s unusual that even early quitters do some important work. That’s a cleverly designed project. An ethical principle of Zooniverse is to not waste people’s time. The crowdsourcing tasks are pivotal to advancing research. They cannot be accomplished by computer algorithms or machines. They require crowds of people, each chipping in a tiny bit. What is remarkable is that the quitters matter at all….
An Internet rule of thumb is that only 1% (or less) of users add new content to sites like Wikipedia. Citizen science appears to operate on this dynamic, except instead of a core group adding existing knowledge for the crowd to use, a core group is involved in making new knowledge for the crowd to use….
In citizen science, a crowd can be four or a crowd can be hundreds of thousands. A citizen scientist is not someone who will participate in just any project. Citizen scientists are individuals – gamers, birders, stargazers, gardeners, weather bugs, hikers, naturalists, and more – with particular interests and motivations.
As my grandfather said, “Try it, you might like it.” It’s fabulous that millions are trying it. Sooner or later, when participants and projects find one another, a good match translates into a job well done….(More)”.

Motivations for sustained participation in crowdsourcing: The role of talk in a citizen science case study


Paper by C.B. Jackson, C. Østerlund, G. Mugar and K.D.V. Hassman for the Proceedings of the Forty-eighth Hawai’i International Conference on System Sciences (HICSS-48): “The paper explores the motivations of volunteers in a large crowdsourcing project and contributes to our understanding of the motivational factors that lead to deeper engagement beyond initial participation. Drawing on the theory of legitimate peripheral participation (LPP) and the literature on motivation in crowdsourcing, we analyze interview and trace data from a large citizen science project. The analyses identify ways in which the technical features of the projects may serve as motivational factors leading participants towards sustained participation. The results suggest volunteers first engage in activities to support knowledge acquisition and later share knowledge with other volunteers and finally increase participation in Talk through a punctuated process of role discovery…(More)”

Turns Out the Internet Is Bad at Guessing How Many Coins Are in a Jar


Eric B. Steiner at Wired: “A few weeks ago, I asked the internet to guess how many coins were in a huge jar…The mathematical theory behind this kind of estimation game is apparently sound. That is, the mean of all the estimates will be uncannily close to the actual value, every time. James Surowiecki’s best-selling book, The Wisdom of Crowds, banks on this principle, and details several striking anecdotes of crowd accuracy. The most famous is a 1906 competition in Plymouth, England to guess the weight of an ox. As reported by Sir Francis Galton in a letter to Nature, no one guessed the actual weight of the ox, but the average of all 787 submitted guesses was exactly the beast’s actual weight….
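The statistical intuition behind the estimation game is that many independent, noisy guesses average out: the standard error of the crowd mean shrinks like 1/√n. A minimal simulation of that claim, using made-up numbers rather than the actual jar data:

```python
import random

rng = random.Random(42)
TRUE_VALUE = 1198   # hypothetical number of coins in the jar
N_GUESSERS = 787    # same crowd size as Galton's ox competition

# Each guesser's estimate is the true value plus independent noise.
guesses = [TRUE_VALUE + rng.gauss(0, 300) for _ in range(N_GUESSERS)]
crowd_mean = sum(guesses) / len(guesses)

# Individual guesses are off by roughly 300 on average, but the crowd
# mean's standard error is 300 / sqrt(787), about 11, so the average
# lands far closer to the truth than a typical individual does.
```

This is the idealized case the theory assumes; the rest of the article explains why real crowds often fail to meet those assumptions.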
So what happened to the collective intelligence supposedly buried in our disparate ignorance?
Most successful crowdsourcing projects are essentially the sum of many small parts: efficiently harvested resources (information, effort, money) courtesy of a large group of contributors. Think Wikipedia, Google search results, Amazon’s Mechanical Turk, and KickStarter.
But a sum of parts does not wisdom make. When we try to produce collective intelligence, things get messy. Whether we are predicting the outcome of an election, betting on sporting contests, or estimating the value of coins in a jar, the crowd’s take is vulnerable to at least three major factors: skill, diversity, and independence.
A certain amount of skill or knowledge in the crowd is obviously required, while crowd diversity expands the number of possible solutions or strategies. Participant independence is important because it preserves the value of individual contributors, which is another way of saying that if everyone copies their neighbor’s guess, the data are doomed.
Failure to meet any one of these conditions can lead to wildly inaccurate answers, information echo, or herd-like behavior. (There is more than a little irony with the herding hazard: The internet makes it possible to measure crowd wisdom and maybe put it to use. Yet because people tend to base their opinions on the opinions of others, the internet ends up amplifying the social conformity effect, thereby preventing an accurate picture of what the crowd actually thinks.)
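The independence hazard described above is easy to demonstrate in simulation: if guessers copy earlier guesses instead of estimating on their own, the crowd average becomes dominated by whatever the first few people happened to say. The copy probability and noise level below are arbitrary illustrative choices, not measurements of any real crowd:

```python
import random

def crowd_error(true_value, n, noise_sd, copy_prob, rng):
    """Absolute error of the crowd average when each guesser copies
    a random earlier guess with probability copy_prob."""
    guesses = []
    for _ in range(n):
        if guesses and rng.random() < copy_prob:
            guesses.append(rng.choice(guesses))                   # conform to the herd
        else:
            guesses.append(true_value + rng.gauss(0, noise_sd))   # estimate independently
    return abs(sum(guesses) / n - true_value)

rng = random.Random(7)
TRIALS = 200
independent = sum(crowd_error(1000, 500, 300, 0.0, rng) for _ in range(TRIALS)) / TRIALS
herded = sum(crowd_error(1000, 500, 300, 0.9, rng) for _ in range(TRIALS)) / TRIALS
# With heavy copying, the crowd's typical error is several times larger,
# because early guesses get amplified instead of averaged away.
```

This mirrors the parenthetical's point: the same medium that makes crowd measurement possible also amplifies conformity, degrading the very signal it collects.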
What’s more, even when these conditions—skill, diversity, independence—are reasonably satisfied, as they were in the coin jar experiment, humans exhibit a whole host of other cognitive biases and irrational thinking that can impede crowd wisdom. True, some bias can be positive; all that Gladwellian snap-judgment stuff. But most biases aren’t so helpful, and can too easily lead us to ignore evidence, overestimate probabilities, and see patterns where there are none. These biases are not vanquished simply by expanding sample size. On the contrary, they get magnified.
Given the last 60 years of research in cognitive psychology, I submit that Galton’s results with the ox weight data were outrageously lucky, and that the same is true of other instances of seemingly perfect “bean jar”-styled experiments….”

Democratizing Inequalities: Dilemmas of the New Public Participation


New book edited by Caroline W. Lee, Michael McQuarrie and Edward T. Walker: “Opportunities to “have your say,” “get involved,” and “join the conversation” are everywhere in public life. From crowdsourcing and town hall meetings to government experiments with social media, participatory politics increasingly seem like a revolutionary antidote to the decline of civic engagement and the thinning of the contemporary public sphere. Many argue that, with new technologies, flexible organizational cultures, and a supportive policymaking context, we now hold the keys to large-scale democratic revitalization.
Democratizing Inequalities shows that the equation may not be so simple. Modern societies face a variety of structural problems that limit potentials for true democratization, as well as vast inequalities in political action and voice that are not easily resolved by participatory solutions. Popular participation may even reinforce elite power in unexpected ways. Resisting an oversimplified account of participation as empowerment, this collection of essays brings together a diverse range of leading scholars to reveal surprising insights into how dilemmas of the new public participation play out in politics and organizations. Through investigations including fights over the authenticity of business-sponsored public participation, the surge of the Tea Party, the role of corporations in electoral campaigns, and participatory budgeting practices in Brazil, Democratizing Inequalities seeks to refresh our understanding of public participation and trace the reshaping of authority in today’s political environment.”

Businesses dig for treasure in open data


Lindsay Clark in ComputerWeekly: “Open data, a movement which promises access to vast swaths of information held by public bodies, has started getting its hands dirty, or rather its feet.
Before a spade goes in the ground, construction and civil engineering projects face a great unknown: what is down there? In the UK, should someone discover anything of archaeological importance, a project can be halted – sometimes for months – while researchers study the site and remove artefacts….
During an open innovation day hosted by the Science and Technologies Facilities Council (STFC), open data services and technology firm Democrata proposed analytics could predict the likelihood of unearthing an archaeological find in any given location. This would help developers understand the likely risks to construction and would assist archaeologists in targeting digs more accurately. The idea was inspired by a presentation from the Archaeological Data Service in the UK at the event in June 2014.
The proposal won support from the STFC which, together with IBM, provided a nine-strong development team and access to the Hartree Centre’s supercomputer – a 131,000 core high-performance facility. For natural language processing of historic documents, the system uses two components of IBM’s Watson – the AI service which famously won the US TV quiz show Jeopardy. The system uses SPSS modelling software, the language R for algorithm development and Hadoop data repositories….
The proof of concept draws together data from the University of York’s archaeological data, the Department of the Environment, English Heritage, Scottish Natural Heritage, Ordnance Survey, Forestry Commission, Office for National Statistics, the Land Registry and others….The system analyses sets of indicators of archaeology, including historic population dispersal trends, specific geology, flora and fauna considerations, as well as proximity to a water source, a trail or road, standing stones and other archaeological sites. Earlier studies created a list of 45 indicators which was whittled down to seven for the proof of concept. The team used logistic regression to assess the relationship between input variables and come up with its prediction….”
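The article names the technique (logistic regression over a handful of archaeological indicators) but not the implementation, which used SPSS and R. As a hedged sketch, here is a plain gradient-descent logistic regression over a few made-up binary indicators; the indicator names, data, and outputs are illustrative only, not the Democrata team's actual variables or results:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=3000):
    """Full-batch gradient descent on the logistic loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        grad_w, grad_b = [0.0] * len(w), 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            grad_w = [g + err * xj for g, xj in zip(grad_w, xi)]
            grad_b += err
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict(w, b, x):
    """Predicted probability of an archaeological find at a site."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Hypothetical indicators: [near water source, near historic trail, suitable geology]
X = [[1, 1, 0], [1, 0, 1], [1, 1, 1], [0, 0, 0], [0, 1, 0], [0, 0, 1]]
y = [1, 1, 1, 0, 0, 0]   # 1 = a find was recorded at the site

w, b = fit_logistic(X, y)
p_likely = predict(w, b, [1, 1, 1])    # site with all indicators present
p_unlikely = predict(w, b, [0, 0, 0])  # site with none
```

The appeal of logistic regression here is interpretability: each fitted weight indicates how strongly its indicator shifts the odds of a find, which is exactly the kind of relationship between input variables and outcome the team was assessing.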