New book: "Crowdsourcing"


New book by Jean-Fabrice Lebraty and Katia Lobre-Lebraty on Crowdsourcing: “Crowdsourcing is a relatively recent phenomenon that only appeared in 2006, but it continues to grow and diversify (crowdfunding, crowdcontrol, etc.). This book aims to review this concept and show how it leads to the creation of value and new business opportunities.
Chapter 1 is based on four examples: the online-banking sector, an informative television channel, the postal sector and the higher education sector. It shows that in the current context, for a company facing challenges, the crowd remains an untapped resource. The next chapter presents crowdsourcing as a new form of externalization and offers definitions of crowdsourcing. In Chapter 3, the authors attempt to explain how a company can create value by means of a crowdsourcing operation. To do this, the authors use a model linking types of value, types of crowd, and the means by which these crowds are accessed.
Chapter 4 examines in detail various forms that crowdsourcing may take, by presenting and discussing ten types of crowdsourcing operation. In Chapter 5, the authors imagine and explore the ways in which the dark side of crowdsourcing might be manifested and Chapter 6 offers some insight into the future of crowdsourcing.
Contents
1. A Turbulent and Paradoxical Environment.
2. Crowdsourcing: A New Form of Externalization.
3. Crowdsourcing and Value Creation.
4. Forms of Crowdsourcing.
5. The Dangers of Crowdsourcing.
6. The Future of Crowdsourcing.”

The Contours of Crowd Capability


New paper by Prashant Shukla and John Prpić: “The existence of dispersed knowledge has been a subject of inquiry for more than six decades. Despite the longevity of this rich research tradition, the “knowledge problem” has remained largely unresolved both in research and practice, and remains “the central theoretical problem of all social science”. However, in the 21st century, organizations are presented with opportunities through technology to potentially address the dispersed knowledge problem to some extent. One such opportunity is represented by the recent emergence of a variety of crowd-engaging information systems (IS).
In this vein, Crowdsourcing is being widely studied in numerous contexts, and the knowledge generated from these IS phenomena is well-documented. At the same time, other organizations are leveraging dispersed knowledge by putting in place IS-applications such as Prediction Markets to gather large sample-size forecasts from within and without the organization. Similarly, we are also observing many organizations using IS-tools such as “Wikis” to access the knowledge of dispersed populations within the boundaries of the organization. Further still, other organizations are applying gamification techniques to accumulate Citizen Science knowledge from the public at large through IS.
Among these seemingly disparate phenomena, a complex ecology of crowd-engaging IS has emerged, involving millions of people all around the world generating knowledge for organizations through IS. However, despite the obvious scale and reach of this emerging crowd-engagement paradigm, there are no examples of research (as far as we know) that systematically compares and contrasts a large variety of these existing crowd-engaging IS-tools in one work. Understanding this current state of affairs, we seek to address this significant research void by comparing and contrasting a number of the crowd-engaging forms of IS currently available for organizational use.

To achieve this goal, we employ the Theory of Crowd Capital as a lens to systematically structure our investigation of crowd-engaging IS. Employing this parsimonious lens, we first explain how Crowd Capital is generated through Crowd Capability in organizations. Taking this conceptual platform as a point of departure, in Section 3, we offer an array of examples of IS currently in use in modern practice to generate Crowd Capital. We compare and contrast these emerging IS techniques using the Crowd Capability construct, therein highlighting some important choices that organizations face when entering the crowd-engagement fray. This comparison, which we term “The Contours of Crowd Capability”, can be used by decision-makers and researchers alike to differentiate among the many extant methods of Crowd Capital generation. At the same time, our comparison also illustrates some important differences to be found in the internal organizational processes that accompany each form of crowd-engaging IS. In Section 4, we conclude with a discussion of the limitations of our work.”

Introducing Socrata’s Open Data Magazine: Open Innovation


“Socrata is dedicated to telling the story of open data as it evolves, which is why we have launched a quarterly magazine, “Open Innovation.”
As innovators push the open data movement forward, they are transforming government and public engagement at every level. With thousands of innovators all over the world – each with their own successes, advice, and ideas – there is a tremendous amount of story for us to tell.
The new magazine features articles, advice, infographics, and more dedicated exclusively to the open data movement. The first issue, Fall 2013, will cover topics such as:

  • What is a Chief Data Officer?
  • Who should be on your open data team?
  • How do you publish your first open data set?

It will also include four Socrata case studies and opinion pieces from some of the industry’s leading innovators…
The magazine is currently free to download or read online through the Socrata website. It is optimized for viewing on tablets and smart phones, with plans in the works to make the magazine available through the Kindle Fire and iTunes magazine stores.
Check out the first issue of Open Innovation at www.socrata.com/magazine.”

Smarter Than You Think: How Technology Is Changing Our Minds for the Better


New book by Clive Thompson: “It’s undeniable—technology is changing the way we think. But is it for the better? Amid a chorus of doomsayers, Clive Thompson delivers a resounding “yes.” The Internet age has produced a radical new style of human intelligence, worthy of both celebration and analysis. We learn more and retain it longer, write and think with global audiences, and even gain an ESP-like awareness of the world around us. Modern technology is making us smarter, better connected, and often deeper—both as individuals and as a society.
In Smarter Than You Think, Thompson shows that every technological innovation—from the written word to the printing press to the telegraph—has provoked the very same anxieties that plague us today. We panic that life will never be the same, that our attentions are eroding, that culture is being trivialized. But as in the past, we adapt—learning to use the new and retaining what’s good about the old.”

(Appropriate) Big Data for Climate Resilience?


Amy Luers at the Stanford Social Innovation Review: “The answer to whether big data can help communities build resilience to climate change is yes—there are huge opportunities, but there are also risks.

Opportunities

  • Feedback: Strong negative feedback is core to resilience. A simple example is our body’s response to heat stress—sweating, which is a natural feedback to cool down our body. In social systems, feedbacks are also critical for maintaining functions under stress. For example, communication by affected communities after a hurricane provides feedback for how and where organizations and individuals can provide help. While this kind of feedback used to rely completely on traditional communication channels, now crowdsourcing and data mining projects, such as Ushahidi and the Twitter Earthquake Detector, enable faster and more-targeted relief.
  • Diversity: Big data is enhancing diversity in a number of ways. Consider public health systems. Health officials are increasingly relying on digital detection methods, such as Google Flu Trends or Flu Near You, to augment and diversify traditional disease surveillance.
  • Self-Organization: A central characteristic of resilient communities is the ability to self-organize. This characteristic must exist within a community (see the National Research Council Resilience Report); it is not something that can be imposed from outside. However, social media and related data-mining tools (InfoAmazonia, Healthmap) can enhance situational awareness and facilitate collective action by helping people identify others with common interests, communicate with them, and coordinate efforts.

Risks

  • Eroding trust: Trust is well established as a core feature of community resilience. Yet the NSA PRISM escapade made it clear that big data projects are raising privacy concerns and possibly eroding trust. And it is not just an issue in government. For example, Target analyzes shopping patterns and can fairly accurately guess if someone in your family is pregnant (which is awkward if they know your daughter is pregnant before you do). When our trust in government, business, and communities weakens, it can decrease a society’s resilience to climate stress.
  • Mistaking correlation for causation: Data mining seeks meaning in patterns that are completely independent of theory (suggesting to some that theory is dead). This approach can lead to erroneous conclusions when correlation is mistakenly taken for causation. For example, one study demonstrated that data mining techniques could show a strong (however spurious) correlation between the changes in the S&P 500 stock index and butter production in Bangladesh. While interesting, a decision support system based on this correlation would likely prove misleading; a short illustrative sketch follows this list.
  • Failing to see the big picture: One of the biggest challenges with big data mining for building climate resilience is its overemphasis on the hyper-local and hyper-now. While this hyper-local, hyper-now information may be critical for business decisions, without a broader understanding of the longer-term and more-systemic dynamics of social and biophysical systems, big data provides no ability to understand future trends or anticipate vulnerabilities. We must not let our obsession with the here and now divert us from slower-changing variables such as declining groundwater, loss of biodiversity, and melting ice caps—all of which may silently define our future. A related challenge is the fact that big data mining tends to overlook the most vulnerable populations. We must not let the lure of the big data microscope on the “well-to-do” populations of the world make us blind to the less well-off populations within cities and communities that have more limited access to smart phones and the Internet.”
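As promised above, here is a minimal Python sketch of the correlation-versus-causation trap Luers describes. It is illustrative only: the series are random walks rather than the real S&P 500 or butter-production figures, and the point is simply that screening many unrelated series against one target will reliably turn up an apparently strong correlation by chance.

    # Illustrative sketch: spurious correlations from screening many unrelated series.
    # All data here are random walks, not actual S&P 500 or butter-production figures.
    import numpy as np

    rng = np.random.default_rng(0)
    n_years = 20                                  # short annual series, as in much macro data
    target = rng.normal(size=n_years).cumsum()    # stand-in for yearly changes in an index

    best_r = 0.0
    for _ in range(1000):                         # screen 1,000 unrelated candidate series
        candidate = rng.normal(size=n_years).cumsum()
        r = np.corrcoef(target, candidate)[0, 1]
        if abs(r) > abs(best_r):
            best_r = r

    print(f"strongest correlation among unrelated series: r = {best_r:.2f}")
    # With short series and many candidates, |r| > 0.8 is routine, yet it predicts
    # nothing; the correlation is an artifact of the search, not of any causal link.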

Dump the Prizes


Kevin Starr in the Stanford Social Innovation Review: “Contests, challenges, awards—they do more harm than good. Let’s get rid of them….Here’s why:
1. It wastes huge amounts of time.
The Knight Foundation recently released a thoughtful, well-publicized report on its experience running a dozen or so open contests. These are well-run contests, but the report states that there have been 25,000 entries overall, with only 400 winners. That means there have been 24,600 losers. Let’s say that, on average, entrants spent 10 hours working on their entries—that’s 246,000 hours wasted, or 120 people working full-time for a year. Other contests generate worse numbers. I’ve spoken with capable organization leaders who’ve spent 40-plus hours on entries for these things, and too often they find out later that the eligibility criteria were misleading anyway. They are the last people whose time we should waste. …
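(An editorial aside: the arithmetic above is easy to verify. A few lines of Python reproduce it, assuming roughly 2,000 working hours per person per year, a figure the piece does not state explicitly.)

    # Back-of-the-envelope check of the contest-time arithmetic quoted above.
    entries, winners = 25_000, 400
    hours_per_entry = 10          # the article's assumed average effort per entry
    hours_per_work_year = 2_000   # assumption: roughly 50 weeks x 40 hours

    losers = entries - winners                  # 24,600 losing entries
    wasted_hours = losers * hours_per_entry     # 246,000 hours
    person_years = wasted_hours / hours_per_work_year

    print(losers, wasted_hours, round(person_years))  # 24600 246000 123, i.e. about 120 person-years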
2. There is way too much emphasis on innovation and not nearly enough on implementation.
Ideas are easy; implementation is hard. Too many competitions are just about generating ideas and “innovation.” Novelty is fun, but there is already an immense limbo-land populated by successful pilots and proven innovations that have gone nowhere. I don’t want to fund anything that doesn’t have someone capable enough to execute on the idea and committed enough to make it work over the long haul. Great social entrepreneurs are people with high-impact ideas, the chops to execute on them, and the commitment to go the distance. They are rare, and they shouldn’t have to enter a contest to get what they need.
The current enthusiasm for crowdsourcing innovation reflects this fallacy that ideas are somehow in short supply. I’ve watched many capable professionals struggle to find implementation support for doable—even proven—real-world ideas, and it is galling to watch all the hoopla around well-intentioned ideas that are doomed to fail. Most crowdsourced ideas prove unworkable, but even if good ones emerge, there is no implementation fairy out there, no army of social entrepreneurs eager to execute on someone else’s idea. Much of what captures media attention and public awareness barely rises above the level of entertainment if judged by its potential to drive real impact.
3. It gets too much wrong and too little right.
The Hilton Humanitarian prize is a single winner-take-all award of $1.5 million to one lucky organization each year. With a huge prize like that, everyone feels compelled to apply (that is, get nominated), and I can’t tell you how much time I’ve wasted on fruitless recommendations. Very smart people from the foundation spend a lot of time investigating candidates—and I don’t understand why. The list of winners over the past ten years includes a bunch of very well-known, mostly wonderful organizations: BRAC, PIH, Tostan, PATH, Aravind, Doctors Without Borders. I mean, c’mon—you could pick these names out of a hat. BRAC, for example, is an organization we should all revere and imitate, but its budget in 2012 was $449 million, and it’s already won a zillion prizes. If you gave even a third of the Hilton prize to an up-and-coming organization, it could be transformative.
Too many of these things are winner-or-very-few-take-all, and too many focus on the usual suspects…
4. It serves as a distraction from the social sector’s big problem.
The central problem with the social sector is that it does not function as a real market for impact, a market where smart funders channel the vast majority of resources toward those best able to create change. Contests are a sideshow masquerading as a main-stage event, a smokescreen that obscures the lack of efficient allocation of philanthropic and investment capital. We need real competition for impact among social sector organizations, not this faux version that makes the noise-to-signal ratio that much worse….”
See also response by Mayur Patel on Why Open Contests Work

New! Humanitarian Computing Library


Patrick Meier at iRevolution: “The field of “Humanitarian Computing” applies Human Computing and Machine Computing to address major information-based challenges in the humanitarian space. Human Computing refers to crowdsourcing and microtasking, which is also referred to as crowd computing. In contrast, Machine Computing draws on natural language processing and machine learning, amongst other disciplines. The Next Generation Humanitarian Technologies we are prototyping at QCRI are powered by Humanitarian Computing research and development (R&D).
My QCRI colleagues and I just launched the first ever Humanitarian Computing Library, which is publicly available here. The purpose of this library, or wiki, is to consolidate existing and future research that relates to Humanitarian Computing in order to support the development of next generation humanitarian tech. The repository currently holds over 500 publications that span topics such as Crisis Management, Trust and Security, Software and Tools, Geographical Analysis and Crowdsourcing. These publications are largely drawn from (but not limited to) peer-reviewed papers submitted at leading conferences around the world. We invite you to add your own research on humanitarian computing to this growing collection of resources.”

How Mechanical Turkers Crowdsourced a Huge Lexicon of Links Between Words and Emotion


The Physics arXiv Blog: “Sentiment analysis on the social web depends on how a person’s state of mind is expressed in words. Now a new database of the links between words and emotions could provide a better foundation for this kind of analysis.


One of the buzzphrases associated with the social web is sentiment analysis. This is the ability to determine a person’s opinion or state of mind by analysing the words they post on Twitter, Facebook or some other medium.
Much has been promised with this method—the ability to measure satisfaction with politicians, movies and products; the ability to better manage customer relations; the ability to create dialogue for emotion-aware games; the ability to measure the flow of emotion in novels; and so on.
The idea is to entirely automate this process—to analyse the firehose of words produced by social websites using advanced data mining techniques to gauge sentiment on a vast scale.
But all this depends on how well we understand the emotion and polarity (whether negative or positive) that people associate with each word or combinations of words.
Today, Saif Mohammad and Peter Turney at the National Research Council Canada in Ottawa unveil a huge database of words and their associated emotions and polarity, which they have assembled quickly and inexpensively using Amazon’s Mechanical Turk crowdsourcing website. They say this crowdsourcing mechanism makes it possible to increase the size and quality of the database quickly and easily….The result is a comprehensive word-emotion lexicon for over 10,000 words and two-word phrases, which they call EmoLex….
The bottom line is that sentiment analysis can only ever be as good as the database on which it relies. With EmoLex, analysts have a new tool for their box of tricks.”
Ref: arxiv.org/abs/1308.6297: Crowdsourcing a Word-Emotion Association Lexicon
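To make concrete how a word-emotion lexicon of this kind gets used, here is a minimal Python sketch of lexicon-based sentiment scoring. The tiny inline lexicon and its format are hypothetical stand-ins (the real EmoLex covers over 10,000 words and two-word phrases), and the scoring rule is deliberately naive.

    # Minimal, illustrative lexicon-based sentiment scoring.
    # The inline "lexicon" below is a made-up stand-in for a resource like EmoLex.
    from collections import Counter

    LEXICON = {
        # word: (associated emotions, polarity: +1 positive / -1 negative)
        "delighted": ({"joy", "anticipation"}, +1),
        "refund":    ({"trust"}, +1),
        "delay":     ({"anger", "sadness"}, -1),
        "disaster":  ({"fear", "sadness"}, -1),
    }

    def score(text):
        """Return (overall polarity, emotion counts) for a piece of text."""
        polarity, emotions = 0, Counter()
        for word in text.lower().split():
            if word in LEXICON:
                assoc, pol = LEXICON[word]
                polarity += pol
                emotions.update(assoc)
        return polarity, emotions

    print(score("delighted with the quick refund despite the shipping delay"))
    # polarity of +1, plus counts for joy, anticipation, trust, anger and sadness

The quoted point carries over directly: the scores are only ever as good as the lexicon behind them.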

Citizen science versus NIMBY?


Ethan Zuckerman’s latest blog: “Safecast is a remarkable project born out of a desire to understand the health and safety implications of the release of radiation from the Fukushima Daiichi nuclear power plant in the wake of the March 11, 2011 earthquake and tsunami. Unsatisfied with limited and questionable information about radiation released by the Japanese government, Joi Ito, Peter, Sean and others worked to design, build and deploy GPS-enabled Geiger counters which could be used by concerned citizens throughout Japan to monitor alpha, beta and gamma radiation and understand what parts of Japan have been most affected by the Fukushima disaster.

The Safecast project has produced an elegant map that shows how complicated the Fukushima disaster will be for the Japanese government to recover from. While there are predictably elevated levels of radiation immediately around the Fukushima plant and in the 18-mile exclusion zones, there is a “plume” of increased radiation south and west of the reactors. The map is produced from millions of radiation readings collected by volunteers, who generally take readings while driving – Safecast’s bGeigie meter automatically takes readings every few seconds and stores them along with associated GPS coordinates for later upload to the server.
This long and thoughtful blog post about the progress of government decontamination efforts, the cost-benefit of those efforts, and the government’s transparency or opacity around cleanup gives a sense for what Safecast is trying to do: provide ways for citizens to check and verify government efforts and understand the complexity of decisions about radiation exposure. This is especially important in Japan, as there’s been widespread frustration over the failures of TEPCO to make progress on cleaning up the reactor site, leading to anger and suspicion about the larger cleanup process.
For me, Safecast raises two interesting questions:
– If you’re not getting trustworthy or sufficient information from your government, can you use crowdsourcing, citizen science or other techniques to generate that data?
– How does collecting data relate to civic engagement? Is it a path towards increased participation as an engaged and effective citizen?
To have some time to reflect on these questions, I decided I wanted to try some of my own radiation monitoring. I borrowed Joi Ito’s bGeigie and set off for my local Spent Nuclear Fuel and Greater-Than-Class C Low Level Radioactive Waste dry cask storage facility…

Projects like Safecast – and the projects I’m exploring this coming year under the heading of citizen infrastructure monitoring – have a challenge. Most participants aren’t going to uncover Ed Snowden-calibre information by driving around with a geiger counter or mapping wells in their communities. Lots of data collected is going to reveal that governments and corporations are doing their jobs, as my data suggests. It’s easy to track a path between collecting groundbreaking data and getting involved with deeper civic and political issues – will collecting data showing that the local nuclear plant is apparently safe get me more involved with issues of nuclear waste disposal?
It just might. One of the great potentials of citizen science and citizen infrastructure monitoring is the possibility of reducing the exotic to the routine….”
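The data pipeline Zuckerman describes, geotagged readings from a bGeigie aggregated into a shared map, can be sketched in a few lines of Python. The CSV column names and the grid resolution below are hypothetical (the post does not describe Safecast’s actual log format or API); the sketch only shows the basic step of binning readings into lat/lon grid cells and averaging them per cell.

    # Illustrative only: bin geotagged radiation readings into a coarse lat/lon grid
    # and average per cell. Column names and values are hypothetical, not Safecast's
    # actual log format.
    import csv
    from collections import defaultdict

    CELL = 0.01  # grid cell size in degrees (roughly 1 km); an arbitrary choice here

    def grid_averages(path):
        sums, counts = defaultdict(float), defaultdict(int)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):  # expects columns: lat, lon, cpm
                cell = (round(float(row["lat"]) / CELL) * CELL,
                        round(float(row["lon"]) / CELL) * CELL)
                sums[cell] += float(row["cpm"])
                counts[cell] += 1
        return {cell: sums[cell] / counts[cell] for cell in sums}

    # Example use, with a hypothetical file of crowdsourced readings:
    # for cell, avg in sorted(grid_averages("readings.csv").items()):
    #     print(cell, round(avg, 1))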

Employing digital crowdsourced information resources: Managing the emerging information commons


New paper by Robin Mansell in the International Journal of the Commons: “This paper examines the ways loosely connected online groups and formal science professionals are responding to the potential for collaboration using digital technology platforms and crowdsourcing as a means of generating data in the digital information commons. The preferred approaches of each of these groups to managing information production, circulation and application are examined in the light of the increasingly vast amounts of data that are being generated by participants in the commons. Crowdsourcing projects initiated by both groups in the fields of astronomy, environmental science and crisis and emergency response are used to illustrate some of the barriers and opportunities for greater collaboration in the management of data sets initially generated for quite different purposes. The paper responds to claims in the literature about the incommensurability of emerging approaches to open information management as practiced by formal science and many loosely connected online groups, especially with respect to authority and the curation of data. Yet, in the wake of technological innovation and diverse applications of crowdsourced data, there are numerous opportunities for collaboration. This paper draws on examples employing different social technologies of authority to generate and manage data in the commons. It suggests several measures that could provide incentives for greater collaboration in the future. It also emphasises the need for a research agenda to examine whether and how changes in social technologies might foster collaboration in the interests of reaping the benefits of increasingly large data resources for both shorter-term analysis and longer-term accumulation of useful knowledge.”