New crowdsourcing platform links tech-skilled volunteers with charities


Charity Digital News: “The Atlassian Foundation today previewed its innovative crowdsourcing platform, MakeaDiff.org, which will allow nonprofits to coordinate with technically skilled volunteers who want to help convert ideas into successful projects…
Once vetted, nonprofits will be able to list their volunteer jobs on the site. Skilled volunteers such as developers, designers, business analysts and project managers will then be able to go online and quickly search the site for opportunities relevant and convenient to them.
Atlassian Foundation manager, Melissa Beaumont Lee, said: “We started hearing from nonprofits that what they valued even more than donations was access to Atlassian’s technology expertise. Similarly, we had lots of employees who were keen to volunteer, but didn’t know how to get involved; coordinating volunteers for all these amazing projects was just not scalable. Thus, MakeaDiff.org was born to benefit both nonprofits and volunteers. We wanted to reduce the friction in coordinating efforts so more time can be spent doing really meaningful work.”
 

Best Practices for Government Crowdsourcing Programs


Anton Root: “Crowdsourcing helps communities connect and organize, so it makes sense that governments are increasingly making use of crowd-powered technologies and processes.
Just recently, for instance, we wrote about the Malaysian government’s initiative to crowdsource the national budget. Closer to home, we’ve seen government agencies from USAID to NASA make use of the crowd.
Daren Brabham, professor at the University of Southern California, recently published a report titled “Using Crowdsourcing In Government” that introduces readers to the basics of crowdsourcing, highlights effective use cases, and establishes best practices when it comes to governments opening up to the crowd. Below, we take a look at a few of the suggestions Brabham makes to those considering crowdsourcing.
Brabham splits his ten best practices into three phases: planning, implementation, and post-implementation. His first suggestion in the planning phase may be the most critical of all: “Clearly define the problem and solution parameters.” If the community isn’t absolutely clear on what the problem is, the ideas and solutions that users submit will be equally vague and largely useless.
This applies not only to government agencies, but also to SMEs and large enterprises making use of crowdsourcing. At Massolution NYC 2013, for instance, we heard again and again the importance of meticulously defining a problem. And open innovation platform InnoCentive’s CEO Andy Zynga stressed the big role his company plays in helping organizations do away with the “curse of knowledge.”
Brabham also has advice for projects in their implementation phase, the key bit being: “Launch a promotional plan and a plan to grow and sustain the community.” Simply put, crowdsourcing cannot work without a crowd, so it’s important to build up the community before launching a campaign. It does take some balance, however, as a community that’s too large by the time a campaign launches can turn off newcomers who “may not feel welcome or may be unsure how to become initiated into the group or taken seriously.”
Brabham’s key advice for the post-implementation phase is: “Assess the project from many angles.” The author suggests tracking website traffic patterns, asking users to volunteer information about themselves when registering, and doing original research through surveys and interviews. The results of follow-up research can help to better understand the responses submitted, and also make it easier to show the successes of the crowdsourcing campaign. This is especially important for organizations partaking in ongoing crowdsourcing efforts.”

Using Participatory Crowdsourcing in South Africa to Create a Safer Living Environment


New Paper by Bhaveer Bhana, Stephen Flowerday, and Aharon Satt in the International Journal of Distributed Sensor Networks: “The increase in urbanisation is making the management of city resources a difficult task. Data collected through observations (utilising humans as sensors) of the city surroundings can be used to improve decision making in terms of managing these resources. However, the data collected must be of a certain quality in order to ensure that effective and efficient decisions are made. This study is focused on the improvement of emergency and non-emergency services (city resources) through the use of participatory crowdsourcing (humans as sensors) as a data collection method (collect public safety data), utilising voice technology in the form of an interactive voice response (IVR) system.
The study illustrates how participatory crowdsourcing (specifically humans as sensors) can be used as a Smart City initiative focusing on public safety by illustrating what is required to contribute to the Smart City, and developing a roadmap in the form of a model to assist decision making when selecting an optimal crowdsourcing initiative. Public safety data quality criteria were developed to assess and identify the problems affecting data quality.
This study is guided by design science methodology and applies three driving theories: the Data Information Knowledge Action Result (DIKAR) model, the characteristics of a Smart City, and a credible Data Quality Framework. Four critical success factors were developed to ensure high quality public safety data is collected through participatory crowdsourcing utilising voice technologies.”

New book: "Crowdsourcing"


New book by Jean-Fabrice Lebraty and Katia Lobre-Lebraty on Crowdsourcing: “Crowdsourcing is a relatively recent phenomenon that only appeared in 2006, but it continues to grow and diversify (crowdfunding, crowdcontrol, etc.). This book aims to review this concept and show how it leads to the creation of value and new business opportunities.
Chapter 1 is based on four examples: the online-banking sector, an informative television channel, the postal sector and the higher education sector. It shows that in the current context, for a company facing challenges, the crowd remains an untapped resource. The next chapter presents crowdsourcing as a new form of externalization and offers definitions of the concept. In Chapter 3, the authors attempt to explain how a company can create value by means of a crowdsourcing operation. To do this, they use a model linking types of value, types of crowd, and the means by which these crowds are accessed.
Chapter 4 examines in detail various forms that crowdsourcing may take, by presenting and discussing ten types of crowdsourcing operation. In Chapter 5, the authors imagine and explore the ways in which the dark side of crowdsourcing might be manifested and Chapter 6 offers some insight into the future of crowdsourcing.
Contents
1. A Turbulent and Paradoxical Environment.
2. Crowdsourcing: A New Form of Externalization.
3. Crowdsourcing and Value Creation.
4. Forms of Crowdsourcing.
5. The Dangers of Crowdsourcing.
6. The Future of Crowdsourcing.”

(Appropriate) Big Data for Climate Resilience?


Amy Luers at the Stanford Social Innovation Review: “The answer to whether big data can help communities build resilience to climate change is yes—there are huge opportunities, but there are also risks.

Opportunities

  • Feedback: Strong negative feedback is core to resilience. A simple example is our body’s response to heat stress—sweating, which is a natural feedback to cool down our body. In social systems, feedbacks are also critical for maintaining functions under stress. For example, communication by affected communities after a hurricane provides feedback for how and where organizations and individuals can provide help. While this kind of feedback used to rely completely on traditional communication channels, now crowdsourcing and data mining projects, such as Ushahidi and the Twitter Earthquake Detector, enable faster and more-targeted relief.
  • Diversity: Big data is enhancing diversity in a number of ways. Consider public health systems. Health officials are increasingly relying on digital detection methods, such as Google Flu Trends or Flu Near You, to augment and diversify traditional disease surveillance.
  • Self-Organization: A central characteristic of resilient communities is the ability to self-organize. This characteristic must exist within a community (see the National Research Council Resilience Report); it is not something that can be imposed from outside. However, social media and related data-mining tools (InfoAmazonia, Healthmap) can enhance situational awareness and facilitate collective action by helping people identify others with common interests, communicate with them, and coordinate efforts.

Risks

  • Eroding trust: Trust is well established as a core feature of community resilience. Yet the NSA PRISM escapade made it clear that big data projects are raising privacy concerns and possibly eroding trust. And it is not just an issue in government. For example, Target analyzes shopping patterns and can fairly accurately guess if someone in your family is pregnant (which is awkward if they know your daughter is pregnant before you do). When our trust in government, business, and communities weakens, it can decrease a society’s resilience to climate stress.
  • Mistaking correlation for causation: Data mining seeks meaning in patterns that are completely independent of theory (suggesting to some that theory is dead). This approach can lead to erroneous conclusions when correlation is mistakenly taken for causation. For example, one study demonstrated that data mining techniques could show a strong (albeit spurious) correlation between changes in the S&P 500 stock index and butter production in Bangladesh. While interesting, a decision support system based on such a correlation would likely prove misleading (a minimal numerical illustration of the effect follows this list).
  • Failing to see the big picture: One of the biggest challenges with big data mining for building climate resilience is its overemphasis on the hyper-local and hyper-now. While this hyper-local, hyper-now information may be critical for business decisions, without a broader understanding of the longer-term and more-systemic dynamics of social and biophysical systems, big data provides no ability to understand future trends or anticipate vulnerabilities. We must not let our obsession with the here and now divert us from slower-changing variables such as declining groundwater, loss of biodiversity, and melting ice caps—all of which may silently define our future. A related challenge is the fact that big data mining tends to overlook the most vulnerable populations. We must not let the lure of the big data microscope on the “well-to-do” populations of the world make us blind to the less well-off populations within cities and communities that have more limited access to smart phones and the Internet.”
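
Here is a minimal sketch of the correlation-for-causation trap described above, using two independent random walks in place of the actual S&P 500 and butter-production series (the data, seed, and variable names are arbitrary, for illustration only):

```python
# Two independent random walks routinely show strong spurious correlation.
import numpy as np

rng = np.random.default_rng(seed=7)
n = 250  # roughly one year of trading days

index = np.cumsum(rng.normal(size=n))   # stand-in for a stock index
butter = np.cumsum(rng.normal(size=n))  # stand-in for butter production

r = np.corrcoef(index, butter)[0, 1]
print(f"Pearson r between two unrelated series: {r:+.2f}")
# Trending series like these often show |r| > 0.5 by chance alone, which is
# why a correlation surfaced by mining is no basis for a causal claim.
```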

Dump the Prizes


Kevin Starr in the Stanford Social Innovation Review: “Contests, challenges, awards—they do more harm than good. Let’s get rid of them….Here’s why:
1. It wastes huge amounts of time.
The Knight Foundation recently released a thoughtful, well-publicized report on its experience running a dozen or so open contests. These are well-run contests, but the report states that there have been 25,000 entries overall, with only 400 winners. That means there have been 24,600 losers. Let’s say that, on average, entrants spent 10 hours working on their entries—that’s 246,000 hours wasted, or 120 people working full-time for a year. Other contests generate worse numbers. I’ve spoken with capable organization leaders who’ve spent 40-plus hours on entries for these things, and too often they find out later that the eligibility criteria were misleading anyway. They are the last people whose time we should waste. …
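
Starr’s back-of-envelope numbers are easy to check; here is a minimal sketch, assuming his stated average of 10 hours per entry and a conventional 2,080-hour full-time working year:

```python
entries = 25_000                 # total entries, per the Knight Foundation report
winners = 400
hours_per_entry = 10             # Starr's assumed average effort per entry
fulltime_hours_per_year = 2_080  # 52 weeks x 40 hours (assumption)

losers = entries - winners                # 24,600 losing entries
wasted_hours = losers * hours_per_entry   # 246,000 hours
person_years = wasted_hours / fulltime_hours_per_year

print(f"{losers:,} losing entries -> {wasted_hours:,} hours, "
      f"about {person_years:.0f} person-years")
# ~118 person-years, i.e. roughly the "120 people working
# full-time for a year" that Starr cites.
```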
2. There is way too much emphasis on innovation and not nearly enough on implementation.
Ideas are easy; implementation is hard. Too many competitions are just about generating ideas and “innovation.” Novelty is fun, but there is already an immense limbo-land populated by successful pilots and proven innovations that have gone nowhere. I don’t want to fund anything that doesn’t have someone capable enough to execute on the idea and committed enough to make it work over the long haul. Great social entrepreneurs are people with high-impact ideas, the chops to execute on them, and the commitment to go the distance. They are rare, and they shouldn’t have to enter a contest to get what they need.
The current enthusiasm for crowdsourcing innovation reflects this fallacy that ideas are somehow in short supply. I’ve watched many capable professionals struggle to find implementation support for doable—even proven—real-world ideas, and it is galling to watch all the hoopla around well-intentioned ideas that are doomed to fail. Most crowdsourced ideas prove unworkable, but even if good ones emerge, there is no implementation fairy out there, no army of social entrepreneurs eager to execute on someone else’s idea. Much of what captures media attention and public awareness barely rises above the level of entertainment if judged by its potential to drive real impact.
3. It gets too much wrong and too little right.
The Hilton Humanitarian prize is a single winner-take-all award of $1.5 million to one lucky organization each year. With a huge prize like that, everyone feels compelled to apply (that is, get nominated), and I can’t tell you how much time I’ve wasted on fruitless recommendations. Very smart people from the foundation spend a lot of time investigating candidates—and I don’t understand why. The list of winners over the past ten years includes a bunch of very well-known, mostly wonderful organizations: BRAC, PIH, Tostan, PATH, Aravind, Doctors Without Borders. I mean, c’mon—you could pick these names out of a hat. BRAC, for example, is an organization we should all revere and imitate, but its budget in 2012 was $449 million, and it’s already won a zillion prizes. If you gave even a third of the Hilton prize to an up-and-coming organization, it could be transformative.
Too many of these things are winner-or-very-few-take-all, and too many focus on the usual suspects…
4. It serves as a distraction from the social sector’s big problem.
The central problem with the social sector is that it does not function as a real market for impact, a market where smart funders channel the vast majority of resources toward those best able to create change. Contests are a sideshow masquerading as a main-stage event, a smokescreen that obscures the lack of efficient allocation of philanthropic and investment capital. We need real competition for impact among social sector organizations, not this faux version that makes the noise-to-signal ratio that much worse….”
See also response by Mayur Patel on Why Open Contests Work

New! Humanitarian Computing Library


Patrick Meier at iRevolution: “The field of “Humanitarian Computing” applies Human Computing and Machine Computing to address major information-based challenges in the humanitarian space. Human Computing refers to crowdsourcing and microtasking, which is also referred to as crowd computing. In contrast, Machine Computing draws on natural language processing and machine learning, amongst other disciplines. The Next Generation Humanitarian Technologies we are prototyping at QCRI are powered by Humanitarian Computing research and development (R&D).
My QCRI colleagues and I just launched the first-ever Humanitarian Computing Library, which is publicly available here. The purpose of this library, or wiki, is to consolidate existing and future research that relates to Humanitarian Computing in order to support the development of next-generation humanitarian tech. The repository currently holds over 500 publications that span topics such as Crisis Management, Trust and Security, Software and Tools, Geographical Analysis and Crowdsourcing. These publications are largely drawn from (but not limited to) peer-reviewed papers submitted at leading conferences around the world. We invite you to add your own research on humanitarian computing to this growing collection of resources.”

How Mechanical Turkers Crowdsourced a Huge Lexicon of Links Between Words and Emotion


The Physics arXiv Blog: Sentiment analysis on the social web depends on how a person’s state of mind is expressed in words. Now a new database of the links between words and emotions could provide a better foundation for this kind of analysis.


“One of the buzzphrases associated with the social web is sentiment analysis. This is the ability to determine a person’s opinion or state of mind by analysing the words they post on Twitter, Facebook or some other medium.
Much has been promised with this method—the ability to measure satisfaction with politicians, movies and products; the ability to better manage customer relations; the ability to create dialogue for emotion-aware games; the ability to measure the flow of emotion in novels; and so on.
The idea is to entirely automate this process—to analyse the firehose of words produced by social websites using advanced data mining techniques to gauge sentiment on a vast scale.
But all this depends on how well we understand the emotion and polarity (whether negative or positive) that people associate with each word or combinations of words.
Today, Saif Mohammad and Peter Turney at the National Research Council Canada in Ottawa unveil a huge database of words and their associated emotions and polarity, which they have assembled quickly and inexpensively using Amazon’s crowdsourcing Mechanical Turk website. They say this crowdsourcing mechanism makes it possible to increase the size and quality of the database quickly and easily…. The result is a comprehensive word-emotion lexicon for over 10,000 words or two-word phrases, which they call EmoLex….
The bottom line is that sentiment analysis can only ever be as good as the database on which it relies. With EmoLex, analysts have a new tool for their box of tricks.”
Ref: arxiv.org/abs/1308.6297: Crowdsourcing a Word-Emotion Association Lexicon
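
A minimal sketch of how a word-emotion lexicon in the style of EmoLex can back this kind of analysis; the sample entries and the score_text helper below are illustrative assumptions, not rows from the actual EmoLex data:

```python
from collections import Counter

# word -> (emotions, polarity); hypothetical sample entries
LEXICON = {
    "abandon": (("fear", "sadness"), "negative"),
    "cherish": (("joy", "trust"), "positive"),
    "shout":   (("anger",), "negative"),
}

def score_text(text: str) -> Counter:
    """Tally the emotions evoked by lexicon words appearing in a text."""
    tally: Counter = Counter()
    for word in text.lower().split():
        if word in LEXICON:
            emotions, _polarity = LEXICON[word]
            tally.update(emotions)
    return tally

print(score_text("do not abandon what you cherish"))
# Counter({'fear': 1, 'sadness': 1, 'joy': 1, 'trust': 1})
```

Scaling such a lexicon up is exactly where crowdsourcing helps: each word-emotion judgment is a small, independent task that can be farmed out cheaply to many Mechanical Turk workers.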

Citizen science versus NIMBY?


Ethan Zuckerman’s latest blog: “Safecast is a remarkable project born out of a desire to understand the health and safety implications of the release of radiation from the Fukushima Daiichi nuclear power plant in the wake of the March 11, 2011 earthquake and tsunami. Unsatisfied with the limited and questionable information about radiation released by the Japanese government, Joi Ito, Peter, Sean and others worked to design, build and deploy GPS-enabled Geiger counters which could be used by concerned citizens throughout Japan to monitor alpha, beta and gamma radiation and understand what parts of Japan have been most affected by the Fukushima disaster.

The Safecast project has produced an elegant map that shows how complicated the Fukushima disaster will be for the Japanese government to recover from. While there are predictably elevated levels of radiation immediately around the Fukushima plant and in the 18-mile exclusion zones, there is a “plume” of increased radiation south and west of the reactors. The map is produced from millions of radiation readings collected by volunteers, who generally take readings while driving – Safecast’s bGeigie meter automatically takes readings every few seconds and stores them along with associated GPS coordinates for later upload to the server.
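
A minimal sketch of the kind of geotagged log such a meter might accumulate; the field names and record helper here are assumptions for illustration, not Safecast’s actual log format:

```python
import time
from dataclasses import dataclass, asdict

@dataclass
class Reading:
    timestamp: float  # Unix time of the measurement
    lat: float        # GPS latitude, decimal degrees
    lon: float        # GPS longitude, decimal degrees
    cpm: int          # counts per minute from the Geiger tube

log: list[Reading] = []

def record(lat: float, lon: float, cpm: int) -> None:
    """Append one geotagged reading; readings are uploaded in bulk later."""
    log.append(Reading(time.time(), lat, lon, cpm))

record(37.4218, 141.0327, 52)  # one illustrative reading near the Fukushima coast
payload = [asdict(r) for r in log]  # what a later bulk upload might send
```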
This long and thoughtful blog post about the progress of government decontamination efforts, the cost-benefit of those efforts, and the government’s transparency or opacity around cleanup gives a sense of what Safecast is trying to do: provide ways for citizens to check and verify government efforts and understand the complexity of decisions about radiation exposure. This is especially important in Japan, as there’s been widespread frustration over the failures of TEPCO to make progress on cleaning up the reactor site, leading to anger and suspicion about the larger cleanup process.
For me, Safecast raises two interesting questions:
– If you’re not getting trustworthy or sufficient information from your government, can you use crowdsourcing, citizen science or other techniques to generate that data?
– How does collecting data relate to civic engagement? Is it a path towards increased participation as an engaged and effective citizen?
To have some time to reflect on these questions, I decided I wanted to try some of my own radiation monitoring. I borrowed Joi Ito’s bGeigie and set off for my local Spent Nuclear Fuel and Greater-Than-Class C Low Level Radioactive Waste dry cask storage facility…

Projects like Safecast – and the projects I’m exploring this coming year under the heading of citizen infrastructure monitoring – have a challenge. Most participants aren’t going to uncover Ed Snowden-calibre information by driving around with a Geiger counter or mapping wells in their communities. Much of the data collected is going to reveal that governments and corporations are doing their jobs, as my data suggests. It’s easy to trace a path from collecting groundbreaking data to getting involved with deeper civic and political issues – but will collecting data showing that the local nuclear plant is apparently safe get me more involved with issues of nuclear waste disposal?
It just might. One of the great potentials of citizen science and citizen infrastructure monitoring is the possibility of reducing the exotic to the routine….”

Employing digital crowdsourced information resources: Managing the emerging information commons


New Paper by Robin Mansell in the International Journal of the Commons: “This paper examines the ways loosely connected online groups and formal science professionals are responding to the potential for collaboration using digital technology platforms and crowdsourcing as a means of generating data in the digital information commons. The preferred approaches of each of these groups to managing information production, circulation and application are examined in the light of the increasingly vast amounts of data that are being generated by participants in the commons. Crowdsourcing projects initiated by both groups in the fields of astronomy, environmental science and crisis and emergency response are used to illustrate some of the barriers and opportunities for greater collaboration in the management of data sets initially generated for quite different purposes. The paper responds to claims in the literature about the incommensurability of emerging approaches to open information management as practiced by formal science and many loosely connected online groups, especially with respect to authority and the curation of data. Yet, in the wake of technological innovation and diverse applications of crowdsourced data, there are numerous opportunities for collaboration. This paper draws on examples employing different social technologies of authority to generate and manage data in the commons. It suggests several measures that could provide incentives for greater collaboration in the future. It also emphasises the need for a research agenda to examine whether and how changes in social technologies might foster collaboration in the interests of reaping the benefits of increasingly large data resources for both shorter term analysis and longer term accumulation of useful knowledge.”