Kevin Starr in the Stanford Social Innovation Review: “Contests, challenges, awards—they do more harm than good. Let’s get rid of them…. Here’s why:
1. It wastes huge amounts of time.
The Knight Foundation recently released a thoughtful, well-publicized report on its experience running a dozen or so open contests. These are well-run contests, but the report states that there have been 25,000 entries overall, with only 400 winners. That means there have been 24,600 losers. Let’s say that, on average, entrants spent 10 hours working on their entries—that’s 246,000 hours wasted, or 120 people working full-time for a year. Other contests generate worse numbers. I’ve spoken with capable organization leaders who’ve spent 40-plus hours on entries for these things, and too often they find out later that the eligibility criteria were misleading anyway. They are the last people whose time we should waste. …
2. There is way too much emphasis on innovation and not nearly enough on implementation.
Ideas are easy; implementation is hard. Too many competitions are just about generating ideas and “innovation.” Novelty is fun, but there is already an immense limbo-land populated by successful pilots and proven innovations that have gone nowhere. I don’t want to fund anything that doesn’t have someone capable enough to execute on the idea and committed enough to make it work over the long haul. Great social entrepreneurs are people with high-impact ideas, the chops to execute on them, and the commitment to go the distance. They are rare, and they shouldn’t have to enter a contest to get what they need.
The current enthusiasm for crowdsourcing innovation reflects this fallacy that ideas are somehow in short supply. I’ve watched many capable professionals struggle to find implementation support for doable—even proven—real-world ideas, and it is galling to watch all the hoopla around well-intentioned ideas that are doomed to fail. Most crowdsourced ideas prove unworkable, but even if good ones emerge, there is no implementation fairy out there, no army of social entrepreneurs eager to execute on someone else’s idea. Much of what captures media attention and public awareness barely rises above the level of entertainment if judged by its potential to drive real impact.
3. It gets too much wrong and too little right.
The Hilton Humanitarian prize is a single winner-take-all award of $1.5 million to one lucky organization each year. With a huge prize like that, everyone feels compelled to apply (that is, get nominated), and I can’t tell you how much time I’ve wasted on fruitless recommendations. Very smart people from the foundation spend a lot of time investigating candidates—and I don’t understand why. The list of winners over the past ten years includes a bunch of very well-known, mostly wonderful organizations: BRAC, PIH, Tostan, PATH, Aravind, Doctors Without Borders. I mean, c’mon—you could pick these names out of a hat. BRAC, for example, is an organization we should all revere and imitate, but its budget in 2012 was $449 million, and it’s already won a zillion prizes. If you gave even a third of the Hilton prize to an up-and-coming organization, it could be transformative.
Too many of these things are winner-or-very-few-take-all, and too many focus on the usual suspects…
4. It serves as a distraction from the social sector’s big problem.
The central problem with the social sector is that it does not function as a real market for impact, a market where smart funders channel the vast majority of resources toward those best able to create change. Contests are a sideshow masquerading as a main-stage event, a smokescreen that obscures the lack of efficient allocation of philanthropic and investment capital. We need real competition for impact among social sector organizations, not this faux version that makes the noise-to-signal ratio that much worse….”
See also the response by Mayur Patel, Why Open Contests Work.
New! Humanitarian Computing Library
Patrick Meier at iRevolution: “The field of “Humanitarian Computing” applies Human Computing and Machine Computing to address major information-based challenges in the humanitarian space. Human Computing refers to crowdsourcing and microtasking, which is also referred to as crowd computing. In contrast, Machine Computing draws on natural language processing and machine learning, amongst other disciplines. The Next Generation Humanitarian Technologies we are prototyping at QCRI are powered by Humanitarian Computing research and development (R&D).
My QCRI colleagues and I just launched the first ever Humanitarian Computing Library, which is publicly available here. The purpose of this library, or wiki, is to consolidate existing and future research that relates to Humanitarian Computing in order to support the development of next generation humanitarian tech. The repository currently holds over 500 publications that span topics such as Crisis Management, Trust and Security, Software and Tools, Geographical Analysis and Crowdsourcing. These publications are largely drawn from (but not limited to) peer-reviewed papers submitted at leading conferences around the world. We invite you to add your own research on humanitarian computing to this growing collection of resources.”
How Mechanical Turkers Crowdsourced a Huge Lexicon of Links Between Words and Emotion
The Physics arXiv Blog: Sentiment analysis on the social web depends on how a person’s state of mind is expressed in words. Now a new database of the links between words and emotions could provide a better foundation for this kind of analysis.
One of the buzzphrases associated with the social web is sentiment analysis. This is the ability to determine a person’s opinion or state of mind by analysing the words they post on Twitter, Facebook or some other medium.
Much has been promised with this method—the ability to measure satisfaction with politicians, movies and products; the ability to better manage customer relations; the ability to create dialogue for emotion-aware games; the ability to measure the flow of emotion in novels; and so on.
The idea is to entirely automate this process—to analyse the firehose of words produced by social websites using advanced data mining techniques to gauge sentiment on a vast scale.
But all this depends on how well we understand the emotion and polarity (whether negative or positive) that people associate with each word or combinations of words.
Today, Saif Mohammad and Peter Turney at the National Research Council Canada in Ottawa unveil a huge database of words and their associated emotions and polarity, which they have assembled quickly and inexpensively using Amazon’s crowdsourcing Mechanical Turk website. They say this crowdsourcing mechanism makes it possible to increase the size and quality of the database quickly and easily….The result is a comprehensive word-emotion lexicon for over 10,000 words or two-word phrases, which they call EmoLex….
The bottom line is that sentiment analysis can only ever be as good as the database on which it relies. With EmoLex, analysts have a new tool for their box of tricks.”
Ref: arxiv.org/abs/1308.6297: Crowdsourcing a Word-Emotion Association Lexicon
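The lexicon-based analysis the article describes can be sketched in a few lines. The mini-lexicon and its labels below are invented for illustration only; the real EmoLex is far larger, covers eight emotions plus two polarities, and ships in its own file format (see the arXiv reference above).

```python
# Minimal sketch of lexicon-based emotion tagging, in the spirit of EmoLex.
# The tiny lexicon here is illustrative, not EmoLex's actual contents.
from collections import Counter

EMOTION_LEXICON = {
    "win":   {"joy", "positive"},
    "prize": {"joy", "anticipation", "positive"},
    "lose":  {"sadness", "negative"},
    "waste": {"anger", "disgust", "negative"},
}

def tag_emotions(text):
    """Count emotion/polarity labels for every lexicon word in `text`."""
    counts = Counter()
    for token in text.lower().split():
        counts.update(EMOTION_LEXICON.get(token.strip(".,!?"), ()))
    return counts

print(tag_emotions("Contests waste time and most entrants lose."))
```

As the article notes, the analysis is only as good as the lexicon: any word absent from the dictionary simply contributes nothing to the counts.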
Citizen science versus NIMBY?
Ethan Zuckerman’s latest blog: “Safecast is a remarkable project born out of a desire to understand the health and safety implications of the release of radiation from the Fukushima Daiichi nuclear power plant in the wake of the March 11, 2011 earthquake and tsunami. Unsatisfied with limited and questionable information about radiation released by the Japanese government, Joi Ito, Peter, Sean and others worked to design, build and deploy GPS-enabled geiger counters which could be used by concerned citizens throughout Japan to monitor alpha, beta and gamma radiation and understand what parts of Japan have been most affected by the Fukushima disaster.
The Safecast project has produced an elegant map that shows how complicated the Fukushima disaster will be for the Japanese government to recover from. While there are predictably elevated levels of radiation immediately around the Fukushima plant and in the 18 mile exclusion zones, there is a “plume” of increased radiation south and west of the reactors. The map is produced from millions of radiation readings collected by volunteers, who generally take readings while driving – Safecast’s bGeigie meter automatically takes readings every few seconds and stores them along with associated GPS coordinates for later upload to the server.
… This long and thoughtful blog post about the progress of government decontamination efforts, the cost-benefit of those efforts, and the government’s transparency or opacity around cleanup gives a sense for what Safecast is trying to do: provide ways for citizens to check and verify government efforts and understand the complexity of decisions about radiation exposure. This is especially important in Japan, as there’s been widespread frustration over the failures of TEPCO to make progress on cleaning up the reactor site, leading to anger and suspicion about the larger cleanup process.
For me, Safecast raises two interesting questions:
– If you’re not getting trustworthy or sufficient information from your government, can you use crowdsourcing, citizen science or other techniques to generate that data?
– How does collecting data relate to civic engagement? Is it a path towards increased participation as an engaged and effective citizen?
To have some time to reflect on these questions, I decided I wanted to try some of my own radiation monitoring. I borrowed Joi Ito’s bGeigie and set off for my local Spent Nuclear Fuel and Greater-Than-Class C Low Level Radioactive Waste dry cask storage facility…
It just might. One of the great potentials of citizen science and citizen infrastructure monitoring is the possibility of reducing the exotic to the routine….”
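A map like Safecast’s starts from exactly the kind of data the post describes: GPS-stamped readings logged every few seconds. The sketch below bins such readings into grid cells and averages them; the field names, units (counts per minute) and cell size are assumptions for illustration, not Safecast’s actual log format or pipeline.

```python
# Illustrative binning of GPS-tagged radiation readings into map cells.
# Field names and the ~1 km grid are assumptions, not Safecast's format.
from collections import defaultdict

def bin_readings(readings, cell_deg=0.01):
    """Average CPM readings within grid cells of `cell_deg` degrees."""
    cells = defaultdict(list)
    for r in readings:
        key = (round(r["lat"] / cell_deg) * cell_deg,
               round(r["lon"] / cell_deg) * cell_deg)
        cells[key].append(r["cpm"])
    return {k: sum(v) / len(v) for k, v in cells.items()}

readings = [
    {"lat": 37.4211, "lon": 141.0328, "cpm": 120},
    {"lat": 37.4213, "lon": 141.0331, "cpm": 140},  # same cell as above
    {"lat": 37.5601, "lon": 140.9000, "cpm": 35},   # a different cell
]
print(bin_readings(readings))
```

Averaging many volunteers’ drive-by readings per cell is what lets a crowd of noisy individual measurements resolve features like the plume the post describes.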
Employing digital crowdsourced information resources: Managing the emerging information commons
New Paper by Robin Mansell in the International Journal of the Commons: “This paper examines the ways loosely connected online groups and formal science professionals are responding to the potential for collaboration using digital technology platforms and crowdsourcing as a means of generating data in the digital information commons. The preferred approaches of each of these groups to managing information production, circulation and application are examined in the light of the increasingly vast amounts of data that are being generated by participants in the commons. Crowdsourcing projects initiated by both groups in the fields of astronomy, environmental science and crisis and emergency response are used to illustrate some of the barriers and opportunities for greater collaboration in the management of data sets initially generated for quite different purposes. The paper responds to claims in the literature about the incommensurability of emerging approaches to open information management as practiced by formal science and many loosely connected online groups, especially with respect to authority and the curation of data. Yet, in the wake of technological innovation and diverse applications of crowdsourced data, there are numerous opportunities for collaboration. This paper draws on examples employing different social technologies of authority to generate and manage data in the commons. It suggests several measures that could provide incentives for greater collaboration in the future. It also emphasises the need for a research agenda to examine whether and how changes in social technologies might foster collaboration in the interests of reaping the benefits of increasingly large data resources for both shorter term analysis and longer term accumulation of useful knowledge.”
AeroSee: Crowdsourcing Rescue using Drones
“The AeroSee experiment is an exciting new project where you can become a virtual mountain rescue assistant from the comfort of your own home, simply by using your computer. Every year Patterdale Mountain Rescue assist hundreds of injured and missing persons from around the Ullswater area in the North of the Lake District. The average search takes several hours and can require a large team of volunteers to set out in often poor weather conditions. This experiment is to see how the use of UAV (or ‘Drone’) technology, together with your ‘crowd-sourced’ help, can reduce the time taken to locate and rescue a person in distress.
Civilian Use of UAVs
Here at UCLan we are interested in investigating the civilian applications of unmanned systems. They offer a rich and exciting source of educational and research material for our students and research staff. As regulations for their use are being developed and matured by aviation authorities, it is important that research is conducted to maximise their benefits to society.
Our Partners
We are working with e-Migs, a light-UAV operator who are providing and operating the aircraft during the search.
How AeroSee Works
Upon receiving a rescue call the Mountain Rescue services plan their search area and we dispatch our unmanned remotely piloted aircraft to begin a search for the missing persons. Using a real time video link, the aircraft transmits pictures of terrain to our website where you can help by viewing the images and tagging the photos if you spot something that the Mountain Rescue services need to investigate, such as a possible sighting of an injured party. We use some computer algorithms to process the tagged data that we receive and pass this processed data onto the Mountain Rescue Control Centre for a final opinion and dispatch of search teams.
We believe this approach can reduce time and save lives, and we need your help to prove it. Once you are signed up, you can practice using the site by choosing ‘Practice Mission’ from the menubar. Fancy yourself as a Virtual Search agent? If you have not already done so, sign up here:
Get Started!”
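The “computer algorithms to process the tagged data” are not described on the project page, but a common approach to this kind of crowd filtering is to cluster nearby tags and forward only locations that several independent volunteers agree on. The sketch below is a guess at that pattern; the radius and quorum values are invented, not AeroSee’s actual parameters.

```python
# Hedged sketch of aggregating volunteers' photo tags before passing
# "hits" to rescue control; threshold and radius are illustrative only.
def consensus_tags(tags, radius=25, min_taggers=3):
    """Group (x, y) pixel tags within `radius` of a cluster centre and
    keep clusters tagged by at least `min_taggers` distinct users."""
    clusters = []  # each: {"x": int, "y": int, "users": set}
    for user, x, y in tags:
        for c in clusters:
            if (c["x"] - x) ** 2 + (c["y"] - y) ** 2 <= radius ** 2:
                c["users"].add(user)
                break
        else:  # no existing cluster close enough: start a new one
            clusters.append({"x": x, "y": y, "users": {user}})
    return [c for c in clusters if len(c["users"]) >= min_taggers]

tags = [("ann", 100, 200), ("bob", 104, 198), ("cat", 99, 205),
        ("dan", 400, 50)]  # dan's lone tag will be filtered out
print(consensus_tags(tags))
```

Requiring agreement among several taggers is what keeps a single volunteer’s false positive from dispatching a search team.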
Inside Noisebridge: San Francisco’s eclectic anarchist hackerspace
Signe Brewster at Gigaom: “Since its formation in 2007, Noisebridge has grown from a few people meeting in coffee shops to an overflowing space on Mission Street where members can pursue projects that even the maddest scientist would approve of…. When Noisebridge opened the doors of its first hackerspace location in San Francisco’s Mission district in 2008, it had nothing but a large table and few chairs found on the street.
Today, it looks like a mad scientist has been methodically hoarding tools, inventions, art, supplies and a little bit of everything else for five years. The 350 people who come through Noisebridge each week have a habit of leaving a mark, whether by donating a tool or building something that other visitors add to bit by bit. Anyone can be a paid member or a free user of the space, and over the years they have built it into a place where you can code, sew, hack hardware, cook, build robots, woodwork, learn, teach and more.
The members really are mad scientists. Anything left out in the communal spaces is fair game to “hack into a giant robot,” according to co-founder Mitch Altman. Members once took a broken-down wheelchair and turned it into a brainwave-controlled robot named M.C. Hawking. Another person made pants with a built-in keyboard. The Spacebridge group has sent high altitude balloons to near space, where they captured gorgeous videos of the Earth. And once a month, the Vegan Hackers teach their pupils how to make classic fare like sushi and dumplings out of vegan ingredients….”
Civic Innovation Fellowships Go Global
Some thoughts from Panthea Lee from Reboot: “In recent years, civic innovation fellowships have shown great promise to improve the relationships between citizens and government. In the United States, Code for America and the Presidential Innovation Fellows have demonstrated the positive impact a small group of technologists can have working hand-in-hand with government. With the launch of Code for All, Code for Europe, Code4Kenya, and Code4Africa, among others, the model is going global.
But despite the increasing popularity of civic innovation fellowships, there are few templates for how a “Code for” program can be adapted to a different context. In the US, the success of Code for America has drawn from a wealth of tech talent eager to volunteer skills, public and private support, and the active participation of municipal governments. Elsewhere, new “Code for” programs are surely going to have to operate within a different set of capacities and constraints.”
Using Crowdsourcing In Government
Daren C. Brabham for IBM Center for The Business of Government: “The growing interest in “engaging the crowd” to identify or develop innovative solutions to public problems has been inspired by similar efforts in the commercial world. There, crowdsourcing has been successfully used to design innovative consumer products or solve complex scientific problems, ranging from custom-designed T-shirts to mapping genetic DNA strands.
The Obama administration, as well as many state and local governments, has been adapting these crowdsourcing techniques with some success. This report provides a strategic view of crowdsourcing and identifies four specific types:
- Type 1: Knowledge Discovery and Management. Collecting knowledge reported by an on-line community, such as the reporting of earth tremors or potholes to a central source.
- Type 2: Distributed Human Intelligence Tasking. Distributing “micro-tasks” that require human intelligence to solve, such as transcribing handwritten historical documents into electronic files.
- Type 3: Broadcast Search. Broadcasting a problem-solving challenge widely on the internet and providing an award for solution, such as NASA’s prize for an algorithm to predict solar flares.
- Type 4: Peer-Vetted Creative Production. Creating peer-vetted solutions, where an on-line community both proposes possible solutions and is empowered to collectively choose among the solutions.
By understanding the different types, which require different approaches, public managers will have a better chance of success. Dr. Brabham focuses on the strategic design process rather than on the specific technical tools that can be used for crowdsourcing. He sets forth ten emerging best practices for implementing a crowdsourcing initiative.”
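Type 2 (Distributed Human Intelligence Tasking) usually needs an aggregation step: several workers do the same microtask and their answers are reconciled. A minimal sketch of majority-vote reconciliation for the transcription example is below; the quorum rule is illustrative, as real platforms layer on gold-standard questions and worker reputation.

```python
# Illustrative majority vote for a "Type 2" microtask, e.g. several
# workers transcribing the same handwritten line. Real platforms use
# richer quality controls (gold questions, worker weighting).
from collections import Counter

def majority_transcription(answers, quorum=0.5):
    """Return the most common answer if it wins more than `quorum` of
    votes; otherwise return None to flag the task for expert review."""
    winner, votes = Counter(answers).most_common(1)[0]
    return winner if votes / len(answers) > quorum else None

print(majority_transcription(["May 3, 1863", "May 3, 1863", "May 8, 1863"]))
print(majority_transcription(["cat", "cot", "oat"]))  # no majority
```

Escalating disputed tasks to an expert, rather than guessing, is what keeps crowd transcription usable for archival records.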
Collaboration In Biology's Century
Todd Sherer, Chief Executive Officer of The Michael J. Fox Foundation for Parkinson’s Research, in Forbes: “The problem is, we all still work in a system that feeds on secrecy and competition. It’s hard enough work just to dream up win/win collaborative structures; getting them off the ground can feel like pushing a boulder up a hill. Yet there is no doubt that the realities of today’s research environment — everything from the accumulation of big data to the ever-shrinking availability of funds — demand new models for collaboration. Call it “collaboration 2.0.”…I share a few recent examples in the hope of increasing the reach of these initiatives, inspiring others like them, and encouraging frank commentary on how they’re working.
Open-Access Data
The successes of collaborations in the traditional sense, coupled with advanced techniques such as genomic sequencing, have yielded masses of data. Consortia of clinical sites around the world are working together to collect and characterize data and biospecimens through standardized methods, leading to ever-larger pools — more like Great Lakes — of data. Study investigators draw their own conclusions, but there is so much more to discover than any individual lab has the bandwidth for….
Crowdsourcing
A great way to grow engagement with resources you’re willing to share? Ask for it. Collaboration 2.0 casts a wide net. We dipped our toe in the crowdsourcing waters earlier this year with our Parkinson’s Data Challenge, which asked anyone interested to download a set of data that had been collected from PD patients and controls using smart phones. …
Cross-Disciplinary Collaboration 2.0
The more we uncover about the interconnectedness and complexity of the human system, the more proof we are gathering that findings and treatments for one disease may provide invaluable insights for others. We’ve seen some really intriguing crosstalk between the Parkinson’s and Alzheimer’s disease research communities recently…
The results should be: More ideas. More discovery. Better health.”