Springwise: “We’ve seen examples of researchers utilizing crowdsourcing to expand their datasets, such as a free mobile app where users help find data patterns in cancer research by playing games. Now a pop-up home lab is harnessing the power of citizen scientists to find future antibiotics in their backyards.
By developing a small home lab, UK-based Post/Biotics is encouraging anyone, including school children, to help find solutions to the growing antibiotic-resistance crisis. Post/Biotics is a citizen science platform that provides the toolkit, knowledge and science network so anyone can support antibiotic development. Participants can test samples of basically anything they find in natural areas, from soil to mushrooms, and if a sample has antibacterial properties, the tool will change color. They can then send results, along with a photo and GPS location, to an online database. When the database notices a submission that may be interesting, it alerts researchers, who can then ask for samples. An open-source library of potential antimicrobials is thus established, and users simultaneously benefit from learning how to conduct microbiology experiments.
Post/Biotics is using the power of an unlimited number of citizen scientists to increase the research potential of antibiotic discovery….(More)”
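The workflow Springwise describes is, in effect, a small crowdsourced data pipeline: a field test result plus a photo and GPS coordinates go into a shared database, which flags promising submissions so researchers can request samples. The sketch below is a minimal illustration of such a submission record and flagging step; the field names and rule are assumptions, not Post/Biotics' actual schema.

```python
# Hypothetical sketch of a citizen-science submission pipeline like the one
# described above. Field names and the flagging rule are assumptions, not
# Post/Biotics' real schema.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Submission:
    sample_type: str                  # e.g. "soil", "mushroom"
    antibacterial_signal: bool        # did the test kit change color?
    photo_path: str
    gps: Tuple[float, float]          # (latitude, longitude)
    notes: Optional[str] = None

def flag_for_researchers(submissions: List[Submission]) -> List[Submission]:
    """Return submissions worth forwarding to researchers: here, simply
    everything that showed an antibacterial signal."""
    return [s for s in submissions if s.antibacterial_signal]

if __name__ == "__main__":
    batch = [
        Submission("soil", True, "sample_001.jpg", (51.75, -1.26)),
        Submission("mushroom", False, "sample_002.jpg", (51.76, -1.25)),
    ]
    for hit in flag_for_researchers(batch):
        print("Alert researchers about:", hit.sample_type, hit.gps)
```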
Lawyer’s crowdsourcing site aims to help people have their day in court
Mary O’Hara in The Guardian: “With warnings coming thick and fast about the stark ramifications of the government’s sweeping cuts to legal aid, it was probably inevitable that someone would come up with a new way to plug some gaps in access to justice. Enter the legal crowdfunder, CrowdJustice, an online platform where people who might not otherwise get their case heard can raise cash to pay for legal representation and court costs.
The brainchild of 33-year-old lawyer Julia Salasky, and the first of its kind in the UK, CrowdJustice provides people who have a public interest case but lack adequate financial resources with a forum where they can publicise their case and, if all goes to plan, generate funding for legal action by attracting public support and donations.
“We are trying to increase access to justice – that’s the baseline,” says Salasky. “I think it’s a social good.”
The platform was launched just a few months ago, but has already attracted a range of cases both large and small, including some that could set important legal precedents.
CrowdJustice has helped the campaign Jengba (Joint Enterprise: Not Guilty by Association) to raise funds to intervene in a supreme court case considering whether to reform the law of joint enterprise, which can find people guilty of a crime, including murder, committed by someone else. The group amassed £10,000 in donations for legal assistance as part of their ongoing challenge to the legal doctrine of “joint enterprise”, which disproportionately prosecutes people from black and minority ethnic backgrounds for violent crimes where it is alleged they have acted together for a common purpose.
In another case, a Northern Irish woman who discovered she wasn’t entitled to her partner’s occupational pension after he died because of a bureaucratic requirement that did not apply to married couples, used CrowdJustice to help raise money to take her case all the way to the supreme court. “If she wins, it will have an enormous precedent-setting value for the legal rights of all couples who cohabit,” Salasky says….(The Guardian)”
The Crowdsourcing Site That Wants to Pool Our Genomes
Ed Yong at The Atlantic: “…In 2010, I posted a vial of my finest spit to the genetic-testing company 23andMe. In return, I got to see what my genes reveal about my ancestry, how they affect my risk of diseases or my responses to medical drugs, and even what they say about the texture of my earwax. (It’s dry.) 23andMe now has around a million users, as do other similar companies like Ancestry.com.
But these communities are largely separated from one another, a situation that frustrated Yaniv Erlich from the New York Genome Center and Columbia University. “Tens of millions of people will soon have access to their genomes,” he says. “Are we just going to let these data sit in silos, or can we partner with these large communities to enable some really large science? That’s why we developed DNA.LAND.”
DNA.LAND, which Erlich developed together with colleague Joe Pickrell, is a website that allows customers of other genetic-testing services to upload files containing their genetic data. Scientists can then use this data for research, to the extent that each user consents to. “DNA.LAND is a way for getting the general public to participate in large-scale genetic studies,” says Erlich. “And we’re not a company. We’re a non-profit website, run by scientists.”…(More)”
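The excerpt does not describe the upload format DNA.LAND accepts, but consumer genetic-testing services typically let users export a tab-separated "raw data" file of genotyped variants (rsid, chromosome, position, genotype). The sketch below, with hypothetical file and field names, shows roughly how such an export could be parsed before any research use.

```python
# Minimal sketch: reading a consumer "raw data" genotype export of the kind
# a site like DNA.LAND might accept. Assumes the common tab-separated layout
# (rsid, chromosome, position, genotype) with '#' comment lines; the actual
# upload pipeline is not described in the excerpt.
from typing import Dict, NamedTuple

class Variant(NamedTuple):
    rsid: str
    chromosome: str
    position: int
    genotype: str

def load_raw_genotypes(path: str) -> Dict[str, Variant]:
    """Parse a tab-separated raw genotype file into a dict keyed by rsid."""
    variants: Dict[str, Variant] = {}
    with open(path) as fh:
        for line in fh:
            if line.startswith("#") or not line.strip():
                continue  # skip header comments and blank lines
            rsid, chrom, pos, genotype = line.rstrip("\n").split("\t")[:4]
            variants[rsid] = Variant(rsid, chrom, int(pos), genotype)
    return variants

if __name__ == "__main__":
    # "genome_export.txt" is a hypothetical filename for illustration.
    variants = load_raw_genotypes("genome_export.txt")
    print(f"Loaded {len(variants)} variants")
```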
Effectively Crowdsourcing the Acquisition and Analysis of Visual Data for Disaster Response
Hien To, Seon Ho Kim, and Cyrus Shahabi: “Efficient and thorough data collection and its timely analysis are critical for disaster response and recovery in order to save people’s lives during disasters. However, access to comprehensive data in disaster areas and their quick analysis to transform the data to actionable knowledge are challenging. With the popularity and pervasiveness of mobile devices, crowdsourcing data collection and analysis has emerged as an effective and scalable solution. This paper addresses the problem of crowdsourcing mobile videos for disasters by identifying two unique challenges of 1) prioritizing visual data collection and transmission under bandwidth scarcity caused by damaged communication networks and 2) analyzing the acquired data in a timely manner. We introduce a new crowdsourcing framework for acquiring and analyzing the mobile videos utilizing fine-granularity spatial metadata of videos for a rapidly changing disaster situation. We also develop an analytical model to quantify the visual awareness of a video based on its metadata and propose the visual awareness maximization problem for acquiring the most relevant data under bandwidth constraints. The collected videos are evenly distributed to off-site analysts to collectively minimize crowdsourcing efforts for analysis. Our simulation results demonstrate the effectiveness and feasibility of the proposed framework….(More)”
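The abstract names a "visual awareness maximization" problem under bandwidth constraints but does not spell out its model. The sketch below is only an illustration of that kind of budgeted selection, not the authors' algorithm: each clip is assumed to carry a precomputed awareness score derived from its spatial metadata plus an upload size, and a greedy pick by awareness per megabyte approximates the choice under a bandwidth budget.

```python
# Illustrative sketch only: selecting crowdsourced video clips under a
# bandwidth budget. The awareness scores and sizes are assumed inputs; the
# paper's actual model and optimization method are not given in the excerpt.
from dataclasses import dataclass
from typing import List

@dataclass
class VideoClip:
    clip_id: str
    awareness: float   # hypothetical score derived from the clip's spatial metadata
    size_mb: float     # upload cost in megabytes

def select_clips(clips: List[VideoClip], budget_mb: float) -> List[VideoClip]:
    """Greedily choose clips with the best awareness per megabyte until the
    bandwidth budget is exhausted (a simple knapsack-style heuristic)."""
    chosen: List[VideoClip] = []
    remaining = budget_mb
    for clip in sorted(clips, key=lambda c: c.awareness / c.size_mb, reverse=True):
        if clip.size_mb <= remaining:
            chosen.append(clip)
            remaining -= clip.size_mb
    return chosen

if __name__ == "__main__":
    clips = [VideoClip("a", 0.9, 40), VideoClip("b", 0.5, 10), VideoClip("c", 0.7, 30)]
    for clip in select_clips(clips, budget_mb=50):
        print(clip.clip_id, clip.awareness)
```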
Crowdsourced research: Many hands make tight work
Raphael Silberzahn & Eric L. Uhlmann in Nature: “…For many research problems, crowdsourcing analyses will not be the optimal solution. It demands a huge amount of resources for just one research question. Some questions will not benefit from a crowd of analysts: researchers’ approaches will be much more similar for simple data sets and research designs than for large and complex ones. Importantly, crowdsourcing does not eliminate all bias. Decisions must still be made about what hypotheses to test, from where to get suitable data, and importantly, which variables can or cannot be collected. (For instance, we did not consider whether a particular player’s skin tone was lighter or darker than that of most of the other players on his team.) Finally, researchers may continue to disagree about findings, which makes it challenging to present a manuscript with a clear conclusion. It can also be puzzling: the investment of more resources can lead to less-clear outcomes.
Still, the effort can be well worth it. Crowdsourcing research can reveal how conclusions are contingent on analytical choices. Furthermore, the crowdsourcing framework also provides researchers with a safe space in which they can vet analytical approaches, explore doubts and get a second, third or fourth opinion. Discussions about analytical approaches happen before committing to a particular strategy. In our project, the teams were essentially peer reviewing each other’s work before even settling on their own analyses. And we found that researchers did change their minds through the course of analysis.
Crowdsourcing also reduces the incentive for flashy results. A single-team project may be published only if it finds significant effects; participants in crowdsourced projects can contribute even with null findings. A range of scientific possibilities are revealed, the results are more credible and analytical choices that seem to sway conclusions can point research in fruitful directions. What is more, analysts learn from each other, and the creativity required to construct analytical methodologies can be better appreciated by the research community and the public.
Of course, researchers who painstakingly collect a data set may not want to share it with others. But greater certainty comes from having an independent check. A coordinated effort boosts incentives for multiple analyses and perspectives in a way that simply making data available post-publication does not.
The transparency resulting from a crowdsourced approach should be particularly beneficial when important policy issues are at stake. The uncertainty of scientific conclusions about, for example, the effects of the minimum wage on unemployment, and the consequences of economic austerity policies should be investigated by crowds of researchers rather than left to single teams of analysts.
Under the current system, strong storylines win out over messy results. Worse, once a finding has been published in a journal, it becomes difficult to challenge. Ideas become entrenched too quickly, and uprooting them is more disruptive than it ought to be. The crowdsourcing approach gives space to dissenting opinions.
Scientists around the world are hungry for more-reliable ways to discover knowledge and eager to forge new kinds of collaborations to do so. Our first project had a budget of zero, and we attracted scores of fellow scientists with two tweets and a Facebook post.
Researchers who are interested in starting or participating in collaborative crowdsourcing projects can access resources available online. We have publicly shared all our materials and survey templates, and the Center for Open Science has just launched ManyLab, a web space where researchers can join crowdsourced projects….(More).
See also Nature special collection: reproducibility
US Administration Celebrates Five-Year Anniversary of Challenge.gov
White House Fact Sheet: “Today, the Administration is celebrating the five-year anniversary of Challenge.gov, a historic effort by the Federal Government to collaborate with members of the public through incentive prizes to address our most pressing local, national, and global challenges. True to the spirit of the President’s charge from his first day in office, Federal agencies have collaborated with more than 200,000 citizen solvers—entrepreneurs, citizen scientists, students, and more—in more than 440 challenges, on topics ranging from accelerating the deployment of solar energy, to combating breast cancer, to increasing resilience after Hurricane Sandy.
Highlighting continued momentum from the President’s call to harness the ingenuity of the American people, the Administration is announcing:
- Nine new challenges from Federal agencies, ranging from commercializing NASA technology, to helping students navigate their education and career options, to protecting marine habitats.
- Expanding support for use of challenges and prizes, including new mentoring support from the General Services Administration (GSA) for interested agencies and a new $244 million innovation platform opened by the U.S. Agency for International Development (USAID) with over 70 partners.
In addition, multiple non-governmental institutions are announcing 14 new challenges, ranging from improving cancer screenings, to developing better technologies to detect, remove, and recover excess nitrogen and phosphorus from water, to increasing the resilience of island communities….
Expanding the Capability for Prize Designers to Find One Another
The GovLab and MacArthur Foundation Research Network on Opening Governance will launch an expert network for prizes and challenges. The Governance Lab (GovLab) and the Research Network will develop and launch the Network of Innovators (NoI) expert networking platform. NoI will make easily searchable the know-how of innovators on topics ranging from developing prize-backed challenges to opening up data and using crowdsourcing for public good. Platform users will answer questions about their skills and experiences, creating a profile that enables them to be matched to those with complementary knowledge to enable mutual support and learning. A beta version for user testing within the Federal prize community will launch in early October, with a full launch at the end of October. NoI will be open to civil servants around the world…(More)”
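As a rough illustration of the kind of matching the platform description implies, the sketch below ranks profiles by how much one user's stated expertise covers another's stated needs. The profile fields and scoring rule are assumptions for illustration, not NoI's actual design.

```python
# Hypothetical sketch of complementary-knowledge matching: each profile lists
# topics the user can advise on and topics they want help with, and candidates
# are ranked by how much their expertise covers your needs. Not NoI's real
# algorithm, just an illustration of the idea described above.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Profile:
    name: str
    can_advise_on: Set[str] = field(default_factory=set)
    wants_help_with: Set[str] = field(default_factory=set)

def rank_matches(me: Profile, others: List[Profile]) -> List[Profile]:
    """Order other profiles by how many of my needs their expertise covers."""
    return sorted(
        others,
        key=lambda p: len(me.wants_help_with & p.can_advise_on),
        reverse=True,
    )

if __name__ == "__main__":
    me = Profile("analyst", wants_help_with={"prize design", "open data"})
    peers = [
        Profile("mentor_a", can_advise_on={"prize design", "crowdsourcing"}),
        Profile("mentor_b", can_advise_on={"open data", "prize design"}),
    ]
    print([p.name for p in rank_matches(me, peers)])
```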
Digital Research Confidential
New book edited by Eszter Hargittai and Christian Sandvig: “The realm of the digital offers both new methods of research and new objects of study. Because the digital environment for scholarship is constantly evolving, researchers must sometimes improvise, change their plans, and adapt. These details are often left out of research write-ups, leaving newcomers to the field frustrated when their approaches do not work as expected. Digital Research Confidential offers scholars a chance to learn from their fellow researchers’ mistakes—and their successes.
The book—a follow-up to Eszter Hargittai’s widely read Research Confidential—presents behind-the-scenes, nuts-and-bolts stories of digital research projects, written by established and rising scholars. They discuss such challenges as archiving, Web crawling, crowdsourcing, and confidentiality. They do not shrink from specifics, describing such research hiccups as an ethnographic interview so emotionally draining that afterward the researcher retreated to a bathroom to cry, and the seemingly simple research question about Wikipedia that mushroomed into years of work on millions of data points. Digital Research Confidential will be an essential resource for scholars in every field….(More)”
Accelerating Citizen Science and Crowdsourcing to Address Societal and Scientific Challenges
Tom Kalil et al at the White House Blog: “Citizen science encourages members of the public to voluntarily participate in the scientific process. Whether by asking questions, making observations, conducting experiments, collecting data, or developing low-cost technologies and open-source code, members of the public can help advance scientific knowledge and benefit society.
Through crowdsourcing – an open call for voluntary assistance from a large group of individuals – Americans can study and tackle complex challenges by conducting research at large geographic scales and over long periods of time in ways that professional scientists working alone cannot easily duplicate. These challenges include understanding the structure of proteins related to viruses in order to support development of new medications, or preparing for, responding to, and recovering from disasters.
…OSTP is today announcing two new actions that the Administration is taking to encourage and support the appropriate use of citizen science and crowdsourcing at Federal agencies:
- OSTP Director John Holdren is issuing a memorandum entitled Addressing Societal and Scientific Challenges through Citizen Science and Crowdsourcing. This memo articulates principles that Federal agencies should embrace to derive the greatest value and impact from citizen science and crowdsourcing projects. The memo also directs agencies to take specific actions to advance citizen science and crowdsourcing, including designating an agency-specific coordinator for citizen science and crowdsourcing projects, and cataloguing citizen science and crowdsourcing projects that are open for public participation on a new, centralized website to be created by the General Services Administration, making it easy for people to find out about and join in these projects.
- Fulfilling a commitment made in the 2013 Open Government National Action Plan, the U.S. government is releasing the first-ever Federal Crowdsourcing and Citizen Science Toolkit to help Federal agencies design, carry out, and manage citizen science and crowdsourcing projects. The toolkit, which was developed by OSTP in partnership with the Federal Community of Practice for Crowdsourcing and Citizen Science and GSA’s Open Opportunities Program, reflects the input of more than 125 Federal employees from over 25 agencies on ideas, case studies, best management practices, and other lessons to facilitate the successful use of citizen science and crowdsourcing in a Federal context….(More)”
The Future of Public Participation: Better Design, Better Laws, Better Systems
Tina Nabatchi, Emma Ertinger and Matt Leighninger in Conflict Resolution Quarterly: “In the late 1980s and early 1990s, conflict resolution practitioners faced a dilemma: they understood how to design better ADR processes but were often unsure of their authority to offer ADR and were entrenched in systems that made it difficult to use ADR. Today, public participation faces a similar dilemma. We know what good participation looks like, but using better participation is challenging because of legal and systemic impediments. This need not be the case. In this article, we assert that tapping the full potential of public participation requires better designs, better laws, and better systems….(More)”
This free online encyclopedia has achieved what Wikipedia can only dream of
Nikhil Sonnad at Quartz: “The Stanford Encyclopedia of Philosophy may be the most interesting website on the internet. Not because of the content—which includes fascinating entries on everything from ambiguity to zombies—but because of the site itself.
Its creators have solved one of the internet’s fundamental problems: How to provide authoritative, rigorously accurate knowledge, at no cost to readers. It’s something the encyclopedia, or SEP, has managed to do for two decades.
The internet is an information landfill. Somewhere in it—buried under piles of opinion, speculation, and misinformation—is virtually all of human knowledge. But sorting through the trash is difficult work. Even when you have something you think is valuable, it often turns out to be a cheap knock-off.
The story of how the SEP is run, and how it came to be, shows that it is possible to create a less trashy internet—or at least a less trashy corner of it. A place where actual knowledge is sorted into a neat, separate pile instead of being thrown into the landfill. Where the world can go to learn everything that we know to be true. Something that would make humans a lot smarter than the internet we have today.
The impossible trinity of information
The online SEP has humble beginnings. Edward Zalta, a philosopher at Stanford’s Center for the Study of Language and Information, launched it way back in September 1995, with just two entries.
That makes it positively ancient in internet years. Even Wikipedia is only 14. ….
John Perry, the director of the center, was the one who first suggested a dictionary of philosophical terms. But Zalta had bigger ideas. He and two co-authors later described the challenge in a 2002 paper (pdf, p. 1):
A fundamental problem faced by the general public and the members of an academic discipline in the information age is how to find the most authoritative, comprehensive, and up-to-date information about an important topic.
That paper is so old that it mentions “CD-ROMs” in the second sentence. But for all the years that have passed, the basic problem remains unsolved. The three requirements the authors list—“authoritative, comprehensive, and up-to-date”—are to information what the “impossible trinity” is to economics. You can only ever have one or two at once. It is like having your cake, eating it, and then bringing it to another party.
Yet if the goal is to share with people what is true, it is extremely important for a resource to have all of these things. It must be trusted. It must not leave anything out. And it must reflect the latest state of knowledge. Unfortunately, all of the other current ways of designing an encyclopedia very badly fail to meet at least one of these requirements.
Where other encyclopedias fall short
Book
Authoritative: √
Comprehensive: X
Up-to-date: X
Printed books are authoritative: Readers trust articles they know have been written and edited by experts. Books also produce a coherent overview of a subject, as the editors consider how each entry fits into the whole. But they become obsolete whenever new research comes out. Nor can a book (or even a set of volumes) be comprehensive, except perhaps for a very narrow discipline; there’s simply too much to print.
Crowdsourcing
Authoritative: X
Comprehensive: X
Up-to-date: √
A crowdsourced online encyclopedia has the virtue of timeliness. Thanks to Wikipedia’s vibrant community of non-experts, its entries on breaking-news events are often updated as they happen. But except perhaps in a few areas in which enough well-informed people care for errors to get weeded out, Wikipedia is not authoritative. One math professor reviewed basic mathematics entries and found them to be “a hot mess of error, arrogance, obscurity, and nonsense.” Nor is it comprehensive: Though it has nearly 5 million articles in the English-language version alone, seemingly in every sphere of knowledge, fewer than 10,000 are “A-class” or better, the status awarded to articles considered “essentially complete.”
Speaking of holes, the SEP has a rather detailed entry on the topic of holes, and it rather nicely illustrates one of Wikipedia’s key shortcomings. Holes present a tricky philosophical problem, the SEP entry explains: A hole is nothing, but we refer to it as if it were something. (Achille Varzi, the author of the holes entry, was called upon in the US presidential election in 2000 to weigh in on the existential status of hanging chads.) If you ask Wikipedia for holes, it gives you the young-adult novel Holes and the band Hole.
In other words, holes as philosophical notions are too abstract for a crowdsourced venue that favors clean, factual statements like a novel’s plot or a band’s discography. Wikipedia’s bottom-up model could never produce an entry on holes like the SEP’s.
Crowdsourcing + voting
Authoritative: ?
Comprehensive: X
Up-to-date: ?
A variation on the wiki model is question-and-answer sites like Quora (general interest) and StackOverflow (computer programming), on which users can pose questions and write answers. These are slightly more authoritative than Wikipedia, because users also vote answers up or down according to how helpful they find them; and because answers are given by single, specific users, who are encouraged to say why they’re qualified (“I’m a UI designer at Google,” say).
But while there are sometimes ways to check people’s accreditation, it’s largely self-reported and unverified. Moreover, these sites are far from comprehensive. Any given answer is only as complete as its writer decides or is able to make it. And the questions asked and answered tend to reflect the interests of the sites’ users, which in both Quora and StackOverflow’s cases skew heavily male, American, and techie.
Moreover, the sites aren’t up-to-date. While they may respond quickly to new events, answers that become outdated aren’t deleted or changed but stay there, burdening the site with a growing mass of stale information.
The Stanford solution
So is the impossible trinity just that—impossible? Not according to Zalta. He imagined a different model for the SEP: the “dynamic reference work.”
Dynamic reference work
Authoritative: √
Comprehensive: √
Up-to-date: √
To achieve authority, several dozen subject editors—responsible for broad areas like “ancient philosophy” or “formal epistemology”—identify topics in need of coverage, and invite qualified philosophers to write entries on them. If the invitation is accepted, the author sends an outline to the relevant subject editors.
“An editor works with the author to get an optimal outline before the author begins to write,” says Susanna Siegel, subject editor for philosophy of mind. “Sometimes there is a lot of back and forth at this stage.” Editors may also reject entries. Zalta and Uri Nodelman, the SEP’s senior editor, say that this almost never happens. In the rare cases when it does, the reason is usually that an entry is overly biased. In short, this is not somebody randomly deciding to answer a question on Quora.
An executive editorial board—Zalta, Nodelman, and Colin Allen—works to make the SEP comprehensive….(More)”