Federal Crowdsourcing and Citizen Science Catalog


About: “The catalog contains information about federal citizen science and crowdsourcing projects. In citizen science, the public participates voluntarily in the scientific process, addressing real-world problems in ways that may include formulating research questions, conducting scientific experiments, collecting and analyzing data, interpreting results, making new discoveries, developing technologies and applications, and solving complex problems. In crowdsourcing, organizations submit an open call for voluntary assistance from a group of individuals for online, distributed problem solving.

Projects in the catalog must meet the following criteria:

  • The project addresses societal needs or accelerates science, technology, and innovation consistent with a Federal agency’s mission.
  • Project outcomes include active management of data and data quality.
  • Participants serve as contributors, collaborators or co-creators in the project.
  • The project solicits engagement from individuals outside of a discipline’s or program’s traditional participants in the scientific enterprise.
  • Beyond practical limitations, the project does not seek to limit the number of participants or partners involved.
  • The project is opt-in; participants have full control over the extent that they participate.
  • The US Government enables or enhances the project via funding or providing an in-kind contribution. The US Government’s in-kind contribution to the project may be active or passive, formal or informal….(More)”.

Creative campaign helps earthquake victims


Springwise: “There were many offers of help after the shocking earthquake in Mexico on 19th September, but two creative directors from Mexico City agency Anonimo decided to do something a bit different. They created Arriba Mexico (which roughly translates as Up With Mexico), a website that at first glance looks much like a home rental site such as Airbnb, except that the money paid does not buy a stay in the home; it goes directly to help those affected.

The site lists a number of properties destroyed in the earthquake, along with a description and photographs. Titles like ‘Rent a Loft in the Roma Neighborhood’ and ‘Stay in a Room in the Heart of Chiapas’ lead through to a description of the property and the price per night’s stay – which, the site notes, is a purely symbolic stay. The user picks the property and the number of nights they’d like to stay for, and the total figure is their donation. 100 percent of the money raised goes directly to CADENA, a disaster relief charity. Some of the money was spent on shelters to provide temporary accommodation, while the eventual aim is to use the remaining funds to rebuild homes in the most heavily damaged areas of Mexico City, Puebla, Oaxaca and Chiapas. At the time of writing, the total money donated was just over USD 473,500. Over 350 people died in the earthquake, which registered 8.1 on the Richter scale. Many organizations, governments and charities from all over the world have donated money and time to help the people of Mexico rebuild their lives.

Many innovators and companies are working hard to help those affected by natural disasters. One company in India has produced a modular home that’s built to withstand earthquakes, and MyShake is an app that helps people prepare for earthquakes….(More)”.

The ethical use of crowdsourcing


Susan Standing and Craig Standing in Business Ethics: A European Review: “Crowdsourcing has attracted increasing attention as a means to enlist online participants in organisational activities. In this paper, we examine crowdsourcing from the perspective of its ethical use in the support of open innovation, taking a broader system view of its use. Crowdsourcing has the potential to improve access to knowledge, skills, and creativity in a cost-effective manner but raises a number of ethical dilemmas. The paper discusses the ethical issues related to knowledge exchange, economics, and relational aspects of crowdsourcing. A guiding framework drawn from the ethics literature is proposed to guide the ethical use of crowdsourcing. A major problem is that crowdsourcing is viewed in a piecemeal fashion and separate from other organisational processes. The trend for organisations to be more digitally collaborative is explored in relation to the need for greater awareness of crowdsourcing implications….(More)”.

The application of crowdsourcing approaches to cancer research: a systematic review


Paper by Young Ji Lee, Janet A. Arida, and Heidi S. Donovan at Cancer Medicine: “Crowdsourcing is “the practice of obtaining participants, services, ideas, or content by soliciting contributions from a large group of people, especially via the Internet.” (Ranard et al. J. Gen. Intern. Med. 29:187, 2014) Although crowdsourcing has been adopted in healthcare research and its potential for analyzing large datasets and obtaining rapid feedback has recently been recognized, no systematic reviews of crowdsourcing in cancer research have been conducted. Therefore, we sought to identify applications of and explore potential uses for crowdsourcing in cancer research. We conducted a systematic review of articles published between January 2005 and June 2016 on crowdsourcing in cancer research, using PubMed, CINAHL, Scopus, PsycINFO, and Embase. Data from the 12 identified articles were summarized but not combined statistically. The studies addressed a range of cancers (e.g., breast, skin, gynecologic, colorectal, prostate). Eleven studies collected data on the Internet using web-based platforms; one recruited participants in a shopping mall using paper-and-pen data collection. Four studies used Amazon Mechanical Turk for recruiting and/or data collection. Study objectives comprised categorizing biopsy images (n = 6), assessing cancer knowledge (n = 3), refining a decision support system (n = 1), standardizing survivorship care-planning (n = 1), and designing a clinical trial (n = 1). Although one study demonstrated that “the wisdom of the crowd” (NCI Budget Fact Book, 2017) could not replace trained experts, five studies suggest that distributed human intelligence could approximate or support the work of trained experts. Despite limitations, crowdsourcing has the potential to improve the quality and speed of research while reducing costs. Longitudinal studies should confirm and refine these findings….(More)”

Crowdsourcing Accountability: ICT for Service Delivery


Paper by Guy Grossman, Melina Platas and Jonathan Rodden: “We examine the effect on service delivery outcomes of a new information communication technology (ICT) platform that allows citizens to send free and anonymous messages to local government officials, thus reducing the cost and increasing the efficiency of communication about public services. In particular, we use a field experiment to assess the extent to which the introduction of this ICT platform improved monitoring by the district, effort by service providers, and inputs at service points in health, education and water in Arua District, Uganda. Despite relatively high levels of system uptake, enthusiasm of district officials, and anecdotal success stories, we find evidence of only marginal and uneven short-term improvements in health and water services, and no discernible long-term effects. Relatively few messages from citizens provided specific, actionable information about service provision within the purview and resource constraints of district officials, and users were often discouraged by officials’ responses. Our findings suggest that for crowd-sourced ICT programs to move from isolated success stories to long-term accountability enhancement, the quality and specific content of reports and responses provided by users and officials is centrally important….(More)”.

Patient Power: Crowdsourcing in Cancer


Bonnie J. Addario at the HuffPost: “…Understanding how to manage and manipulate vast sums of medical data to improve research and treatments has become a top priority in the cancer enterprise. Researchers at the University of North Carolina Chapel Hill are using IBM’s Watson and its artificial intelligence computing power to great effect. Dr. Norman Sharpless told Charlie Rose from CBS’ 60 Minutes that Watson is reading tens of millions of medical papers weekly (8,000 new cancer research papers are published every day) and regularly scanning the web for new clinical trials that most people, including researchers, are unaware of. The task is “essentially undoable,” he said, for even the best-informed experts.

UNC’s effort is truly wonderful, albeit a macro approach that is less tailored and accessible only to certain medical centers. My experience tells me what the real problem is: How does a patient newly diagnosed with lung cancer, fragile and scared, find the most relevant information without being overwhelmed and giving up? If the experts can’t easily find key data without Watson’s help, and Google’s first try turns up millions upon millions of semi-useful results, how do we build hope that there are good online answers for our patients?

We’ve thought about this a lot at the Addario Lung Cancer Foundation and figured out that the answer lies with the patients themselves. Why not crowdsource it with people who have lung cancer, their caregivers and family members?

So, we created the first-ever global Lung Cancer Patient Registry that simplifies the collection, management and distribution of critical health-related information – all in one place so that researchers and patients can easily access and find data specific to lung cancer patients.

This is a data-rich environment for those focusing solely on finding a cure for lung cancer. And it gives patients access to other patients to compare notes and generally feel safe sharing intimate details with their peers….(More)”

Dictionaries and crowdsourcing, wikis and user-generated content


Living Reference Work Entry by Michael Rundell: “It is tempting to dismiss crowdsourcing as a largely trivial recent development which has nothing useful to contribute to serious lexicography. This temptation should be resisted. When applied to dictionary-making, the broad term “crowdsourcing” in fact describes a range of distinct methods for creating or gathering linguistic data. A provisional typology is proposed, distinguishing three approaches which are often lumped under the heading “crowdsourcing.” These are: user-generated content (UGC), the wiki model, and what is referred to here as “crowd-sourcing proper.” Each approach is explained, and examples are given of their applications in linguistic and lexicographic projects. The main argument of this chapter is that each of these methods – if properly understood and carefully managed – has significant potential for lexicography. The strengths and weaknesses of each model are identified, and suggestions are made for exploiting them in order to facilitate or enhance different operations within the process of developing descriptions of language. Crowdsourcing – in its various forms – should be seen as an opportunity rather than as a threat or diversion….(More)”.

Crowdsourcing website is helping volunteers save lives in hurricane-hit Houston


By Monday morning, the 27-year-old developer, sitting in his leaky office, had slapped together an online mapping tool to track stranded residents. A day later, nearly 5,000 people had registered to be rescued, and 2,700 of them were safe.

If there’s a silver lining to Harvey, it’s the flood of civilian volunteers such as Marchetti who have joined the rescue effort. It became pretty clear shortly after the storm started pounding Houston that the city would need their help. The heavy rains quickly outstripped authorities’ ability to respond. People watched water levels rise around them while they waited on hold to get connected to a 911 dispatcher. Desperate local officials asked owners of high-water vehicles and boats to help collect their fellow citizens trapped on second stories and roofs.

In the past, disaster volunteers have relied on social media and Zello, an app that turns your phone into a walkie-talkie, to organize. … Harvey’s magnitude, both in terms of damage and the number of people anxious to pitch in, also overwhelmed those grassroots organizing methods, says Marchetti, who spent the first days after the storm hit monitoring Facebook and Zello to figure out what was needed where.

“The channels were just getting overloaded with people asking ‘Where do I go?’” he says. “We’ve tried to cut down on the level of noise.”

The idea behind his project, Houstonharveyrescue.com, is simple. The map lets people in need register their location. They are asked to include details—for example, if they’re sick or have small children—and their cell phone numbers.

The army of rescuers, who can also register on the site, can then easily spot the neediest cases. A team of 100 phone dispatchers follows up with those wanting to be rescued, and can send mass text messages with important information. An algorithm weeds out any repeats.
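
The article does not say how that duplicate-weeding algorithm works, but a minimal sketch of one plausible approach, keying each request to a normalized phone number so that a repeat submission updates the existing record instead of creating a new one, might look like the following. The class, field names, and logic here are illustrative assumptions, not Houstonharveyrescue.com’s actual implementation.

```python
# Minimal sketch (hypothetical) of de-duplicating crowdsourced rescue requests.
# Assumption: a phone number, once normalized, identifies one household in need.
import re
from dataclasses import dataclass


@dataclass
class RescueRequest:
    phone: str        # digits only, e.g. "7135550100" (hypothetical example)
    lat: float
    lon: float
    details: str      # e.g. "two small children, one person sick"
    needs_rescue: bool = True


class RescueRegistry:
    def __init__(self) -> None:
        self._by_phone: dict[str, RescueRequest] = {}

    @staticmethod
    def _normalize(phone: str) -> str:
        # Strip spaces and punctuation so "(713) 555-0100" and "7135550100" match.
        return re.sub(r"\D", "", phone)

    def register(self, phone: str, lat: float, lon: float, details: str) -> RescueRequest:
        key = self._normalize(phone)
        if key in self._by_phone:
            # Repeat submission: refresh the existing record rather than duplicating it.
            existing = self._by_phone[key]
            existing.lat, existing.lon, existing.details = lat, lon, details
            return existing
        request = RescueRequest(key, lat, lon, details)
        self._by_phone[key] = request
        return request

    def mark_safe(self, phone: str) -> None:
        # Called by a dispatcher once a rescue is confirmed.
        key = self._normalize(phone)
        if key in self._by_phone:
            self._by_phone[key].needs_rescue = False

    def open_requests(self) -> list[RescueRequest]:
        # What dispatchers and boat crews would see: only people still waiting.
        return [r for r in self._by_phone.values() if r.needs_rescue]
```

A production system would likely also need location-based matching for people registering the same address from different phones, but the phone-number key illustrates the basic idea of weeding out repeats.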

It might be one of the first open-sourced rescue missions in the US, and could be a valuable blueprint for future disaster volunteers. (For a similar civilian-led effort outside the US, look at Tijuana’s Strategic Committee for Humanitarian Aid, a Facebook group that sprouted last year when the Mexican border city was overwhelmed by a wave of Haitian immigrants.)…(More)”.

Crowdsourcing the Charlottesville Investigation


Internet sleuths got to work, and by Monday morning they were naming names and calling for arrests.

The name of the helmeted man went viral after New York Daily News columnist Shaun King posted a series of photos on Twitter and Facebook that more clearly showed his face and connected him to photos from a Facebook account. “Neck moles gave it away,” King wrote in his posts, which were shared more than 77,000 times. But the name of the red-bearded assailant was less clear: some on Twitter claimed it was a Texas man who goes by a Nordic alias online. Others were sure it was a Michigan man who, according to Facebook, attended high school with other white nationalist demonstrators depicted in photos from Charlottesville.

After being contacted for comment by The Marshall Project, the Michigan man removed his Facebook page from public view.

Such speculation, especially when it is not conclusive, has created new challenges for law enforcement. There is the obvious risk of false identification. In 2013, internet users wrongly identified university student Sunil Tripathi as a suspect in the Boston marathon bombing, prompting the internet forum Reddit to issue an apology for fostering “online witch hunts.” Already, an Arkansas professor was misidentified as a torch-bearing protester, though not a criminal suspect, at the Charlottesville rallies.

Beyond the cost to misidentified suspects, the crowdsourced identification of criminal suspects is both a benefit and burden to investigators.

“If someone says: ‘hey, I have a picture of someone assaulting another person, and committing a hate crime,’ that’s great,” said Sgt. Sean Whitcomb, the spokesman for the Seattle Police Department, which used social media to help identify the pilot of a drone that crashed into a 2015 Pride Parade. (The man was convicted in January.) “But saying, ‘I am pretty sure that this person is so and so’. Well, ‘pretty sure’ is not going to cut it.”

Still, credible information can help police establish probable cause, which means they can ask a judge to sign off on either a search warrant, an arrest warrant, or both….(More)“.

Crowdsourcing citizen science: exploring the tensions between paid professionals and users


Jamie Woodcock et al. in the Journal of Peer Production: “This paper explores the relationship between paid labour and unpaid users within the Zooniverse, a crowdsourced citizen science platform. The platform brings together a crowd of users to categorise data for use in scientific projects. It was initially established by a small group of academics for a single astronomy project, but has now grown into a multi-project platform that has engaged over 1.3 million users so far. The growth has introduced different dynamics to the platform as it has incorporated a greater number of scientists, developers, links with organisations, and funding arrangements—each bringing additional pressures and complications. The relationships between paid/professional and unpaid/citizen labour have become increasingly complicated with the rapid expansion of the Zooniverse. The paper draws on empirical data from an ongoing research project that has access to both users and paid professionals on the platform. There is the potential, through growing peer-to-peer capacity, for the boundaries between professional and citizen scientists to become significantly blurred. The findings of the paper, therefore, address important questions about the combinations of paid and unpaid labour, the involvement of a crowd in citizen science, and the contradictions this entails for an online platform. These are considered specifically from the viewpoint of the users and, therefore, form a new contribution to the theoretical understanding of crowdsourcing in practice….(More)”.